3 Tips for optimizing Docker images

Written by Software Engineer

August 27, 2025
Docker has become the go-to standard for packaging and deploying applications across different environments.

But just because your app runs in a container doesn’t mean it’s efficient. Bloated Docker images can lead to slower builds, longer deployment times, increased storage costs, and even security risks due to unnecessary packages and layers.

This guide explains practical, real-world techniques for optimizing Docker images. Its examples and explanations apply whether you're working with Node.js, Python, or any other stack.


1. Choose the Right Base Image


The base image is the foundation of your container. It’s the first line in your Dockerfile, and it sets the tone for how big, fast, and secure your final image will be.

Many developers start with an official image like node:20 or python:3.12, which is fine — but these images include full operating system environments and tooling that your app may never use. That bloats your final image size, slows down build and transfer times, and increases the surface area for vulnerabilities.

A better approach is to start with a minimal base. Every official image usually has a smaller counterpart. For instance:

  • node:20-alpine instead of node:20
  • python:3.12-slim instead of python:3.12
  • debian:bookworm-slim instead of debian:bookworm

These smaller variants strip out unnecessary tools, locales, and debugging packages, leaving you with just enough to run your app.

For example, switching from node:20 to node:20-alpine can reduce your image from nearly 1GB to under 100MB:

# Bigger image (~980MB)
FROM node:20

# Smaller image (~60MB)
FROM node:20-alpine

But it’s not always as simple as swapping the base and calling it a day. Alpine uses musl instead of glibc, which can break Node packages with native dependencies or Python packages that compile C extensions.

If you have problems installing modules like sharp, bcrypt, or anything else that needs native compilation, you may need to install extra build tools like make, gcc, and python3, or switch to a slim variant that's a bit heavier but more compatible.
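As a sketch of that workaround (the package list is illustrative; your native modules may need a different toolchain), Alpine's package manager lets you install the compilers temporarily and remove them in the same layer so they don't bloat the image:

```dockerfile
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./

# Install a throwaway build toolchain for native modules, run the install,
# then delete the toolchain in the same RUN so it never lands in the layer.
RUN apk add --no-cache --virtual .build-deps make gcc g++ python3 \
    && npm ci \
    && apk del .build-deps
```

The `--virtual .build-deps` flag groups the packages under one name so a single `apk del` removes them all at once.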

Still, the gains from switching to a smaller base are usually worth it. It’s a tiny change that can shave hundreds of MB off your image and significantly speed up deployments and CI runs. The key is to pick the lightest image that works reliably with your stack — and avoid reaching for the full-fat base image unless you truly need it.

2. Order Layers for Better Caching


One of the easiest ways to waste time with Docker is to ignore how it builds images. Docker doesn’t rebuild your image from scratch every time — it builds in layers, and it caches those layers.

If you change a layer, Docker rebuilds it and everything after it. If you don’t, it uses the cached version and skips to the end. That’s a huge performance boost, but only if you structure your Dockerfile right.

Let’s say you have a Node.js app. A common mistake is to do this:

COPY . .
RUN npm install

This looks fine at first, but it causes pain later. Why? Because every time you change even one source file — say, you tweak a line of HTML or fix a typo in your README — Docker will invalidate the cache for the COPY . . step.

And once that’s invalidated, it also re-runs npm install even if your package.json hasn't changed. That’s unnecessary, and in bigger projects, it can cost you several minutes per build.
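A related habit worth pairing with good layer ordering: a .dockerignore file keeps files that never belong in the image out of the build context entirely, so editing them can't invalidate the COPY layer in the first place. The entries below are illustrative; adjust them to your project:

```
# .dockerignore -- exclude files that shouldn't enter the build context
node_modules
.git
README.md
*.log
```

With README.md excluded, the typo fix from the example above no longer triggers a rebuild at all.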

Now compare that with this version:

COPY package*.json ./
RUN npm ci
COPY . .

Here, Docker only re-runs npm ci if your package files change. If you're just changing source code, it skips straight to COPY . ., keeping the dependency install step cached.

That can save you massive time in local dev loops, CI pipelines, or deployment builds. This also helps with multi-developer teams. Imagine you’re on a team where ten devs push daily. A well-structured Dockerfile with proper layer ordering means you don’t waste cloud compute re-installing packages that haven’t changed. Multiply that savings by daily builds and team size, and it’s not small.

Same logic applies across stacks. In Python, you’d copy requirements.txt first, install dependencies, and only then copy the rest:

COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .

It’s the same principle: put slow-changing files at the top, fast-changing files at the bottom.

There’s also another angle: cache busting. Sometimes people try to be “smart” and combine multiple steps in a single RUN to reduce layers. That’s fine, but you lose caching granularity.

For example:

RUN apt-get update && apt-get install -y curl && npm install

If anything in that line changes, you lose the cache for the whole thing. A better approach is to isolate logically separate steps:

RUN apt-get update && apt-get install -y curl
RUN npm install

This lets Docker cache each part individually.

So yeah, ordering isn’t just some nitpicky trick. The key is to understand your change frequency. Put stable stuff first. Put the things that change often (like source code) last. It’ll feel like magic once you stop reinstalling everything for no reason.
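To check whether your ordering is actually paying off, you can inspect an image layer by layer (my-app:latest is a placeholder tag; substitute your own):

```shell
# List every layer of an image with the instruction that created it and its size,
# so you can spot which steps contribute the most weight.
docker history my-app:latest

# During a build, steps served from cache are labeled CACHED in the output.
docker build -t my-app:latest .
```

If `npm install` shows up near the top of the history with a big size and keeps rebuilding, that's a sign your layer ordering needs work.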

3. Skip devDependencies in Production


When you're building a container for local dev, it makes sense to install everything: linters, test runners, TypeScript, and whatever other tools you use to build or check your code. But none of that belongs in production.

Here's why this matters:

1. Size: Dev dependencies can easily double or triple your image size. That means slower pulls, slower deploys, and bloated storage.

2. Security: Every package you include is another potential attack surface. Dev-only tools often don’t get the same scrutiny as production packages, and leaving them in your final image can expose unnecessary risks.

3. Performance: Some dev tools run background processes or introduce side effects. You want your prod container to do exactly one thing: run your app.

Let’s take a Node.js app as an example.

If you're doing this:

RUN npm install

You’re pulling in everything, including stuff from devDependencies. What you should be doing instead is:

RUN npm ci --omit=dev

(On older npm versions, the equivalent is npm ci --only=production, which still works but is deprecated.) That skips dev dependencies completely and installs only what's needed to run the app.

But the real power move here is multi-stage builds.

Instead of manually trimming things down, you should separate your Dockerfile into two parts: one for building and one for running. Here’s a simple example:

# Stage 1 -- build
FROM node:20-alpine AS builder
WORKDIR /app
COPY . .
RUN npm ci && npm run build

# Stage 2 -- production
FROM node:20-alpine
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/package*.json ./
RUN npm ci --omit=dev
CMD ["node", "dist/index.js"]

What’s happening here is clean:

  • First stage: installs everything, builds the app

  • Second stage: starts fresh, pulls in only the built code and the production dependencies

This lets you use any tools you want during the build without dragging all of that into the final image. And because the second stage is clean, you don’t even carry over build-time junk like .ts files or test fixtures. Just your app, ready to run.
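The same pattern works outside Node. Here's a hypothetical Python equivalent (main.py and the requirements file are placeholders): build wheels in one stage, then install only those wheels in a clean final stage so compilers never reach production:

```dockerfile
# Stage 1 -- build dependency wheels
FROM python:3.12-slim AS builder
WORKDIR /app
COPY requirements.txt .
RUN pip wheel --no-cache-dir --wheel-dir /wheels -r requirements.txt

# Stage 2 -- production: install pre-built wheels only, no build tooling
FROM python:3.12-slim
WORKDIR /app
COPY --from=builder /wheels /wheels
RUN pip install --no-cache-dir /wheels/*
COPY . .
CMD ["python", "main.py"]
```

Any C extensions get compiled once in the builder stage; the final image only ever sees finished wheels.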


Wrapping Up


Optimizing Docker images isn’t about perfection; it’s about a few deliberate habits. Start with a lightweight base. Be intentional about layer order to make caching work. And never ship what you don’t need.

Most importantly, treat your Dockerfile like code that deserves care. If it runs in production, it deserves to be fast, clean, and secure.

And if you ever catch yourself waiting for another 1.2GB image to build or pull, remember: you can fix that.

Frequently Asked Questions

Can I prune specific types of resources with Docker?

Yes, Docker provides different prune commands to target specific types of resources, such as images, containers, volumes, networks, etc. This allows you to selectively clean up the resources you want to manage.
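For reference, the per-resource prune commands look like this:

```shell
docker image prune        # dangling images only
docker image prune -a     # all images not used by any container
docker container prune    # stopped containers
docker volume prune       # unused volumes
docker network prune      # unused networks
```

Each command prompts for confirmation before deleting anything; pass -f to skip the prompt.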

How can I automate the removal of unused Docker components?

You can automate the removal of unused Docker components by creating scripts or using tools that execute Docker prune commands regularly. Be cautious with automation, and ensure you validate the resources to be deleted to avoid accidental removal of important components.
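As one possible automation (the schedule here is just an example), a cron entry can run a filtered prune so only resources unused for a while are removed:

```shell
# Nightly at 3am: prune resources unused for at least 24 hours.
# -f skips the confirmation prompt; --filter "until=24h" limits removal
# to images/containers older than 24 hours.
0 3 * * * docker system prune -f --filter "until=24h"
```

Note that docker system prune does not remove volumes unless you add --volumes, which is a sensible safety default for automation.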

Can I recover a Docker image, volume, or container once it has been deleted?

No, Docker deletion is irreversible. Once you remove an image, volume, or container, its data is permanently lost. Always ensure that you have backups or alternative sources if you might need the resources again.

Why might I encounter an error when trying to remove a Docker image, volume, or container?

You might encounter errors when trying to remove Docker components if they are in use, or if you do not have the necessary permissions to perform the removal operation. Ensure that you stop running containers before removing them and use appropriate user privileges when executing Docker commands.
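Concretely, the usual sequence looks like this (my-container is a placeholder name):

```shell
# Stop the container first, then remove it
docker stop my-container && docker rm my-container

# Or force-remove a running container in one step (use with care)
docker rm -f my-container
```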
