
Building Docker Images

Description

In his devjobs.at TechTalk, Julian Handl of Blockpit shows why Docker images sometimes need far too much storage space and offers tips on what to do about it.


Video Summary

In Building Docker Images, Julian Handl explains Docker layers and caching, showing how naïve Dockerfiles cause slow builds and bloated images and how selective COPY of package.json/package-lock.json with early npm install maximizes cache hits. He compares base images (node:16 vs node:16-alpine) and demonstrates multi-stage builds that shrink an Angular app from 1.3 GB to 670 MB to 113 MB, highlights file pitfalls (moves/deletes duplicating across layers, COPY vs ADD), and advises using .dockerignore. He concludes with from-scratch images using statically compiled binaries (a 130 KB hello-world and a 5.01 MB Angular image with DHCPT), enabling viewers to cut build time, storage, and network usage.

Building Docker Images: Practical strategies from Julian Handl (Blockpit) for smaller, faster containers

Why this talk mattered: time, power, space, traffic

At DevJobs.at, we watched “Building Docker Images” by Julian Handl (Blockpit) and immediately recognized a familiar pain point: Docker images that are bigger than they should be and builds that take far too long. Handl frames the motivation in refreshingly concrete terms:

“I like to save time … I like to save power … I want to save space … and I want to save traffic.”

Those four goals translate directly into day-to-day developer experience and operational cost. If a brand-new laptop can only hold a couple hundred images or a cloud node only a few dozen, efficiency stops being a nice-to-have. The core message of the session: it’s easy to build Docker images, and just as easy to build them inefficiently. The fix is a handful of simple, repeatable patterns.

Layers and cache: the upside—and the catch

Docker builds images one instruction at a time; each instruction creates a new layer, and all of those layers end up in your final image. The upside is layer caching. The catch is that caching only holds until something changes at a given step—once a layer’s inputs change, every instruction after that point must be rebuilt.

Handl walks through the classic Node example. In the “naive” approach, you copy the entire app into the container, run npm install, and then build. That early full-copy step becomes the cache invalidation point: change a single source file and the cache is busted before npm install, so the expensive dependency install re-runs on every build—even if package.json hasn’t changed.

The fix is to be more deliberate with the sequence of steps. Copy package.json and package-lock.json first, run npm install immediately, and only then copy in the application source (for example, src). In most projects, application code changes frequently, while dependency manifests change far less often. Keeping npm install behind a stable layer turns it into a reliable cache hit, saving compute and bandwidth on rebuilds.
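The reordering Handl describes might look like this minimal sketch (assuming a typical Node project with its source under `src` and an `npm run build` script; paths and scripts are illustrative, not taken verbatim from the talk):

```dockerfile
FROM node:16
WORKDIR /app

# Dependency manifests first: this layer only changes when
# package.json or package-lock.json changes.
COPY package.json package-lock.json ./

# Cached on every rebuild as long as the manifests above are unchanged.
RUN npm install

# Application source last: edits here no longer invalidate
# the npm install layer above.
COPY src ./src
RUN npm run build
```

The ordering is the whole trick: the expensive step sits behind the most stable inputs.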

The core practice

  • Put late-changing inputs later in the Dockerfile; keep early layers stable.
  • Process dependencies before copying the rest of the app.
  • Avoid broad, early “copy everything” steps; prefer targeted COPYs.

Save space and bandwidth: start with the base image

One of the biggest levers is the base layer itself. Handl compares two familiar Node bases:

  • node:16 at around 332 MB compressed.
  • node:16-alpine at around 39 MB.

Simply switching to the alpine variant trims hundreds of megabytes for every pull, push, and rollout. Of course, compatibility matters; where it fits, alpine shows just how much bloat we can avoid by default.

Multi-stage builds: leave the heavy parts behind

A centerpiece of “Building Docker Images” is multi-stage builds. The idea is to split the build and the runtime into separate stages. Tooling and weight live in the builder stage; the final runtime stage is as small as possible, containing only what’s needed to run.

Handl sets up a builder stage on node:16 to install dependencies and build the project. He then starts a second stage on node:16-alpine and copies only the build artifacts (for example, the dist folder) from the builder into this minimal runtime stage. Crucially, everything before the last FROM is left behind; development tools, temporary files, and build-only dependencies never make it into production.
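A sketch of that two-stage layout (output path and start command are assumptions for illustration):

```dockerfile
# Stage 1: the heavy builder -- toolchain, dev dependencies, sources.
FROM node:16 AS builder
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm install
COPY src ./src
RUN npm run build

# Stage 2: the slim runtime -- only the build output is copied over.
FROM node:16-alpine
WORKDIR /app
COPY --from=builder /app/dist ./dist
CMD ["node", "dist/main.js"]
```

Everything created in the builder stage stays there; only what `COPY --from=builder` pulls across ends up in the final image.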

Numbers that stick

He illustrates the win with a single-page Angular app:

  • Single-stage with node:16: about 1.3 GB.
  • Switching the base to node:16-alpine only: about 670 MB.
  • Two-stage (builder node:16, runtime node:16-alpine): about 113 MB.

The base swap alone cuts size by roughly half; the multi-stage approach shrinks it further to a small fraction of the original. For teams that ship images over networks frequently, this is a tangible optimization.

Files and layers: beware of duplication and “fake cleanup”

One of the most practical segments of the talk focuses on how files behave across layers—and how easy it is to unintentionally duplicate them:

  • Copy only what you need, and use .dockerignore alongside .gitignore to keep the build context small.
  • Moving files duplicates them across layers. A “rename” in your mental model is, for Docker, a new file in a new layer, while the old one persists in the previous layer.
  • Deleting files late doesn’t actually remove them from the image. A large archive copied early and “removed at the end” still exists intact in earlier layers.
  • Changing permissions or metadata creates new file representations in new layers; the old versions remain underneath.

The takeaway: every layer is part of the final image. You can’t “clean up” a previous layer by deleting files later. To truly leave build-time artifacts behind, you need multi-stage builds and a slim final stage that never contained the heavy files in the first place.
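A contrived illustration of that "fake cleanup" trap (the filename is hypothetical):

```dockerfile
FROM alpine:3
# The archive is stored in full in this layer...
COPY big-archive.tar.gz /tmp/
RUN tar -xzf /tmp/big-archive.tar.gz -C /opt

# ...and this "cleanup" only records a deletion marker in a new layer.
# The image still carries the full archive in the layer underneath.
RUN rm /tmp/big-archive.tar.gz
```

The `rm` makes the file invisible to the running container, but not absent from the image.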

From scratch: the ultraminimal runtime for static binaries

Handl goes beyond alpine to the extreme end of the spectrum with scratch: Docker's reserved, explicitly empty base image, with no filesystem beyond the root directory. scratch is suitable only for statically compiled Linux binaries.

Two practical points stand out:

  • COPY vs. ADD: With ADD, a tarball can be extracted on the fly, which can be useful when you need to populate a minimal image with predefined filesystem contents.
  • A minimal “Hello World” example: build a small program in a builder stage (for example, on Alpine), then copy only the statically linked executable into a final scratch image. The end result is a working container at roughly 130 kilobytes—essentially just the binary itself.
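A from-scratch sketch along those lines, here using a statically linked C hello-world built on Alpine (the exact program and build flags in the talk aren't shown; these are assumptions):

```dockerfile
# Builder: Alpine with a C toolchain; musl makes static linking easy.
FROM alpine:3 AS builder
RUN apk add --no-cache gcc musl-dev
COPY hello.c .
RUN gcc -static -Os -o /hello hello.c

# Runtime: completely empty apart from the one static binary.
FROM scratch
COPY --from=builder /hello /hello
ENTRYPOINT ["/hello"]
```

There is no shell, no libc, no package manager in the final image; the binary must bring everything it needs.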

This is the logical endpoint of the multi-stage mindset: do all the heavy lifting in a builder, and ship a runtime that contains nothing but your executable.

A tiny Angular runtime: three stages, a static server, 5.01 MB uncompressed

The second standout example in the talk is a “smallest Docker image” that serves a complete Angular application. The pipeline consists of three stages:

  1. Build the Angular app.
  2. Compile a very simple web server (DHCPT) into a static binary.
  3. Assemble a final scratch image that contains the static server and the built Angular assets, and run the server.
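The three stages might be sketched like this (the Angular layout and paths are placeholders, and the static-server stage is left abstract because the talk's exact build steps for the server aren't reproduced here):

```dockerfile
# Stage 1: build the Angular app.
FROM node:16 AS angular
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm install
COPY . .
RUN npm run build

# Stage 2: compile the tiny web server as a static binary.
FROM alpine:3 AS server
# ...install a toolchain and build the static server binary here...

# Stage 3: scratch runtime containing only server + static assets.
FROM scratch
COPY --from=server /server /server
COPY --from=angular /app/dist /www
ENTRYPOINT ["/server"]
```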

Key details from the session:

  • As Handl concedes, "to be fair," this approach only works with Angular hash routing, because the server does not support URL rewrites.
  • The uncompressed final image is about 5.01 MB including the entire Angular app.
  • The Angular app accounts for roughly 4.8 MB of that total; the surrounding runtime is extremely small.
  • Credit to Florian Lippan for the idea of compiling DHCPT into a static binary; Handl extended the idea with the Angular pipeline.

The outcome is a self-contained image that needs no volumes and no external runtime components—just the binary and the static assets.

The checklist Handl leaves us with

The closing advice is deliberately short, and that’s its strength. These are the rules we keep top of mind when authoring Dockerfiles:

  • Copy only what you need.
  • Choose the right base image.
  • Really, really use multi-stage builds.

These principles reinforce one another. If you don’t copy it, you don’t have to clean it up. If you need it only to build, keep it in the builder stage. If you need it to run, move it into the final runtime stage. And the base you choose determines how much weight you carry by default.

What we learned to watch for in everyday Dockerfiles

The talk pushes us to evaluate Dockerfiles through a few simple lenses:

  • Cache boundaries: identify where change happens and push those steps later. Keep dependency installation behind stable layers to leverage caching across rebuilds.
  • Runtime vs. build-time concerns: isolate with multi-stage builds so big toolchains and temporary files never reach production.
  • Context discipline: trim the build context with .dockerignore and precise COPY instructions.
  • No post-hoc cleanup illusions: once a large file is in a lower layer, it’s in your image. The only fix is “don’t include it in the runtime stage.”
  • Minimal runtimes for static workloads: where possible, use a tiny server or a static binary on scratch to ship only what’s necessary.
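For context discipline, a typical .dockerignore for a Node project might look like the following (entries are illustrative; tailor them to what your build actually needs):

```
node_modules
dist
.git
*.log
.env
```

Anything listed here never enters the build context, so it can neither slow down the context upload nor sneak into a layer via a broad COPY.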

Memorable lines and ideas

A few lines from Handl resonate because they summarize the mechanics succinctly:

  • “Docker images are actually easy to build, but it’s also easy to build them in a very inefficient way.”
  • “We can only cache to the point where something changes.”
  • “Everything before the last FROM is really left behind and doesn’t end up in your final image.”
  • “To really leave stuff behind, you have to use multi-stage builds.”

These are the heuristics that keep our Dockerfiles fast and our images lean.

Conclusion: simple patterns, outsized impact

“Building Docker Images” by Julian Handl (Blockpit) shows that meaningful optimization doesn’t require exotic tooling—just a clear mental model of layers, cache, and stages.

  • Switching to a lighter base image produces immediate, substantial size reductions.
  • Multi-stage builds cleanly separate build-time heft from runtime needs.
  • Careful file handling avoids silent duplication and ineffective cleanup steps.
  • For static apps, a tiny HTTP server—or an entirely static binary on scratch—can yield runtimes in the kilobyte to low-megabyte range.

The result is an image that’s faster to build, cheaper to move, and thriftier with storage—checking all four boxes Handl started with: time, power, space, and traffic.
