If you’re preparing for a Cloud Engineer or DevOps Engineer interview in 2025, Docker is non-negotiable — it comes up every single time.

Today, we’re going through 10 Docker questions you’re very likely to face — and how to answer them clearly and confidently.


Before We Dive In: What Is Docker?

Docker is a platform that lets you run applications in isolated environments called containers.

These containers run consistently across different environments — your laptop, a test server, production — so you avoid the classic "works on my machine" problem.

In real-world projects and DevOps roles, Docker is almost everywhere — for app development, testing, CI/CD pipelines, and cloud deployments.


Question 1: What’s the difference between an image and a container? And what happens if you delete the image?

A Docker image is like a frozen recipe for your application — it includes everything needed to run it: code, libraries, environment settings.

A container is what you get when Docker takes that image and actually runs it — like turning the recipe into a real, running dish.

The image is read-only and reusable; the container adds its own writable layer and lives as an isolated process. Docker normally refuses to delete an image that a container still references. If you force the removal while the container is running, the container keeps working, because its filesystem layers still exist on disk.

You can even stop and restart that same container, since it keeps a reference to those layers. But creating a new container from the deleted image will fail until Docker pulls it again.
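
📌 Example:

A quick way to see the difference on the command line (my-app is a placeholder image name):

docker build -t my-app:v1.2 .           # builds an image: the frozen recipe
docker run -d --name web my-app:v1.2    # creates and starts a container from it
docker ps                               # the container: a live, isolated process
docker image ls                         # the image: still there, read-only and reusable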

🚀 Pro Tip:

Tag your images properly (my-app:latest, my-app:v1.2) and regularly prune unused ones, but avoid deleting images that running containers still depend on.


Question 2: What actually happens when you run docker run?

When I run docker run, Docker first checks if the image exists locally.

If not — it pulls it from a registry like Docker Hub. Then it creates a container — which is basically an isolated process with its own filesystem, networking, and resources.

Docker sets up isolation using Linux namespaces and cgroups.

Finally, it executes the default command defined in the image.

📌 Example:

Running docker run nginx pulls the nginx image (if it’s not local), creates a container, and starts the Nginx server inside that isolated environment.
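
In practice, that single command looks like this (the -d and -p flags are just illustrative additions):

docker run -d -p 8080:80 nginx   # pulls nginx if missing, creates the container, starts the server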

🚀 Pro Tip:

Use --rm with docker run to automatically clean up the container after it exits — so your system doesn’t get cluttered with stopped containers.


Question 3: COPY vs ADD — what’s the difference, and when would you use each?

Both COPY and ADD copy files from the build context into a Docker image.

  • COPY simply copies files and folders — no surprises.
  • ADD does more: it can unpack compressed files like .tar.gz and download files from URLs.

Because ADD has extra behavior that’s not always obvious, I prefer COPY for clarity and only use ADD when I specifically need those extra features.

📌 Example:

# Preferred
COPY ./app /app

# Only if needed (auto-extracts)
ADD archive.tar.gz /app/

🚀 Pro Tip:

Simplicity is security — prefer COPY unless you have a very good reason.


Question 4: What’s a multi-stage build, and when would you use it?

A multi-stage build is when you use multiple FROM statements in your Dockerfile.

  • The first stage does heavy work — installing dependencies, building code.
  • The final stage copies only the necessary artifacts into a clean, minimal base image.

This reduces image size and eliminates unnecessary files and tools from production images.

📌 Example:

# Stage 1 — build
FROM node:20 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Stage 2 — production
FROM node:20-slim
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/package.json ./
CMD ["node", "dist/index.js"]

🚀 Pro Tip:

Smaller images = faster deployments + fewer vulnerabilities.


Question 5: CMD vs ENTRYPOINT — how do they work together?

ENTRYPOINT defines what runs — the main executable.

CMD gives it default arguments.

When you use docker run, the arguments you pass override CMD — but not ENTRYPOINT.

Together, they give you a flexible default setup that you can still tweak at runtime.

📌 Example:

FROM nginx:1.28.0-alpine-slim

ENTRYPOINT ["nginx"]
CMD ["-g", "daemon off;"]

🚀 Pro Tip:

Want full control at runtime? Use only CMD — and leave out ENTRYPOINT.


Question 6: Volumes vs Bind Mounts — which would you use in production, and why?

In production, I prefer Docker-managed volumes. Volumes are isolated from the host OS, easier to back up, and more stable over time.

Bind mounts depend on host paths — which makes them fragile and environment-specific. Bind mounts are great for local development, not for production.

📌 Example:

services:
  db:
    image: postgres
    volumes:
      - pg-data:/var/lib/postgresql/data

volumes:
  pg-data:

🚀 Pro Tip:

Volumes are portable, reliable, and more future-proof across hosts and clusters.
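
One concrete payoff: you can back up a volume without knowing anything about host paths. A minimal sketch, assuming the pg-data volume from the example above (alpine is just a small helper image):

docker run --rm -v pg-data:/data -v "$PWD":/backup alpine \
  tar czf /backup/pg-data.tar.gz -C /data .   # archives the volume into the current directory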


Question 7: How does networking between containers work?

By default, Docker connects containers to a bridge network, where they get private IP addresses.

However, they can only reach each other by IP unless you create a user-defined bridge network.

In a user-defined bridge network, containers can communicate by name, which simplifies service discovery.

📌 Example:

services:
  app:
    image: my-app
    environment:
      DB_HOST: db
      DB_USER: appuser
      DB_PASSWORD: secret
    networks:
      - app-net

  db:
    image: postgres
    environment:
      POSTGRES_DB: mydb
      POSTGRES_USER: appuser
      POSTGRES_PASSWORD: secret
    networks:
      - app-net

networks:
  app-net:

🚀 Pro Tip:

Always use user-defined networks — the default bridge network doesn’t support automatic name-based discovery and can lead to frustrating connection issues.
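
The same idea without Compose, straight from the CLI (container and network names are illustrative):

docker network create app-net                                       # user-defined bridge with built-in DNS
docker run -d --name db --network app-net -e POSTGRES_PASSWORD=secret postgres
docker run -d --name app --network app-net my-app                   # reaches the database simply as "db"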


Question 8: How do you reduce Docker image size, and why should you care?

Reducing image size speeds up builds and deployments, and it improves security.

  • I start with a minimal base image — like alpine:3.21.3, python:3.12-slim-bookworm, or node:20.13.1-alpine.
  • I combine RUN instructions to reduce layers, and clean up any temp files or caches.
  • I always use a .dockerignore file to exclude things like .git, node_modules, and test data.

And for complex builds, I use multi-stage builds to separate build tools from the final runtime.

📌 Example:

A Node.js app image can shrink from 1.2 GB to under 200 MB by switching to node:20.13.1-alpine and cleaning up properly.
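
A minimal .dockerignore that covers the usual offenders (adjust to your project):

.git
node_modules
*.log
test-data/

And combining RUN steps so cleanup happens in the same layer it dirtied (a Debian-based sketch; curl is just an example package):

RUN apt-get update && apt-get install -y --no-install-recommends curl \
    && rm -rf /var/lib/apt/lists/*   # cache removed in the same layer, so it never bloats the image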

🚀 Pro Tip:

Smaller images = faster pipelines, quicker deploys, and fewer security issues.


Question 9: What is BuildKit, and why use it?

BuildKit is Docker’s modern build engine that significantly improves performance, security, and flexibility.

It enables parallel execution of independent build steps, advanced caching mechanisms, and secure handling of secrets during builds. BuildKit also supports features like multi-platform builds and efficient storage management.

In modern Docker versions, BuildKit is enabled by default — so you don’t need to configure anything. But if you’re on an older setup, you can still enable it by setting DOCKER_BUILDKIT=1 or updating the Docker daemon config.
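
On older setups, enabling it is a one-liner, and one of its most useful features is build-time secret mounts (the secret id and file name here are placeholders):

DOCKER_BUILDKIT=1 docker build -t my-app .                     # force BuildKit on older Docker versions
docker build --secret id=api-token,src=token.txt -t my-app .   # pass a secret only for the build

# In the Dockerfile, the secret is mounted for a single RUN step and never stored in a layer:
RUN --mount=type=secret,id=api-token \
    TOKEN=$(cat /run/secrets/api-token) ./fetch-private-deps.sh   # fetch-private-deps.sh is hypothetical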

📌 Example:

With BuildKit, multi-stage builds can run steps in parallel, reducing build times significantly.

🚀 Pro Tip:

Always leverage BuildKit for faster, more secure, and efficient Docker builds.


Question 10: How do you handle sensitive data (like secrets) in Docker?

I never hardcode secrets into Dockerfiles or images.

For local runs, I pass secrets as environment variables — either inline or via a .env file. If I use a .env file, I make sure to add it to .dockerignore — so it doesn’t accidentally get copied into the image.

In production, I use secret management tools — like HashiCorp Vault, AWS Secrets Manager, or Kubernetes Secrets — to inject secrets at runtime.

Secrets should always come from outside the image, never baked inside it.

📌 Example:

Pass database passwords using environment variables in staging, and HashiCorp Vault in production.
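
At the command line, that looks like this (.env and my-app are placeholder names):

docker run --env-file .env my-app                  # load variables from a local .env file
docker run -e DB_PASSWORD="$DB_PASSWORD" my-app    # or pass a single variable from the shell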

🚀 Pro Tip:

Good secret management = safer deployments and easier key rotations.

Thanks for reading! Be sure to watch the video version for extra insights and helpful visuals.


Tools I Personally Trust

If you want to make your digital life a little calmer — here are two tools I use every day:

🛸 Proton VPN – A trusted VPN that secures your Wi-Fi, hides your IP, and blocks trackers. Even on that passwordless café Wi-Fi, you’re safe.

🔑 Proton Pass – A password manager with on-device encryption. Passwords, logins, 2FA — always with you, and only for you.

These are partner links — you won’t pay a cent more, but you’ll be supporting DevOps.Pink. Thank you — it really means a lot 💖


Refill My Coffee Supplies

💖 PayPal
🏆 Patreon
💎 GitHub
🥤 BuyMeaCoffee
🍪 Ko-fi


Follow Me

🎬 YouTube
🐦 X / Twitter
🎨 Instagram
🐘 Mastodon
🧵 Threads
🎸 Facebook
🧊 Bluesky
🎥 TikTok
💻 LinkedIn
🐈 GitHub


Is this content AI-generated?

Absolutely not! Every article is written by me, driven by a genuine passion for Docker and backed by decades of experience in IT. I do use AI tools to polish grammar and enhance clarity, but the ideas, strategies, and technical insights are entirely my own. While this might occasionally trigger AI detection tools, rest assured—the knowledge and experience behind the content are 100% real and personal.

Tatiana Mikhaleva
I’m Tatiana Mikhaleva — Docker Captain, DevOps engineer, and creator of DevOps.Pink. I help engineers build scalable cloud systems, master containers, and fall in love with automation — especially beginners and women in tech.

