How to Build Docker Image

Nov 6, 2025 - 10:08

Docker has revolutionized the way applications are developed, tested, and deployed. At the heart of Docker's power lies the ability to create lightweight, portable, and reproducible containers through Docker images. A Docker image is a read-only template that contains the instructions to create a Docker container. Whether you're deploying a web application, a microservice, or a database, building a Docker image is the essential first step toward consistent, scalable, and efficient software delivery.

Building a Docker image might seem intimidating at first, especially for those new to containerization. However, with a clear understanding of the process and adherence to best practices, anyone can create optimized, secure, and production-ready images. This comprehensive guide walks you through every aspect of building a Docker image, from writing your first Dockerfile to optimizing your final build. You'll learn practical techniques, industry-standard tools, real-world examples, and answers to frequently asked questions, all designed to turn you into a confident Docker image builder.

Step-by-Step Guide

Prerequisites

Before you begin building Docker images, ensure you have the following installed and configured on your system:

  • Docker Engine: Download and install Docker Desktop (for macOS and Windows) or Docker Engine (for Linux) from docs.docker.com.
  • A text editor: Use VS Code, Sublime Text, or any editor that supports plain text files.
  • Basic command-line knowledge: You should be comfortable navigating directories and running terminal commands.

Once Docker is installed, verify the installation by opening a terminal and running:

docker --version

You should see output similar to:

Docker version 24.0.7, build afdd53b

If Docker is not recognized, restart your terminal or reinstall Docker.

Step 1: Create a Project Directory

Start by creating a dedicated directory for your project. This keeps your Dockerfile and application files organized. For example:

mkdir my-node-app

cd my-node-app

This directory will serve as the build context: the folder Docker uses to find the files needed to build the image. Everything inside it is accessible during the build process.

Step 2: Write Your Application Code

For this example, let's create a simple Node.js application. Inside your project directory, create a file named app.js:

const express = require('express');

const app = express();

const port = 3000;

app.get('/', (req, res) => {

res.send('Hello, Docker!');

});

app.listen(port, () => {

console.log(`App running at http://localhost:${port}`);

});

Next, initialize a Node.js project and install Express:

npm init -y

npm install express

This creates a package.json file listing your dependencies. Your project structure should now look like this:

my-node-app/
├── app.js
├── package.json
└── node_modules/

Step 3: Create a Dockerfile

The Dockerfile is the blueprint for your Docker image. It contains a series of instructions that Docker executes to build the image. Create a file named Dockerfile (no extension) in your project root:

FROM node:18-alpine

WORKDIR /app

COPY package*.json ./

RUN npm install --only=production

COPY . .

EXPOSE 3000

CMD ["node", "app.js"]

Let's break down each instruction:

  • FROM node:18-alpine: This specifies the base image. We're using Node.js 18 on Alpine Linux, a minimal Linux distribution that keeps the image size small.
  • WORKDIR /app: Sets the working directory inside the container to /app. All subsequent commands run from this location.
  • COPY package*.json ./: Copies package.json and package-lock.json (if present) into the container. Copying these first leverages Docker's layer caching: changes to source code won't trigger a re-install of dependencies as long as the package files haven't changed.
  • RUN npm install --only=production: Installs only production dependencies, skipping development tools like test runners and linters.
  • COPY . .: Copies the remaining files from your local directory into the container's /app directory.
  • EXPOSE 3000: Informs Docker that the container listens on port 3000. This is documentation for users; it doesn't actually publish the port.
  • CMD ["node", "app.js"]: Defines the default command to run when the container starts, i.e. the application entry point.

Step 4: Build the Docker Image

With your Dockerfile ready, it's time to build the image. In your terminal, run:

docker build -t my-node-app .

The -t flag tags the image with a name (my-node-app). The dot (.) at the end specifies the build context: the current directory, where Docker also looks for the Dockerfile by default.

Docker will now execute each instruction in the Dockerfile sequentially. With the classic builder you'll see output like the following (BuildKit, the default builder in recent Docker releases, prints a different but equivalent step-by-step log):

Sending build context to Docker daemon  5.12kB

Step 1/7 : FROM node:18-alpine

---> 5a7989634d41

Step 2/7 : WORKDIR /app

---> Running in 1b3f1e8d4a2e

Removing intermediate container 1b3f1e8d4a2e

---> 8c2f9d4e1b3a

Step 3/7 : COPY package*.json ./

---> 2a1c7d9e0f2b

Step 4/7 : RUN npm install --only=production

---> Running in 8f3e5d7a2c1b

added 54 packages in 4s

Removing intermediate container 8f3e5d7a2c1b

---> 7e4a3d9b1c2f

Step 5/7 : COPY . .

---> 9d8e7f6a5b4c

Step 6/7 : EXPOSE 3000

---> Running in 1e2d3f4a5b6c

Removing intermediate container 1e2d3f4a5b6c

---> 6f8a7d9e0c1b

Step 7/7 : CMD ["node", "app.js"]

---> Running in 3d4e5f6a7b8c

Removing intermediate container 3d4e5f6a7b8c

---> 9a1b2c3d4e5f

Successfully built 9a1b2c3d4e5f

Successfully tagged my-node-app:latest

At the end, Docker outputs a unique image ID and confirms the tag. You can verify the image was created by running:

docker images

You should see your image listed:

REPOSITORY       TAG       IMAGE ID       CREATED         SIZE

my-node-app      latest    9a1b2c3d4e5f   2 minutes ago   142MB

Step 5: Run the Docker Container

Now that the image is built, you can launch a container from it:

docker run -p 3000:3000 my-node-app

The -p 3000:3000 flag maps port 3000 on your host machine to port 3000 in the container. Open your browser and navigate to http://localhost:3000. You should see:

Hello, Docker!

Congratulations! You've successfully built and run a Dockerized application.

Step 6: Push the Image to a Registry (Optional)

To share your image with others or deploy it to cloud platforms, push it to a container registry like Docker Hub, GitHub Container Registry, or Amazon ECR.

First, log in to Docker Hub:

docker login

Tag your image with your Docker Hub username:

docker tag my-node-app your-dockerhub-username/my-node-app:latest

Then push it:

docker push your-dockerhub-username/my-node-app:latest

After pushing, your image will be publicly (or privately) available for anyone to pull and run with:

docker pull your-dockerhub-username/my-node-app:latest

Best Practices

Use Minimal Base Images

Always prefer lightweight base images. Alpine Linux variants (e.g., node:18-alpine) are significantly smaller than full Linux distributions. A smaller image reduces download time, attack surface, and storage overhead. Avoid using node:latest or ubuntu:latest in production; always pin to a specific version to ensure reproducibility.
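As a sketch of the options, from weakest to strongest pinning (the digest line is a placeholder, not a real value):

```dockerfile
# Avoid: a floating tag can change underneath you between builds
# FROM node:latest

# Better: pin a specific version on a minimal base
FROM node:18-alpine

# Strictest: pin the exact image digest for reproducible pulls
# (placeholder digest; look up the real one with `docker images --digests`)
# FROM node:18-alpine@sha256:<digest>
```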

Minimize Layers and Combine Commands

Each instruction in a Dockerfile creates a new layer (only RUN, COPY, and ADD add real filesystem layers; the rest record metadata). Too many layers increase image size and build time. Combine related commands using && and line continuations (\):

RUN apt-get update && apt-get install -y \
    curl \
    wget \
    && rm -rf /var/lib/apt/lists/*

This approach installs packages and cleans up temporary files in a single layer, reducing bloat.

Use .dockerignore

Just as .gitignore excludes files from version control, .dockerignore excludes files from the build context. Create a .dockerignore file in your project root:

.git

node_modules

npm-debug.log

.DS_Store

README.md

This prevents unnecessary files from being copied into the image, speeding up builds and reducing image size.

Multi-Stage Builds for Production Optimization

Multi-stage builds allow you to use multiple FROM statements in a single Dockerfile. Each stage can have its own base image and instructions. The final stage copies only what's needed from previous stages, eliminating build tools and intermediate dependencies.

Heres an optimized version of the Node.js example using multi-stage builds:

# Stage 1: Build

FROM node:18-alpine AS builder

WORKDIR /app

COPY package*.json ./

RUN npm install --only=production

# Stage 2: Production

FROM node:18-alpine

WORKDIR /app

COPY --from=builder /app/node_modules ./node_modules

COPY . .

EXPOSE 3000

CMD ["node", "app.js"]

In this example, the first stage installs dependencies, and the second stage copies only the node_modules folder and the source code; no build tools or dev dependencies end up in the final image. The resulting image is smaller and more secure.

Set Non-Root User

Running containers as root is a security risk. Create a non-root user inside the container:

FROM node:18-alpine

WORKDIR /app

RUN addgroup -g 1001 -S nodejs

RUN adduser -u 1001 -S nodejs -G nodejs

USER nodejs

COPY --chown=nodejs:nodejs package*.json ./

RUN npm install --only=production

COPY --chown=nodejs:nodejs . .

EXPOSE 3000

CMD ["node", "app.js"]

The USER instruction switches to the non-root user. The --chown flag ensures copied files are owned by the correct user.

Label Your Images

Use labels to add metadata to your images. This helps with auditing, automation, and documentation:

LABEL maintainer="yourname@example.com"

LABEL version="1.0.0"

LABEL description="A simple Node.js web app"

View labels with:

docker inspect your-image-name
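Beyond ad-hoc labels, the OCI image specification defines standard annotation keys (org.opencontainers.image.*) that many registries and scanners recognize. A hedged sketch, with a placeholder repository URL:

```dockerfile
# Standard OCI annotation keys; the source URL below is illustrative
LABEL org.opencontainers.image.source="https://github.com/your-user/my-node-app"
LABEL org.opencontainers.image.version="1.0.0"
LABEL org.opencontainers.image.description="A simple Node.js web app"
```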

Cache Dependencies Strategically

Docker caches layers. To maximize cache hits, copy the files that change least frequently first:

  • Copy package.json and package-lock.json first
  • Run npm install
  • Copy application code

This way, if you change your source code but not dependencies, Docker reuses the cached node_modules layer, avoiding a full reinstall.
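The ordering above is what the tutorial's Dockerfile already does; as an annotated sketch:

```dockerfile
FROM node:18-alpine
WORKDIR /app

# 1. Dependency manifests change rarely -- copy them first
COPY package*.json ./

# 2. This layer is rebuilt only when the manifests above change
RUN npm install --only=production

# 3. Source code changes often -- copy it last so edits
#    invalidate only this layer and the ones after it
COPY . .

CMD ["node", "app.js"]
```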

Scan Images for Vulnerabilities

Regularly scan your images for security vulnerabilities. Docker's built-in scanner is now Docker Scout (the older docker scan command has been deprecated):

docker scout cves my-node-app

Alternatively, use tools like Trivy, Snyk, or Clair for deeper analysis. Integrate scanning into your CI/CD pipeline to catch issues early.

Tools and Resources

Core Docker Tools

  • Docker Desktop: The official GUI and CLI tool for macOS, Windows, and Linux. Includes Docker Engine, Docker Compose, and Kubernetes.
  • Docker CLI: The command-line interface for building, running, and managing containers. Essential for automation and scripting.
  • Docker Compose: Used to define and run multi-container applications. Ideal for development environments with databases, caches, and APIs.

Image Optimization Tools

  • Dive: A tool for exploring each layer in a Docker image, analyzing size, and identifying bloat. Install via: curl -s https://api.github.com/repos/wagoodman/dive/releases/latest | grep browser_download_url | grep linux | cut -d '"' -f 4 | wget -qi -
  • Trivy: An open-source vulnerability scanner for containers. Integrates with CI/CD and supports OS packages, language dependencies, and configuration issues.
  • Hadolint: A linter for Dockerfiles that checks for common mistakes and best practices. Use it in your editor or CI pipeline.
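As a sketch, Hadolint can be tuned with a `.hadolint.yaml` file in the project root; the ignored rule here is just an example:

```yaml
# .hadolint.yaml -- example configuration
ignored:
  - DL3008   # suppress the "pin versions in apt-get install" warning
```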

Container Registries

  • Docker Hub: The largest public registry. Free tier available for public images.
  • GitHub Container Registry (GHCR): Integrated with GitHub Actions. Ideal for open-source and private repositories.
  • Amazon ECR: Fully managed container registry for AWS users. Offers fine-grained IAM permissions.
  • Google Container Registry (GCR): Google Cloud's container registry, now largely superseded by Artifact Registry.

CI/CD Integration

Automate Docker image builds in your CI/CD pipeline using:

  • GitHub Actions: Use the official docker/build-push-action to build and push images on every push or pull request.
  • GitLab CI: Leverage Docker-in-Docker (DinD) or BuildKit to build images in runners.
  • CircleCI: Use Docker executor and the docker CLI to build and push images.

Example GitHub Actions workflow:

name: Build and Push Docker Image

on:
  push:
    branches: [ main ]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: Login to Docker Hub
        uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKER_PASSWORD }}

      - name: Build and push
        uses: docker/build-push-action@v5
        with:
          context: .
          file: ./Dockerfile
          tags: your-dockerhub-username/my-node-app:latest
          push: true

Real Examples

Example 1: Python Flask Application

Let's build a Docker image for a simple Python Flask app.

File: app.py

from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello():
    return 'Hello from Flask in Docker!'

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)

File: requirements.txt

Flask==2.3.3
gunicorn==21.2.0

File: Dockerfile

FROM python:3.11-slim

WORKDIR /app

COPY requirements.txt .

RUN pip install --no-cache-dir -r requirements.txt

COPY . .

EXPOSE 5000

CMD ["gunicorn", "--bind", "0.0.0.0:5000", "--workers", "1", "app:app"]

Build and run:

docker build -t flask-app .

docker run -p 5000:5000 flask-app

Visit http://localhost:5000 to see your app.

Example 2: React Frontend with Nginx

A production React build is a set of static files. Serve them with Nginx for better performance.

Build your React app:

npm run build

This creates a build/ folder with static files.

File: Dockerfile

# Stage 1: Build React App

FROM node:18-alpine AS builder

WORKDIR /app

COPY package*.json ./

RUN npm install

COPY . .

RUN npm run build

# Stage 2: Serve with Nginx

FROM nginx:alpine

COPY --from=builder /app/build /usr/share/nginx/html

EXPOSE 80

CMD ["nginx", "-g", "daemon off;"]

Build and run:

docker build -t react-app .

docker run -p 8080:80 react-app

Visit http://localhost:8080 to view your React app.

Example 3: Multi-Service App with Docker Compose

For applications with multiple services (e.g., frontend, backend, database), use Docker Compose.

File: docker-compose.yml

version: '3.8'

services:
  web:
    build: ./web
    ports:
      - "3000:3000"
    depends_on:
      - api
    environment:
      - REACT_APP_API_URL=http://api:5000

  api:
    build: ./api
    ports:
      - "5000:5000"
    depends_on:
      - db

  db:
    image: postgres:15
    environment:
      POSTGRES_DB: myapp
      POSTGRES_USER: user
      POSTGRES_PASSWORD: password
    volumes:
      - pgdata:/var/lib/postgresql/data

volumes:
  pgdata:

Run:

docker-compose up --build

Docker Compose builds and starts all services, connecting them via internal networks. This is ideal for local development and testing.
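Because the services share a network and resolve each other by service name, the api service could reach Postgres with a connection string like the one below. This is a hypothetical environment entry, not part of the file above:

```yaml
api:
  environment:
    # "db" resolves to the database container on the Compose network
    - DATABASE_URL=postgres://user:password@db:5432/myapp
```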

FAQs

What is the difference between a Docker image and a container?

A Docker image is a static, read-only template that contains the application code, libraries, and configuration. A container is a running instance of an image. You can have multiple containers running from the same image, each with its own isolated environment.

Can I build Docker images on Windows and Linux?

Yes. Docker Desktop supports Windows and macOS, while Docker Engine runs natively on Linux. Images built on one platform can generally run on another, since Docker abstracts the host OS; note, however, that the CPU architecture must match (or you must build a multi-arch image), and Windows containers cannot run on Linux hosts.

Why is my Docker image so large?

Large images are often caused by:

  • Using full OS base images (e.g., ubuntu:latest) instead of slim variants
  • Installing development tools or unnecessary packages
  • Not cleaning up temporary files
  • Not using multi-stage builds

Use docker history your-image-name to inspect layer sizes and identify bloat.

How do I update a Docker image after making code changes?

Rebuild the image:

docker build -t your-image-name .

Then stop and remove the old container, and start a new one:

docker stop your-container

docker rm your-container

docker run -p 3000:3000 your-image-name

For development, consider using volume mounts to sync code changes without rebuilding.
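A hedged sketch of that development setup in Compose (the paths and service name are illustrative):

```yaml
services:
  web:
    build: .
    ports:
      - "3000:3000"
    volumes:
      - ./:/app            # mirror local source into the container
      - /app/node_modules  # keep the image's installed dependencies
```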

Is it safe to run Docker as root?

The Docker daemon traditionally runs as root on Linux (rootless mode is also available), but the processes inside your containers should run as non-root users. Avoid USER root in production images and follow the principle of least privilege.

Can I build Docker images without a Docker daemon?

Yes. Tools like BuildKit and Podman allow building images without a traditional Docker daemon. BuildKit is now the default builder in Docker. Podman is daemonless and rootless, making it ideal for secure environments.

How do I version Docker images?

Use semantic versioning in tags: myapp:v1.2.0. Avoid using latest in production. Tag images with git commit hashes for traceability:

docker build -t myapp:$(git rev-parse --short HEAD) .

What happens if I don't specify a tag in docker build?

If you omit the -t flag entirely, the image is built without a name and appears as <none> in docker images; you can still reference it by its ID. If you supply a name without a tag (e.g. -t myapp), Docker applies the default tag latest. Relying on latest is discouraged in production because it makes rollbacks and audits difficult.

Can I build Docker images in the cloud?

Absolutely. Cloud providers like GitHub Actions, GitLab CI, AWS CodeBuild, and Google Cloud Build support Docker image builds. Many offer built-in caching and integration with container registries.

Conclusion

Building Docker images is a foundational skill for modern software development. From simple Node.js apps to complex microservices architectures, Docker enables consistency, scalability, and portability across environments. By following the step-by-step guide in this tutorial, you've learned how to write effective Dockerfiles, optimize image size, secure your containers, and integrate Docker into your workflow.

Remember: the key to successful Docker adoption lies not just in building images, but in building them well. Use minimal base images, leverage multi-stage builds, scan for vulnerabilities, and automate your builds. These practices ensure your images are not only functional but also secure, efficient, and maintainable.

As you continue your journey with Docker, explore advanced topics like Kubernetes orchestration, image signing with Notary, and policy enforcement with Open Policy Agent (OPA). The ecosystem around containerization is vast and evolving, but with the solid foundation you've built here, you're well-equipped to navigate it.

Now that you know how to build Docker images, the next step is to deploy them. Whether you're running on a local machine, a cloud server, or a managed Kubernetes cluster, your applications are now ready to be containerized, scaled, and delivered with confidence.