How to Use Docker Compose

Nov 6, 2025 - 10:09

Docker Compose is a powerful tool that simplifies the management of multi-container Docker applications. While Docker allows you to run individual containers, real-world applications often require multiple services, such as a web server, database, cache, and message broker, to work together. Managing each container manually with separate docker run commands becomes complex, error-prone, and unsustainable. This is where Docker Compose shines. It enables you to define and orchestrate all the services that make up your application in a single YAML file, allowing you to start, stop, and manage your entire stack with just a few commands.

Originally developed by Docker Inc., Docker Compose has become an industry-standard tool for development, testing, and even lightweight production environments. Whether you're a developer building a local environment, a DevOps engineer deploying microservices, or a student learning containerization, mastering Docker Compose is essential. It reduces configuration overhead, ensures environment consistency across machines, and accelerates deployment cycles.

In this comprehensive guide, we'll walk you through everything you need to know to use Docker Compose effectively: from installation and basic syntax to advanced configurations, best practices, real-world examples, and troubleshooting. By the end, you'll be able to define, deploy, and manage complex multi-container applications with confidence and efficiency.

Step-by-Step Guide

Prerequisites

Before you begin using Docker Compose, ensure your system meets the following requirements:

  • Docker Engine installed (version 17.05 or higher)
  • Linux, macOS, or Windows 10/11 (with WSL2 on Windows)
  • Basic familiarity with the command line

You can verify Docker is installed by running:

docker --version

If Docker is installed correctly, you'll see output like Docker version 24.0.7, build afdd53b. If not, download and install Docker Desktop from docker.com.

Docker Compose is included by default in Docker Desktop for Windows and macOS. On Linux, you may need to install it separately. To check if Docker Compose is available, run:

docker compose version

If you see a version number (e.g., v2.20.3), you're ready. If not, install Docker Compose on Linux using:

sudo curl -L "https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose

sudo chmod +x /usr/local/bin/docker-compose

Understanding the docker-compose.yml File

The heart of Docker Compose is the docker-compose.yml file, a YAML-formatted configuration file that defines your application's services, networks, and volumes. YAML ("YAML Ain't Markup Language") is human-readable and uses indentation (spaces, not tabs) to denote structure.

A minimal docker-compose.yml file might look like this:

version: '3.8'

services:
  web:
    image: nginx:latest
    ports:
      - "80:80"

Let's break this down:

  • version: Specifies the Compose file format version. Version 3.x is recommended for modern Docker deployments (the Compose v2 CLI ignores this field, so it is optional in recent releases).
  • services: Defines the containers that make up your application. Each service corresponds to one container.
  • web: The name of the service (you can choose any name).
  • image: The Docker image to use. Here, we're using the official Nginx image from Docker Hub.
  • ports: Maps port 80 on the host to port 80 in the container, making the web server accessible via http://localhost.
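The "80:80" value uses the HOST:CONTAINER short syntax. A small illustrative Python parser for that form (this is a teaching sketch, not Compose's actual implementation):

```python
def parse_port_mapping(spec: str):
    # "HOST:CONTAINER" publishes the container port on a fixed host port;
    # a bare "CONTAINER" exposes the port on an ephemeral host port.
    parts = spec.split(":")
    if len(parts) == 2:
        host, container = parts
        return int(host), int(container)
    return None, int(parts[0])

print(parse_port_mapping("80:80"))    # (80, 80)
print(parse_port_mapping("8000:80"))  # (8000, 80)
```

Reading mappings this way makes it obvious that "8000:80" means "host port 8000 forwards to container port 80", which is the direction newcomers most often reverse.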

Creating Your First Docker Compose Project

Let's create a simple web application with a frontend (Nginx) and a backend (Python Flask).

1. Create a new directory for your project:

mkdir my-flask-app

cd my-flask-app

2. Create a Python Flask app. Make a file called app.py:

from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello():
    return "Hello from Docker Compose!"

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)

3. Create a requirements.txt file (gunicorn is listed because the Dockerfile below uses it as the server):

Flask==2.3.3
gunicorn==21.2.0

4. Create a Dockerfile for the Python service:

FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["gunicorn", "--bind", "0.0.0.0:5000", "--workers", "2", "app:app"]

We're using Gunicorn as a production-grade WSGI server instead of Flask's built-in server for better performance.

5. Create the docker-compose.yml file:

version: '3.8'

services:
  web:
    build: .
    ports:
      - "5000:5000"
    volumes:
      - .:/app
    environment:
      - FLASK_ENV=development

  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf
    depends_on:
      - web

6. Create an Nginx configuration file nginx.conf:

server {
    listen 80;
    server_name localhost;

    location / {
        proxy_pass http://web:5000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

7. Build and start the services:

docker compose up

This command will:

  • Build the custom image for the web service using the Dockerfile
  • Pull the nginx:alpine image
  • Start both containers
  • Mount the current directory as a volume in the web container for live code reloading
  • Connect the nginx container to the web container via the internal network

Open your browser and navigate to http://localhost. You should see Hello from Docker Compose!

To stop the services, press Ctrl+C in the terminal, then run:

docker compose down

This stops and removes the containers and networks defined in the compose file. Named volumes are preserved unless you also pass the -v flag (docker compose down -v).

Managing Multiple Environments

Most applications require different configurations for development, staging, and production. Docker Compose supports this via multiple YAML files and the -f flag.

Create a base file: docker-compose.yml

version: '3.8'

services:
  web:
    image: my-flask-app
    ports:
      - "5000:5000"
    environment:
      - DATABASE_URL=sqlite:///app.db

Create a development override: docker-compose.dev.yml

version: '3.8'

services:
  web:
    volumes:
      - .:/app
    environment:
      - FLASK_ENV=development

Create a production override: docker-compose.prod.yml

version: '3.8'

services:
  web:
    environment:
      - FLASK_ENV=production
    deploy:
      replicas: 3

To use the development setup:

docker compose -f docker-compose.yml -f docker-compose.dev.yml up

To use production:

docker compose -f docker-compose.yml -f docker-compose.prod.yml up

Alternatively, use the COMPOSE_FILE environment variable:

export COMPOSE_FILE="docker-compose.yml:docker-compose.prod.yml"

docker compose up
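Conceptually, the override mechanism deep-merges the later file onto the earlier one. A simplified Python sketch of that behavior (real Compose applies additional per-key rules, e.g. environment entries are merged by variable name rather than simply concatenated):

```python
def merge(base, override):
    # Mappings merge recursively, scalars from the later file win,
    # and list-valued keys are concatenated (simplified model).
    result = dict(base)
    for key, value in override.items():
        if key in result and isinstance(result[key], dict) and isinstance(value, dict):
            result[key] = merge(result[key], value)
        elif key in result and isinstance(result[key], list) and isinstance(value, list):
            result[key] = result[key] + value
        else:
            result[key] = value
    return result

base = {"services": {"web": {"image": "my-flask-app",
                             "environment": ["DATABASE_URL=sqlite:///app.db"]}}}
dev = {"services": {"web": {"volumes": [".:/app"],
                            "environment": ["FLASK_ENV=development"]}}}
merged = merge(base, dev)
print(merged["services"]["web"]["environment"])
```

You can inspect the real merged result of any file combination with docker compose -f base.yml -f override.yml config.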

Working with Volumes and Networks

By default, Docker Compose creates a default network for all services, allowing them to communicate using service names as hostnames. You can also define custom networks and volumes for better control.

Example with custom network and named volume:

version: '3.8'

services:
  db:
    image: postgres:15
    volumes:
      - pgdata:/var/lib/postgresql/data
    environment:
      POSTGRES_DB: myapp
      POSTGRES_USER: user
      POSTGRES_PASSWORD: password
    networks:
      - app-network

  web:
    build: .
    ports:
      - "5000:5000"
    depends_on:
      - db
    networks:
      - app-network

volumes:
  pgdata:

networks:
  app-network:
    driver: bridge

In this example:

  • pgdata is a named volume that persists PostgreSQL data even after containers are removed.
  • app-network is a custom bridge network that isolates the web and db services from other containers.
  • depends_on ensures the database starts before the web app, though it doesn't wait for the DB to be fully ready; see the section on health checks below for better dependency handling.

Health Checks and Dependency Management

Using depends_on alone doesn't guarantee a service is ready to accept connections. For example, PostgreSQL might still be initializing when the web app tries to connect.

Add a health check to the database service:

db:
  image: postgres:15
  healthcheck:
    test: ["CMD-SHELL", "pg_isready -U user -d myapp"]
    interval: 10s
    timeout: 5s
    retries: 5
    start_period: 40s
  volumes:
    - pgdata:/var/lib/postgresql/data
  environment:
    POSTGRES_DB: myapp
    POSTGRES_USER: user
    POSTGRES_PASSWORD: password

Now, use condition: service_healthy in depends_on:

web:
  build: .
  ports:
    - "5000:5000"
  depends_on:
    db:
      condition: service_healthy

This ensures the web service only starts once the database reports a healthy status.
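Application code sometimes needs the same retry behavior, for instance when opening a database connection at startup. A minimal generic sketch mirroring the healthcheck's interval/retries semantics (probe, interval, and retries are placeholder names, not a Compose API):

```python
import time

def wait_until_healthy(probe, retries=5, interval=0.01):
    # Call probe() up to `retries` times, sleeping `interval` seconds
    # between attempts; returns the attempt number that succeeded.
    for attempt in range(1, retries + 1):
        if probe():
            return attempt
        time.sleep(interval)
    raise RuntimeError("service never became healthy")

# Example probe that succeeds on the third attempt,
# standing in for something like pg_isready.
attempts = {"n": 0}
def fake_pg_isready():
    attempts["n"] += 1
    return attempts["n"] >= 3

print(wait_until_healthy(fake_pg_isready))  # 3
```

Combining a Compose healthcheck with an in-application retry loop like this covers both startup ordering and transient restarts.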

Scaling Services

Docker Compose allows you to scale services horizontally. For example, to run three instances of your web service:

docker compose up --scale web=3

Each instance will have a unique container name (e.g., my-flask-app-web-1, my-flask-app-web-2, etc.).

Important: Scaling works best with stateless services. If your service writes to local storage or uses in-memory sessions, scaling may cause inconsistencies. Use external storage (e.g., Redis, database) for shared state.
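A tiny Python sketch (the Replica class is hypothetical, standing in for service instances) shows why in-memory session state breaks under scaling while a shared store does not:

```python
class Replica:
    # Each replica has its own local memory; optionally all replicas
    # can share one external store (e.g. Redis in production).
    def __init__(self, shared=None):
        self.local = {}
        self.shared = shared

    def set_session(self, key, value):
        store = self.shared if self.shared is not None else self.local
        store[key] = value

    def get_session(self, key):
        store = self.shared if self.shared is not None else self.local
        return store.get(key)

# In-memory sessions: log in on replica 1, next request hits replica 2
r1, r2 = Replica(), Replica()
r1.set_session("user42", "logged-in")
print(r2.get_session("user42"))  # None: the session is lost

# Shared store: state survives regardless of which replica serves the request
shared = {}
s1, s2 = Replica(shared), Replica(shared)
s1.set_session("user42", "logged-in")
print(s2.get_session("user42"))  # logged-in
```

This is exactly the failure mode behind "random logouts" after scaling a web service that keeps sessions in process memory.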

Best Practices

Use .dockerignore

Just as you use .gitignore to exclude files from version control, use a .dockerignore file to exclude unnecessary files from being copied into your Docker images. This improves build speed and reduces image size.

Example .dockerignore:

.git
node_modules
__pycache__
.env
docker-compose.yml
README.md
*.log
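Conceptually, each pattern is matched against path components. A rough Python approximation using fnmatch (real .dockerignore matching follows Go's filepath.Match rules plus ** globs and ! exceptions, so treat this as illustrative only):

```python
import fnmatch

ignore_patterns = [".git", "node_modules", "__pycache__", ".env", "*.log"]

def is_ignored(path):
    # A path is excluded if any of its components matches any pattern.
    return any(fnmatch.fnmatch(part, pat)
               for part in path.split("/")
               for pat in ignore_patterns)

print(is_ignored("app/debug.log"))  # True
print(is_ignored("app/main.py"))    # False
```

Checking a few sample paths like this before a build is a quick way to sanity-check that secrets (.env) and junk (*.log) really stay out of the image context.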

Minimize Image Layers and Use Multi-Stage Builds

Each instruction in a Dockerfile creates a layer. Too many layers increase image size and build time. Combine related commands using &&:

RUN apt-get update && apt-get install -y \
    python3-pip \
    python3-dev \
    && rm -rf /var/lib/apt/lists/*

Use multi-stage builds to separate build-time dependencies from runtime dependencies:

FROM python:3.11-slim AS builder
WORKDIR /app
COPY requirements.txt .
RUN pip install --user --no-cache-dir -r requirements.txt

FROM python:3.11-slim
WORKDIR /app
COPY --from=builder /root/.local /root/.local
COPY . .
ENV PATH=/root/.local/bin:$PATH
CMD ["gunicorn", "--bind", "0.0.0.0:5000", "--workers", "2", "app:app"]

This reduces the final image size by excluding pip, compilers, and development headers.

Use Environment Variables for Configuration

Never hardcode secrets or environment-specific values in your docker-compose.yml. Use environment variables and load them via a .env file.

Create a .env file:

DB_PASSWORD=mysecretpassword
REDIS_PORT=6379
APP_ENV=production

Reference them in docker-compose.yml:

services:
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: ${DB_PASSWORD}

Docker Compose automatically loads variables from .env in the same directory. You can also specify a custom file:

docker compose --env-file ./config/prod.env up
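Variable interpolation itself can be sketched in a few lines of Python. This is a simplified illustration of the ${VAR} and ${VAR:-default} forms only, not Compose's full substitution grammar (which also supports ${VAR-default}, ${VAR:?error}, and bare $VAR):

```python
import re

def interpolate(text, env):
    # Replace ${VAR} with env[VAR]; ${VAR:-default} falls back to
    # the default when VAR is unset (simplified model).
    def repl(match):
        name, _, default = match.group(1).partition(":-")
        return env.get(name, default)
    return re.sub(r"\$\{([^}]+)\}", repl, text)

env = {"DB_PASSWORD": "mysecretpassword"}
print(interpolate("POSTGRES_PASSWORD: ${DB_PASSWORD}", env))  # ...mysecretpassword
print(interpolate("PORT: ${REDIS_PORT:-6379}", env))          # PORT: 6379
```

Running docker compose config shows you the real substituted values, which is the easiest way to debug interpolation problems.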

Avoid Running Containers as Root

Running containers as the root user is a security risk. Create a non-root user in your Dockerfile:

FROM python:3.11-slim
RUN groupadd --gid 1001 appuser \
    && useradd --uid 1001 --gid appuser --create-home appuser
USER appuser
WORKDIR /home/appuser
COPY --chown=appuser:appuser requirements.txt .
RUN pip install --user --no-cache-dir -r requirements.txt
COPY --chown=appuser:appuser . .
ENV PATH=/home/appuser/.local/bin:$PATH
CMD ["gunicorn", "--bind", "0.0.0.0:5000", "--workers", "2", "app:app"]

Use Specific Image Tags, Not latest

Using image: nginx:latest can cause unpredictable behavior during deployments. The latest tag changes without warning, breaking your application.

Always pin versions:

image: nginx:1.25-alpine

image: postgres:15.4

This ensures reproducible builds and makes rollbacks easier.

Organize Projects with Compose Profiles

Docker Compose supports profiles to conditionally include services based on context. This is ideal for services like monitoring tools, debuggers, or test databases that you only need during development.

services:
  web:
    build: .
    ports:
      - "5000:5000"

  db:
    image: postgres:15
    profiles:
      - dev

  redis:
    image: redis:alpine
    profiles:
      - dev

  prometheus:
    image: prom/prometheus
    profiles:
      - monitoring

Start only the web service:

docker compose up

Start dev services:

docker compose --profile dev up

Start monitoring:

docker compose --profile monitoring up

Log Management and Monitoring

By default, Docker Compose logs output to the terminal. For production use, configure logging drivers to send logs to centralized systems like ELK, Loki, or Splunk.

services:
  web:
    image: my-app
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"

Or use syslog:

logging:
  driver: syslog
  options:
    syslog-address: "tcp://192.168.1.10:514"

Tools and Resources

Official Documentation

The authoritative source for Docker Compose is the Docker Compose documentation. It includes detailed reference material for every version, syntax, and directive.

Compose File Validator

Use the Docker Compose CLI to validate your YAML files:

docker compose config

This command parses your compose file and outputs the resolved configuration, helping you debug variable interpolation, overrides, and service dependencies.

Visual Editors

While YAML is human-readable, complex files benefit from visual editors:

  • Visual Studio Code with the Docker extension provides syntax highlighting, linting, and auto-completion.
  • JetBrains IDEs (PyCharm, WebStorm) offer built-in Docker Compose support.
  • Compose Editor by Docker: A web-based tool for generating compose files visually (experimental).

Template Repositories

Start with proven templates, such as the official docker/awesome-compose repository on GitHub, which collects ready-to-run Compose files for common stacks.

CI/CD Integration

Docker Compose integrates seamlessly with CI/CD pipelines:

  • GitHub Actions: Use docker/setup-compose-action to install Compose in workflows.
  • GitLab CI: Use the Docker-in-Docker service to run docker compose up for integration tests.
  • CircleCI: Use the docker executor and install Compose via pip install docker-compose.

Example GitHub Actions workflow:

name: Test App

on: [push]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Set up Docker Compose
        uses: docker/setup-compose-action@v2
      - name: Start services
        run: docker compose up -d
      - name: Run tests
        run: docker compose exec web python -m pytest
      - name: Stop services
        run: docker compose down

Monitoring and Debugging Tools

  • Portainer: A web UI for managing Docker containers and Compose stacks.
  • Docker Stats: Monitor resource usage with docker compose stats.
  • Logspout: Routes container logs to external systems.
  • Watchtower: Automatically updates containers when new images are pushed.

Real Examples

Example 1: WordPress with MySQL and Redis

A common production-ready stack for WordPress:

version: '3.8'

services:
  db:
    image: mysql:8.0
    volumes:
      - db_data:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: ${DB_ROOT_PASSWORD}
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - wp_network

  wordpress:
    image: wordpress:latest
    ports:
      - "8000:80"
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: wordpress
      WORDPRESS_DB_NAME: wordpress
    volumes:
      - wp_data:/var/www/html
    depends_on:
      db:
        condition: service_healthy
    networks:
      - wp_network
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s

  redis:
    image: redis:alpine
    networks:
      - wp_network

volumes:
  db_data:
  wp_data:

networks:
  wp_network:
    driver: bridge

Use docker compose up -d to run in detached mode. Access WordPress at http://localhost:8000.

Example 2: Microservice Architecture with Node.js, Python, and RabbitMQ

Three services communicating via message queue:

version: '3.8'

services:
  api-node:
    build: ./api-node
    ports:
      - "3000:3000"
    environment:
      - RABBITMQ_URL=amqp://rabbitmq
    depends_on:
      rabbitmq:
        condition: service_healthy
    networks:
      - microservice_net

  processor-python:
    build: ./processor-python
    environment:
      - RABBITMQ_URL=amqp://rabbitmq
    depends_on:
      rabbitmq:
        condition: service_healthy
    networks:
      - microservice_net

  rabbitmq:
    image: rabbitmq:3.11-management
    ports:
      - "15672:15672"
      - "5672:5672"
    healthcheck:
      test: ["CMD", "rabbitmq-diagnostics", "-q", "status"]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - microservice_net

networks:
  microservice_net:
    driver: bridge

The Node.js API accepts requests and publishes messages to RabbitMQ. The Python service consumes those messages and processes them. This decoupled architecture is scalable and fault-tolerant.
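The decoupling can be illustrated in miniature with Python's standard queue module standing in for RabbitMQ (illustrative only; a real deployment would use a client library such as pika against the broker):

```python
import queue
import threading

# A thread-safe queue decouples the producer (the Node.js API in the
# example) from the consumer (the Python processor).
broker = queue.Queue()
results = []

def api_publish(task):
    broker.put(task)  # the "API" returns immediately; work happens elsewhere

def processor():
    while True:
        task = broker.get()
        if task is None:  # sentinel value tells the worker to stop
            break
        results.append(f"processed:{task}")

worker = threading.Thread(target=processor)
worker.start()
for i in range(3):
    api_publish(f"job-{i}")
api_publish(None)
worker.join()
print(results)  # ['processed:job-0', 'processed:job-1', 'processed:job-2']
```

Because the producer never waits on the consumer, either side can be scaled or restarted independently, which is the property the RabbitMQ stack above provides across containers.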

Example 3: Local Development with MongoDB, Admin UI, and Seed Data

For developers working with MongoDB:

version: '3.8'

services:
  mongodb:
    image: mongo:6.0
    ports:
      - "27017:27017"
    volumes:
      - mongo_data:/data/db
    environment:
      MONGO_INITDB_ROOT_USERNAME: admin
      MONGO_INITDB_ROOT_PASSWORD: password
    healthcheck:
      test: ["CMD", "mongosh", "--quiet", "--eval", "db.adminCommand('ping')"]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - dev_net

  mongo-express:
    image: mongo-express
    ports:
      - "8081:8081"
    environment:
      ME_CONFIG_MONGODB_ADMINUSERNAME: admin
      ME_CONFIG_MONGODB_ADMINPASSWORD: password
      ME_CONFIG_MONGODB_SERVER: mongodb
    depends_on:
      - mongodb
    networks:
      - dev_net

  seed-data:
    image: node:18-alpine
    volumes:
      - ./seed:/seed
    command: >
      sh -c "sleep 10 && node /seed/seed.js"
    depends_on:
      mongodb:
        condition: service_healthy
    networks:
      - dev_net

volumes:
  mongo_data:

networks:
  dev_net:
    driver: bridge

The seed-data container waits for MongoDB to be ready, then runs a script to populate the database with sample data. Access the admin UI at http://localhost:8081.

FAQs

What is the difference between Docker and Docker Compose?

Docker is the core platform that allows you to build, run, and manage individual containers. Docker Compose is a higher-level tool that orchestrates multiple containers defined in a YAML file, automating their startup, networking, and lifecycle management.

Can Docker Compose be used in production?

Yes, but with caveats. Docker Compose is excellent for small-scale, single-host production deployments. For larger, distributed systems, consider Kubernetes, Nomad, or ECS. Compose lacks built-in auto-scaling, rolling updates, and service discovery features found in orchestration platforms.

Why is my container restarting continuously?

Check logs with docker compose logs <service>. Common causes include:

  • Missing environment variables
  • Port conflicts
  • Application crashes due to misconfiguration
  • Health check failures

How do I update a service without downtime?

Docker Compose doesn't natively support zero-downtime deployments. To minimize disruption:

  • Use docker compose pull to fetch the new image
  • Use docker compose up -d to recreate containers one at a time
  • Ensure your application supports graceful shutdown and health checks

For true zero-downtime, use Kubernetes or a load balancer with multiple replicas.

Can I use Docker Compose with Windows containers?

Yes, but you must switch Docker Desktop to Windows container mode. The syntax remains the same, but images must be Windows-based (e.g., mcr.microsoft.com/windows/servercore:ltsc2022).

How do I backup Docker Compose data?

Backup named volumes using:

docker run --rm -v <volume_name>:/volume -v $(pwd):/backup alpine tar czf /backup/backup.tar.gz -C /volume .

For databases, use native backup tools (e.g., pg_dump, mongodump) inside the container.

What happens if I delete the docker-compose.yml file?

Deleting the file doesn't affect running containers. However, you'll lose the configuration needed to recreate or manage them. Always commit your docker-compose.yml to version control.

How do I access a container's shell?

Use:

docker compose exec <service> sh

or for bash:

docker compose exec <service> bash

Conclusion

Docker Compose is an indispensable tool for modern software development. It transforms the chaotic process of managing multiple containers into a streamlined, repeatable, and version-controlled workflow. By defining your application's infrastructure as code in a simple YAML file, you ensure consistency across development, testing, and production environments. Whether you're building a local development environment, running integration tests, or deploying a small-scale microservice, Docker Compose reduces complexity and accelerates delivery.

This guide has walked you through everything from installing Docker Compose and writing your first docker-compose.yml file to implementing best practices, using real-world examples, and troubleshooting common issues. You now understand how to leverage volumes, networks, health checks, profiles, and environment variables to build robust, scalable, and maintainable multi-container applications.

As you continue your journey, remember: the key to mastering Docker Compose lies in practice. Start small: containerize a simple app. Then expand: add a database, a cache, a message queue. Experiment with overrides, scaling, and CI/CD integrations. The more you use it, the more intuitive it becomes.

Docker Compose isn't just a tool; it's a mindset. It encourages infrastructure as code, environment parity, and automation. These principles are foundational to DevOps and cloud-native development. By internalizing them, you're not just learning how to run containers; you're learning how to build resilient, modern software systems.