How to Run Containers

Nov 6, 2025 - 10:07

Containers have revolutionized the way software is developed, deployed, and scaled. Whether you're a developer, DevOps engineer, or system administrator, understanding how to run containers is no longer optional; it's essential. Containers provide a lightweight, portable, and consistent environment for applications, ensuring they run reliably across different computing environments. From local development machines to cloud-native production clusters, containers abstract away the underlying infrastructure, allowing teams to focus on building features rather than managing dependencies.

This tutorial offers a comprehensive, step-by-step guide on how to run containers effectively. We'll walk you through the fundamentals of containerization, demonstrate practical execution using industry-standard tools like Docker and Podman, explore best practices for security and performance, highlight essential tools and resources, and provide real-world examples you can replicate. By the end of this guide, you'll have the knowledge and confidence to run containers in any environment, from your laptop to enterprise-grade orchestration platforms.

Step-by-Step Guide

Understanding Containerization Basics

Before running containers, it's critical to understand what they are and how they differ from traditional virtual machines (VMs). A container is a standardized unit of software that packages code and all its dependencies (libraries, system tools, configuration files, and runtime) into a single, portable bundle. Unlike VMs, which virtualize the entire operating system, containers share the host OS kernel and isolate processes using namespaces and cgroups. This makes containers significantly faster to start, more resource-efficient, and easier to scale.
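On a Linux host you can observe these namespaces directly: each process's namespace memberships appear as symlinks under /proc/<pid>/ns. A minimal sketch (Linux only; on other systems the directory simply won't exist):

```python
import os

NS_DIR = "/proc/self/ns"  # Linux-only: namespace membership of this process

if os.path.isdir(NS_DIR):
    for ns in sorted(os.listdir(NS_DIR)):
        # Each symlink names a namespace type and inode, e.g. "pid:[4026531836]".
        # Processes that share an inode share that namespace - this is the
        # kernel mechanism containers build their isolation on.
        print(ns, "->", os.readlink(os.path.join(NS_DIR, ns)))
else:
    print("Not a Linux host; /proc namespaces unavailable")
```

Run the same snippet inside a container and you'll see different inode numbers for the isolated namespaces than on the host.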

The most widely adopted containerization platform is Docker, though alternatives like Podman, LXC, and containerd are gaining traction. For this guide, we'll focus on Docker as the primary tool, with notes on Podman where relevant. Docker simplifies container lifecycle management with a clean CLI, a vast ecosystem of pre-built images, and robust documentation.

Prerequisites

To follow along, ensure your system meets the following requirements:

  • A modern operating system: Linux (Ubuntu 20.04+, CentOS 8+, etc.), macOS (10.15+), or Windows 10/11 Pro or Enterprise (with WSL2 enabled)
  • At least 4GB of RAM and 10GB of free disk space
  • Internet connectivity for downloading container images

For Windows users, enable WSL2 (Windows Subsystem for Linux 2) and install a Linux distribution from the Microsoft Store (e.g., Ubuntu). For macOS, Docker Desktop is the recommended solution. On Linux, you can install Docker Engine directly via package managers.

Installing Docker

Installing Docker varies slightly by platform. Below are the commands for the most common environments.

On Ubuntu/Debian

Update your package index and install required dependencies:

sudo apt update

sudo apt install apt-transport-https ca-certificates curl software-properties-common

Add Docker's official GPG key:

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg

Add the Docker repository:

echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

Install Docker Engine:

sudo apt update

sudo apt install docker-ce docker-ce-cli containerd.io

Verify the installation:

sudo docker --version

On macOS

Download Docker Desktop for Mac from docker.com. Open the .dmg file, drag Docker to your Applications folder, and launch Docker Desktop. The application will automatically configure the environment and start the Docker daemon. Verify with:

docker --version

On Windows (with WSL2)

Install Docker Desktop for Windows from docker.com. During installation, ensure "Use WSL 2 instead of Hyper-V" is selected. After installation, restart your system. Open PowerShell and run:

docker --version

Running Your First Container

Now that Docker is installed, let's run your first container. The classic Hello, World! example is an excellent starting point.

docker run hello-world

Docker will check if the hello-world image exists locally. If not, it will pull it from Docker Hub, a public registry of container images. Once downloaded, Docker creates a container from the image and runs it. The container executes a simple program that prints a welcome message and then exits.

You'll see output similar to:

Unable to find image 'hello-world:latest' locally

latest: Pulling from library/hello-world

2db29710123e: Pull complete

Digest: sha256:1a523af650137b7accdaed3626b6575876496914468836151757498932926205

Status: Downloaded newer image for hello-world:latest

Hello from Docker!

This message shows that your installation appears to be working correctly.

...

This confirms Docker is properly installed and functional.

Running Interactive Containers

To interact with a container, use the -it flags. For example, run an Ubuntu container with an interactive shell:

docker run -it ubuntu /bin/bash

This command:

  • docker run: Starts a new container
  • -it: Enables interactive mode (i = interactive, t = allocate a pseudo-TTY)
  • ubuntu: Specifies the image to use (from Docker Hub)
  • /bin/bash: The command to execute inside the container

You'll now be inside a Bash shell inside the Ubuntu container. Try running commands like ls, cat /etc/os-release, or apt update. When you're done, type exit to leave the container.

Important: The container stops when the main process (in this case, Bash) exits. To restart it later, you'll need to use docker start and docker attach.

Running Background (Detached) Containers

For long-running services like web servers or databases, you'll want to run containers in detached mode using the -d flag.

Let's run an Nginx web server:

docker run -d -p 8080:80 --name my-nginx nginx

Breakdown:

  • -d: Run container in detached mode (in the background)
  • -p 8080:80: Map port 8080 on the host to port 80 in the container
  • --name my-nginx: Assign a custom name to the container
  • nginx: The image to use

Verify the container is running:

docker ps

You should see output listing your my-nginx container with its status as Up. Open your browser and navigate to http://localhost:8080. You'll see the default Nginx welcome page.

Managing Container Lifecycle

Once containers are running, you'll need to manage them effectively. Here are the most essential commands:

  • docker ps: List running containers
  • docker ps -a: List all containers (including stopped ones)
  • docker stop <container_name_or_id>: Stop a running container
  • docker start <container_name_or_id>: Start a stopped container
  • docker restart <container_name_or_id>: Restart a container
  • docker rm <container_name_or_id>: Remove a stopped container
  • docker rmi <image_name>: Remove a local image
  • docker logs <container_name_or_id>: View container logs
  • docker exec -it <container_name_or_id> /bin/bash: Open a shell in a running container

For example, to stop and remove the Nginx container:

docker stop my-nginx

docker rm my-nginx

To remove the image entirely:

docker rmi nginx

Building Your Own Container Image

While pre-built images are convenient, you'll eventually need to create custom images for your applications. This is done using a Dockerfile, a text file containing instructions to build an image.

Create a new directory for your project:

mkdir my-app

cd my-app

Create a file named Dockerfile (no extension):

FROM python:3.11-slim

WORKDIR /app

COPY requirements.txt .

RUN pip install --no-cache-dir -r requirements.txt

COPY . .

CMD ["python", "app.py"]

Now create a simple Python app. Create app.py:

print("Hello from a custom container!")

while True:
    pass

Create requirements.txt:

Flask==2.3.3

Build the image:

docker build -t my-python-app .

The -t flag tags the image with a name. Once built, run it:

docker run -it my-python-app

You'll see Hello from a custom container! printed. This demonstrates how to package an application with its dependencies into a reusable, portable container.

Using Docker Compose for Multi-Container Applications

Most real-world applications consist of multiple services: a web server, a database, a cache, etc. Docker Compose allows you to define and run multi-container applications using a single YAML file.

Create a docker-compose.yml file:

version: '3.8'

services:
  web:
    build: .
    ports:
      - "5000:5000"
    depends_on:
      - redis
  redis:
    image: "redis:alpine"

Update app.py to use Redis:

from flask import Flask
import redis

app = Flask(__name__)
cache = redis.Redis(host='redis', port=6379)

@app.route('/')
def hello():
    count = cache.incr('hits')
    return f'Hello! This page has been viewed {count} times.'

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)

Make sure requirements.txt lists both Flask and the redis client package, since app.py now imports both.

Start the services:

docker-compose up

Docker Compose will build the web image, pull Redis, and start both containers. Access the app at http://localhost:5000. Refresh the page; the hit counter increases, proving Redis is working.

Best Practices

Use Minimal Base Images

Always prefer slim or alpine variants of base images (e.g., python:3.11-slim over python:3.11). Smaller images reduce download times, minimize attack surface, and improve security. Alpine Linux images are particularly popular due to their tiny size (often under 5MB).
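A related technique is a multi-stage build: install dependencies in a full-featured image, then copy only the resulting artifacts into a slim final image. A sketch for a hypothetical Python app (paths and file names are illustrative, not from this guide's example project):

```dockerfile
# Build stage: full image where build tools are available
FROM python:3.11 AS builder
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir --prefix=/install -r requirements.txt

# Final stage: slim image containing only installed packages and app code
FROM python:3.11-slim
COPY --from=builder /install /usr/local
WORKDIR /app
COPY . .
CMD ["python", "app.py"]
```

The build stage is discarded after the build, so compilers and caches never reach the shipped image.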

Minimize Image Layers

Each instruction in a Dockerfile creates a new layer. Combine related commands using && to reduce layers. For example:

RUN apt-get update && apt-get install -y curl && rm -rf /var/lib/apt/lists/*

This installs curl and cleans up package metadata in a single layer, avoiding unnecessary bloat.

Use .dockerignore

Just as you use .gitignore to exclude files from version control, use .dockerignore to exclude files from the build context. This improves build speed and prevents sensitive files (like .env, node_modules, or log files) from being copied into the image.

Example .dockerignore:

.env

node_modules

__pycache__

*.log

.DS_Store

Don't Run as Root

By default, containers run as the root user. This is a major security risk. Create a non-root user inside the container:

FROM python:3.11-slim

RUN groupadd --gid 1001 appuser && useradd --uid 1001 --gid appuser --create-home appuser

USER appuser

WORKDIR /home/appuser

COPY --chown=appuser:appuser . .

CMD ["python", "app.py"]

This reduces the impact of potential container escapes or privilege escalation attacks.

Set Resource Limits

Prevent containers from consuming excessive CPU or memory. Use flags like --memory and --cpus when running containers:

docker run -d --memory=256m --cpus=0.5 nginx

With Docker Compose:

services:
  web:
    image: nginx
    deploy:
      resources:
        limits:
          memory: 256M
          cpus: '0.5'

Use Environment Variables for Configuration

Never hardcode secrets or configuration values in images. Use environment variables instead:

docker run -e DB_HOST=db.example.com -e DB_PORT=5432 my-app

Or with Docker Compose:

environment:
  - DB_HOST=db
  - DB_PORT=5432
  - API_KEY=${API_KEY}

Load sensitive values from a file or system environment using ${VAR} syntax.
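For example, Compose substitutes ${VAR} from the shell environment or from a .env file next to the compose file. A minimal sketch (service and variable names are illustrative):

```yaml
# docker-compose.yml fragment: ${DB_HOST} and ${API_KEY} are resolved from
# the host environment or a .env file in the same directory
services:
  web:
    image: my-app
    environment:
      - DB_HOST=${DB_HOST}
      - API_KEY=${API_KEY}
```

Keep the .env file out of version control (and out of the build context via .dockerignore).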

Scan Images for Vulnerabilities

Regularly scan your images for known security vulnerabilities. Docker has built-in scanning via Docker Hub, or use tools like Trivy, Clair, or Snyk:

trivy image my-python-app

Fix vulnerabilities by updating base images or patching dependencies.

Label Your Images

Add metadata to your images using labels for better traceability:

docker build -t my-app:v1.2.3 \
  --label "maintainer=dev-team@example.com" \
  --label "version=1.2.3" \
  --label "build-date=$(date -u +%Y-%m-%dT%H:%M:%SZ)" .

Tools and Resources

Essential Tools

  • Docker Desktop: The most user-friendly way to run containers on macOS and Windows. Includes Docker Engine, Docker Compose, and Kubernetes integration.
  • Podman: A Docker-compatible container engine that runs without a daemon. Ideal for rootless containers and environments where security is paramount.
  • Docker Compose: For defining and running multi-container applications. Built into Docker Desktop; available separately on Linux.
  • Trivy: Open-source vulnerability scanner for containers and infrastructure as code.
  • Portainer: A lightweight GUI for managing Docker and Kubernetes environments. Great for visualizing containers, logs, and networks.
  • BuildKit: A modern backend for Docker builds with improved performance, caching, and security features. Enable with DOCKER_BUILDKIT=1.
  • Skopeo: A tool for copying, inspecting, and managing container images across registries without requiring Docker.

Public Container Registries

  • Docker Hub: The largest public registry, hosting official images from software vendors (e.g., nginx, postgres, redis).
  • GitHub Container Registry (GHCR): Integrated with GitHub Actions and repositories. Ideal for CI/CD pipelines.
  • Google Container Registry (GCR): Google Cloud's private container registry.
  • Azure Container Registry (ACR): Microsoft's managed container registry service.
  • Amazon Elastic Container Registry (ECR): AWS's secure, scalable container registry.

Community and Support

Engage with active communities for troubleshooting and learning:

  • Docker Community Forums: forums.docker.com
  • Stack Overflow: Search for tags like [docker] and [container]
  • Reddit (r/docker): Active discussions and real-world use cases
  • GitHub Issues: Report bugs or request features for Docker and related tools

Real Examples

Example 1: Running a PostgreSQL Database

Deploying a database in a container is a common use case. Here's how to run PostgreSQL with persistent storage:

docker run -d \
  --name postgres-db \
  -e POSTGRES_DB=myapp \
  -e POSTGRES_USER=admin \
  -e POSTGRES_PASSWORD=securepassword123 \
  -v postgres_data:/var/lib/postgresql/data \
  -p 5432:5432 \
  postgres:15

  • -v postgres_data:/var/lib/postgresql/data mounts a named volume so data persists beyond the container's lifecycle
  • -p 5432:5432 exposes the database port on the host

Connect to the database using a client like psql or DBeaver:

docker exec -it postgres-db psql -U admin -d myapp
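If other services depend on this database, it helps to wait until PostgreSQL is actually accepting connections rather than merely started. One common approach, sketched as a Compose fragment with a healthcheck (values are illustrative):

```yaml
services:
  db:
    image: postgres:15
    environment:
      POSTGRES_DB: myapp
      POSTGRES_USER: admin
      POSTGRES_PASSWORD: securepassword123
    healthcheck:
      # pg_isready ships inside the official postgres image
      test: ["CMD-SHELL", "pg_isready -U admin -d myapp"]
      interval: 5s
      timeout: 3s
      retries: 5
```

Dependent services can then use depends_on with condition: service_healthy to start only once the check passes.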

Example 2: Deploying a Node.js App with Nginx Reverse Proxy

Create a docker-compose.yml file:

version: '3.8'

services:
  node-app:
    build: ./node-app
    expose:
      - 3000
    environment:
      - NODE_ENV=production
    networks:
      - app-network
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf
    depends_on:
      - node-app
    networks:
      - app-network

networks:
  app-network:
    driver: bridge

Configure nginx/default.conf:

server {
    listen 80;

    location / {
        proxy_pass http://node-app:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}

Build and run:

docker-compose up --build

Your Node.js app is now accessible via Nginx on port 80, with traffic properly routed.
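The compose file above assumes a Dockerfile in ./node-app. A plausible minimal version, with the base image, file names, and start command as assumptions about the app's layout:

```dockerfile
FROM node:20-alpine
WORKDIR /app
# Copy manifests first so the dependency layer is cached between builds
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
```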

Example 3: CI/CD Pipeline with GitHub Actions

Automate container builds and pushes using GitHub Actions. Create .github/workflows/build-and-push.yml:

name: Build and Push Docker Image

on:
  push:
    branches: [ main ]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Login to GitHub Container Registry
        uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Build and Push
        uses: docker/build-push-action@v5
        with:
          context: .
          file: ./Dockerfile
          push: true
          tags: ghcr.io/${{ github.repository }}:latest

When you push to the main branch, GitHub Actions builds your image and pushes it to GitHub Container Registry automatically.

FAQs

What's the difference between a Docker image and a container?

An image is a read-only template with instructions for creating a container. Think of it as a class in object-oriented programming. A container is a running instance of that image, like an object instantiated from the class. You can create multiple containers from a single image, each with its own state and resources.
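The class/object analogy can be made concrete in plain Python: one template, many independent instances, each with its own mutable state:

```python
class Image:
    """Read-only template, analogous to a container image."""
    def __init__(self, name):
        self.name = name

    def run(self):
        # "docker run": instantiate a container from this image
        return Container(self)

class Container:
    """A running instance with its own mutable state."""
    def __init__(self, image):
        self.image = image
        self.state = {}

nginx = Image("nginx")              # one image...
c1, c2 = nginx.run(), nginx.run()   # ...many containers
c1.state["hits"] = 1                # each container's state is independent
print(c2.state)                     # -> {}
```

Just as here, deleting a container (the instance) never affects the image (the template) it was created from.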

Can I run containers on Windows without Docker Desktop?

Yes, but with limitations. You can use Windows Server containers with Docker Engine on Windows Server OS. For Windows 10/11, Docker Desktop (with WSL2) is the standard and recommended approach. Alternatively, use Podman with WSL2 for a daemonless experience.

How do I update a running container?

You cannot update a running container directly. Instead, stop and remove the old container, then pull the latest image and start a new one:

docker stop my-app

docker rm my-app

docker pull my-app:latest

docker run -d --name my-app my-app:latest

For production environments, use orchestration tools like Kubernetes or Docker Swarm to perform rolling updates with zero downtime.

Are containers secure?

Containers are secure when configured properly. They provide process isolation, but they share the host kernel, making them less isolated than VMs. To improve security: run as non-root, use minimal images, scan for vulnerabilities, limit resource usage, and avoid exposing unnecessary ports. Always follow the principle of least privilege.

Can containers replace virtual machines?

Containers are not a direct replacement for VMs. VMs are better for running multiple operating systems or when strong isolation is required (e.g., multi-tenant environments). Containers are ideal for microservices, stateless apps, and development workflows. Many organizations use both: containers on VMs for added security and scalability.

How much disk space do containers use?

Container images vary in size. A minimal Alpine image may be under 5MB, while a full Ubuntu image is around 70MB. Running containers add a thin writable layer on top of the image. Multiple containers sharing the same base image use less space due to layer sharing. Use docker system df to check disk usage.

What happens to data when a container stops?

Data written inside a container's filesystem is lost when the container is removed unless it's stored in a volume or bind mount. Use Docker volumes (-v or volumes: in Compose) for persistent data like databases, logs, or user uploads.
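In Compose, a named volume is declared once at the top level and then mounted by services. A minimal sketch:

```yaml
services:
  db:
    image: postgres:15
    volumes:
      # named volume : path inside the container
      - db_data:/var/lib/postgresql/data

volumes:
  db_data:  # managed by Docker; survives container removal
```

The volume persists across "docker-compose down" and back "up"; it is only deleted if you explicitly pass the -v flag.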

How do I debug a failing container?

Use docker logs <container> to view output. Use docker inspect <container> to check configuration, network settings, and mount points. Use docker exec -it <container> sh to enter the container and run diagnostic commands manually.

Conclusion

Running containers is no longer a niche skill; it's a foundational capability for modern software development and operations. From simple single-container apps to complex microservices architectures, containers provide consistency, efficiency, and scalability that traditional deployment methods simply cannot match. This guide has walked you through the entire lifecycle: from installation and basic execution to building custom images, managing multi-container applications, and applying security best practices.

Remember, the power of containers lies not just in their ability to run applications, but in how they enable collaboration, automation, and reliability across teams and environments. Whether you're deploying a static website, a machine learning model, or a distributed microservice system, containers give you the flexibility to do so with confidence.

As you continue your journey, explore orchestration platforms like Kubernetes, integrate containers into CI/CD pipelines, and experiment with serverless container platforms like AWS Fargate or Google Cloud Run. The ecosystem is vast, evolving rapidly, and full of opportunity.

Start small, build consistently, and prioritize security and efficiency. The future of software delivery is containerizedand now, youre equipped to lead it.