Docker Compose: Managing Multi-Container Applications

Hey there, thanks for stopping by! If you’ve been following my journey (and my previous project on microservices architecture might ring a bell), you know that juggling multiple containers in production can quickly turn into a logistical nightmare. Today, I want to share how Docker Compose can make this process a lot simpler, turning that chaos into a well-orchestrated symphony of services.

Why Docker Compose? A Bit of Backstory

In my previous project, I built a web application that required a web server, a PostgreSQL database, and a caching service to work together seamlessly. Initially, I managed each container using individual Docker commands, manually setting environment variables and linking containers. It was a nightmare, especially when I had to recreate the environment on a colleague’s machine or when managing multiple environments. In my case, we had separate environments for development, staging, UAT, and production. I kept thinking, “There has to be an easier way to manage these interdependent services.” That’s when I discovered Docker Compose.

Docker Compose allows you to define and run multi-container applications with a single YAML file. It’s like having a blueprint for your entire stack that can be version-controlled and shared with your team. Plus, it removes the headache of remembering countless CLI options each time you want to bring your project up or tear it down.

Getting Started: The Basics

Before we dive into a complete example, let’s set the stage. Docker Compose uses a YAML file (usually named docker-compose.yml) where you describe all your services, networks, and volumes. Once defined, you can start your entire application with a single command.

A Simple Directory Layout

For our demonstration, let’s assume a directory structure like this:

Bash
my-app/
├── docker-compose.yml
├── web/
│   ├── Dockerfile
│   └── index.html
├── db/
│   └── init.sql
└── cache/
    └── (optional configuration files)

Each folder represents a service. In this case, we have:

  • Web: A simple web server (using Nginx) to serve static content.
  • DB: A PostgreSQL database with an initialization script.
  • Cache: (Optional) For example, a Redis cache that could be added later.

Writing the Compose File

Let’s start with a basic docker-compose.yml file. Open your favorite text editor and create this file in your project’s root directory:

YAML
version: '3.8'

services:
  web:
    build: ./web
    ports:
      - "8080:80"
    depends_on:
      - db
      - cache
    environment:
      - ENV=development
    networks:
      - app-network

  db:
    image: postgres:13
    environment:
      POSTGRES_USER: myuser
      POSTGRES_PASSWORD: mypassword
      POSTGRES_DB: mydb
    volumes:
      - db-data:/var/lib/postgresql/data
      - ./db/init.sql:/docker-entrypoint-initdb.d/init.sql
    networks:
      - app-network

  cache:
    image: redis:alpine
    ports:
      - "6379:6379"
    networks:
      - app-network

volumes:
  db-data:

networks:
  app-network:

In this configuration:

  • The web service is built from the ./web directory. It maps port 80 inside the container to port 8080 on the host.
  • The db service uses the official PostgreSQL image, with environment variables to set up the database, a named volume to persist its data, and an init script mounted from ./db/init.sql (a sample script follows this list).
  • The cache service uses Redis. I included it here to show how you can add additional services without too much fuss.
  • All services share a common network, app-network, ensuring they can communicate with one another easily.
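
The compose file mounts ./db/init.sql into Postgres’s /docker-entrypoint-initdb.d directory, so it runs automatically the first time the database initializes. Here is a minimal placeholder script; the table itself is purely illustrative:

SQL
-- Runs once, when the Postgres data directory is first created.
CREATE TABLE IF NOT EXISTS visits (
    id SERIAL PRIMARY KEY,
    visited_at TIMESTAMP NOT NULL DEFAULT NOW()
);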

Building the Web Service

Let’s set up the web service. Create a Dockerfile inside the web/ folder:

Dockerfile
# Use the official Nginx image as the base image
FROM nginx:latest

# Copy our custom index.html into the container’s default web directory
COPY index.html /usr/share/nginx/html/index.html

Now, create an index.html file in the same folder:

HTML
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <title>Welcome to My App</title>
</head>
<body>
  <h1>Hello, Docker Compose!</h1>
  <p>This demo is a part of my journey into managing multi-container applications. Enjoy your stay!</p>
</body>
</html>

This simple setup will allow you to see your Nginx server in action as soon as everything is up and running.


Deep Dive: How Docker Compose Simplifies Workflows

One Command to Rule Them All

One of the best things about Docker Compose is its ability to bring up your entire environment with a single command:

Bash
docker-compose up --build

This command does a lot:

  • Builds your custom images (if necessary).
  • Starts your services in the correct order.
  • Creates networks and volumes as defined in your YAML file.
  • Attaches logs from each service to your terminal, so you can see what’s happening in real time.

It’s a massive improvement over running multiple docker run commands and juggling different configuration settings.
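
A few companion subcommands round out the day-to-day workflow; all of these are standard Compose commands:

Bash
docker-compose up -d        # start everything in the background
docker-compose ps           # list the running services and their port mappings
curl http://localhost:8080  # check that the Nginx service answers
docker-compose logs -f web  # follow the logs of a single service
docker-compose down         # stop and remove the containers and network
docker-compose down -v      # ...and also remove named volumes (careful: this deletes db-data)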

Handling Dependencies

Notice the depends_on keyword in our YAML file. It ensures that the db and cache containers are started before the web service. However, a word of caution: depends_on only controls start order; it does not guarantee that the dependent services are actually ready to accept connections before the web service tries to use them. For a more robust solution, add health checks.

Adding a Health Check Example

Let’s update the db service in our docker-compose.yml to include a basic health check:

YAML
  db:
    image: postgres:13
    environment:
      POSTGRES_USER: myuser
      POSTGRES_PASSWORD: mypassword
      POSTGRES_DB: mydb
    volumes:
      - db-data:/var/lib/postgresql/data
      - ./db/init.sql:/docker-entrypoint-initdb.d/init.sql
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U myuser"]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - app-network

This health check uses PostgreSQL’s pg_isready command to verify that the database is accepting connections. Docker runs the check every 10 seconds and, after five consecutive failures, marks the container as unhealthy. Once a service reports its health, other services can wait for it instead of guessing.
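
With the health check in place, newer Compose releases (those implementing the Compose specification) let depends_on wait for a healthy dependency rather than just a started container. A minimal sketch for our web service:

YAML
  web:
    build: ./web
    ports:
      - "8080:80"
    depends_on:
      db:
        condition: service_healthy   # wait until the health check passes
      cache:
        condition: service_started   # default behaviour: just wait for the container to start
    networks:
      - app-network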


Advanced Docker Compose Features

Environment Files

For sensitive data like passwords and API keys, hardcoding values isn’t the best practice. Instead, Docker Compose can load environment variables from an .env file:

1. Create an .env file in your project root:

Bash
POSTGRES_USER=myuser
POSTGRES_PASSWORD=mypassword
POSTGRES_DB=mydb

2. Update your docker-compose.yml to reference these variables:
YAML
db:
  image: postgres:13
  environment:
    - POSTGRES_USER=${POSTGRES_USER}
    - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
    - POSTGRES_DB=${POSTGRES_DB}
  volumes:
    - db-data:/var/lib/postgresql/data
    - ./db/init.sql:/docker-entrypoint-initdb.d/init.sql
  networks:
    - app-network

Using an .env file keeps your credentials out of the version-controlled Compose file (just remember to add .env to your .gitignore) and makes it easier to adjust configuration across environments (development, staging, production).
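
If you maintain one env file per environment, recent Compose releases also let you choose which file to load at run time. The .env.staging name below is just an example:

Bash
# Load a specific environment file instead of the default .env
docker-compose --env-file .env.staging up -d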

Scaling Services

Another handy feature is the ability to scale a service horizontally. Suppose you wanted to run multiple instances of your web service to handle increased load. You can do this with a single command:

Bash
docker-compose up --scale web=3

This command instructs Docker Compose to run three instances of the web service, which is especially useful for testing load balancing and simulating real-world traffic. One caveat: as written, our web service pins host port 8080, and only one replica can bind to it, so scaling requires letting Docker assign the host ports instead, as sketched below.
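
A minimal way to make the web service scalable is to stop pinning the host port and let Docker pick one per replica; docker-compose ps then shows which host port each instance received:

YAML
  web:
    build: ./web
    ports:
      - "80"   # publish container port 80 on an ephemeral host port, one per replica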

Override Files for Local Development

Sometimes, you may want to tweak your configuration for local development without affecting the production settings. Docker Compose supports an override file—typically named docker-compose.override.yml. This file automatically gets merged with the main configuration.

For example, you might have this in docker-compose.override.yml:

YAML
services:
  web:
    environment:
      - ENV=development
    volumes:
      - ./web:/usr/share/nginx/html

Here, you override the ENV variable and mount your local web directory as a volume for live code updates. This makes development a lot more agile.
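
The override file is picked up automatically, but you can also be explicit about which files to combine. For example, to layer a separate production overlay on top of the base file (docker-compose.prod.yml here is a hypothetical name), pass repeated -f flags; later files override earlier ones:

Bash
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d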


Common Pitfalls and Troubleshooting

While Docker Compose is incredibly powerful, I’ve run into a few common issues along the way. Here are some tips to help you avoid or solve them:

1. Port Conflicts

If you’re running multiple projects that use Docker Compose, be mindful of port conflicts. For instance, if two projects try to map their web service to port 8080 on your host, you’ll run into problems. The solution is to either change the port mapping in the YAML file or stop the conflicting service.
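
The remap itself is a one-line change, picking any free host port while leaving the container port alone:

YAML
  web:
    build: ./web
    ports:
      - "8081:80"   # host port 8081 instead of 8080; port 80 inside the container stays the same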

2. Data Persistence

For services like databases, losing data when containers are removed and recreated (after docker-compose down or a rebuild, for example) can be a nightmare. Always use named volumes (as shown with db-data) so the data lives outside the container lifecycle. In the early stages of development you might not notice the loss immediately, but it can become a major headache when you least expect it.
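
Named volumes survive docker-compose down (as long as you don't pass -v) and can be inspected with the regular Docker CLI. Compose prefixes the volume name with the project name, which defaults to the directory name, so the exact name below is an assumption based on our my-app folder:

Bash
docker volume ls                      # list all volumes on the host
docker volume inspect my-app_db-data  # show where the Postgres data actually lives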

3. Dependency Timing

As mentioned earlier, depends_on only ensures container start order—it does not check service readiness. To handle this, consider implementing retry logic in your application or utilizing health checks, which can significantly reduce startup issues.
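
If you'd rather handle readiness inside the container, a small entrypoint wrapper can do the waiting. This is only a sketch: it assumes the application image has the Postgres client tools (pg_isready) installed and that the script is configured as the container's entrypoint:

Bash
#!/bin/sh
# Keep polling the db service until it accepts connections, then start the real command.
until pg_isready -h db -U myuser; do
  echo "Database not ready yet; retrying in 2 seconds..."
  sleep 2
done
exec "$@"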

4. Logging Overload

When running multiple services, the aggregated logs in your terminal can become overwhelming. Run docker-compose logs -f <service> to follow a single service’s output instead, and use the logging options in the Compose file to control the log driver and how much log data is kept.
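
On top of filtering with docker-compose logs, you can cap how much log data each service keeps. This sketch uses the default json-file driver with rotation options:

YAML
  web:
    build: ./web
    logging:
      driver: "json-file"
      options:
        max-size: "10m"   # rotate the log file once it reaches 10 MB
        max-file: "3"     # keep at most three rotated files per container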


Real-World Use Cases and Lessons Learned

Working with Docker Compose has been a game-changer, especially when collaborating with teams. Here are a few scenarios where Docker Compose really shines:

Collaborative Development

When every team member uses the same docker-compose.yml, it minimizes the “it works on my machine” problem. You can version-control your Compose file along with your code, ensuring that everyone is on the same page.

Continuous Integration (CI)

In CI/CD pipelines, Docker Compose can spin up the entire application stack for integration tests. This ensures that your tests run in an environment that closely mirrors production, leading to more reliable results.
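
A typical pipeline step might look something like this; the command run inside the container is a placeholder for whatever test suite you use:

Bash
docker-compose up -d --build                                   # build and start the full stack
docker-compose exec -T web sh -c "echo 'run your tests here'"  # -T disables TTY allocation for CI
docker-compose down -v                                         # tear everything down, volumes included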

Experimentation and Prototyping

If you’re testing out a new feature or integrating a new service, Docker Compose makes it easy to add, remove, or modify services without a major overhaul. This flexibility encourages experimentation and speeds up the prototyping phase.

Multi-Tier Architectures

From microservices to monolithic applications with distinct service layers (like a separate API, web front-end, and database), Docker Compose handles it all. It enables you to break down complex systems into manageable pieces, each running in its own container but working together seamlessly.


Docker Compose has been a vital part of my development workflow, and I hope this deep dive shows you just how powerful and flexible it can be. Whether you’re managing a few containers or orchestrating a multi-service architecture, Docker Compose takes the pain out of container management, allowing you to focus more on writing great code.

In this post, we covered:

  • The basics of Docker Compose and its benefits.
  • A step-by-step example of setting up a multi-container application.
  • Advanced features like environment files, service scaling, and override files.
  • Common pitfalls and troubleshooting tips.
  • Real-world use cases to illustrate its versatility in collaborative, CI/CD, and prototyping scenarios.

I encourage you to try it out on your next project. Tinker with your own docker-compose.yml, explore the possibilities of scaling and health checks, and see how it transforms your development experience.

Assi Arai