Deploying scalable and resilient web applications is essential in today’s fast-paced digital environment. One effective way to achieve this is by using Docker Swarm for orchestration and Nginx for load balancing. In this blog post, we will walk through deploying a Python web application using Docker Swarm, creating multiple replicas, and distributing incoming traffic using Nginx in a round-robin fashion.

Overview

We will build a simple Python web application that returns a message along with the hostname to demonstrate load balancing in action. The web application will be containerized using Docker, and Docker Swarm will manage multiple instances of the application (replicas). Nginx will act as a reverse proxy to distribute incoming requests across these replicas.

This setup can be adapted to any web application framework or language of your choice, making it a versatile solution for building scalable and highly available web services.

Prerequisites

Before we start, ensure you have the following installed on your system:

  • Docker and Docker Compose
  • Docker Swarm initialized on your machine
  • Basic knowledge of Docker, Nginx, and Python (or the web framework of your choice)

Project Structure

Let’s start by creating the project structure. Our project will include the following files:

├── Dockerfile
├── app.py
├── docker-compose.yml
└── nginx/
    └── nginx.conf
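
If you prefer to create these files from the command line, something like the following (assuming a Unix-like shell) scaffolds the layout above:

mkdir -p nginx
touch Dockerfile app.py docker-compose.yml nginx/nginx.conf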

1. Web Application Code

Below is the code for a simple Python web application that will return a message and the hostname of the container handling the request.

app.py:

import socket
from flask import Flask  # Example using Flask, but you can use any framework

app = Flask(__name__)

@app.route("/")
def hello():
    hostname = socket.gethostname()
    return {"message": "server is running", "hostname": hostname}

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
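
Before containerizing the app, you can sanity-check it locally. Assuming Python and pip are available on your machine, run the app in one terminal and query it from another:

pip install Flask
python app.py

curl http://localhost:5000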

2. Dockerfile

The Dockerfile defines the environment for running our web application. It uses an official Python image and installs the required packages.

Dockerfile:

# Use the official Python image
FROM python:3.11-slim

# Set the working directory
WORKDIR /app

# Copy the current directory contents into the container at /app
COPY . /app

# Install the necessary packages
RUN pip install Flask

# Make port 5000 available to the world outside this container
EXPOSE 5000

# Run app.py when the container launches
CMD ["python", "app.py"]

3. Nginx Configuration

The Nginx configuration file (nginx.conf) will define a reverse proxy server that distributes requests to the different replicas of our web application.

nginx/nginx.conf:

events {}

http {
    upstream flask-app {
        server flask-app:5000;
    }

    server {
        listen 80;
        listen [::]:80;
        server_name 127.0.0.1;

        location / {
            proxy_pass http://flask-app;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }
}

In this configuration, Nginx listens on port 80 and proxies requests to the flask-app upstream. Because the Nginx and flask-app services share the same network, the name flask-app is resolved by Docker Swarm's internal DNS, and requests are spread across the running replicas of the web application.

4. Docker Compose Configuration

The Docker Compose file defines and runs multi-container Docker applications. We’ll use it to define our services, including the web application and Nginx.

docker-compose.yml:

version: '3.8'

services:
  flask-app:
    image: mdaalam22/flask_app:1.0
    deploy:
      replicas: 3
      update_config:
        parallelism: 2
        delay: 10s
      restart_policy:
        condition: on-failure
    ports:
      - "5000"
    networks:
      - webnet

  nginx:
    image: nginx:latest
    ports:
      - "80:80"
    deploy:
      placement:
        constraints:
          - node.role == manager
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf
    networks:
      - webnet

networks:
  webnet:

Explanation of the Docker Compose Deployment Section

  • flask-app service:
    • This service runs the Python web application.
    • replicas: 3: Specifies that Docker Swarm should run three replicas (instances) of the web application, ensuring high availability and load distribution. The replica count can also be changed at runtime; see the command after this list.
    • update_config: Specifies how updates are rolled out. parallelism: 2 means that two replicas are updated at a time, and delay: 10s adds a 10-second pause between update batches.
    • restart_policy: Configures the containers to be restarted if they fail (on-failure).
  • nginx service:
    • This service runs Nginx, which acts as a reverse proxy to distribute incoming requests to the flask-app replicas.
    • ports: "80:80": Maps port 80 on the host machine to port 80 on the Nginx container.
    • volumes: Mounts the local nginx.conf configuration file into the Nginx container so that it uses our custom load-balancing settings.
    • deploy: The placement constraint ensures that this service only runs on manager nodes, which is useful when you have multiple nodes and want to control where services are scheduled.
  • networks: Defines a custom network called webnet (created as an overlay network by docker stack deploy) that allows the flask-app and nginx services to communicate internally.
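
The replica count is not fixed at deploy time. Once the stack is running (we deploy it as mystack in the next sections), you can scale the web application up or down on the fly, for example:

docker service scale mystack_flask-app=5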

Initializing Docker Swarm

Before deploying our stack, we need to initialize Docker Swarm on the host machine. This step is crucial as it turns your Docker Engine into a Swarm manager, allowing it to manage and orchestrate multiple containers across different nodes.

Run the following command to initialize Docker Swarm:

docker swarm init

After running this command, Docker Swarm is set up and ready to manage services and deploy stacks.
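
You can confirm that Swarm mode is active and that your machine is a manager by listing the nodes:

docker node ls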

Deploying to Docker Swarm

To deploy this stack to Docker Swarm, use the following command:

docker stack deploy -c docker-compose.yml mystack

This command creates a stack named mystack with the defined services. Docker Swarm manages the replicas, and Nginx distributes incoming traffic across them. Round-robin is Nginx's default load-balancing method: each incoming request is passed to the next server in the upstream group, which gives a fairly even distribution of requests among the available replicas.

Checking Deployed Services

After deploying the stack, you can check the status of the services and ensure everything is running as expected using the following command:

docker service ls

This command lists all the services running in your Docker Swarm, along with their current state and replica count. It’s a useful way to monitor and verify that your services are correctly deployed and running smoothly.
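
For a per-replica view, docker stack ps shows the individual tasks of the stack, including which node each replica runs on and its current state:

docker stack ps mystack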

Stopping and Removing the Docker Swarm Stack

If you ever need to stop and remove the entire stack, you can use the following command:

docker stack rm mystack

This command removes all services defined in the mystack stack, effectively stopping and cleaning up all resources associated with it.
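
If you also want to take the machine out of Swarm mode entirely after removing the stack, you can leave the Swarm (the --force flag is required on a manager node):

docker swarm leave --force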

Testing Load Balancing

Once deployed, you can test load balancing by sending multiple requests to the Nginx server:

curl http://localhost:80

Each request should return a different hostname, indicating that the requests are being balanced across different replicas.
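
A quick way to see the round-robin behaviour is to fire several requests in a row from a shell loop and compare the hostnames in the responses:

for i in $(seq 1 6); do curl -s http://localhost:80; done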

Handling Rolling Updates with Docker Swarm

Docker Swarm provides native support for rolling updates, ensuring zero downtime when deploying new versions. When you want to roll out a new image, you can either change the image tag in docker-compose.yml and re-run docker stack deploy, or update the service directly:

docker service update --image mdaalam22/flask_app:1.1 mystack_flask-app

Swarm will replace the old replicas with new ones in a controlled manner, as specified in the update_config section of the docker-compose.yml file. The parallelism and delay settings ensure that not all replicas are replaced at once, preventing downtime.
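
If an update misbehaves, Swarm can also roll the service back to its previous configuration:

docker service rollback mystack_flask-app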

Conclusion

By using Docker Swarm and Nginx, you can easily scale your web applications and achieve high availability with minimal effort. This setup is adaptable to any web application framework and provides a robust, production-ready deployment solution. Whether you’re running a small project or a large-scale application, Docker Swarm and Nginx offer the flexibility and power needed to handle modern web traffic efficiently.

Deploying applications with Docker Swarm not only simplifies scaling but also enhances reliability with its rolling update mechanism. And with Nginx serving as a powerful load balancer, your application can handle more traffic seamlessly, providing a better user experience.