Docker Swarm and Microservices

This blog is a continuation of my Docker walkthrough. To follow along, consider reading my previous getting-started-with-Docker walkthrough first; if you already have a basic understanding of containers and Docker, you are good to go. We will cover the topics listed below:

  • Multi-container apps with Docker Compose

  • Docker Swarm fundamentals

  • Building and working with a swarm

  • Multi-container apps with Docker Swarm

Basic fundamentals:

Microservice: a monolithic app is divided into many smaller apps based on functionality, each of which can be scaled and maintained separately. These small, dynamic components run on nodes.

Cloud Native: self-healing, auto-scaling and rolling updates.

Multicontainer Apps

Compose.yml file

We define all the volumes, networks and services for the app in this single file.

Remember: a Dockerfile tells Docker how to build an image, while the Compose file does the managing and wiring-together work.

In the example we have:

  • the counter-net network

  • the counter-vol volume

  • a front-end (web-fe)

  • a backend (redis)

networks:
  counter-net:

volumes:
  counter-vol:

services:
  web-fe:
    build: .
    command: python app.py
    ports:
      - target: 8080
        published: 5001
    networks:
      - counter-net
    volumes:
      - type: volume
        source: counter-vol
        target: /app
  redis:
    image: "redis:alpine"
    networks:
      counter-net:

Run this one command to start the app, detached from the terminal.

docker compose up --detach

Now we have two containers running:

docker container ls
Output:
redis-alpine
compose-web-fe

Bring the app down and remove its volumes:

docker compose down --volumes

Docker Swarm

We will create lightweight virtual machines to act as nodes. Think of nodes as the infrastructure.

Split brain: always run an odd number of manager nodes, so that a network partition can never split the managers into two equal halves; a half without a majority (quorum) drops into read-only mode.

Generally, spread 3 or 5 managers over different availability zones, and add as many workers as needed.
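The quorum arithmetic behind this advice can be sketched in plain shell (this is just the math, not a Docker command): with N managers, a majority is floor(N/2) + 1, so the cluster can lose N minus that majority.

```shell
# Raft quorum math: with N managers, quorum = floor(N/2) + 1,
# so the cluster tolerates N - quorum manager failures.
for n in 1 3 5 7; do
  quorum=$(( n / 2 + 1 ))
  echo "managers=$n quorum=$quorum tolerates=$(( n - quorum )) failures"
done
```

Note that 4 managers tolerate no more failures than 3, which is why odd counts are the rule.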

Multipass is a tool for creating multiple nodes as cloud-style Ubuntu virtual machines. (It is a separate Canonical tool, not part of Docker Desktop.) Just install Multipass before proceeding; alternatively, you can read on now and follow along later.

multipass find
---
docker : an Ubuntu VM image with Docker, Portainer & related tools.
---

Launch 5 nodes like this, replacing n1 with n2, n3, n4 and n5 in turn.

multipass launch docker --name n1
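If you prefer not to type the command five times, a small shell loop can do it. The sketch below only prints the five commands; drop the echo to actually create the VMs (assuming Multipass is installed):

```shell
# Print the five launch commands; remove 'echo' to actually run them.
for i in 1 2 3 4 5; do
  echo multipass launch docker --name "n$i"
done
```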

List all nodes created.

multipass list

Name Status  IPv4     Image
n1   Running 10.0.0.2 Ubuntu 22.04 LTS
...

multipass info n1

Go to the n1 shell (a terminal inside the VM). We enter the node with the shell command:

multipass shell n1

Once you have entered the node, please follow along.

We have now created the infrastructure nodes; our next goal is to make our app run on them.

docker swarm init --advertise-addr <ip address of n1>

IP address of n1: used for cluster communication.

Now just follow the printed instructions, hopping from node to node to initialize managers and workers.

docker swarm join-token manager
<output>
Run the printed join command on n2 and n3.

List the nodes (run this on a manager):

docker node ls

A similar process adds the workers:

docker swarm join-token worker
<output>
Run the printed join command on n4 and n5.

Commands to keep the managers clean, i.e. to state that no application containers will run on them; they will only manage the cluster. Note that docker node update takes one node at a time:

docker node update --availability drain n1
docker node update --availability drain n2
docker node update --availability drain n3

Important

  • Instead of mapping these microservices to individual container specs, we map them to service objects. Think of a service object as an encapsulation: we talk to it instead of interacting with containers directly. Feel free to google it and dig deeper.

We are creating a service that runs 3 identical replicas of an image, distributed across the nodes.

docker service create --name web --publish 8080:8080 --replicas 3 <usr>/<repo>:<img_name>

To see the 3 replicas running on different nodes, check

docker service ps web

output:
web.1 node4
web.2 node5
web.3 node4

Note: docker container ls will show nothing on the manager nodes, since they are drained.

We can literally hit any node's IP on port 8080 and land on our app page.

multipass list
output:
n1 123.5.6.7 running -> hit 123.5.6.7:8080
n2 234.5.6.7 running -> hit 234.5.6.7:8080
...

You guessed it right: even the manager nodes' IP addresses work, although our app runs only on the worker nodes. That is the swarm's built-in load balancing (the routing mesh).

Scale the replicas up; the same desired-state machinery also gives self-healing (kill a replica and the swarm recreates it).

docker service scale web=10

Multi-container App with Docker Swarm

Stack: jargon for a multi-container app run with Docker Swarm.

The image for the app needs to be pre-built; it cannot be built at deploy time.

Locally, git clone the app inside the node and build the Docker image right there, then push the image to Docker Hub.
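As a rough sketch (not runnable as-is: <usr>, <repo> and <tag> are placeholders for your own Docker Hub account and app), the build-and-push steps look like:

```shell
git clone https://github.com/<usr>/<repo>.git   # <usr>/<repo> are placeholders
cd <repo>
docker build -t <usr>/<repo>:<tag> .            # Dockerfile assumed at repo root
docker login                                    # log in to Docker Hub
docker push <usr>/<repo>:<tag>
```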

Now,

docker stack deploy -c compose.yml <app_name>

-c : tells Docker we are deploying from a Compose file.

Look at the compose.yml for a second.

# Un-comment the 'version' field if you get an 'unsupported Compose file version: 1.0' error
# version: '3.8'
networks:
  counter-net:

volumes:
  counter-vol:

services:
  web-fe:
    image: nigelpoulton/gsd:swarm2023
    # this image is already built and pushed (see above)
    command: python app.py
    deploy:
      replicas: 10
    ports:
      - target: 8080
        published: 5001
    networks:
      - counter-net
    volumes:
      - type: volume
        source: counter-vol
        target: /app
  redis:
    image: "redis:alpine"
    networks:
      counter-net:

Commands to check further.

docker stack services <stack_name>

Output:
counter_web-fe 10/10
counter_redis 1/1

Command to check each replica.

docker stack ps <stack_name>

Output:
counter_redis.1 (one replica)
counter_web-fe.1 ... counter_web-fe.10 (ten replicas)

Hit any node's IPv4 address on port 5001 and you will get the running app. Load is balanced across the 10 running instances.

To change the deployment, always edit the compose.yml file and redeploy; never do it from inside node1 with the docker service scale command.
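For example, to scale the front-end down, you would change only the deploy section and redeploy (a hypothetical excerpt; the rest of the file stays unchanged):

```yaml
services:
  web-fe:
    deploy:
      replicas: 4   # was 10
```

Then re-run docker stack deploy -c compose.yml <app_name> and the swarm reconciles the running state down to 4 replicas.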

Enjoy the auto-healing features.

Clean your work now.

docker stack rm <stack_name>

Optional, on docker-desktop: docker swarm leave --force

multipass delete n1 n2 ... n5

multipass purge

This marks the end of this blog. There is a lot more to cover; this post was intended to provide a basic understanding of the topic. Kubernetes provides more advanced ways of playing with nodes, but Docker Swarm is a good place to get started. Be curious and dig into anything you didn't fully understand here.