Lab-35: Docker Swarm

Docker Swarm is a cluster management and orchestration tool for Docker containers. Docker engines participating in a cluster run in swarm mode. You enable swarm mode for an engine by either initializing a swarm or joining an existing swarm.

A swarm is a cluster of Docker engines, or nodes, where you deploy services. The Docker Engine CLI and API include commands to manage swarm nodes (e.g., add or remove nodes), and deploy and orchestrate services across the swarm.

When you run Docker without swarm mode, you execute container commands. When you run Docker in swarm mode, you orchestrate services. You can run swarm services and standalone containers on the same Docker instances.

Read more about Docker swarm here

Docker Swarm terminology

Node - A node is an instance of the Docker engine participating in the swarm. You
can run multiple nodes on one physical machine or distribute them across multiple machines
Manager node - A manager node manages services on the cluster
Worker node - A worker node executes tasks dispatched by the manager node
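
As a hedged aside (not part of the original lab output), a node's role can be changed later from any manager node with docker node promote/demote, which exist in Docker 1.12+; the node name below is illustrative:

//promote a worker to a manager, or demote it back to a worker
$ docker node promote <node-name>
$ docker node demote <node-name>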

Prerequisite

To use swarm mode you need Docker v1.12 or higher. Install Docker on all nodes by following the instructions below:

Login as a user with sudo permission
$su - divine

$ uname -r   //check kernel version it should be 3.10 or higher
3.10.0-229.el7.x86_64

$ sudo yum update

run Docker installation script
$ curl -fsSL https://get.docker.com/ | sh

Logout and log back in
$logout
$su - divine

Enable the service
$ sudo systemctl enable docker.service
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service

Start Docker daemon
$ sudo systemctl start docker

check if Docker is running
$docker -v
Docker version 1.12.6, build 78d1802

Create Docker group
$ sudo groupadd docker

Add user to Docker group
$sudo usermod -aG docker $(whoami)

Logout and log back in
$logout
$su - divine

[divine@localhost ~]$ docker --version
Docker version 1.12.6, build 78d1802
[divine@localhost ~]$

In this lab I have three physical machines, one acting as Manager and two as Workers. Make sure there is IP connectivity among the nodes (perform a ping test; a quick check follows the list below)
Manager  = 192.254.211.167
Worker_1 = 192.254.211.166
Worker_2 = 192.254.211.168
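
A quick connectivity check from the manager, using the worker addresses above (a sketch; output not shown):

[divine@Manager ~]$ ping -c 3 192.254.211.166
[divine@Manager ~]$ ping -c 3 192.254.211.168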

Procedure

  1. Create a swarm on the machine acting as Manager
[divine@Manager ~]$ docker swarm init --advertise-addr 192.254.211.167
Swarm initialized: current node (9ra9h1uopginqno78zi8rq9ug) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join \
    --token SWMTKN-1-4ok62ytnftqgeyvzlz82zlbuzmbx3soqhyfjsoiylsfx7o3tnh-8b5cyqbg6ukpl9oxap0ntrl7t \
    192.254.211.167:2377

The swarm process is now running and listening on port 2377

[divine@Manager ~]$ netstat -pan | grep :2377
(No info could be read for "-p": geteuid()=1002 but you should be root.)
tcp 0 0 192.254.211.167:40892 192.254.211.167:2377 ESTABLISHED -
tcp 0 0 127.0.0.1:53116 127.0.0.1:2377 ESTABLISHED -
tcp6 0 0 :::2377 :::* LISTEN -
tcp6 0 0 192.254.211.167:2377 192.254.211.167:40892 ESTABLISHED -
tcp6 0 0 127.0.0.1:2377 127.0.0.1:53116 ESTABLISHED -
[divine@localhost ~]$
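
If firewalld is enabled on the nodes, the swarm ports may need to be opened before workers can join. A hedged sketch using the standard swarm ports (2377/tcp for cluster management, 7946/tcp+udp for node communication, 4789/udp for overlay traffic):

$ sudo firewall-cmd --permanent --add-port=2377/tcp --add-port=7946/tcp --add-port=7946/udp --add-port=4789/udp
$ sudo firewall-cmd --reload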

[divine@Manager ~]$ docker node ls
ID                           HOSTNAME               STATUS  AVAILABILITY  MANAGER STATUS
9ra9h1uopginqno78zi8rq9ug *  Manager                 Ready   Active        Leader
[divine@Manager ~]$

2. Add Worker_1 to the swarm

//run this command to get worker join command
[divine@Manager ~]$ docker swarm join-token worker
To add a worker to this swarm, run the following command:

    docker swarm join \
    --token SWMTKN-1-4ok62ytnftqgeyvzlz82zlbuzmbx3soqhyfjsoiylsfx7o3tnh-8b5cyqbg6ukpl9oxap0ntrl7t \
    192.254.211.167:2377

//run the output of the above command on the worker node
[divine@Worker_1 ~]$ docker swarm join \
>     --token SWMTKN-1-4ok62ytnftqgeyvzlz82zlbuzmbx3soqhyfjsoiylsfx7o3tnh-8b5cyqbg6ukpl9oxap0ntrl7t \
>     192.254.211.167:2377
This node joined a swarm as a worker.

3. Add Worker_2 to the swarm using the same join command. Run it on Worker_2

[divine@Worker_2 ~]$ docker swarm join \
>     --token SWMTKN-1-4ok62ytnftqgeyvzlz82zlbuzmbx3soqhyfjsoiylsfx7o3tnh-8b5cyqbg6ukpl9oxap0ntrl7t \
>     192.254.211.167:2377
This node joined a swarm as a worker.

Note: The date/time on all nodes needs to be in sync, otherwise the worker join will fail (one way to sync clocks is sketched below).
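
One way to keep clocks in sync on CentOS 7 is ntpd (a hedged sketch; chrony would work equally well):

$ sudo yum install -y ntp
$ sudo systemctl enable ntpd
$ sudo systemctl start ntpd
$ date    //compare the output on all nodes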

4. Check node status from the manager node. Run these commands on the manager node

MANAGER STATUS: <empty> = worker node; Leader = primary manager

AVAILABILITY: Active = the node is active and the scheduler can assign tasks to it; Pause = the scheduler assigns no new tasks but existing tasks keep running; Drain = existing tasks are shut down and no new tasks are assigned. A drain example follows.
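
A hedged example of draining a worker and then reactivating it, run from the manager using this lab's node names:

[divine@Manager ~]$ docker node update --availability drain Worker_1
[divine@Manager ~]$ docker node update --availability active Worker_1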

[divine@Manager ~]$ docker node ls
ID                           HOSTNAME                     STATUS  AVAILABILITY  MANAGER STATUS
0pye1jeusjpyypcesrjpwsdgg    Worker_1                      Ready   Active
9iz0pjzudzvv2ujqyh0dgzhfg    Worker_2                      Ready   Active
9ra9h1uopginqno78zi8rq9ug *  Manager                       Ready   Active        Leader
[divine@localhost ~]$

//inspect manager node
[divine@Manager ~]$ docker node inspect self
[
    {
        "ID": "9ra9h1uopginqno78zi8rq9ug",
        "Version": {
            "Index": 10
        },
        "CreatedAt": "2017-01-19T23:12:49.552590718Z",
        "UpdatedAt": "2017-01-19T23:12:49.812466748Z",
        "Spec": {
            "Role": "manager",
            "Availability": "active"
        },
        "Description": {
            "Hostname": "Manager",
            "Platform": {
                "Architecture": "x86_64",
                "OS": "linux"
            },
            "Resources": {
                "NanoCPUs": 4000000000,
                "MemoryBytes": 12412542976
            },
            "Engine": {
                "EngineVersion": "1.12.6",
                "Plugins": [
                    {
                        "Type": "Network",
                        "Name": "bridge"
                    },
                    {
                        "Type": "Network",
                        "Name": "host"
                    },
                    {
                        "Type": "Network",
                        "Name": "null"
                    },
                    {
                        "Type": "Network",
                        "Name": "overlay"
                    },
                    {
                        "Type": "Volume",
                        "Name": "local"
                    }
                ]
            }
        },
        "Status": {
            "State": "ready"
        },
        "ManagerStatus": {
            "Leader": true,
            "Reachability": "reachable",
            "Addr": "192.254.211.167:2377"
        }
    }
]
[divine@Manager ~]$

5. Inspect a worker node. Run the commands below on the manager node

[divine@Manager ~]$ docker node inspect Worker_2
[
    {
        "ID": "9iz0pjzudzvv2ujqyh0dgzhfg",
        "Version": {
            "Index": 16
        },
        "CreatedAt": "2017-01-19T23:19:56.795050481Z",
        "UpdatedAt": "2017-01-19T23:19:56.954890419Z",
        "Spec": {
            "Role": "worker",
            "Availability": "active"
        },
        "Description": {
            "Hostname": "Worker_2",
            "Platform": {
                "Architecture": "x86_64",
                "OS": "linux"
            },
            "Resources": {
                "NanoCPUs": 8000000000,
                "MemoryBytes": 12412542976
            },
            "Engine": {
                "EngineVersion": "1.13.0",
                "Plugins": [
                    {
                        "Type": "Network",
                        "Name": "bridge"
                    },
                    {
                        "Type": "Network",
                        "Name": "host"
                    },
                    {
                        "Type": "Network",
                        "Name": "macvlan"
                    },
                    {
                        "Type": "Network",
                        "Name": "null"
                    },
                    {
                        "Type": "Network",
                        "Name": "overlay"
                    },
                    {
                        "Type": "Volume",
                        "Name": "local"
                    }
                ]
            }
        },
        "Status": {
            "State": "ready"
        }
    }
]

[divine@Manager ~]$ docker node inspect Worker_1
[
    {
        "ID": "0pye1jeusjpyypcesrjpwsdgg",
        "Version": {
            "Index": 21
        },
        "CreatedAt": "2017-01-19T23:32:47.763436319Z",
        "UpdatedAt": "2017-01-19T23:32:47.916593695Z",
        "Spec": {
            "Role": "worker",
            "Availability": "active"
        },
        "Description": {
            "Hostname": "Worker_1",
            "Platform": {
                "Architecture": "x86_64",
                "OS": "linux"
            },
            "Resources": {
                "NanoCPUs": 4000000000,
                "MemoryBytes": 12412542976
            },
            "Engine": {
                "EngineVersion": "1.13.0",
                "Plugins": [
                    {
                        "Type": "Network",
                        "Name": "bridge"
                    },
                    {
                        "Type": "Network",
                        "Name": "host"
                    },
                    {
                        "Type": "Network",
                        "Name": "macvlan"
                    },
                    {
                        "Type": "Network",
                        "Name": "null"
                    },
                    {
                        "Type": "Network",
                        "Name": "overlay"
                    },
                    {
                        "Type": "Volume",
                        "Name": "local"
                    }
                ]
            }
        },
        "Status": {
            "State": "ready"
        }
    }
]
[divine@Manager ~]$

6. Start a service on the swarm. You can create a service from an image on Docker Hub or from a local image. In the case of a local image you need to manually load the image on all worker and manager nodes (one way to do this is sketched below).
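
One way to distribute a local image to the other nodes is docker save/load over scp (a hedged sketch, not part of the original lab; the image name is the one built in Lab-33):

//on the manager: export the image to a tar archive and copy it to a worker
[divine@Manager ~]$ docker save -o ubuntu-httpd-server.tar ubuntu-httpd-server
[divine@Manager ~]$ scp ubuntu-httpd-server.tar divine@192.254.211.166:~
//on each worker: load the archive into the local image store
[divine@Worker_1 ~]$ docker load -i ubuntu-httpd-server.tar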

//no service active on swarm
[divine@Manager ~]$ docker service ls
ID  NAME  REPLICAS  IMAGE  COMMAND

//run this command on the manager node. It creates a service named httpd_server with 20
//containers in the cluster using the nginx image. Docker pulls the image from Docker Hub
//and creates the 20 containers. As you can see, the scheduler distributes the containers
//roughly equally across the 3 nodes
[divine@Manager ~]$ docker service create --replicas 20 --name httpd_server nginx
brscnmwefdbxopz6hk2nnxebc
[divine@Manager ~]$ docker service ls
ID            NAME          REPLICAS  IMAGE  COMMAND
brscnmwefdbx  httpd_server  0/20      nginx
[divine@Manager ~]$ docker service ls
ID            NAME          REPLICAS  IMAGE  COMMAND
brscnmwefdbx  httpd_server  0/20      nginx
[divine@Manager ~]$ docker service ls
ID            NAME          REPLICAS  IMAGE  COMMAND
brscnmwefdbx  httpd_server  0/20      nginx
[divine@Manager ~]$ docker service ps httpd_server
ID                         NAME             IMAGE  NODE                         DESIRED STATE  CURRENT STATE                     ERROR
etg7ng9e738kde0cffknfaymx  httpd_server.1   nginx  Worker_2        Running        Preparing less than a second ago
d7o4irupasdflk9dqq955rjil  httpd_server.2   nginx  Manager        Running        Preparing 26 seconds ago
a0px1ll6rla2nfjfenfro3ax1  httpd_server.3   nginx  Worker_2        Running        Preparing less than a second ago
5lx61fb15p2otfmnz0azyk4c1  httpd_server.4   nginx  Worker_2        Running        Preparing less than a second ago
buvz6ndhjb3o7mdbua9y4st3l  httpd_server.5   nginx  Manager        Running        Preparing 26 seconds ago
5zdum3ef3qo2vakppwnjt9t6n  httpd_server.6   nginx  Worker_2        Running        Preparing less than a second ago
7pt4s3fl9z41hhcqwlo2llefa  httpd_server.7   nginx  Worker_1  Running        Running less than a second ago
c5jt346vkr5rcwkk72hfc3if0  httpd_server.8   nginx  Manager        Running        Preparing 26 seconds ago
6wr6r9zz0hfy1zc4lg49i108m  httpd_server.9   nginx  Worker_1  Running        Running less than a second ago
cmzrng2t23udi4i71k0vhf0jl  httpd_server.10  nginx  Worker_2        Running        Preparing less than a second ago
1zagdr2zbwvaz5sd7tnrlucks  httpd_server.11  nginx  Worker_1  Running        Running less than a second ago
dbap94coizealad8clzi1d779  httpd_server.12  nginx  Worker_2        Running        Preparing less than a second ago
57hvlhnd942fnsr6b2wvhp0al  httpd_server.13  nginx  Worker_1  Running        Running less than a second ago
51ic2pk2eoq9hqvx2kp2u60n1  httpd_server.14  nginx  Worker_1  Running        Running less than a second ago
3myxunl1h2dk14zne2hci83av  httpd_server.15  nginx  Worker_1  Running        Running less than a second ago
b7smowk7getaxnc3xilty2vez  httpd_server.16  nginx  Worker_2        Running        Preparing less than a second ago
dlmagnn74o3mqpo61fb9cchtq  httpd_server.17  nginx  Manager        Running        Preparing 26 seconds ago
0t3j2u3wkl4ym3w66rvplgn65  httpd_server.18  nginx  Worker_1  Running        Running less than a second ago
0smn62ybwbdlrwtysfxax3yyl  httpd_server.19  nginx  Manager        Running        Preparing 26 seconds ago
6hk3ge3t5xhyemd70vyw6ymvk  httpd_server.20  nginx  Manager        Running        Preparing 26 seconds ago

[divine@Manager ~]$ docker service ls
ID            NAME          REPLICAS  IMAGE  COMMAND
brscnmwefdbx  httpd_server  16/20     nginx
[divine@Manager ~]$ docker service ls
ID            NAME          REPLICAS  IMAGE  COMMAND
brscnmwefdbx  httpd_server  20/20     nginx
[divine@Manager ~]$

7. Check containers on a worker node. Run the commands below on the worker node

//as you can see this worker node instantiated 7 containers. 
[divine@Worker_1 ~]$ docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED              STATUS              PORTS               NAMES
0e7c1a499cac        nginx:latest        "nginx -g 'daemon ..."   About a minute ago   Up About a minute   80/tcp, 443/tcp     httpd_server.15.3myxunl1h2dk14zne2hci83av
b2d474cd78f3        nginx:latest        "nginx -g 'daemon ..."   About a minute ago   Up About a minute   80/tcp, 443/tcp     httpd_server.9.6wr6r9zz0hfy1zc4lg49i108m
b4e9323c2303        nginx:latest        "nginx -g 'daemon ..."   About a minute ago   Up About a minute   80/tcp, 443/tcp     httpd_server.7.7pt4s3fl9z41hhcqwlo2llefa
fa2b01136a2d        nginx:latest        "nginx -g 'daemon ..."   About a minute ago   Up About a minute   80/tcp, 443/tcp     httpd_server.11.1zagdr2zbwvaz5sd7tnrlucks
0a1dfde8ba0f        nginx:latest        "nginx -g 'daemon ..."   About a minute ago   Up About a minute   80/tcp, 443/tcp     httpd_server.18.0t3j2u3wkl4ym3w66rvplgn65
0f33c50d86aa        nginx:latest        "nginx -g 'daemon ..."   About a minute ago   Up About a minute   80/tcp, 443/tcp     httpd_server.14.51ic2pk2eoq9hqvx2kp2u60n1
9c66bcd8e981        nginx:latest        "nginx -g 'daemon ..."   About a minute ago   Up About a minute   80/tcp, 443/tcp     httpd_server.13.57hvlhnd942fnsr6b2wvhp0al

[divine@Worker_1 ~]$ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
nginx               latest              a39777a1a4a6        2 days ago          182 MB
[divine@Worker_1 ~]$

8. You can scale the service up or down. Run the commands below on the manager node

//here I am reducing number of containers from 20 to 10
[divine@Manager ~]$ docker service scale httpd_server=10
httpd_server scaled to 10
[divine@Manager ~]$ docker service ls
ID            NAME          REPLICAS  IMAGE  COMMAND
brscnmwefdbx  httpd_server  10/10     nginx
//10 tasks moved from Running to Shutdown
[divine@Manager ~]$ docker service ps httpd_server
ID                         NAME             IMAGE  NODE                         DESIRED STATE  CURRENT STATE                    ERROR
etg7ng9e738kde0cffknfaymx  httpd_server.1   nginx  Worker_2        Shutdown       Shutdown less than a second ago
d7o4irupasdflk9dqq955rjil  httpd_server.2   nginx  Manager        Shutdown       Shutdown 18 seconds ago
a0px1ll6rla2nfjfenfro3ax1  httpd_server.3   nginx  Worker_2        Running        Running 47 seconds ago
5lx61fb15p2otfmnz0azyk4c1  httpd_server.4   nginx  Worker_2        Running        Running 41 seconds ago
buvz6ndhjb3o7mdbua9y4st3l  httpd_server.5   nginx  Manager        Running        Running 4 minutes ago
5zdum3ef3qo2vakppwnjt9t6n  httpd_server.6   nginx  Worker_2        Running        Running 41 seconds ago
7pt4s3fl9z41hhcqwlo2llefa  httpd_server.7   nginx  Worker_1  Shutdown       Shutdown less than a second ago
c5jt346vkr5rcwkk72hfc3if0  httpd_server.8   nginx  Manager        Running        Running 4 minutes ago
6wr6r9zz0hfy1zc4lg49i108m  httpd_server.9   nginx  Worker_1  Shutdown       Shutdown less than a second ago
cmzrng2t23udi4i71k0vhf0jl  httpd_server.10  nginx  Worker_2        Shutdown       Shutdown less than a second ago
1zagdr2zbwvaz5sd7tnrlucks  httpd_server.11  nginx  Worker_1  Running        Running about a minute ago
dbap94coizealad8clzi1d779  httpd_server.12  nginx  Worker_2        Shutdown       Shutdown less than a second ago
57hvlhnd942fnsr6b2wvhp0al  httpd_server.13  nginx  Worker_1  Shutdown       Shutdown less than a second ago
51ic2pk2eoq9hqvx2kp2u60n1  httpd_server.14  nginx  Worker_1  Shutdown       Shutdown less than a second ago
3myxunl1h2dk14zne2hci83av  httpd_server.15  nginx  Worker_1  Running        Running about a minute ago
b7smowk7getaxnc3xilty2vez  httpd_server.16  nginx  Worker_2        Shutdown       Shutdown less than a second ago
dlmagnn74o3mqpo61fb9cchtq  httpd_server.17  nginx  Manager        Shutdown       Shutdown 18 seconds ago
0t3j2u3wkl4ym3w66rvplgn65  httpd_server.18  nginx  Worker_1  Running        Running about a minute ago
0smn62ybwbdlrwtysfxax3yyl  httpd_server.19  nginx  Manager        Running        Running 4 minutes ago
6hk3ge3t5xhyemd70vyw6ymvk  httpd_server.20  nginx  Manager        Running        Running 4 minutes ago

//scale up containers from 10 to 15
[divine@Manager ~]$ docker service scale httpd_server=15
httpd_server scaled to 15

[divine@Manager ~]$ docker service ls
ID            NAME          REPLICAS  IMAGE  COMMAND
brscnmwefdbx  httpd_server  15/15     nginx

//5 new containers started. Docker doesn't move tasks from the Shutdown state back to
//Running; instead it creates new tasks
[divine@Manager ~]$ docker service ps httpd_server
ID                         NAME                 IMAGE  NODE                         DESIRED STATE  CURRENT STATE                    ERROR
4fs7ekdmlsd7ljo4blssyho35  httpd_server.1       nginx  Worker_2        Running        Running less than a second ago
etg7ng9e738kde0cffknfaymx   \_ httpd_server.1   nginx  Worker_2        Shutdown       Shutdown less than a second ago
4kjcflhtq8t5m39syol75pbgx  httpd_server.2       nginx  Worker_2        Running        Running less than a second ago
d7o4irupasdflk9dqq955rjil   \_ httpd_server.2   nginx  Manager        Shutdown       Shutdown 59 seconds ago
a0px1ll6rla2nfjfenfro3ax1  httpd_server.3       nginx  Worker_2        Running        Running about a minute ago
5lx61fb15p2otfmnz0azyk4c1  httpd_server.4       nginx  Worker_2        Running        Running about a minute ago
buvz6ndhjb3o7mdbua9y4st3l  httpd_server.5       nginx  Manager        Running        Running 4 minutes ago
5zdum3ef3qo2vakppwnjt9t6n  httpd_server.6       nginx  Worker_2        Running        Running about a minute ago
cezj4m9x16dtow9yet4wovi6e  httpd_server.7       nginx  Worker_1  Running        Running less than a second ago
7pt4s3fl9z41hhcqwlo2llefa   \_ httpd_server.7   nginx  Worker_1  Shutdown       Shutdown less than a second ago
c5jt346vkr5rcwkk72hfc3if0  httpd_server.8       nginx  Manager        Running        Running 4 minutes ago
2iyv3fjibrc53k4aoti5sl115  httpd_server.9       nginx  Worker_1  Running        Running less than a second ago
6wr6r9zz0hfy1zc4lg49i108m   \_ httpd_server.9   nginx  Worker_1  Shutdown       Shutdown less than a second ago
7pi63gh79o9sbm1bqy4ouw24j  httpd_server.10      nginx  Manager        Running        Running 8 seconds ago
cmzrng2t23udi4i71k0vhf0jl   \_ httpd_server.10  nginx  Worker_2        Shutdown       Shutdown less than a second ago
1zagdr2zbwvaz5sd7tnrlucks  httpd_server.11      nginx  Worker_1  Running        Running 2 minutes ago
dbap94coizealad8clzi1d779  httpd_server.12      nginx  Worker_2        Shutdown       Shutdown less than a second ago
57hvlhnd942fnsr6b2wvhp0al  httpd_server.13      nginx  Worker_1  Shutdown       Shutdown less than a second ago
51ic2pk2eoq9hqvx2kp2u60n1  httpd_server.14      nginx  Worker_1  Shutdown       Shutdown less than a second ago
3myxunl1h2dk14zne2hci83av  httpd_server.15      nginx  Worker_1  Running        Running 2 minutes ago
b7smowk7getaxnc3xilty2vez  httpd_server.16      nginx  Worker_2        Shutdown       Shutdown less than a second ago
dlmagnn74o3mqpo61fb9cchtq  httpd_server.17      nginx  Manager        Shutdown       Shutdown 58 seconds ago
0t3j2u3wkl4ym3w66rvplgn65  httpd_server.18      nginx  Worker_1  Running        Running 2 minutes ago
0smn62ybwbdlrwtysfxax3yyl  httpd_server.19      nginx  Manager        Running        Running 4 minutes ago
6hk3ge3t5xhyemd70vyw6ymvk  httpd_server.20      nginx  Manager        Running        Running 4 minutes ago

[divine@Manager ~]$ docker service inspect --pretty httpd_server
ID:             brscnmwefdbxopz6hk2nnxebc
Name:           httpd_server
Mode:           Replicated
 Replicas:      15
Placement:
UpdateConfig:
 Parallelism:   1
 On failure:    pause
ContainerSpec:
 Image:         nginx
Resources:
[divine@localhost ~]$

9. Delete a service. Run the commands below on the manager node

[divine@Manager ~]$ docker service ls
ID            NAME          REPLICAS  IMAGE  COMMAND
brscnmwefdbx  httpd_server  15/15     nginx
[divine@Manager ~]$ docker service rm httpd_server
httpd_server
[divine@Manager ~]$ docker service ls
ID  NAME  REPLICAS  IMAGE  COMMAND
[divine@Manager ~]$ docker service ps httpd_server
Error: No such service: httpd_server
[divine@localhost ~]$

//check worker
[divine@Worker_1 ~]$ docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

10. Leave the swarm. Remove a worker node from the swarm

[divine@Manager ~]$ docker node ls
ID                           HOSTNAME                     STATUS  AVAILABILITY  MANAGER STATUS
0pye1jeusjpyypcesrjpwsdgg    Worker_1                      Ready   Active
9iz0pjzudzvv2ujqyh0dgzhfg    Worker_2                      Ready   Active
9ra9h1uopginqno78zi8rq9ug *  Manager                       Ready   Active        Leader

//try this on worker node
[divine@Worker_1 ~]$ docker swarm leave
Node left the swarm.
[divine@fpm4richdev ~]$

//check manager. Worker_1 shows Down state
[divine@Manager ~]$ docker node ls
ID                           HOSTNAME                     STATUS  AVAILABILITY  MANAGER STATUS
0pye1jeusjpyypcesrjpwsdgg    Worker_1                     Down    Active
9iz0pjzudzvv2ujqyh0dgzhfg    Worker_2                     Ready   Active
9ra9h1uopginqno78zi8rq9ug *  Manager                      Ready   Active        Leader
[divine@localhost ~]$

11. Join the swarm again. The worker rejoins the swarm

//run this command on manager to get worker node join command
[divine@Manager ~]$ docker swarm join-token worker
To add a worker to this swarm, run the following command:

    docker swarm join \
    --token SWMTKN-1-4ok62ytnftqgeyvzlz82zlbuzmbx3soqhyfjsoiylsfx7o3tnh-8b5cyqbg6ukpl9oxap0ntrl7t \
    192.254.211.167:2377


//try above join command on worker node
[divine@Worker_1 ~]$ docker swarm join \
>     --token SWMTKN-1-4ok62ytnftqgeyvzlz82zlbuzmbx3soqhyfjsoiylsfx7o3tnh-8b5cyqbg6ukpl9oxap0ntrl7t \
>     192.254.211.167:2377
This node joined a swarm as a worker.
[divine@Worker_1 ~]$

//check node status on the manager. Docker adds the rejoined node as a new entry;
//it doesn't move the Down node back to Ready
[divine@Manager ~]$ docker node ls
ID                           HOSTNAME                     STATUS  AVAILABILITY  MANAGER STATUS
0pye1jeusjpyypcesrjpwsdgg    Worker_1                     Down    Active
8pzfy2447ox4c2ay8we1l35su    Worker_1                     Ready   Active
9iz0pjzudzvv2ujqyh0dgzhfg    Worker_2                     Ready   Active
9ra9h1uopginqno78zi8rq9ug *  Manager                      Ready   Active        Leader
[divine@localhost ~]$

//remove down node
[divine@Manager ~]$ docker node rm 0pye1jeusjpyypcesrjpwsdgg
0pye1jeusjpyypcesrjpwsdgg

[divine@Manager ~]$ docker node ls
ID                           HOSTNAME                     STATUS  AVAILABILITY  MANAGER STATUS
8pzfy2447ox4c2ay8we1l35su    Worker_1                     Ready   Active
9iz0pjzudzvv2ujqyh0dgzhfg    Worker_2                     Ready   Active
9ra9h1uopginqno78zi8rq9ug *  Manager                      Ready   Active        Leader
[divine@localhost ~]$
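
Note (a hedged aside): a manager node refuses to leave the swarm by default, since that can break the cluster. To dismantle a single-manager swarm you would run this on the manager:

[divine@Manager ~]$ docker swarm leave --force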


Lab-34: Docker networking

Docker networking allows containers to communicate with each other. Docker creates three types of networks out of the box: 1) bridge, 2) host and 3) none.

A bridge network is a typical Linux bridge implementation. A host network maps the host's network stack directly into the container.

//Default network types
[divine@localhost ~]$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
1eb9249c028b        bridge              bridge              local
3958d58a8597        host                host                local
b2dafd188630        none                null                local
[divine@localhost ~]$

//Docker bridge 
[divine@localhost ~]$ ip addr
4: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 02:42:85:ce:f9:8d brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:85ff:fece:f98d/64 scope link
       valid_lft forever preferred_lft forever

Docker bridge network test

By default, containers use the bridge network if no network is specified at run time. A bridge network is a Linux bridge. Let's create two containers using the image built in Lab-33 and test Docker bridge networking.

Instantiate two containers httpd_server_1 & httpd_server_2

[divine@localhost ~]$ docker run -it -d -P --name=httpd_server_1 ubuntu-httpd-server
a06696b1d1a98c1e4afd288e11495f5bea174f9e28aade5b3c6fb563e4d025a0

[divine@localhost ~]$ docker ps
CONTAINER ID        IMAGE                 COMMAND                  CREATED             STATUS              PORTS                   NAMES
a06696b1d1a9        ubuntu-httpd-server   "apachectl -D FOREGRO"   7 seconds ago       Up 4 seconds        0.0.0.0:32773->80/tcp   httpd_server_1

[divine@localhost ~]$ docker run -it -d -P --name=httpd_server_2 ubuntu-httpd-server
b3bc56610cdcd03f666b37ad33bf0d14206c0a747151541f96b250ecc7686050

[divine@localhost ~]$ docker ps
CONTAINER ID        IMAGE                 COMMAND                  CREATED             STATUS              PORTS                   NAMES
b3bc56610cdc        ubuntu-httpd-server   "apachectl -D FOREGRO"   5 seconds ago       Up 3 seconds        0.0.0.0:32774->80/tcp   httpd_server_2
a06696b1d1a9        ubuntu-httpd-server   "apachectl -D FOREGRO"   21 seconds ago      Up 19 seconds       0.0.0.0:32773->80/tcp   httpd_server_1
[divine@localhost ~]$

Check the host machine interfaces. You will see two new veth interfaces created and attached to the Docker bridge (docker0)

//Two new interfaces created, one for each container
[divine@localhost ~]$ ip addr
4: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 02:42:85:ce:f9:8d brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:85ff:fece:f98d/64 scope link
       valid_lft forever preferred_lft forever

62: veth7a81e08@if61: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP
    link/ether 7e:fe:e9:f2:cd:9b brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::7cfe:e9ff:fef2:cd9b/64 scope link
       valid_lft forever preferred_lft forever
64: vetha5bf765@if63: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP
    link/ether 8a:d8:1a:a0:03:52 brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet6 fe80::88d8:1aff:fea0:352/64 scope link
       valid_lft forever preferred_lft forever

//Both interfaces attached to docker bridge
[divine@localhost ~]$ sudo brctl show
bridge name     bridge id               STP enabled     interfaces
docker0         8000.024285cef98d       no              veth7a81e08
                                                        vetha5bf765
[divine@localhost ~]$

Inspect the bridge network and check the IP addresses assigned to each container. As you can see, httpd_server_1 is assigned 172.17.0.2 and httpd_server_2 is assigned 172.17.0.3.

[divine@localhost ~]$ docker network inspect bridge
[
    {
        "Name": "bridge",
        "Id": "1eb9249c028b10f76476f6a2e92852e6f87a2e50a7e89926f0a096713a44a945",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16"
                }
            ]
        },
        "Internal": false,
        "Containers": {
            "a06696b1d1a98c1e4afd288e11495f5bea174f9e28aade5b3c6fb563e4d025a0": {
                "Name": "httpd_server_1",
                "EndpointID": "beffedfb0145ee33f15c69b6dfd6f2842174a1bf6e298ccb0f944160a731f9b8",
                "MacAddress": "02:42:ac:11:00:02",
                "IPv4Address": "172.17.0.2/16",
                "IPv6Address": ""
            },
            "b3bc56610cdcd03f666b37ad33bf0d14206c0a747151541f96b250ecc7686050": {
                "Name": "httpd_server_2",
                "EndpointID": "166dd093f7ec1862b5c705889c2100ebfd4cb62ec5c3105ad2cdaddcff695b53",
                "MacAddress": "02:42:ac:11:00:03",
                "IPv4Address": "172.17.0.3/16",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]
[divine@localhost ~]$

Connect to httpd_server_1 and ping httpd_server_2

//connect to container httpd_server_1
[divine@localhost ~]$ docker exec -i -t httpd_server_1 /bin/bash
root@a06696b1d1a9:/# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
61: eth0@if62: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.17.0.2/16 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:acff:fe11:2/64 scope link
       valid_lft forever preferred_lft forever

//ping to container httpd_server_2 ip address
root@a06696b1d1a9:/bin# ping 172.17.0.3
PING 172.17.0.3 (172.17.0.3): 56 data bytes
64 bytes from 172.17.0.3: icmp_seq=0 ttl=64 time=0.105 ms
64 bytes from 172.17.0.3: icmp_seq=1 ttl=64 time=0.052 ms
64 bytes from 172.17.0.3: icmp_seq=2 ttl=64 time=0.052 ms
64 bytes from 172.17.0.3: icmp_seq=3 ttl=64 time=0.052 ms
^C--- 172.17.0.3 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.052/0.065/0.105/0.023 ms

//connect to container httpd_server_2
[divine@localhost ~]$ docker exec -i -t httpd_server_2 /bin/bash
root@b3bc56610cdc:/# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
63: eth0@if64: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.17.0.3/16 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:acff:fe11:3/64 scope link
       valid_lft forever preferred_lft forever
root@b3bc56610cdc:/#

//ping to external ip 
root@a06696b1d1a9:/# ping 167.254.210.33
PING 167.254.210.33 (167.254.210.33): 56 data bytes
64 bytes from 167.254.210.33: icmp_seq=0 ttl=254 time=0.984 ms
64 bytes from 167.254.210.33: icmp_seq=1 ttl=254 time=0.522 ms
64 bytes from 167.254.210.33: icmp_seq=2 ttl=254 time=0.548 ms
^C--- 167.254.210.33 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.522/0.685/0.984/0.212 ms

//docker bridge is the default gateway for containers
root@a06696b1d1a9:/# ip route
default via 172.17.0.1 dev eth0
172.17.0.0/16 dev eth0  proto kernel  scope link  src 172.17.0.2
root@a06696b1d1a9:/#

Create bridge network

If you don't like the IP address scheme of the default 'docker0' bridge, you can create your own bridge network and assign your own subnet.

Create a bridge network with a custom subnet and name it 'divine_bridge'

[divine@localhost ~]$ docker network create --subnet 192.168.0.0/24 divine_bridge
ea08d7f8b489c8df56f0412ab5aceb16f59de14539fae15fc2cea3d20b09d087

[divine@localhost ~]$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
1eb9249c028b        bridge              bridge              local
ea08d7f8b489        divine_bridge       bridge              local
3958d58a8597        host                host                local
b2dafd188630        none                null                local

//a new bridge created on host machine
[divine@localhost ~]$ ip addr
4: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 02:42:85:ce:f9:8d brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:85ff:fece:f98d/64 scope link
       valid_lft forever preferred_lft forever
62: veth7a81e08@if61: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP
    link/ether 7e:fe:e9:f2:cd:9b brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::7cfe:e9ff:fef2:cd9b/64 scope link
       valid_lft forever preferred_lft forever
64: vetha5bf765@if63: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP
    link/ether 8a:d8:1a:a0:03:52 brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet6 fe80::88d8:1aff:fea0:352/64 scope link
       valid_lft forever preferred_lft forever
69: br-ea08d7f8b489: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
    link/ether 02:42:07:28:7d:d5 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.1/24 scope global br-ea08d7f8b489
       valid_lft forever preferred_lft forever

[divine@localhost ~]$ sudo brctl show
[sudo] password for divine:
bridge name     bridge id               STP enabled     interfaces
br-ea08d7f8b489         8000.024207287dd5       no
docker0         8000.024285cef98d       no              veth7a81e08
                                                        vetha5bf765
[divine@localhost ~]$

Instantiate two containers and attach them to the newly created bridge network (divine_bridge) using the --network option

[divine@localhost ~]$ docker run -it -d -P --network=divine_bridge \
--name=httpd_server_3 ubuntu-httpd-server
2a68d3bd191d004f8499086d22d2e02896d6642d35de4b24cd00c6c6e9c26cfe

[divine@localhost ~]$ docker run -it -d -P --network=divine_bridge \
--name=httpd_server_4 ubuntu-httpd-server
5c4d9557bf57dc2f3dad2500761553384ab2dcfc3733c475c3f9c75aa213b506

[divine@localhost ~]$ docker ps
CONTAINER ID        IMAGE                 COMMAND                  CREATED             STATUS              PORTS                   NAMES
5c4d9557bf57        ubuntu-httpd-server   "apachectl -D FOREGRO"   3 minutes ago       Up 3 minutes        0.0.0.0:32776->80/tcp   httpd_server_4
2a68d3bd191d        ubuntu-httpd-server   "apachectl -D FOREGRO"   3 minutes ago       Up 3 minutes        0.0.0.0:32775->80/tcp   httpd_server_3
b3bc56610cdc        ubuntu-httpd-server   "apachectl -D FOREGRO"   21 hours ago        Up 21 hours         0.0.0.0:32774->80/tcp   httpd_server_2
a06696b1d1a9        ubuntu-httpd-server   "apachectl -D FOREGRO"   21 hours ago        Up 21 hours         0.0.0.0:32773->80/tcp   httpd_server_1
[divine@localhost ~]$

//inspect your bridge network and check ip addresses assigned to containers
[divine@localhost ~]$ docker network inspect divine_bridge
[
    {
        "Name": "divine_bridge",
        "Id": "ea08d7f8b489c8df56f0412ab5aceb16f59de14539fae15fc2cea3d20b09d087",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "192.168.0.0/24"
                }
            ]
        },
        "Internal": false,
        "Containers": {
            "2a68d3bd191d004f8499086d22d2e02896d6642d35de4b24cd00c6c6e9c26cfe": {
                "Name": "httpd_server_3",
                "EndpointID": "0e55819e97d562df4aaa7d575a84550510a0b3e5ef066c558a9892f8ac90fff7",
                "MacAddress": "02:42:c0:a8:00:02",
                "IPv4Address": "192.168.0.2/24",
                "IPv6Address": ""
            },
            "5c4d9557bf57dc2f3dad2500761553384ab2dcfc3733c475c3f9c75aa213b506": {
                "Name": "httpd_server_4",
                "EndpointID": "cde3779af37a84a9f6a6b3f42a5f23b083fc37ed98ea0dc773fa8a37f3e4cce7",
                "MacAddress": "02:42:c0:a8:00:03",
                "IPv4Address": "192.168.0.3/24",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]

//Test httpd server
[divine@localhost ~]$ curl localhost:32776
Welcome to Apache2 Web server inside Docker
[divine@localhost ~]$


Check connectivity between the containers. Make sure the containers can communicate with each other through your bridge.

[divine@localhost ~]$ docker exec -i -t httpd_server_3 /bin/bash
root@2a68d3bd191d:/# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
70: eth0@if71: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:c0:a8:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 192.168.0.2/24 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:c0ff:fea8:2/64 scope link
       valid_lft forever preferred_lft forever
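
From httpd_server_3, a ping to httpd_server_4 should succeed (a sketch; 192.168.0.3 is the address shown for httpd_server_4 in the inspect output above, and on a user-defined bridge the container name also resolves via Docker's embedded DNS; ping output omitted):

root@2a68d3bd191d:/# ping -c 3 192.168.0.3
root@2a68d3bd191d:/# ping -c 3 httpd_server_4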

 

Docker host network

The host network maps all host interfaces into the container. Let's instantiate a container on the host network using --network=host. You will see that the container has the same network interfaces as the host machine.

//instantiate container with --network=host
[divine@localhost ~]$ docker run -it -d -P --network=host --name=httpd_server_5 ubuntu-httpd-server
4ea9600641e13c1c844edf1e8ecbc11c27bdebf88d64bc391a655d8568bdbd03
[divine@localhost ~]$ docker ps
CONTAINER ID        IMAGE                 COMMAND                  CREATED             STATUS              PORTS                   NAMES
4ea9600641e1        ubuntu-httpd-server   "apachectl -D FOREGRO"   42 seconds ago      Up 40 seconds                               httpd_server_5
5c4d9557bf57        ubuntu-httpd-server   "apachectl -D FOREGRO"   2 hours ago         Up 2 hours          0.0.0.0:32776->80/tcp   httpd_server_4
2a68d3bd191d        ubuntu-httpd-server   "apachectl -D FOREGRO"   2 hours ago         Up 2 hours          0.0.0.0:32775->80/tcp   httpd_server_3
b3bc56610cdc        ubuntu-httpd-server   "apachectl -D FOREGRO"   24 hours ago        Up 24 hours         0.0.0.0:32774->80/tcp   httpd_server_2
a06696b1d1a9        ubuntu-httpd-server   "apachectl -D FOREGRO"   24 hours ago        Up 24 hours         0.0.0.0:32773->80/tcp   httpd_server_1

//inspect host network
[divine@localhost ~]$ docker network inspect host
[
    {
        "Name": "host",
        "Id": "3958d58a8597bbc738cd4cba87d1baa8c491cff2e38b57bdd58f241690a45219",
        "Scope": "local",
        "Driver": "host",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": []
        },
        "Internal": false,
        "Containers": {
            "4ea9600641e13c1c844edf1e8ecbc11c27bdebf88d64bc391a655d8568bdbd03": {
                "Name": "httpd_server_5",
                "EndpointID": "3dcbf81cfafac1b6b9f53140b58ec77cde385117f23c28e7ee42add3768ed30f",
                "MacAddress": "",
                "IPv4Address": "",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]

//connect to container
[divine@localhost ~]$ docker exec -i -t httpd_server_5 /bin/bash

//as you can see all host machine interfaces are mapped to container
root@localhost:/# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 2c:27:d7:1c:88:b4 brd ff:ff:ff:ff:ff:ff
    inet 167.254.211.167/23 brd 167.254.211.255 scope global enp1s0
       valid_lft forever preferred_lft forever
    inet6 fe80::2e27:d7ff:fe1c:88b4/64 scope link
       valid_lft forever preferred_lft forever
3: enp0s29f7u1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:50:b6:19:41:65 brd ff:ff:ff:ff:ff:ff
    inet 10.10.10.10/24 brd 10.10.10.255 scope global enp0s29f7u1
       valid_lft forever preferred_lft forever
    inet 192.168.1.2/30 brd 192.168.1.3 scope global enp0s29f7u1
       valid_lft forever preferred_lft forever
    inet6 2101:db8:0:1::100/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::250:b6ff:fe19:4165/64 scope link
       valid_lft forever preferred_lft forever
4: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:85:ce:f9:8d brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:85ff:fece:f98d/64 scope link
       valid_lft forever preferred_lft forever
62: veth7a81e08@if61: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
    link/ether 7e:fe:e9:f2:cd:9b brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::7cfe:e9ff:fef2:cd9b/64 scope link
       valid_lft forever preferred_lft forever
64: vetha5bf765@if63: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
    link/ether 8a:d8:1a:a0:03:52 brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet6 fe80::88d8:1aff:fea0:352/64 scope link
       valid_lft forever preferred_lft forever
69: br-ea08d7f8b489: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:07:28:7d:d5 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.1/24 scope global br-ea08d7f8b489
       valid_lft forever preferred_lft forever
    inet6 fe80::42:7ff:fe28:7dd5/64 scope link
       valid_lft forever preferred_lft forever
71: vethf7c7b4b@if70: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-ea08d7f8b489 state UP group default
    link/ether 7a:0d:0a:b2:7f:af brd ff:ff:ff:ff:ff:ff link-netnsid 2
    inet6 fe80::780d:aff:feb2:7faf/64 scope link
       valid_lft forever preferred_lft forever
73: veth1f04827@if72: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-ea08d7f8b489 state UP group default
    link/ether 56:08:e4:bf:94:fd brd ff:ff:ff:ff:ff:ff link-netnsid 3
    inet6 fe80::5408:e4ff:febf:94fd/64 scope link
       valid_lft forever preferred_lft forever

Here is the difference: on the host network there is no container port mapping. The PORTS column shows no mapping, unlike bridge mode.

//as you can see there is no port mapping for container in host network
[divine@localhost ~]$  docker ps -f name=httpd_server_5
CONTAINER ID        IMAGE                 COMMAND                  CREATED             STATUS              PORTS               NAMES
4ea9600641e1        ubuntu-httpd-server   "apachectl -D FOREGRO"   44 minutes ago      Up 44 minutes                           httpd_server_5
[divine@localhost ~]$

[divine@localhost ~]$ curl localhost:80
Welcome to Apache2 Web server inside Docker
[divine@localhost ~]$

Below is a test to check the process and port on the host machine when a container is created on the host network. Note: the container runs an Apache2 server bound to port 80.

//As you can see port 80 in LISTEN mode when container created in host network
[divine@localhost ~]$ netstat -pan | grep :80
(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
tcp6       0      0 :::80                   :::*                    LISTEN      -

[divine@localhost ~]$ docker ps
CONTAINER ID        IMAGE                 COMMAND                  CREATED             STATUS              PORTS                   NAMES
4ea9600641e1        ubuntu-httpd-server   "apachectl -D FOREGRO"   50 minutes ago      Up 50 minutes                               httpd_server_5

//delete container
[divine@localhost ~]$ docker stop httpd_server_5
httpd_server_5
[divine@localhost ~]$ docker rm httpd_server_5
httpd_server_5

[divine@localhost ~]$ docker ps
CONTAINER ID        IMAGE                 COMMAND                  CREATED             STATUS              PORTS                   NAMES

//no process listening on port 80
[divine@localhost ~]$ netstat -pan | grep :80
(No info could be read for "-p": geteuid()=1002 but you should be root.)


//httpd server failed as expected
[divine@localhost ~]$ curl localhost:80
curl: (7) Failed connect to localhost:80; Connection refused
[divine@localhost ~]$

Let's test the limitation of the host network. Since the host and the container share the same network namespace, two processes cannot listen on the same port, one in the container and one on the host. In bridge mode this is avoided by mapping the container port to a different port on the host.

Let's try to start an httpd process on the host machine while the container is running. This will fail because port 80 is already taken by the container's httpd server.

//install httpd on host
[divine@localhost ~]$sudo yum install httpd

//port 80 already taken by container httpd server
[divine@localhost ~]$ netstat -pan | grep :80
(No info could be read for "-p": geteuid()=1002 but you should be root.)
tcp6       0      0 :::80                   :::*                    LISTEN      -


//start httpd server. It failed to start
[divine@localhost ~]$ service httpd start
Redirecting to /bin/systemctl start  httpd.service
==== AUTHENTICATING FOR org.freedesktop.systemd1.manage-units ===
Authentication is required to manage system services or units.
Authenticating as: root
Password:
==== AUTHENTICATION COMPLETE ===
Job for httpd.service failed because the control process exited with error code. See "systemctl status httpd.service" and "journalctl -xe" for details.
[divine@localhost ~]$

//stop container
[divine@localhost ~]$ docker stop httpd_server_5
httpd_server_5

//port no longer in use
[divine@localhost ~]$ netstat -pan | grep :80
(No info could be read for "-p": geteuid()=1002 but you should be root.)


//start httpd service. service started as port 80 is available
[divine@localhost ~]$ service httpd start
Redirecting to /bin/systemctl start  httpd.service
==== AUTHENTICATING FOR org.freedesktop.systemd1.manage-units ===
Authentication is required to manage system services or units.
Authenticating as: root
Password:
==== AUTHENTICATION COMPLETE ===

[divine@localhost ~]$ netstat -pan | grep 80
(No info could be read for "-p": geteuid()=1002 but you should be root.)
tcp6       0      0 :::80                   :::*                    LISTEN      -

None network

A Docker container instantiated with --network=none gets no network interface and no port mapping; it is up to you how you want to set up networking (one option is sketched below).
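
If you later want to give such a container network connectivity, one option (a hedged example using the bridge created earlier in this lab) is to attach it to an existing network:

[divine@localhost ~]$ docker network connect divine_bridge httpd_server_8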

//instantiate a container with --network=none
[divine@localhost ~]$ docker run -it -d -P --network=none --name=httpd_server_8 ubuntu-httpd-server
750a28adfb416139c450f75affc6237c34a7447728d8233e2294e27e3227f24e

//no port mapping
[divine@localhost ~]$ docker ps
CONTAINER ID        IMAGE                 COMMAND                  CREATED             STATUS              PORTS               NAMES
750a28adfb41        ubuntu-httpd-server   "apachectl -D FOREGRO"   9 seconds ago       Up 7 seconds                            httpd_server_8
[divine@localhost ~]$ docker exec -it httpd_server_8 /bin/bash

//No IP interfaces created apart from loopback
root@750a28adfb41:/# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
root@750a28adfb41:/#

 

 

Lab-33: Docker container

What is Docker

Docker is a framework to create and maintain containers. Docker's main intent is to let developers build, test and deploy applications more quickly. Docker provides process-level isolation, application portability and consistency. Under the hood it uses Linux kernel features such as namespaces to create containers.

Docker terminology
Docker daemon - the Docker process running on the host machine
Docker client - the CLI interface
Docker image - a Docker image is a snapshot of a Docker container
Docker container - a Docker container is a running instance of a Docker image
Dockerfile - a Dockerfile contains instructions to build a new image

A Docker container image is created in layers. For example, to create a container image for an Apache web server application on Ubuntu, you first load the Ubuntu base image (base images are available from Docker Hub), then add the Apache2 server on top of it, then create the web page, and so on. This completes the image; you then instantiate containers from it. You can save the container image and share it with others. The layers of any image can be listed as shown below.
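
For example, docker history lists an image's layers; this is a hedged illustration using the image built later in this lab (output omitted):

[divine@localhost ~]$ docker history ubuntu-httpd-server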

Docker container vs VM

A VM contains a complete OS plus the application. A VM is heavyweight (due to the full OS) and slow to start. It is processor intensive and needs big machines (large RAM and many processors) to run, but it provides full isolation from the host machine.

A container shares the kernel with the host machine. A Docker container doesn't contain a complete OS, which makes it lightweight and fast to start. You can start hundreds of containers on one machine. A container typically starts in seconds compared to minutes for a VM.

Below is a pictorial comparison of a VM and a Docker container.

[figure: docker1 - VM vs Docker container comparison]

You can find a spirited discussion on this topic here

Because Docker containers are executed by the Docker engine (as opposed to a hypervisor), they are not fully isolated. However, the trade off is a small footprint: unlike VMware, Docker does not create an entire virtual operating system— instead, all required components not already running on the host machine are packaged up inside the container with the application. Since the host kernel is shared amongst Docker containers, applications only ship with what they need to run—no more, no less.
This makes Docker applications easier and more lightweight to deploy and faster to start up than virtual machines.

 

Prerequisite

Install Docker. As per the Docker site there are two methods to install Docker; I am using the installation script. My host machine runs CentOS 7.2.

Login as a user with sudo permission
$su - divine

$ uname -r   //check kernel version it should be 3.10 or higher
3.10.0-229.el7.x86_64

$ sudo yum update

run Docker installation script
$ curl -fsSL https://get.docker.com/ | sh

Logout and log back in
$logout
$su - divine

Enable the service
$ sudo systemctl enable docker.service
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service

Start Docker daemon
$ sudo systemctl start docker

check if Docker is running
$docker -v
Docker version 1.12.6, build 78d1802

Create Docker group
$ sudo groupadd docker

Add user to Docker group
$sudo usermod -aG docker $(whoami)

Logout and log back in
$logout
$su - divine

Procedure

In this procedure I will create a Docker container running an httpd server. I will be using Ubuntu for the container.

  1. Create a folder for httpd
    • $mkdir -p docker/httpd
    • $cd docker/httpd
  2. Pull the Ubuntu image from Docker Hub. This command pulls the Ubuntu image from Docker Hub, creates a container from the base image, runs it and provides a bash prompt.
    • $docker run -i -t ubuntu bash
    • $docker search ubuntu  //you can search Docker hub for a image
  3. Install Apache2 httpd server and start it inside the container
    • $apt-get update
    • $apt-get install apache2
    • $service apache2 start
  4. Install some needed utilities
    • $apt-get install vim
    • $apt-get install curl
  5. Update Apache2 index.html file
    • $cd /var/www/html
    • $rm index.html
    • $vi index.html  //Add a line “Welcome to Apache2 Web server inside Docker container”
  6. Test Apache2 server inside container
    • $curl localhost:80  //you should get this output “Welcome to Apache2 Web server inside Docker container”
  7. Exit from bash shell
    • $exit

8. Create the image. Specify container id from the prompt in step 2 ([root@0ee9a9324a51 /]#) or get it by running command ‘docker ps -a’

//create image with name 'ubuntu-httpd-server'
$docker commit 0ee9a9324a51 ubuntu-httpd-server
sha256:7ba87e636f0ecadbdb5b45a8e72ad5268be68a2d22f839532ca60aefe486e157

//Check image
$ docker images
REPOSITORY            TAG       IMAGE ID       CREATED          SIZE
ubuntu-httpd-server   latest    7ba87e636f0e   11 minutes ago   318.6 MB
ubuntu                latest    104bec311bcd   4 weeks ago      128.9 MB

9. Now we have a working image for our container. Let's create a container and test our Apache2 httpd server

[divine@localhost ~]$ docker images
REPOSITORY            TAG       IMAGE ID       CREATED          SIZE
ubuntu-httpd-server   latest    7ba87e636f0e   11 minutes ago   318.6 MB
ubuntu                latest    104bec311bcd   4 weeks ago      128.9 MB

//Run the image by mapping the httpd server port (80) to 8080 on the host. At the same
//time start the Apache2 server
[divine@localhost ~]$ docker run -p 8080:80 -d ubuntu-httpd-server apachectl -D FOREGROUND
7afd126c01d1e11e9efa4a6dd905e6519ca3a65bb8044565fe568ffee0f3815d

//Check container
[divine@localhost ~]$ docker ps
CONTAINER ID   IMAGE                 COMMAND                  CREATED              STATUS              PORTS                  NAMES
7afd126c01d1   ubuntu-httpd-server   "apachectl -D FOREGRO"   About a minute ago   Up About a minute   0.0.0.0:8080->80/tcp   clever_hoover

//Test Apache2 httpd server
[divine@localhost ~]$ curl localhost:8080
Welcome to Apache2 Web server inside Docker container

Remove Docker container and images

//Check Docker container state
[divine@localhost ~]$ docker ps -a
CONTAINER ID   IMAGE                 COMMAND                  CREATED          STATUS          PORTS                  NAMES
1b0013c96090   ubuntu-httpd-server   "apachectl -D FOREGRO"   11 minutes ago   Up 11 minutes   0.0.0.0:8080->80/tcp   nostalgic_knuth

//Stop container
[divine@localhost ~]$ docker stop 1b0013c96090
1b0013c96090

//Remove container
[divine@localhost ~]$ docker rm 1b0013c96090
1b0013c96090

[divine@localhost ~]$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
[divine@localhost ~]$

[divine@localhost ~]$ docker images
REPOSITORY            TAG       IMAGE ID       CREATED        SIZE
ubuntu-httpd-server   latest    7ba87e636f0e   16 hours ago   318.6 MB
ubuntu                latest    104bec311bcd   4 weeks ago    128.9 MB

//Remove Docker image
[divine@localhost ~]$ docker rmi ubuntu-httpd-server
Untagged: ubuntu-httpd-server:latest
Deleted: sha256:7ba87e636f0ecadbdb5b45a8e72ad5268be68a2d22f839532ca60aefe486e157
Deleted: sha256:91784775b2afecffca200e451be31ac2bd4cb1d1bcef06425f889df25fe51b18

[divine@localhost ~]$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
ubuntu latest 104bec311bcd 4 weeks ago 128.9 MB
[divine@localhost ~]$
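
Note that ‘docker rm’ refuses to remove a running container; stopping it first (as above) is the clean way, but the -f flag combines both steps. A hedged shortcut, using the container ID from this example:

//Force-remove a running container (stop + remove in one step)
$ docker rm -f 1b0013c96090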

Building a Docker image using a Dockerfile

In this exercise I will create an image using a Dockerfile. A Dockerfile is a script containing Docker instructions. The advantage of building an image with a Dockerfile is that you can share the Dockerfile with others so they can build the same image.

Create the file below under docker/httpd and name it Dockerfile

FROM ubuntu
MAINTAINER Divine life email:divinelife@lifedivine.net

# Update the image with the latest packages (recommended)
RUN apt-get update -y

# Install Apache2 Web Server
RUN apt-get --assume-yes install apache2

# Copy index.html file from host to Docker
ADD index.html /var/www/html/

# Expose httpd server port to host
EXPOSE 80

ENTRYPOINT [ "apachectl" ]
CMD [ "-D", "FOREGROUND" ]

Dockerfile Terminology

FROM: Base image. It can be an image provided on Docker Hub or an image you have built yourself
MAINTAINER: Owner of the image
RUN: Command executed inside the container while the image is being built
ADD: Copy a file from the host into the container
CMD: Default arguments (or command) used when a container is started from the image; see the example after this list
EXPOSE: Expose a container port number to the host
ENTRYPOINT: Command that is executed when the container starts
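
The interplay between ENTRYPOINT and CMD is worth a quick illustration. With the Dockerfile above, CMD only supplies default arguments to ENTRYPOINT, so anything placed after the image name on the ‘docker run’ command line replaces CMD but not ENTRYPOINT. A small sketch, assuming the ubuntu-httpd-server image built later in this section:

//Default: runs 'apachectl -D FOREGROUND'
$ docker run -d -P ubuntu-httpd-server

//Arguments after the image name override CMD, so this runs 'apachectl -v' and exits
$ docker run --rm ubuntu-httpd-server -v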

Create the web server index.html file under docker/httpd. Add the line below to the index.html file

Welcome to Apache2 Web server inside Docker
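
You can create the file with any editor; a quick way from the shell is the sketch below (run in the same directory as the Dockerfile):

$ cd ~/docker/httpd
$ echo "Welcome to Apache2 Web server inside Docker" > index.html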

Build an image using the Dockerfile

[divine@localhost httpd]$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
<none> <none> 8e94f242f5a6 5 minutes ago 267.6 MB
ubuntu latest 104bec311bcd 4 weeks ago 128.9 MB

[divine@localhost httpd]$ pwd
/home/divine/docker/httpd

[divine@localhost httpd]$ ls
Dockerfile index.html

[divine@localhost httpd]$ cat Dockerfile
FROM ubuntu
MAINTAINER Divine life email:divinelife@lifedivine.net

# Update the image with the latest packages (recommended)
RUN apt-get update -y

# Install Apache2 Web Server
RUN apt-get --assume-yes install apache2

# Copy index.html file from host to Docker
ADD index.html /var/www/html/

# Expose httpd server port to host
EXPOSE 80

ENTRYPOINT [ "apachectl" ]
CMD [ "-D", "FOREGROUND" ]

[divine@localhost httpd]$ cat index.html
Welcome to Apache2 Web server inside Docker

//Build Docker image using Dockerfile
[divine@localhost httpd]$ docker build -t ubuntu-httpd-server .
Sending build context to Docker daemon 3.072 kB
Step 1 : FROM ubuntu
 ---> 104bec311bcd
Step 2 : MAINTAINER Divine life email:divinelife@lifedivine.net
 ---> Using cache
 ---> 70b0078a89e9
Step 3 : RUN apt-get update -y
 ---> Using cache
 ---> fb70d91a3aa8
Step 4 : RUN apt-get --assume-yes install apache2
 ---> Using cache
 ---> 8e94f242f5a6
Step 5 : ADD index.html /var/www/html/
 ---> 348b5b76d1ed
Removing intermediate container 8030b8db503b
Step 6 : EXPOSE 80
 ---> Running in 2db0d68aff48
 ---> dc3213733304
Removing intermediate container 2db0d68aff48
Step 7 : ENTRYPOINT apachectl
 ---> Running in 011d64324f77
 ---> c8ef3ef6ecae
Removing intermediate container 011d64324f77
Step 8 : CMD -D FOREGROUND
 ---> Running in 9d0f4579ecb7
 ---> 1b11311232d3
Removing intermediate container 9d0f4579ecb7
Successfully built 1b11311232d3

[divine@localhost httpd]$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
ubuntu-httpd-server latest 1b11311232d3 34 seconds ago 267.6 MB
ubuntu latest 104bec311bcd 4 weeks ago 128.9 MB
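
A quick aside on tags: the build above produces ubuntu-httpd-server:latest because no tag was specified. If you want versioned images, you can either build with an explicit tag or add one afterwards; the v1 tag below is just an example:

//Build with an explicit tag
$ docker build -t ubuntu-httpd-server:v1 .

//Or add a tag to an existing image
$ docker tag ubuntu-httpd-server:latest ubuntu-httpd-server:v1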

Let’s create a container from the newly created image and test the httpd server

[divine@localhost httpd]$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

//Start the container. The -P flag publishes the exposed port to a dynamically chosen host port
[divine@localhost httpd]$ docker run -it -d -P ubuntu-httpd-server
d7f7719fcc4f357ea9a5f390e7f4f336ecb5608fa07f4975f7a9958c4950094b

[divine@localhost httpd]$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d7f7719fcc4f ubuntu-httpd-server "apachectl -D FOREGRO" 8 seconds ago Up 5 seconds 0.0.0.0:32768->80/tcp focused_payne


//This command shows the port mapping for the exposed port. Here container port 80 is mapped to port 32768 on the host
[divine@localhost ~]$ docker port d7f7719fcc4f
80/tcp -> 0.0.0.0:32768

//Test httpd web server
[divine@localhost httpd]$ curl localhost:32768
Welcome to Apache2 Web server inside Docker
[divine@localhost httpd]$
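
‘docker port’ can also take the container port as an argument if you only care about one mapping, which is handy in scripts. A small sketch with the same container:

//Show only the mapping for container port 80 (should print 0.0.0.0:32768 in this example)
$ docker port d7f7719fcc4f 80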

Image sharing

A Docker image can be shared with others. Save the image as a tar file using the ‘docker save’ command; the receiver can then load it using the ‘docker load’ command

//Save image 
[divine@localhost httpd]$ docker save -o httpd-server.tar ubuntu-httpd-server
[divine@localhost httpd]$ ls
Dockerfile httpd-server.tar index.html
[divine@localhost httpd]$

//Copy the tar file to the receiving host, then load it into the local Docker with
[divine@localhost httpd]$ docker load -i httpd-server.tar
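
Since ‘docker save’ can also write to stdout and ‘docker load’ can read from stdin, the archive can be compressed in transit, which helps with large images. A minimal sketch:

//Save and compress
$ docker save ubuntu-httpd-server | gzip > httpd-server.tar.gz

//Load on the receiving host
$ gunzip -c httpd-server.tar.gz | docker load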

Inspect Docker container

You can inspect your container using the ‘docker inspect’ command.

[divine@localhost ~]$ docker inspect d7f7719fcc4f
[
 {
 "Id": "d7f7719fcc4f357ea9a5f390e7f4f336ecb5608fa07f4975f7a9958c4950094b",
 "Created": "2017-01-17T16:18:30.758448801Z",
 "Path": "apachectl",
 "Args": [
 "-D",
 "FOREGROUND"
 ],
 "State": {
 "Status": "running",
 "Running": true,
 "Paused": false,
 "Restarting": false,
 "OOMKilled": false,
 "Dead": false,
 "Pid": 28133,
 "ExitCode": 0,
 "Error": "",
 "StartedAt": "2017-01-17T16:18:32.458155991Z",
 "FinishedAt": "0001-01-01T00:00:00Z"
 },
 "Image": "sha256:1b11311232d3b2f1e1b39d4652e3d3f8d41c730d1422fc9857f0a87a8897400e",
 "ResolvConfPath": "/var/lib/docker/containers/d7f7719fcc4f357ea9a5f390e7f4f336ecb5608fa07f4975f7a9958c4950094b/resolv.conf",
 "HostnamePath": "/var/lib/docker/containers/d7f7719fcc4f357ea9a5f390e7f4f336ecb5608fa07f4975f7a9958c4950094b/hostname",
 "HostsPath": "/var/lib/docker/containers/d7f7719fcc4f357ea9a5f390e7f4f336ecb5608fa07f4975f7a9958c4950094b/hosts",
 "LogPath": "/var/lib/docker/containers/d7f7719fcc4f357ea9a5f390e7f4f336ecb5608fa07f4975f7a9958c4950094b/d7f7719fcc4f357ea9a5f390e7f4f336ecb5608fa07f4975f7a9958c4950094b-json.log",
 "Name": "/focused_payne",
 "RestartCount": 0,
 "Driver": "devicemapper",
 "MountLabel": "",
 "ProcessLabel": "",
 "AppArmorProfile": "",
 "ExecIDs": null,
 "HostConfig": {
 "Binds": null,
 "ContainerIDFile": "",
 "LogConfig": {
 "Type": "json-file",
 "Config": {}
 },
 "NetworkMode": "default",
 "PortBindings": {},
 "RestartPolicy": {
 "Name": "no",
 "MaximumRetryCount": 0
 },
 "AutoRemove": false,
 "VolumeDriver": "",
 "VolumesFrom": null,
 "CapAdd": null,
 "CapDrop": null,
 "Dns": [],
 "DnsOptions": [],
 "DnsSearch": [],
 "ExtraHosts": null,
 "GroupAdd": null,
 "IpcMode": "",
 "Cgroup": "",
 "Links": null,
 "OomScoreAdj": 0,
 "PidMode": "",
 "Privileged": false,
 "PublishAllPorts": true,
 "ReadonlyRootfs": false,
 "SecurityOpt": null,
 "UTSMode": "",
 "UsernsMode": "",
 "ShmSize": 67108864,
 "Runtime": "runc",
 "ConsoleSize": [
 0,
 0
 ],
 "Isolation": "",
 "CpuShares": 0,
 "Memory": 0,
 "CgroupParent": "",
 "BlkioWeight": 0,
 "BlkioWeightDevice": null,
 "BlkioDeviceReadBps": null,
 "BlkioDeviceWriteBps": null,
 "BlkioDeviceReadIOps": null,
 "BlkioDeviceWriteIOps": null,
 "CpuPeriod": 0,
 "CpuQuota": 0,
 "CpusetCpus": "",
 "CpusetMems": "",
 "Devices": [],
 "DiskQuota": 0,
 "KernelMemory": 0,
 "MemoryReservation": 0,
 "MemorySwap": 0,
 "MemorySwappiness": -1,
 "OomKillDisable": false,
 "PidsLimit": 0,
 "Ulimits": null,
 "CpuCount": 0,
 "CpuPercent": 0,
 "IOMaximumIOps": 0,
 "IOMaximumBandwidth": 0
 },
 "GraphDriver": {
 "Name": "devicemapper",
 "Data": {
 "DeviceId": "84",
 "DeviceName": "docker-253:1-201327333-97b5d29cf22f37da51d47194d0b019138480e4565e743d6bc7cf141f3e2f9326",
 "DeviceSize": "10737418240"
 }
 },
 "Mounts": [],
 "Config": {
 "Hostname": "d7f7719fcc4f",
 "Domainname": "",
 "User": "",
 "AttachStdin": false,
 "AttachStdout": false,
 "AttachStderr": false,
 "ExposedPorts": {
 "80/tcp": {}
 },
 "Tty": true,
 "OpenStdin": true,
 "StdinOnce": false,
 "Env": [
 "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
 ],
 "Cmd": [
 "-D",
 "FOREGROUND"
 ],
 "Image": "ubuntu-httpd-server",
 "Volumes": null,
 "WorkingDir": "",
 "Entrypoint": [
 "apachectl"
 ],
 "OnBuild": null,
 "Labels": {}
 },
 "NetworkSettings": {
 "Bridge": "",
 "SandboxID": "5cae0653b9608d5d78cba2c87bbe863dc741a127756b0e1091e5e21969d99dce",
 "HairpinMode": false,
 "LinkLocalIPv6Address": "",
 "LinkLocalIPv6PrefixLen": 0,
 "Ports": {
 "80/tcp": [
 {
 "HostIp": "0.0.0.0",
 "HostPort": "32768"
 }
 ]
 },
 "SandboxKey": "/var/run/docker/netns/5cae0653b960",
 "SecondaryIPAddresses": null,
 "SecondaryIPv6Addresses": null,
 "EndpointID": "4f173ec6498b8c2f7833f5b25cd3c97e50f616223747108bbfcfa9ab4f96f769",
 "Gateway": "172.17.0.1",
 "GlobalIPv6Address": "",
 "GlobalIPv6PrefixLen": 0,
 "IPAddress": "172.17.0.2",
 "IPPrefixLen": 16,
 "IPv6Gateway": "",
 "MacAddress": "02:42:ac:11:00:02",
 "Networks": {
 "bridge": {
 "IPAMConfig": null,
 "Links": null,
 "Aliases": null,
 "NetworkID": "1eb9249c028b10f76476f6a2e92852e6f87a2e50a7e89926f0a096713a44a945",
 "EndpointID": "4f173ec6498b8c2f7833f5b25cd3c97e50f616223747108bbfcfa9ab4f96f769",
 "Gateway": "172.17.0.1",
 "IPAddress": "172.17.0.2",
 "IPPrefixLen": 16,
 "IPv6Gateway": "",
 "GlobalIPv6Address": "",
 "GlobalIPv6PrefixLen": 0,
 "MacAddress": "02:42:ac:11:00:02"
 }
 }
 }
 }
]
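
The full JSON above is long; in practice you often want just one field. ‘docker inspect’ accepts a Go-template format string for that. A couple of hedged examples, using the same container ID:

//Container IP address on the bridge network
$ docker inspect --format '{{ .NetworkSettings.IPAddress }}' d7f7719fcc4f

//Current state (running, exited, ...)
$ docker inspect --format '{{ .State.Status }}' d7f7719fcc4f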