Docker advanced container orchestration tutorial

Docker Swarm advanced container orchestration concepts and Docker UCP course

Docker: Basic to Advanced

Docker Basics

What is an Image?

An image is a collection of layers. It is made up of file system changes and metadata.

docker history nginx:latest

The above command will provide the layer history of nginx.


A Dockerfile is a text file that has a series of instructions on how to build your image

Below is an example of a Dockerfile.

FROM ubuntu
MAINTAINER RAMD([email protected])
RUN apt-get update
RUN apt-get install -y nginx
ENTRYPOINT ["/usr/sbin/nginx", "-g", "daemon off;"]
EXPOSE 80

Together these instructions build an nginx server image for us.

FROM is for the base image.

A RUN command is used to execute any commands. In this case we are running a package update and then installing nginx.

The ENTRYPOINT then runs the nginx executable in the foreground.

The EXPOSE instruction informs Docker what port the container will listen on.

Docker Basic Commands

docker container run --publish 80:80 --name nginxdemo nginx

The above command:

  1. downloads the nginx image from Docker Hub (if not already cached)
  2. starts a new container from that image
  3. opens port 80 on the host IP
  4. routes that traffic to port 80 on the container IP

nginxdemo is the name of the container.

Note: you will get a bind error if the left-side (host) port is already used by another service on your machine.

If you get the bind error, you can specify a different host port, as seen in the below command.

docker container run --publish 8088:80 --name nginxdemo nginx

Now if you type localhost:8088 in your web browser, the nginx welcome page will be displayed.

So what really happens when we run the container run command?

  1. Docker loads the image from the local cache and, if not found, downloads it from the remote repository.
  2. It creates a new container and gives it a virtual IP on a private network inside the Docker engine.
  3. It opens port 8088 on the host and forwards the traffic to container port 80.

The below command will list the logs of the nginx container

docker container logs nginxdemo

The below command will list the running containers.

docker ps

The below command will list all containers(running and exited containers).

docker ps -a

The below command displays the details of container configuration

docker container inspect nginxdemo

The below command displays the ip address of container

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container_name_or_id

The below command opens a bash terminal process inside the container.

It starts an additional process inside the already running container.

docker container exec -it nginxdemo bash

If you type "exit" and then run "docker container ls", you can see that the container is still running.

You have only exited the bash process, not the container.

What is a container?

A container is a piece of software that packages up code and all its dependencies so the application runs quickly and reliably in any environment.

Container images become containers at runtime, when they are run by the Docker Engine.

Container Networking

By default, each container is connected to the private virtual network "bridge".

Traffic from this network routes out through NAT using the host IP; containers use the host IP to reach the internet.

Basic commands in docker networks

The below command provides ip address of the nginx container

docker container inspect --format '{{ .NetworkSettings.IPAddress }}' nginxdemo

Note: the IP address of the nginx container and the host address are not the same. The IP address created inside the container is a virtual IP.

The below command lists the networks.

docker network ls

The below command creates a network. In this example I have named my network "nginx_network".

docker network create nginx_network

The below command associates a newly created container to the ”nginx_network”

docker container run -d --name new_nginx --network nginx_network nginx

Check using the below command; you can see the container associated with the above network.

docker network inspect nginx_network

The below command connects the container to a network.

Replace with your network_id and your container id to associate with.

It does this by assigning to the container a new Ethernet interface (NIC) on the specified virtual network.

docker network connect (network_id) (container_id)

The below command disconnects the container to a network

docker network disconnect (network_id) (container_id)

Container DNS

DNS is important in the Docker world.

We cannot rely on a container's IP address, as it may change or go away at any time, and we cannot remember the IP address of every container.

By default, a container inherits the DNS settings of the Docker daemon. Built-in DNS resolution by container name applies only to custom networks.

What I mean by that is: if you create two containers attached to the same newly created custom network and run a ping from one container to the other by name, the ping works!

Note: the default bridge network does not have DNS built in. If you want containers to communicate by name within the default bridge network, you have to use the legacy --link option.

The below command creates a container that can be resolved by a DNS alias.

With "--net-alias alternativeName", all containers in the same network can reach this container using the alias "alternativeName". Here we run the "mysql" image:

docker container run -d --net ramnetwork --net-alias alternativeName mysql

What I mean by this is, if you run a second container with the below command

docker container run -d --net ramnetwork alpine nslookup alternativeName

nslookup will resolve the alias and return the address of the mysql container.


Below is the process for running a container

Dockerfile =[docker build]=> Docker image =[docker run]=> Docker container

To start (or run) a container you need an image. To create an image you need to build the Dockerfile


Data storage in a container is not persistent.

To have persistent storage, we use volumes in the Docker world.

A volume is a data storage location that lives outside the container.

There are two categories of volumes: named volumes and bind mounts.

Note: volumes need manual deletion (for example with "docker volume rm" or "docker volume prune"). Deleting a container does not delete its volumes.

Volume commands

The below command lists the volume

docker volume ls

Now, to give a volume a name of your choice, use the below command. These are called named volumes.

docker container run -d --name nginx -v nginx-db:/var/storage nginx

In the above command, I have created a volume named "nginx-db", and its files will be stored at "/var/storage" in the container.

The below command lists the configuration for a volume.

docker volume inspect nginx-db

In the output of the above command you will see the mount point and location.

This location is on the actual host (not in the container).

Bind Mounts

A bind mount links a host path to a container path.

The below command maps the host's present working directory to a directory in the container.

Whatever changes you make in the host working directory will be reflected in the container's directory location;

for example, if you create an html file in the host directory, you will see the change inside the container at /usr/share/nginx/html.

docker container run -d --name nginx -p 80:80 -v $(pwd):/usr/share/nginx/html nginx


Passwords and other sensitive data are stored in secrets.

Secrets can be accessed only by the containers and services they are assigned to.

There are two types of secrets in docker

1.) external secret

2.) file secret

Below is how secrets should be defined in yml file.

secrets:
  my_secret:
    file: ./secret_data
  my_external_secret:
    external: true

Configuration Breakdown

The first, "my_secret", is a file secret, for which you provide the path to the file "secret_data".

The second, "my_external_secret", is an external secret, which you create beforehand with the "docker secret create" command. It is stored encrypted in the swarm and is more secure.
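To make a secret usable, a service has to reference it. Below is a minimal sketch of how a service would be assigned the file secret from the example above; the service name "db" and the image are illustrative, and note that secrets in compose files require file format version 3.1 or later. Inside the container, the secret's contents appear at /run/secrets/my_secret.

```yaml
version: "3.1"

services:
  db:                      # illustrative service name
    image: postgres:9.4
    secrets:
      - my_secret          # mounted at /run/secrets/my_secret inside the container

secrets:
  my_secret:
    file: ./secret_data
```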

Until now we have seen the basics; now let's focus on the advanced topics.

Docker compose

Compose is a tool for defining and running multi-container Docker applications

You define your app's environment and the services that make up your application in a yml file.

If you have multiple applications and services in your environment, you can manage them all from a single YAML file with Compose.

In Docker Compose, one container talks to another using the service name as its DNS name.
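As a sketch of that built-in DNS, the minimal compose file below defines two services (the names "web" and "db" and the images are illustrative); the "web" container can reach the database simply by the hostname "db", because Compose registers each service name in DNS on the shared default network.

```yaml
version: "3"

services:
  web:
    image: nginx           # illustrative image
    # inside this container, "db" resolves to the db service's virtual IP
  db:
    image: postgres:9.4
```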

Docker compose commands

docker-compose build

docker-compose up

docker-compose down

docker-compose logs

docker-compose start

docker-compose stop

docker-compose rm

Let us see a sample docker compose file that can be used in dev environment.

Below is a voting application. Please see the attached image for the flow and explanation provided.

  1. A voting app: a Python webapp which lets you vote between two options
  2. A Redis queue which collects new votes from the voting app
  3. A worker which consumes votes and stores them in the Postgres db
  4. A Postgres database backed by a Docker volume
  5. A Node.js webapp which shows the results of the voting in real time

Below is the compose file for the Voting app

version: "3"

services:
  vote:
    build: ./vote
    command: python
    volumes:
      - ./vote:/app
    ports:
      - "5000:80"
    networks:
      - front-tier
      - back-tier
  result:
    build: ./result
    command: nodemon server.js
    volumes:
      - ./result:/app
    ports:
      - "5001:80"
      - "5858:5858"
    networks:
      - front-tier
      - back-tier
  worker:
    build:
      context: ./worker
      dockerfile: Dockerfile.j
    networks:
      - back-tier
  redis:
    image: redis:alpine
    container_name: redis
    ports: ["6379"]
    networks:
      - back-tier
  db:
    image: postgres:9.4
    container_name: db
    volumes:
      - "db-data:/var/lib/postgresql/data"
    networks:
      - back-tier

volumes:
  db-data:

networks:
  front-tier:
  back-tier:

Configuration Breakdown

The compose file format version should be "3" or above.

The "build" option builds the image from the contents of the "vote" folder.

Two networks are created: front-tier and back-tier.

You can see that bind mounts are used in the volumes of the "vote" and "result" services.

Save this yml file under any name.

Once done, run the below command to deploy the application.

docker-compose -f <path-of-yml-file> up -d

Docker swarm tutorial

Why do we need Docker Swarm instead of docker-compose?

  1. Automate container lifecycle
  2. Scale up and down the services
  3. Recreate the containers when they fail

The tasks above are not handled by "docker-compose", which is why we move to Swarm.

The below command creates the root certificate for the manager node and generates the join tokens.

docker swarm init

In swarm terminology we should use the command “docker service create” instead of “docker run”.

What is an overlay network in Swarm?

An overlay network lets containers communicate with each other across the network.

Containers hosted on multiple hosts can talk to each other as if they were on the same subnet.

The overlay network does this with the help of a routing mechanism called the routing mesh.

The below command creates a network named "mynetwork" with the overlay driver.

docker network create --driver overlay mynetwork
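The same overlay network can also be declared in a stack/compose file rather than on the command line; below is a minimal sketch (the service name "web" and its image are illustrative, the network name matches the command above):

```yaml
version: "3"

services:
  web:
    image: nginx           # illustrative service
    networks:
      - mynetwork

networks:
  mynetwork:
    driver: overlay        # spans all swarm nodes that run this service's tasks
```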

What is the routing mesh?

A virtual IP (a private IP inside the swarm network) acts as a load balancer and distributes the load across the service's tasks.

What I mean by this is: if you have an nginx service replicated across all 3 nodes,

a virtual IP created for the swarm cluster will automatically distribute the load. This happens in the background.

This is stateless load balancing, so applications that rely on session cookies will not work well with it.

To get around this you need an external load balancer.

Basic commands to update a service in swarm

First let us create a service.

docker service create -p 8080:80 --name nginxService nginx:1.13

The below command scales our services

docker service scale nginxService=5

If we need to change the exposed port of nginx, we can do it with the below command.

docker service update --publish-rm 8080 --publish-add 9090:80 nginxService

Docker stack

A stack file is a file in YAML format that defines one or more services, similar to a docker-compose.yml file for Docker Compose but with a few extensions.

In version 1.13 Docker added a new layer of abstraction to Swarm called stacks. It is used in production environments.

You need to use "docker stack deploy" rather than "docker service create".

Note: the "build:" option won't work in a stack file; only "deploy:" will work, because stacks are meant for production deployments where images are already built.

Below is an example of a stack file for the Voting application we saw before.

version: "3"

services:
  redis:
    image: redis:alpine
    ports:
      - "6379"
    networks:
      - frontend
    deploy:
      replicas: 1
      update_config:
        parallelism: 2
        delay: 10s
      restart_policy:
        condition: on-failure
  db:
    image: postgres:9.4
    volumes:
      - db-data:/var/lib/postgresql/data
    networks:
      - backend
    deploy:
      placement:
        constraints: [node.role == manager]
  vote:
    image: dockersamples/examplevotingapp_vote:before
    ports:
      - 5000:80
    networks:
      - frontend
    depends_on:
      - redis
    deploy:
      replicas: 2
      update_config:
        parallelism: 2
      restart_policy:
        condition: on-failure
  result:
    image: dockersamples/examplevotingapp_result:before
    ports:
      - 5001:80
    networks:
      - backend
    depends_on:
      - db
    deploy:
      replicas: 1
      update_config:
        parallelism: 2
        delay: 10s
      restart_policy:
        condition: on-failure
  worker:
    image: dockersamples/examplevotingapp_worker
    networks:
      - frontend
      - backend
    deploy:
      mode: replicated
      replicas: 1
      labels: [APP=VOTING]
      restart_policy:
        condition: on-failure
        delay: 10s
        max_attempts: 3
        window: 120s
      placement:
        constraints: [node.role == manager]

networks:
  frontend:
  backend:

volumes:
  db-data:

Configuration Breakdown

You can find the "deploy" key in the stack file. In it you can specify the number of replicas of the container you need.

"update_config" controls how a stack update rolls out: here it updates 2 tasks in parallel, with a delay between batches.

"restart_policy" means that when the container fails it gets restarted automatically.

You can also see the "constraints" option; this tells the container to run only on a node that has the manager role.

Deploy using the below command

docker stack deploy -c docker-stack.yml voteapp

Push Images to a private repository

If you have a private Docker registry configured and need to push images to it, below are the commands.

I have used nginx as an example, retagging "latest" as "v1".

sudo docker login <your-docker-registry>

sudo docker tag nginx:latest <your-docker-registry>/<your-repo-name>/nginx:v1

sudo docker push <your-docker-registry>/<your-repo-name>/nginx:v1

Docker UCP

UCP is an enterprise-grade cluster management solution.

In simple terms, you can manage and monitor your container cluster using a graphical UI.

It is built on top of Docker Swarm

There are two ways to Access the UCP cluster

CLI based access and Web based Access


In our case we are using the CLI to deploy and manage applications.

You just need to download and use a UCP client bundle.

A client bundle contains utility scripts and a public/private key pair that you can use to configure your Docker client tools to talk to your UCP deployment.

As seen in the below image, you can download the client bundle in the UCP UI by going to "profile" and clicking to download the client bundle.

Once you have downloaded the UCP client bundle, unzip it and place it on your Linux machine.

Run the below command from the directory containing the bundle:

eval "$(<env.sh)"

It updates environment variables such as DOCKER_HOST so that your client tools communicate with your UCP deployment.

To confirm that your client tools are now communicating with UCP, run the below command; it will show your UCP version. Now you can do a stack deploy and see it in the graphical interface.

docker version --format '{{.Server.Version}}'