Docker UCP backup
The command below takes an encrypted backup of UCP and stores it in the /tmp directory.
Note your UCP version. My UCP version is 3.0.5; substitute your own <ucp-instance-id>.
To find the UCP instance ID, run the command “docker info” on a manager node and note the ClusterID value.
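If you want just the ID, it can be extracted directly with a Go template; run on a manager node, this should print only the ClusterID:

docker info --format '{{.Swarm.Cluster.ID}}'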
Note: Don’t stop the Docker service while taking the backup.
docker container run \
--log-driver none --rm \
--interactive \
--name ucp \
-v /var/run/docker.sock:/var/run/docker.sock \
docker/ucp:3.0.5 backup \
--id <ucp-instance-id> \
--passphrase "secret" > /tmp/backup.tar
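Because of the passphrase, the backup is written as a PGP-encrypted tar stream. Assuming gpg is installed on the host, you should be able to sanity-check the archive by listing its contents (enter the passphrase when prompted):

gpg --decrypt /tmp/backup.tar | tar --list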
You only need to take the backup on a single UCP manager node: UCP stores the same data on every manager, so periodic backups of one manager are sufficient.
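As a sketch of scheduling such periodic backups (the weekly timing, file naming, and passphrase here are assumptions, not part of the procedure above), a crontab entry on the manager could look like:

0 2 * * 0 docker container run --rm --log-driver none --name ucp -v /var/run/docker.sock:/var/run/docker.sock docker/ucp:3.0.5 backup --id <ucp-instance-id> --passphrase "secret" > /tmp/backup-$(date +\%F).tar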
For Docker EE 17.06 or higher, if the Docker engine has SELinux enabled, you need to include --security-opt label=disable in the docker command:
docker container run --security-opt label=disable --log-driver none --rm -i --name ucp \
-v /var/run/docker.sock:/var/run/docker.sock \
docker/ucp:3.0.5 backup --interactive > /tmp/backup.tar
To find out whether SELinux is enabled in the engine, view the host’s /etc/docker/daemon.json file and look for the setting "selinux-enabled": true.
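A quick check on the host (assuming the option is set in daemon.json rather than on the dockerd command line):

grep selinux-enabled /etc/docker/daemon.json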
Restore Docker UCP
First uninstall UCP from the swarm with the uninstall-ucp command, then run the restore command below.
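A typical uninstall invocation for this UCP version (in interactive mode it prompts before removing UCP):

docker container run --rm -it \
--name ucp \
-v /var/run/docker.sock:/var/run/docker.sock \
docker/ucp:3.0.5 uninstall-ucp --interactive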
Since we created a passphrase while taking the backup, we have to provide the same passphrase during the restore.
The backup location in my case is /tmp.
docker container run --rm -i --name ucp \
-v /var/run/docker.sock:/var/run/docker.sock \
docker/ucp:3.0.5 restore --passphrase "secret" < /tmp/backup.tar
Note: if the swarm you restore into is different from the one where the backup was taken, new certificates are generated for UCP access, so the existing client bundle will stop working and a new client bundle has to be created. UCP restores only the admin settings and configuration.
Docker Swarm backup
Note: since you need to stop the Docker engine on a manager node, the swarm needs additional managers to stay healthy while this one is down (with only two managers, stopping one loses quorum, so three or more is safer). Take the backup on a manager node, but not on the leader: stopping the leader triggers a new election and can cause downtime.
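To see which manager is the current leader, list the nodes and check the MANAGER STATUS column:

docker node ls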
Stop the Docker Engine on the manager
systemctl stop docker
Back up the entire Swarm folder
tar cvzf "/tmp/swarmbackup.tar" /var/lib/docker/swarm/
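Optionally, verify the archive before moving on by listing its contents:

tar -tzf /tmp/swarmbackup.tar | head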
Restart the Docker Engine
systemctl start docker
Restore Docker Swarm
Note: Make sure to restore on a node with the same IP address as the node where you took the backup, and make sure to use the same Docker engine version.
Stop the Docker Engine
systemctl stop docker
Remove the contents of the swarm folder /var/lib/docker/swarm
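Assuming the default Docker data root, one way to clear the folder:

rm -rf /var/lib/docker/swarm/*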
Restore by extracting the backup contents to the swarm location. tar stripped the leading / when the archive was created, so extract relative to the filesystem root:
tar -zxvf /tmp/swarmbackup.tar -C /
Restart the Docker Engine
systemctl start docker
Re-initialize the Swarm; the --force-new-cluster flag rebuilds a single-manager cluster from the restored state.
docker swarm init --force-new-cluster
Now add the manager nodes back to the cluster, using the join token as shown below.
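On the restored manager, print the join token for managers and run the printed docker swarm join command on each node to be added:

docker swarm join-token manager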