Deploying Docker images via SSH
A post about writing a bash script to build and deploy multiple Docker images to a remote server
Background
When we started Dockerizing this blog for good, I began to look for ways to automate the build and deployment process. It turned out that although Docker is an excellent container platform for running applications, there is no standard way to push updated images to a server. Luckily, with a moderately complex shell script it is doable, and the whole process can be fully automated.
Our current architecture consists of several dependent Docker images and a single Linux box as the production server. Luckily we have no dynamic data, but the deploy script could easily be extended to handle that too; in that case we would use the data-only container approach, sketched below.
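As a rough sketch of that approach (not part of the deployment script; the volume path, container names and image name below are made up for the example), a data-only container holds a volume which the application container then mounts with --volumes-from:
# Hypothetical data-only container holding an anonymous volume at /var/www/data
docker create -v /var/www/data --name blog_data busybox
# An application container can then mount the same volume with --volumes-from
docker run -d --volumes-from blog_data --name blog_apache some_apache_image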
So, let's build our deployment script!
The script
The basic idea of the deploy script is simple: build the images, upload them to the server, then restart the containers with the new versions. These are the building blocks; we just need a few tricks to glue them together so the whole process runs without manual intervention.
For the sake of example, let's say we have an apache app in the apache/ subdirectory and a monitoring app residing in monitoring/.
Setting up
Let's name our script deploy.sh and add some bootstrapping:
#!/bin/bash
set -e
REMOTE_USERNAME="..."
REMOTE_HOST="..."
IMAGE_REPOSITORY="my_repository"
The last variable is the repository name for the Docker images. It is important to set one, as we will rely on it for the up-to-date check later.
Building
The first thing to do is to build the images. There is nothing interesting here, just a standard docker build:
function build_image {
    docker build -t $IMAGE_REPOSITORY:$1 $2
}
build_image apache apache/
build_image monitoring monitoring/
It builds the image into the predefined repository and tags it so we can refer to it easily later.
Uploading to the remote server
Fortunately Docker can save an image to standard output and load one from standard input, which makes piping possible and turns the whole upload into a one-liner. bzip2 compresses the stream to save bandwidth, and pv displays the upload progress:
docker save $IMAGE_REPOSITORY:$1 | bzip2 | pv | ssh $REMOTE_USERNAME@$REMOTE_HOST 'bunzip2 | docker load'
What we should add is a check whether the image actually changed before uploading it. Since Docker images tend to weigh several hundred megabytes, this can save a lot of bandwidth, especially when there are several images and only a few of them change between deploys. The idea is to list the images for the given repository and extract the image ID, do the same on the remote machine, and if the two IDs match, the images are identical and there is nothing to upload.
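To see why the grep/cut pipeline below works, it helps to look at what docker images prints for a repository (the values here are purely illustrative):
$ docker images my_repository
REPOSITORY      TAG          IMAGE ID       CREATED        VIRTUAL SIZE
my_repository   apache       1a2b3c4d5e6f   2 hours ago    245.1 MB
my_repository   monitoring   9f8e7d6c5b4a   2 hours ago    310.4 MB
Filtering the output with grep for the tag and squeezing the whitespace with tr -s ' ' leaves the image ID in the third space-separated column, which is exactly what cut -d ' ' -f 3 picks out.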
The finished part looks like this:
function upload_image_if_needed {
    if [[ $(ssh $REMOTE_USERNAME@$REMOTE_HOST "docker images $IMAGE_REPOSITORY | grep $1 | tr -s ' ' | cut -d ' ' -f 3") != $(docker images $IMAGE_REPOSITORY | grep $1 | tr -s ' ' | cut -d ' ' -f 3) ]]
    then
        echo "$1 image changed, updating..."
        docker save $IMAGE_REPOSITORY:$1 | bzip2 | pv | ssh $REMOTE_USERNAME@$REMOTE_HOST 'bunzip2 | docker load'
    else
        echo "$1 image did not change"
    fi
}
upload_image_if_needed apache
upload_image_if_needed monitoring
Updating the containers
The last step is to restart the containers using the new images. The good thing is that we can embed remote bash commands in our deploy script and treat them just like the local ones:
ssh -tt $REMOTE_USERNAME@$REMOTE_HOST << EOF
...
exit
EOF
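One detail worth noting: since the here-document delimiter (EOF) is unquoted, variables such as ${IMAGE_REPOSITORY} are expanded on the local machine before the commands travel over SSH, which is exactly what the snippets below rely on. A minimal sketch of the difference:
# Unquoted delimiter: the local shell expands $IMAGE_REPOSITORY,
# so the remote side receives the literal repository name
ssh -tt $REMOTE_USERNAME@$REMOTE_HOST << EOF
echo $IMAGE_REPOSITORY
exit
EOF
# Quoted delimiter: the text is sent verbatim and the remote shell
# tries to expand $IMAGE_REPOSITORY itself (most likely to an empty string)
ssh -tt $REMOTE_USERNAME@$REMOTE_HOST << 'EOF'
echo $IMAGE_REPOSITORY
exit
EOF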
The first thing is to kill the current containers if they exist:
docker rm -f ${IMAGE_REPOSITORY}_apache || true
docker rm -f ${IMAGE_REPOSITORY}_monitoring || true
The || true is needed because docker rm returns an error if the container does not exist. Since we just want the containers gone, doing nothing when they did not exist in the first place is perfectly fine.
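If you would rather not rely on || true, one possible alternative (just a sketch, using docker inspect as an existence check) is to remove the container only when it is actually there:
# Remove the container only if it exists; docker inspect exits with a
# non-zero status when no such container is found
if docker inspect ${IMAGE_REPOSITORY}_apache > /dev/null 2>&1; then
    docker rm -f ${IMAGE_REPOSITORY}_apache
fi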
The second step is to start the containers in the standard way:
docker run -d --name ${IMAGE_REPOSITORY}_apache $IMAGE_REPOSITORY:apache
docker run -d --name ${IMAGE_REPOSITORY}_monitoring $IMAGE_REPOSITORY:monitoring
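In practice you will probably want a few extra flags here; for instance, the apache container presumably needs its port published and a restart policy so it survives reboots. The port mapping below is only an assumption about the app, and the final script sticks to the bare commands:
# Hypothetical variation: publish port 80 and restart the container automatically
docker run -d --restart=always -p 80:80 --name ${IMAGE_REPOSITORY}_apache $IMAGE_REPOSITORY:apache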
The final script
For a better overview, here is the complete script:
#!/bin/bash
set -e
REMOTE_USERNAME="..."
REMOTE_HOST="..."
IMAGE_REPOSITORY="my_repository"
function upload_image_if_needed {
    if [[ $(ssh $REMOTE_USERNAME@$REMOTE_HOST "docker images $IMAGE_REPOSITORY | grep $1 | tr -s ' ' | cut -d ' ' -f 3") != $(docker images $IMAGE_REPOSITORY | grep $1 | tr -s ' ' | cut -d ' ' -f 3) ]]
    then
        echo "$1 image changed, updating..."
        docker save $IMAGE_REPOSITORY:$1 | bzip2 | pv | ssh $REMOTE_USERNAME@$REMOTE_HOST 'bunzip2 | docker load'
    else
        echo "$1 image did not change"
    fi
}
function build_image {
    docker build -t $IMAGE_REPOSITORY:$1 $2
}
build_image apache apache/
build_image monitoring monitoring/
upload_image_if_needed apache
upload_image_if_needed monitoring
ssh -tt $REMOTE_USERNAME@$REMOTE_HOST << EOF
docker rm -f ${IMAGE_REPOSITORY}_apache || true
docker rm -f ${IMAGE_REPOSITORY}_monitoring || true
docker run -d --name ${IMAGE_REPOSITORY}_apache $IMAGE_REPOSITORY:apache
docker run -d --name ${IMAGE_REPOSITORY}_monitoring $IMAGE_REPOSITORY:monitoring
exit
EOF
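Assuming key-based SSH authentication is set up for $REMOTE_USERNAME on $REMOTE_HOST (otherwise the script will stop and ask for a password several times), deploying boils down to running the script:
chmod +x deploy.sh
./deploy.sh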
Conclusion
Docker is a fascinating container technology that lets us build applications that run anywhere, and it comes with good CLI support. What it currently lacks is built-in tooling to make deployment easy, but with some scripting magic we can work around that. I hope the script above gives some insight, and possibly a ready-made solution, to people facing the same problem.