Writing tests for Docker images

Sergej Brazdeikis

Why bother writing tests?

For a Docker image, I can just follow the software installation steps in the Dockerfile and everyone is happy. Right?

Not really... Just remember when you were building your Dockerfile: did every build always work? I'm pretty sure the answer is no. A build can fail for many reasons. And it definitely will.

At the moment, the vast majority of images on Docker Hub have no tests. That means they are either tested manually before each push or not tested at all.

Manual testing relies on humans. Humans are lazy and, over time, tend to forget things. Let the computer do what it does well.

What to test for Docker images

We want to automate testing of the container itself, not the software inside it. The software has its own test suite, so it is already covered.

It makes sense to cover the container's interactions with the host machine and with other containers.

The areas to cover:

  • Container interaction
    • Network: is the socket available from outside the container?
    • File system: file owners and permissions
    • Core software functionality

Covering these areas protects against possible faults:

  • Breaking changes from a newer Docker version.
  • Breaking changes from a new version of the software itself.
  • Breaking key functionality while modifying the Dockerfile.

Tips for writing tests

Next, some tips and examples covering the topics mentioned above.

Use BASH scripts for tests

Tests are executed directly from the shell; there is no wrapper framework, which would only complicate running Docker.

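A test run is then just a direct shell invocation, and the exit code tells you whether the image is fine. For example (the script and image names below are placeholders):

```bash
# Hypothetical invocation of a test script like the one shown at the end of this article
./runTests.sh myorg/my-image:latest && echo "Tests passed" || echo "Tests failed"
```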

Fail early

Bail out of the script as soon as something goes wrong. There is no need to execute further.

```bash
#!/bin/bash
set -eo pipefail
```
  • -e - if a line in the script fails, the entire script fails
  • -o pipefail - if any command in a pipeline fails, for example curl -s http://awesome-blog.jevsejev.io/ | grep foo, the script fails instantly

Enable BASH debug mode

When debugging, it is enough to run the script with the DEBUG environment variable set: DEBUG=true ./runTests.sh. You will see the actual commands being executed with all variables expanded.

[ "$DEBUG" ] && set -x

Let your script execute from anywhere

Set the current working directory to the directory of the script.

cd "$(dirname "$0")"

Check if the image exists

Pretty self-explanatory snippet.

```bash
if ! docker inspect "$dockerImage" &> /dev/null; then
    echo $'\timage does not exist!'
    false
fi
```

Here false stops the script (thanks to set -e).

Check if the socket is available

```bash
# Create an instance of the container-under-test
cid="$(docker run -d "$dockerImage")"
# Remove the container afterwards
trap "docker rm -vf $cid > /dev/null" EXIT

docker exec -it $cid [ -S $socketFile ] || (echo "Socket $socketFile is not found" && exit 1)
```

In this case, your image needs to support CLI commands. If your Dockerfile entrypoint is an application, read below.

Use run.sh script

Use a shell script to run the container in different modes. For instance, a MySQL container can run as a server or in command-line mode. You can pass a variable and use the same container in different ways. It is really handy to access the container shell for debugging or testing purposes.

```bash
#!/bin/bash

COMMAND="/bin/YOUR_APP $@"

if [ -n "$CLI" ] && [ "$CLI" = true ]; then
    COMMAND="$@"
fi

exec $COMMAND
```

Add script to your Dockerfile:

```dockerfile
ADD run.sh /run.sh
ENTRYPOINT ["/run.sh"]
```

It will launch /bin/YOUR_APP with all parameters forwarded to the application. For CLI mode, do:

docker run -it --rm -e="CLI=true" YOUR_IMAGE

Check if file exists

Use this pattern to check whether a file exists:

docker run -it --rm -e="CLI=true" -v $PWD:/src YOUR_IMAGE SOME_COMMAND_PRODUCING_FILE [ -e $EXPECTED_FILE ] || (echo "Expected file does not exist" && exit 1)

Integrate with CI

Let CI build the image from scratch, run the tests, and push it for you. It is a real time saver when you just want to push and forget.
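The exact configuration depends on your CI system, but the job usually boils down to steps like these (the image name, registry, and script name are placeholders):

```bash
#!/bin/bash
# Sketch of the commands a CI job could run, assuming a runTests.sh like the one in this article
set -eo pipefail

dockerImage="myorg/my-image:latest"   # hypothetical image name

docker build -t "$dockerImage" .      # build from scratch
./runTests.sh "$dockerImage"          # run the tests
docker push "$dockerImage"            # push only if the tests passed
```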

Example

#!/bin/bash set -eo pipefail [ "$DEBUG" ] && set -x # set current working directory to the directory of the script cd "$(dirname "$0")" dockerImage=$1 echo "Testing $dockerImage..." if ! docker inspect "$dockerImage" &> /dev/null; then echo $'\timage does not exist!' false fi # testing volume owners TEST_SSH_KEY_USER=`docker run -it -v $PWD/ssh:/home/1000/.ssh:ro $dockerImage stat -c "%U" /home/1000/.ssh/id_rsa` TEST_SYSTEM_USER=`docker run -it $dockerImage whoami` [ $TEST_SSH_KEY_USER = $TEST_SYSTEM_USER ] || (echo "System user does not match volume owner. It is critical for SSH keys" && exit 1)

The full test example is located here.

Result

Having these basic tests makes you confident that:

  • bumping the software version will still produce a working build
  • when a new Docker version comes out, you can check that the image still works
  • image development is faster: just run runTests.sh and it will tell you if everything is fine
  • it builds trust in your image among the community.

These are the usual benefits you get from writing tests, just as in any other area of software development.

Happy coding!
