BlogQA/Test Automation

Dockerize your tests and test environment (Part 1)

If you work as a software test engineer, you are well aware of the struggles of test automation: testing against different browsers, installing test environments from scratch, incompatibilities between Selenium, browser drivers and browser versions, and so on. Testing environments are hard to replicate, tedious to install from scratch and full of browser dependencies to remember.

Luckily, with emerging container technologies like Docker, many struggles in development and operations have been eliminated. But did you ever wonder how Docker can contribute to the field of test automation?

This is the question we posed to ourselves in order to eliminate the problems described above, and this article (as well as the next one) will present how we did it.

Our initial problem

We started with the following requirements:

  1. We want an isolated environment based on the specific technology used for running tests (for example, an isolated Ruby or Java environment with preinstalled tools needed for running tests, like RSpec or TestNG).
  2. This environment should be easy to install and temporary: it exists only while tests are running and is torn down afterwards.
  3. We should be able to test against different browser versions and change those versions very easily.
  4. Finally, we wanted to integrate this solution with Owl, our company product for test reporting (you can read more about Owl here and here).

A very big help in this effort came from https://github.com/SeleniumHQ/docker-selenium, the official Selenium project for dockerized Selenium environments. More precisely, you can run standalone Firefox or Chrome browsers with the appropriate driver in Docker, or you can utilise Selenium Grid and run the Hub and specific browsers in a dockerized environment. We thought the Selenium Grid approach was very interesting and could provide us the most benefits, so we started building our solution around it.

For the purposes of this POC, we used our test automation suite for the Atlantbh homepage, which is implemented in Ruby/RSpec/Capybara, but the solution was built so it can be used with any Ruby/RSpec project. For the purposes of this article, we will stick to the Atlantbh homepage project, which is available on GitHub: https://github.com/ATLANTBH/abhhomepage-automation

Our approach

Here is what it should look like:

  1. Deploy Selenium Grid by running Selenium Hub and then separate nodes for Chrome/Firefox combinations. These nodes would be connected to Selenium Grid
  2. Create a Docker image which contains everything necessary for running tests (rvm, ruby)
  3. Run that image as a Docker container and copy over the content of our tests
  4. Run tests against Selenium Grid which would, by reading capabilities, know which browser is the target
  5. After tests have been executed, the temporary container where they ran would be destroyed since it is no longer needed

We thought this approach fulfilled most of our requirements from above, and it gives each test engineer a very easy setup: all s/he needs is the repo cloned and Docker installed.

The first thing to do here was to create a Dockerfile for our rvm/ruby environment, which we can use to build a new container where tests will be executed. This is how it looks:
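The exact file is in the repo; a minimal sketch of such a Dockerfile, matching the sections explained below, could look like this (the base image, key server and default Ruby version here are illustrative assumptions):

```dockerfile
FROM ubuntu:16.04

# Defaults - which ruby version to install in rvm (can be overridden at build time)
ARG RUBY_VERSION=2.4.1

# Install RVM
RUN apt-get update && apt-get install -y curl gnupg2 && \
    gpg2 --keyserver hkp://pool.sks-keyservers.net --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3 && \
    curl -sSL https://get.rvm.io | bash -s stable

# Install ruby version - install ruby in rvm, along with the bundler gem
RUN /bin/bash -lc "rvm install ${RUBY_VERSION} && rvm use ${RUBY_VERSION} --default && gem install bundler"

# Copy test scripts - bundle install runs only once, at image build time
RUN mkdir -p /tests
COPY . /tests
RUN /bin/bash -lc "cd /tests && bundle install"

# Set working directory and pass tests that you want to execute
WORKDIR /tests
ENV TESTS_TO_RUN="spec"
CMD /bin/bash -lc "bundle exec rspec ${TESTS_TO_RUN}"
```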

Even if you are not familiar with the structure of a Dockerfile, this one should look fairly easy. In a nutshell, here is an explanation by section:

# Defaults – we can pass which version of ruby we want to install in rvm

# Install RVM – as the name suggests, this section installs rvm

# Install ruby version – we install the ruby version in rvm, along with the bundler gem

# Copy test scripts – we create the /tests directory, copy our tests from the local machine into the container and run bundle install. The advantage of running bundle install here is that it is executed only when the Docker image is built, so we don’t need to run it again every time we create a container to execute tests. This saves a considerable amount of time in the container run phase. We also assume that Gemfile.lock is already up to date: the user will check out the repository and then execute the tests located in it.

# Set working directory and pass tests that you want to execute – as the name suggests, we set the working directory in the container to /tests and execute tests from that location (tests have already been copied to this directory in the previous step). An important thing to note is the TESTS_TO_RUN variable, which should be populated when we run the container. Here we can pass exactly which tests we want to execute, so we are not limited to running the complete suite only. This works the same way test filtering works in RSpec: we can provide * to execute everything, or a substring or the complete name of a script to execute part of the test suite or one specific test script.

How it all works together

Now that we know how our Dockerfile works under the hood, it is time to put all the pieces together into a deployable workflow:

  1. Run Selenium Hub:
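The command we used was along these lines (the container name and timeout value are illustrative; the selenium/hub image reads GRID_BROWSER_TIMEOUT from the environment):

```shell
docker run -d -p 4444:4444 -e GRID_BROWSER_TIMEOUT=60000 --name selenium-hub selenium/hub
```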

This command will download the selenium/hub image from Docker Hub, run the container and expose port 4444 so it is accessible from the outside (this is needed since our tests will communicate with Selenium Hub through that port).

We also set GRID_BROWSER_TIMEOUT since it is 0 by default, and we don’t want our tests to fail because of potential timeout issues on Selenium Hub. To make sure Selenium Hub is running, you can open http://<SELENIUM_HUB_ADDRESS>:4444 in your browser.

  2. Run Nodes:
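The two node commands looked roughly like this (container names are illustrative; -P publishes the containers' exposed ports, including the VNC port, to random host ports):

```shell
docker run -d -P --link selenium-hub:hub --name node-firefox selenium/node-firefox-debug
docker run -d -P --link selenium-hub:hub --name node-chrome selenium/node-chrome-debug
```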

These two commands run separate containers for the Firefox and Chrome browsers with the appropriate drivers. The two nodes connect to the previously started Selenium Hub. Also, notice that we use the selenium/node-firefox-debug and selenium/node-chrome-debug Docker images. We don’t need to use the “debug” images, but they expose one interesting detail: a VNC server. By using these images, we can connect to the two nodes with any VNC client and watch the execution of tests live. Inside the container, VNC runs on port 5900. To access the VNC servers from the outside, you need to know which ports are exposed outside of the container. To find that, you can use the following command:

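A plain container listing is enough here; the PORTS column shows mappings such as 0.0.0.0:32768->5900/tcp:

```shell
docker ps
```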

You can see that port 32768 is exposed for the Firefox node, while port 32769 is exposed for the Chrome node. Use your VNC client (Mac users can use the Screen Sharing tool for this purpose) to watch live execution on these nodes.

You can also verify that the Chrome and Firefox nodes are attached to Selenium Hub by opening http://<SELENIUM_HUB_ADDRESS>:4444/grid/console in your browser.

  3. Create image from Dockerfile:
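Assuming the Dockerfile sits in the current directory, the build command is simply:

```shell
docker build -t atlantbh/ruby .
```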

This command assumes that you are located in your project’s root directory and that it contains the Dockerfile (for this purpose, we will use the abhhomepage-automation project). It can take a couple of minutes to build the Docker image locally, and it will be available under the name atlantbh/ruby (any other name can be used as well).

  4. Run container from the newly created image:

Now that we have the Docker image ready, the only thing left to do is to run your test suite (or part of it) in a dockerized environment. You can do that using the following command:
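Such a run command could look like this (the grid address, host path and RSpec filter value are placeholders to adapt to your setup):

```shell
docker run --rm \
  -e SELENIUM_GRID_URL="http://<SELENIUM_HUB_ADDRESS>:4444/wd/hub" \
  -e TESTS_TO_RUN="*" \
  -v /home/ubuntu/abhhomepage-automation:/tests \
  atlantbh/ruby
```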

This command is pretty much self-explanatory. It runs a temporary Docker container with: an environment variable for SELENIUM_GRID_URL (this variable is picked up in the tests; for more info see https://github.com/ATLANTBH/abhhomepage-automation/blob/master/setup_browser.rb), an environment variable specifying the tests you want to execute, a mounted volume which makes sure that the content of /home/ubuntu/abhhomepage-automation is copied over to /tests (it overwrites the /tests content we had when building the Docker image; the purpose of this is that a change in your test scripts does not require re-building the image, you can just run the Docker container again and it will pick up the changes), and the image name (atlantbh/ruby).

Conclusion and next steps

I hope this short introduction to the containerized world in the context of test automation helps you realize all the benefits this approach can give you. Using containers in software development is becoming more and more mainstream, and tests are no exception. The benefits of containerized microservices can easily be seen in the setup of testing workflows. Here are some of the problems we solved with this approach:

  1. We can easily share our tests and test environment with each other, with minimal or zero configuration needed before running tests (even better, we can hand tests to developers when they want to execute them against their local development environments).
  2. We don’t have to worry about compatibility issues between various Selenium and browser driver versions. It is very easy to change the versions and test them out. Previously, it was very time consuming to install browsers, browser drivers and the corresponding Selenium WebDriver gems and make sure all of them work together.
  3. You can set up a cluster of nodes for a specific browser and run your tests against different browser versions to know which ones your tested application supports. Previously, it was nearly impossible to maintain multiple browser versions, browser drivers and Selenium WebDriver gems on one machine. It was compatibility/dependency hell.
  4. Very easy setup in CI environments.
  5. Last but not least, there is zero footprint on the environment where you run this setup. When you are finished with your work, just stop/remove the containers and all of the setup is removed as well, so you can always start from scratch.

Of course, once we went in this direction, more and more ideas arose, and the next one seemed natural: how can we fit this whole setup into one configuration file, executed with one command, that brings everything up and runs the tests?

Enter: Docker Compose. But it doesn’t stop there. Owl (our in-house test reporting tool) also supports Docker and Docker Compose, so we wanted to create an integrated solution using Docker Compose which gives us the ability to easily configure and manage a complete end-to-end test environment (from tests all the way to reports).

Stay tuned on our blog for these improvements in our next article.