If you work as a software test engineer, you are well aware of the struggles of test automation: testing against many different browsers, installing test environments from scratch, incompatibilities between Selenium, browser drivers, and browser versions, and so on. Testing environments are hard to replicate, tedious to install from scratch, and full of browser dependencies that are easy to forget.
Luckily, with emerging technologies based on containers like Docker, many struggles in development and operations have been eradicated. But, did you ever wonder how Docker can contribute to the field of test automation?
This is the question we posed to ourselves in order to eliminate the problems described above, and this article (as well as the next one) presents how we did it.
Our initial problem
We started with the following requirements:
- We want an isolated environment based on the specific technology used for running tests (for example, an isolated Ruby or Java environment with the tools needed for running tests, such as RSpec or TestNG, preinstalled).
- This environment should be easy to install and temporary: it exists only while tests are running and is torn down afterwards.
- We should be able to test against different browser versions and change those versions very easily.
- Finally, we wanted to integrate this solution with our company product for test reporting called Owl (you can read more about Owl here).
A very big help in this effort came from here, the Selenium project's dockerized Selenium environments (more precisely, you can run standalone Firefox or Chrome browsers with the appropriate driver in Docker, or you can utilize Selenium Grid and run the Hub and specific browsers in a dockerized environment). We thought that the Selenium Grid approach was very interesting and could provide us the most benefits, so we started to build our solution around it.
For the purposes of this POC, we used our test automation suite for the Atlantbh homepage, which is implemented in Ruby/RSpec/Capybara, but the solution was built so it can be used with any Ruby/RSpec related project. In this article, we will stick to the Atlantbh homepage project, which is accessible on the GitHub repo.
Our approach
Here is what it should look like:
- Deploy Selenium Grid by running Selenium Hub and then separate nodes for Chrome/Firefox combinations. These nodes would be connected to the Selenium Grid
- Create a Docker image which would contain everything necessary for running tests (rvm, ruby)
- Run that image as a Docker container and copy over the content of our tests
- Run tests against Selenium Grid which would, by reading capabilities, know which browser is the target (a sketch of this wiring is shown after this list)
- After the tests have been executed, the temporary container in which they ran would be destroyed since it is no longer needed
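To make the capabilities part concrete, here is a minimal sketch of how a Capybara driver can be registered against the Grid. The file name (spec_helper.rb) and the BROWSER environment variable are illustrative assumptions, not necessarily how the actual project is wired:

# spec_helper.rb (sketch): register a Capybara driver that talks to Selenium Grid
require 'capybara/rspec'
require 'selenium-webdriver'

Capybara.register_driver :selenium_grid do |app|
  # BROWSER is a hypothetical env var; the Grid routes the session to a
  # matching node based on the requested browser capability
  browser = (ENV['BROWSER'] || 'chrome').to_sym
  Capybara::Selenium::Driver.new(
    app,
    browser: :remote,
    url: "http://#{ENV['SELENIUM_GRID_URL']}/wd/hub",
    desired_capabilities: Selenium::WebDriver::Remote::Capabilities.send(browser)
  )
end

Capybara.default_driver = :selenium_grid

With a registration like this, switching the target browser is just a matter of changing an environment variable; the Grid does the rest.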
We thought this approach fulfilled most of our requirements from above, and it would give each test engineer a very easy setup: all s/he needs is the repo cloned and Docker installed.
The first thing to do here was to create a Dockerfile for our rvm/ruby environment, which we could use to build a new container where tests would be executed. This is how it looks:
FROM ubuntu:xenial
LABEL maintainer="[email protected]"

# Defaults
ARG RUBY_VERSION="2.3.3"

# Install rvm
RUN gpg --keyserver hkp://keys.gnupg.net --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3 7D2BAF1CF37B13E2069D6956105BD0E739499BDB
RUN apt-get update && apt-get install -y \
    curl \
    git \
    libpq-dev
RUN \curl -sSL https://get.rvm.io | bash -s stable

# Install ruby version
RUN /bin/bash -l -c "rvm install ${RUBY_VERSION}"
RUN /bin/bash -l -c "gem install bundler --no-rdoc --no-ri"
RUN /bin/bash -l -c "source /etc/profile.d/rvm.sh"

# Copy test scripts
RUN mkdir /tests
COPY . /tests
RUN cd /tests && chmod +x spec/* && /bin/bash -l -c "bundle install"

# Set working directory and pass tests that you want to execute
WORKDIR /tests
ENTRYPOINT ["/bin/bash", "-l", "-c"]
CMD ["bundle exec rspec spec/${TESTS_TO_RUN}"]
Even if you are not that familiar with the structure of a Dockerfile, this one should be fairly easy to follow. In a nutshell, here is an explanation by section:
# Defaults – we can pass which version of ruby we want to install in rvm
# Install RVM – as the name suggests, this section installs rvm
# Install ruby version – we install the Ruby version in rvm, along with the bundler gem
# Copy test scripts – we create the /tests directory, copy our tests from the local machine into the container, and run bundle install. The advantage of running bundle install here is that it is executed only when the Docker image is built, so we don’t need to run it again every time we create a container to execute tests. This saves a considerable amount of time in the container run phase. (We also assume that Gemfile.lock is already up to date; the user will check out the repository and then execute the tests located in it.)
# Set working directory and pass tests that you want to execute – as the name suggests, we set the working directory in the container to /tests and execute tests from that location (tests were already copied to this directory in the previous step). An important thing to note is the TESTS_TO_RUN variable, which should be populated when we run the container. Here we can pass exactly which tests we want to execute, so we are not limited to running the complete suite only. This works the same way as test filtering in RSpec: we can provide * to execute everything, or a substring or the complete name of a script to execute part of the test suite or one specific test script.
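For illustration, here is how a few TESTS_TO_RUN values would expand inside the container (home_spec.rb is a hypothetical script name; substitute one from your own suite):

TESTS_TO_RUN="*"            ->  bundle exec rspec spec/*            (whole suite)
TESTS_TO_RUN="home*"        ->  bundle exec rspec spec/home*        (all matching scripts)
TESTS_TO_RUN="home_spec.rb" ->  bundle exec rspec spec/home_spec.rb (one specific script)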
How it all works together
Now that we know how our Dockerfile works under the hood, it is time to put all the pieces together into a deployable workflow:
- Run Selenium Hub:
docker run -d -P -p 4444:4444 -e GRID_BROWSER_TIMEOUT=60 --name selenium-hub selenium/hub
This command will download the selenium/hub image from Docker Hub, run the container, and expose port 4444 so it is accessible from the outside (this is needed since our tests will communicate with the Selenium Hub through that port).
We also set GRID_BROWSER_TIMEOUT since it is 0 by default, and we don’t want our tests to fail because of potential timeout issues on the Selenium Hub. To make sure the Selenium Hub is running, you can access http://<SELENIUM_HUB_ADDRESS>:4444 in your browser.
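If you prefer the command line, Selenium Grid 3 also exposes a small status API that you can query instead; it returns JSON describing the Hub configuration:

curl http://<SELENIUM_HUB_ADDRESS>:4444/grid/api/hub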
- Run Nodes:
docker run -d -P --link selenium-hub:hub selenium/node-firefox-debug:3.4.0-einsteinium
docker run -d -P --link selenium-hub:hub selenium/node-chrome-debug:3.4.0-einsteinium
These two commands will run separate containers for the Firefox and Chrome browsers with the appropriate drivers. Both nodes will be connected to the previously started Selenium Hub. Also, notice that we use the selenium/node-firefox-debug and selenium/node-chrome-debug Docker images. We don’t have to use the “debug” images, but they expose one interesting detail: a VNC server. By using these images, we can connect to the two nodes via any VNC client and watch the execution of tests live. Inside the container, VNC runs on port 5900. To access the VNC servers from the outside, you need to know which ports are exposed outside of the container. To find out, you can use the following command:
docker ps -a
CONTAINER ID   IMAGE                                           COMMAND                  CREATED         STATUS         PORTS                     NAMES
adc7496479e8   selenium/node-chrome-debug:3.4.0-einsteinium    "/opt/bin/entry_po..."   8 minutes ago   Up 8 minutes   0.0.0.0:32769->5900/tcp   fervent_hypatia
ddea6a689c6d   selenium/node-firefox-debug:3.4.0-einsteinium   "/opt/bin/entry_po..."   8 minutes ago   Up 8 minutes   0.0.0.0:32768->5900/tcp   zen_goodall
You can see that port 32768 is exposed for the firefox node, while port 32769 is exposed for the chrome node. Use your VNC client (Mac users can use the Screen Sharing tool for this purpose) to watch live execution on these nodes.
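For example, on a Mac you can open a VNC session to the chrome node straight from the terminal, assuming the nodes run on the same host as the Hub (as far as we know, the default VNC password for these debug images is "secret"):

open vnc://<SELENIUM_HUB_ADDRESS>:32769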
You can also verify that the chrome and firefox nodes are attached to the Selenium Hub in your browser: http://<SELENIUM_HUB_ADDRESS>:4444/grid/console
- Create image from Dockerfile:
~/abhhomepage-automation$ docker build -t atlantbh/ruby .
This command assumes that you are located in your project’s root directory and that it contains the Dockerfile (for this purpose, we will use the abhhomepage-automation project). It can take a couple of minutes to build the Docker image locally, and it will be available under the name atlantbh/ruby (any other name can be provided as well).
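Since the Dockerfile declares RUBY_VERSION as a build argument, you can also build the image with a different Ruby version without editing the file (2.4.1 below is just an illustrative version):

~/abhhomepage-automation$ docker build --build-arg RUBY_VERSION="2.4.1" -t atlantbh/ruby .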
- Run container from the newly created image:
Now that we have the Docker image ready, the only thing left to do is run your test suite (or part of it) in a dockerized environment. You can do that using the following command:
docker run -e SELENIUM_GRID_URL="${SELENIUM_HUB_ADDRESS}:4444" -e TESTS_TO_RUN="*" -v /home/ubuntu/abhhomepage-automation:/tests atlantbh/ruby
This command is pretty much self-explanatory. It runs a temporary Docker container with the SELENIUM_GRID_URL environment variable set, an environment variable specifying which tests to execute, and a mounted volume which makes sure that the content of /home/ubuntu/abhhomepage-automation is copied over to /tests (it overwrites the /tests content we had when building the Docker image; the purpose of this is that a change in your test scripts does not require rebuilding the image, you can just run the Docker container again and it will pick up the changes). The last argument is the image name (atlantbh/ruby).
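Since the container is meant to be temporary anyway, one small variation is worth mentioning: adding Docker's --rm flag removes the container automatically once the test run finishes, so there is nothing left to clean up:

docker run --rm -e SELENIUM_GRID_URL="${SELENIUM_HUB_ADDRESS}:4444" -e TESTS_TO_RUN="*" -v /home/ubuntu/abhhomepage-automation:/tests atlantbh/ruby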
Conclusion and next steps
I hope this short introduction to the containerized world in the context of test automation helps you realize all the benefits this approach can bring. Using containers in software development is becoming more and more mainstream, and tests are no exception. The benefits of containerized microservices can easily be seen in the setup of testing workflows. Here are some of the problems we solved with this approach:
- We can easily share our tests and test environment with each other, with minimal or zero configuration needed before running tests (what’s more, we can hand tests over to developers when they want to execute them against their local development environments).
- We don’t have to worry about compatibility issues between the various versions of Selenium and browser drivers. It is very easy to change versions and test them out. Previously, it was very time consuming to install browsers, browser drivers, and the corresponding Selenium webdriver gems and make sure all of them work together.
- You can set up a cluster of nodes for a specific browser and run your tests against different browser versions to find out which ones your application supports. Previously, it was nearly impossible to maintain multiple browser versions, browser drivers, and Selenium webdriver gems on one machine. It was compatibility/dependency hell.
- Very easy setup in CI environments
- Last but not least, there is zero footprint on the environment where you ran this setup. When you are finished with your work, just stop/remove the containers and all of this setup is removed as well, so you can always start from scratch (see the example command after this list).
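For illustration, with the containers from this walkthrough the cleanup boils down to a single command (selenium-hub is the name we assigned earlier; the node names are the auto-generated ones from the docker ps output above, so yours will differ):

docker rm -f selenium-hub fervent_hypatia zen_goodall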
Of course, when we went in this direction, more and more ideas arose and the next one seemed natural: How can we fit all this setup into one configuration file which would be executed with one command, and all of this will be up and running along with tests being executed?
Enter: Docker Compose. But it doesn’t stop there. Owl (our in-house test reporting tool) also has support for Docker and Docker Compose, so we wanted to create an integrated solution using Docker Compose which would give us the ability to easily configure and manage the complete end-to-end test environment (from tests all the way up to the reports).
Click here for Part 2 of this blog.