This article assumes that you have already read ‘Dockerize your tests and test environment (Part 1)’, which presents our initial problems with test environments and discusses the challenges and methods of solving them with Docker. Going forward, we wanted to integrate our solution with the Owl test reporting tool, thus completing the test execution lifecycle. The other, more challenging part of the process was simplifying its use, mainly by changing how the tests are run and how the configuration files are maintained. An abstract diagram of the containerized environment that we are trying to build is shown below.

But can you really simplify running multiple Docker containers and attaching them to external networks? Can you really reduce it to a one-liner? That was the challenge before us, and this article presents our solution.

Complicated vs Complex

Our solution at that point was solid and functioning as it should, but it was cumbersome to use and required a lot of manual steps to get the tests up and running. We wanted to make it simpler: easier to use and easier to maintain. In other words, we wanted to make the solution less complicated, even if that made it internally more complex. This was our first goal, and it went hand in hand with our second: integrating this solution with our company's product for test reporting, Owl (you can read more about Owl here).

To help us achieve these goals, we used Docker Compose. Compose is a tool for defining and running Docker setups made up of multiple interacting containers. To use Docker Compose, we need a single configuration file: docker-compose.yml. In this file, you define which services (containers) make up your setup and how they behave and interact. These services can use any public Docker image, as well as your own images defined in local Dockerfiles. After defining the configuration file, the only thing you have to do is run it: Compose starts all of your containers, taking care of the dependencies between them and the order in which they come up. Tearing the setup down is just as simple, and we will talk more about that later.

Configuring Docker Compose

There are a few key elements that our docker-compose.yml file needs to handle:

  • running a Selenium Grid that consists of a hub and Firefox and Chrome nodes
  • opening ports on the Selenium Grid nodes for VNC access
  • running the test execution in a Docker container
  • resolving the startup timing between the containers listed above
  • connecting to the Owl reporting tool

The structure of the Compose file containing these elements is shown below:

version: "3"
services:
  selenium-hub:
    image: selenium/hub:3.4.0-einsteinium
    container_name: selenium-hub
    ports:
      - "4444:4444"
    environment:
      - GRID_BROWSER_TIMEOUT=60
  chrome:
    image: selenium/node-chrome-debug:3.0.1-aluminum
    ports:
      - "5900:5900"
    depends_on:
      - selenium-hub
    environment:
      - HUB_PORT_4444_TCP_ADDR=selenium-hub
      - HUB_PORT_4444_TCP_PORT=4444
    volumes:
      - /dev/shm:/dev/shm
  firefox:
    image: selenium/node-firefox-debug:3.4.0-einsteinium
    ports:
      - "5901:5900"
    depends_on:
      - selenium-hub
    environment:
      - HUB_PORT_4444_TCP_ADDR=selenium-hub
      - HUB_PORT_4444_TCP_PORT=4444
    volumes:
      - /dev/shm:/dev/shm
  abhtests:
    image: atlantbh/ruby
    build: .
    depends_on:
      - firefox
      - chrome
    environment:
      - SELENIUM_GRID_URL=selenium-hub:4444
      - DOCKER_COMPOSE_WAIT=30
      - TESTS_TO_RUN=${TESTS_TO_RUN}
    volumes:
      - .:/tests
networks:
  default:
    external:
      name: ${OWL_NETWORK}

The Docker Compose file itself is a YAML file structured in the form of a services object, whose child nodes define the particular services with all of their parameters and configurable options. Besides that, there is a version parameter, which always sits at the root of the document and determines how the document is parsed, the minimum version of the Docker Engine needed to run the Compose file, and which features (such as networking options) are available. At the end of the file there is a networks object, which will be explained later.

Now that we understand the basic structure of the Compose file, we can break down the services that are defined:

  • selenium-hub service defines the Selenium Grid hub container. As with every service, the image used for the container is defined at the beginning. After that, we define the container name and the port(s) that we want to map from the container to the host. Lastly, we define any environment variables that are needed. In this particular case, that is GRID_BROWSER_TIMEOUT, which gives the hub 60 seconds to return a response to the test once a command is sent for execution. Without it, tests would crash any time a response from the hub was not immediate.
  • chrome and firefox services share the same layout, which has a few interesting parameters. The first of them is depends_on, which lists the services that must be started before this service attempts to run; for both firefox and chrome, that is the selenium-hub service. The port that is exposed is used for VNC connections to the container, which makes test debugging a lot easier. The environment variables the nodes need to operate are the address of the hub and the port used to connect to it. Closing the node service configuration is the volumes parameter, which in this case mounts the host's /dev/shm folder into the container, increasing the shared memory that processes such as the browser can use.
  • abhtests service defines the container that executes the tests themselves. The image used for this container is explained in ‘Dockerize your tests and test environment (Part 1)’; in the Compose file we added a few more options to adjust test execution: the build parameter points to the directory in which the Dockerfile is located, and the image parameter names the image built from it. In the environment object, besides the address of the Selenium hub, we specify a period that the test container waits before running the tests. This is needed so that the tests wait for the hub service to be ready, not merely running, since depends_on only guarantees start order. Finally, we specify which tests we want to run and mount the folder where the tests are located (sketches of how these variables might be consumed are shown after this list).
  • If the tests are executed on the same node where the Owl containers are running, then integrating Owl into this solution is only a matter of naming the Docker network that the Owl containers use, so that our docker-compose containers can join it and pass on the test results.
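
To make the timing and test-selection variables more concrete, here is a minimal sketch of what the test container's entrypoint could look like. This is not the actual entrypoint of the atlantbh/ruby image; the bundler invocation and the spec/ path are assumptions for illustration:

#!/bin/bash
# Hypothetical entrypoint: give the hub and nodes time to become ready,
# not just started, before the tests attempt to connect
sleep "${DOCKER_COMPOSE_WAIT:-30}"

# Run a single test if TESTS_TO_RUN is set, otherwise run the whole suite
if [ -n "$TESTS_TO_RUN" ]; then
  bundle exec rspec "spec/$TESTS_TO_RUN"
else
  bundle exec rspec
fi
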
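In the same spirit, SELENIUM_GRID_URL is what lets a test reach the hub through Compose's internal DNS. A minimal sketch of the connection, assuming the selenium-webdriver gem (3.x) rather than the project's actual helper code:

require 'selenium-webdriver'

# Build the remote URL from the variable defined in docker-compose.yml
grid_url = "http://#{ENV.fetch('SELENIUM_GRID_URL', 'localhost:4444')}/wd/hub"

# Ask the grid for a Chrome session; the hub routes it to a matching node
driver = Selenium::WebDriver.for(:remote, url: grid_url, desired_capabilities: :chrome)
driver.get('https://www.atlantbh.com')
driver.quit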

How it all works together

Now that we understand how docker-compose works and how we configured the Compose file, it is time to run the containers and execute the tests.

Pre-requisites for installation on host:

  • docker
  • docker-compose
  • Owl

While the first two are rather simple, the Owl installation needs a bit more detail. Owl's source code can be cloned from its GitHub repository. After cloning it, all you need to do is start the application using the following command:

docker-compose up -d

This builds both the Owl and Postgres images and starts the application, which is available on the default port, 8090.
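
To check that everything came up, you can list the containers and probe the port (8090, as stated above; the exact container names depend on Owl's Compose project):

docker-compose ps
curl -I http://localhost:8090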

As for the tests themselves, by default, running docker-compose with the provided configuration will start all of the tests. This behavior can be changed by setting the value of TESTS_TO_RUN in the .env file to a specific test name. For example:

TESTS_TO_RUN=check_about_links.rb

As explained earlier, the OWL_NETWORK parameter is used for joining the Owl reporting tool's network in order to pass on the test results. If you use Owl, just set this parameter in the .env file to the name of that network:

OWL_NETWORK=owl_default
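
If you are not sure what the network is called, Docker can list it for you; with a Compose project named owl, the default network is typically owl_default:

docker network ls | grep owl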

Choosing whether the tests run against Chrome or Firefox is done through the config/environment.yaml file, in which the platform parameter has to be set to web and the driver parameter then selects the browser (a sketch is shown below). Owl reads the test results from the database, so you also need to set up the rspec2db.yml file to point to the correct database (the current configuration of the ABH tests uses the default database parameters of the Owl application, so no additional configuration is needed).
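
A hypothetical sketch of the relevant part of config/environment.yaml; only the platform and driver keys are described above, so the exact layout of the file may differ:

platform: web
driver: chrome   # set to firefox to run against the Firefox node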

After the setup is done, the complete testing environment, along with the tests, can be run using the same command that we used to start up the Owl application:

docker-compose up -d

This will run the Selenium hub, the two nodes (node-firefox-debug and node-chrome-debug), and the container that executes the tests. If you already have all the images downloaded, the output should be similar to this:

Starting selenium-hub ... done
Starting abhhomepageautomation_chrome_1 ...
Starting abhhomepageautomation_firefox_1 ... done
Starting abhhomepageautomation_abhtests_1 ... done

In order to access the logs from the container that executes the tests, run this command:

docker logs -f abhhomepageautomation_abhtests_1

As mentioned before, to make debugging easier, VNC can be used to connect to the ports specified in the configuration for both the Firefox and Chrome containers.
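
For example, with a VNC viewer installed on the host, you can attach to either node (at the time, the Selenium debug images used “secret” as the default VNC password):

vncviewer localhost:5900   # Chrome node
vncviewer localhost:5901   # Firefox node (host port 5901 maps to the container's 5900)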

After the tests have finished executing, this environment can be torn down using the following command:

docker-compose down

The output of the command should be similar to this:

Stopping abhhomepageautomation_abhtests_1 ... done
Stopping abhhomepageautomation_chrome_1   ... done
Stopping abhhomepageautomation_firefox_1  ... done
Stopping selenium-hub                     ... done
Removing abhhomepageautomation_abhtests_1 ... done
Removing abhhomepageautomation_chrome_1   ... done
Removing abhhomepageautomation_firefox_1  ... done
Removing selenium-hub                     ... done
Network owl_default is external, skipping

Conclusion

In our case, using docker-compose turned out to be a really good choice. Running everything with one command makes your life a lot easier, and centralized configuration makes this solution easy to maintain. Although joining the Owl reporting tool's network is a small setup step, it is a huge step forward when it comes to completing the process of daily test execution. With only two “docker-compose up”s, you have a test environment running, along with the tests and a reporting tool where the test results are persisted. After reading these two articles, we really hope you see the benefits of this approach and that it can help you with your daily test environment setup as well.
