Why mobile parallelism?

There are many factors to take into account when developing and testing apps for mobile devices. One of the most important things to consider is fragmentation. Fragmentation occurs when there are multiple OS versions currently on the market or multiple device OEMs with independent versions of the same OS, as is the case for Android. In these instances, developers and QA engineers need to make sure that the developed apps support the multitude of devices and OS versions available—which can be a struggle.

Automated tests can be executed on multiple devices, but in agile development execution time matters, and testing on multiple devices sequentially would take countless hours. The solution is to execute tests on multiple devices at the same time. It’s not necessary to test every single combination of device and OS on the market, only a representative sample of the most popular ones. Google is aware of the fragmentation problem with Android and publishes the percentages of active OS versions, which can help narrow down the sample. Additionally, the sample should include devices from different OEMs and with different screen sizes, as every OEM customizes the Android OS.

AWS Device Farm

If there is no budget for dedicated Android devices, using a hosted device farm is a logical solution. Included in the wide range of AWS services is AWS Device Farm, which supports Android, iOS, Fire OS, and multiple testing frameworks. It also supports manual testing on hosted devices in real time through a web interface. During these manual sessions, action logs and a video of the session are recorded.

There’s a free trial period that includes 250 device minutes, after which users are charged $0.17 per device minute. There’s also an unmetered plan for $250 per month per device. The per-minute plan offers great flexibility, as users only pay for the minutes they use, and adding devices to the starting device pool is simple.

The process of running tests is user-friendly and well documented. New test runs are created through a wizard in the device farm console. Tests must be packed in a zip file and the app must be built to an APK file. Once the files are uploaded through the wizard, a device pool is chosen and the tests are executed in parallel.

For example, a regression suite lasting 30 minutes and running once a day on five devices would total $25.50 per day, or around $500 per month. This estimate is a best-case scenario, since in real-life development tests are typically run more often and can get pricey. Other AWS services tend to get cheaper over time, but the device farm is a special case: new devices must be added continuously, so the service is unlikely to get cheaper any time soon.
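The arithmetic behind that estimate can be sketched in shell, keeping prices in cents to stay in integer math:

```shell
# Per-minute plan estimate: 30-minute suite, 5 devices, $0.17 per device minute.
minutes_per_run=30
devices=5
cents_per_device_minute=17

daily_cents=$((minutes_per_run * devices * cents_per_device_minute))
printf "%d.%02d USD per day\n" $((daily_cents / 100)) $((daily_cents % 100))
# prints "25.50 USD per day"
```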

What if you already have the devices to run tests on?

If you already have Android devices on hand for testing, there’s probably no need to pay for additional devices on a hosted device farm. Of course, it’s unlikely that one would have as many devices as there are on a hosted device farm but there may be just enough for testing on multiple Android versions with multiple screen resolutions.

If you’re automating both Android and iOS, the test framework of choice is Appium, in combination with any language you’re familiar with. Running parallel tests with Appium simply means running multiple Appium server instances with different server flags. The idea is to run Appium instances with a unique port, bootstrap port, and device ID attached to it. Then, tests are run in parallel against launched Appium instances, where every instance communicates with one of the connected devices.
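The idea can be sketched as follows. The device IDs are hypothetical, and the commands are only echoed rather than executed, to show how each instance receives its own ports and device:

```shell
# Hypothetical device IDs; in practice these come from `adb devices`.
devices="ca8b3f6a 0a1b2c3d"
port=4723
bootstrap_port=5723

for udid in $devices; do
  # Each Appium instance gets a unique main port, bootstrap port, and device ID.
  echo "appium -p $port -bp $bootstrap_port -U $udid"
  port=$((port + 1))
  bootstrap_port=$((bootstrap_port + 1))
done
```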

The shell script

The process of launching these instances and running the tests can be automated with a shell script. The script does the following:

– fetches the IDs of connected Android devices
– launches Appium instances for those devices
– runs a set of tests on them
– cleans up the launched processes

It is then possible to launch a test suite against multiple devices with a single script call.
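At a high level the script reads as follows; the function names here are placeholders standing in for the real code shown in the next sections:

```shell
#!/bin/bash
# Placeholder functions sketching the script's top-level flow.
fetch_device_ids()       { echo "1. fetch connected device IDs via adb"; }
start_appium_instances() { echo "2. launch one Appium instance per device"; }
run_tests()              { echo "3. run the test suite against each instance"; }
cleanup()                { echo "4. wait for the test runs, then kill Appium"; }

fetch_device_ids
start_appium_instances
run_tests
cleanup
```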

Fetching the device IDs

The device IDs can be fetched using the Android Debug Bridge (ADB). ADB is a command-line tool that can be used to communicate with an Android emulator or a connected Android device. ADB can also get the IDs of connected devices by running:

adb devices

The script stores these in an array and uses them to launch the Appium instances.

udid_data=()

i=0
while [[ $i -lt 10 ]]; do
  adb_full_path=`which adb`
  # Daemon startup messages may go to stderr, so capture both streams
  devices_running=`$adb_full_path devices 2>&1 | grep "daemon not running"`
  if [[ $? == 0 ]]; then
    echo "[INFO] adb not ready. Waiting..." && sleep 5
    let i=i+1
  else
    # Keep only IDs whose state is "device" (authorized) and skip emulators
    devices_list=`$adb_full_path devices | awk '$2 == "device" && $1 !~ /^emulator-/ {print $1}'`
    for device in $devices_list; do
      echo "[INFO] Device UDID is: ${device}"
      udid_data+=($device)
    done
    break
  fi
done

If the ADB daemon is not running, it is started by the first execution of ‘adb devices’. The script waits after this first launch and retries until the daemon is running, at which point the list of connected devices and emulators is retrieved. The IDs of devices that are authorized and are not emulators are then placed into an array for further use.
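The filtering can be tried without a connected device by running a captured listing through a filter that keeps only authorized, non-emulator entries (a sketch with a hypothetical listing, not the script’s exact code):

```shell
# Hypothetical `adb devices` output (tab-separated: ID, state).
listing='List of devices attached
ca8b3f6a	device
emulator-5554	device
0a1b2c3d	unauthorized'

# Keep IDs whose state is "device" and that are not emulators.
echo "$listing" | awk '$2 == "device" && $1 !~ /^emulator-/ {print $1}'
# prints "ca8b3f6a"
```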

Running the Appium instances

The number of Appium instances to be launched is equal to the number of device IDs fetched from ADB. For each device ID, a shell function is called which launches an Appium instance.

function start_appium_server() {
  appium_main_port=$1
  appium_bootstrap_port=$2
  appium_server_logs=$3
  udid=$4

  appium_full_path=`which appium`
  nohup $appium_full_path -p $appium_main_port -bp $appium_bootstrap_port -U $udid > "$appium_server_logs.$udid" &
}

The function is called while looping through the collection of fetched IDs. During the same loop, the Android version of each device is fetched with ADB and stored in a new collection together with the Appium port and the device ID. This collection is later used for running the tests:

appium_data=()
appium_start_port="4451"
appium_bootstrap_start_port="2251"

# Populate list of appium main and bootstrap ports and start appium server instances
for (( i=0; i<${#udid_data[@]}; i++ ))
do
  p=$((appium_start_port + i))
  bp=$((appium_bootstrap_start_port + i))

  # Get platform version of attached device
  platform_version=`ANDROID_SERIAL=${udid_data[i]} $adb_full_path shell getprop ro.build.version.release`

  udid=${udid_data[i]}
  data="$p,$udid,$platform_version"
  appium_data+=($data)

  # Start appium server
  start_appium_server $p $bp $APPIUM_SERVER_LOGS $udid
  # Ensure that appium server instance is initialized
  sleep 10
done
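Each entry in appium_data is a comma-joined record. A quick sketch with hypothetical values shows how a record is composed and later split apart:

```shell
# Hypothetical values standing in for real adb/appium output.
appium_start_port=4451
i=0
udid="ca8b3f6a"
platform_version="12"

p=$((appium_start_port + i))
record="$p,$udid,$platform_version"

# The test-runner loop splits the record back apart with cut:
echo $record | cut -d, -f1   # port, prints "4451"
echo $record | cut -d, -f2   # device ID, prints "ca8b3f6a"
echo $record | cut -d, -f3   # platform version, prints "12"
```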

Running the tests

As with the Appium instances, a shell function is called for every connected device to start its test run.

function start_tests() {
  test_framework=$1
  test_dir=$2
  test_logs=$3
  appium_main_port=$4
  udid=$5
  platform_version=$6

  echo "[INFO] Start ${test_framework} test with main port: ${appium_main_port} and udid: ${udid}..."
  case $test_framework in
    "rspec" )
      rspec_full_path=`which rspec`
      cd $test_dir
      UDID=$udid PORT=$appium_main_port PLATFORM_VERSION=$platform_version APP_FILE=$APP_FILE_PATH $rspec_full_path spec > "$test_logs-$udid" &
      pid=$!
      PID_DATA+=($pid)
      cd -
      ;;
    "testng" )
      mvn_full_path=`which mvn`
      cd $test_dir
      $mvn_full_path -DUDID=$udid -DPORT=$appium_main_port -DPLATFORM_VERSION=$platform_version -DTEST_OUTPUT="$test_logs-$udid" -DAPP_FILE=$APP_FILE_PATH test &
      pid=$!
      PID_DATA+=($pid)
      cd -
      ;;
  esac
}

The current implementation supports running Appium tests implemented with Ruby and RSpec or Java and TestNG, but it can easily be extended to other languages. Every test run is associated with an Appium port, a device ID, and the device platform version, the data that was fetched while starting the Appium instances. This data is passed to the tests through environment variables, which means the tests need to be configured to read these values in order for the script to start them. Every concurrent test run has its own process ID, which is stored in PID_DATA; that collection is later used for the cleanup procedures.
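Passing the values as environment variables can be sketched in isolation. The variable names match the script, the values are hypothetical, and the child command is just an echo:

```shell
# Environment variables set only for the child process, the same way the
# script hands UDID/PORT/PLATFORM_VERSION to rspec or mvn.
UDID="ca8b3f6a" PORT=4451 PLATFORM_VERSION="12" \
  sh -c 'echo "udid=$UDID port=$PORT version=$PLATFORM_VERSION"'
# prints "udid=ca8b3f6a port=4451 version=12"
```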

The function is then called for every item in the Appium data collection (for every connected device).

for (( i=0; i<${#appium_data[@]}; i++ ))
do
  port=`echo ${appium_data[i]} | cut -d, -f1`
  udid=`echo ${appium_data[i]} | cut -d, -f2`
  platform_version=`echo ${appium_data[i]} | cut -d, -f3`
  start_tests $TEST_FRAMEWORK $TEST_DIR $TEST_LOGS $port $udid $platform_version
done

Cleanup

Before cleaning up the Appium instances, all test runs need to finish. For that reason, the process IDs for the test runs were collected. The following wait ensures that all test runs finish before proceeding with the script execution.

wait ${PID_DATA[*]}
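The effect of that wait can be sketched with harmless background jobs standing in for the real test runs:

```shell
# Short sleeps stand in for the concurrent test runs.
PID_DATA=()
for n in 1 2 3; do
  sleep 0.$n &
  PID_DATA+=($!)
done

# Blocks until every collected PID has exited.
wait ${PID_DATA[*]}
echo "all test runs finished"
# prints "all test runs finished"
```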

After that, a cleanup function is called. It uses ‘pkill’ to terminate all of the Appium instances, which are Node.js processes. Note that this kills every Node.js process on the machine, so on a shared host a more specific pattern, such as ‘pkill -f appium’, would be safer.

function appium_server_instances_cleanup() {
  pkill_full_path=`which pkill`
  $pkill_full_path -f "node"
  echo "[INFO] Waiting for appium server instances to be shut down..." && sleep 5
  ps -ef | grep [n]ode
  if [[ $? == 0 ]]; then
    echo "[ERROR] Appium server instances have not been shut down successfully!"
    exit 1
  fi
}

Next steps

This is the first part of this topic, covering parallel test execution on Android devices. The second part covers the challenges faced while setting up the infrastructure for running iOS tests in parallel. The complete script for this part is hosted on our public GitHub repository, along with a README describing how to run the script and how to structure test code written in Java or Ruby to comply with the script’s parameters. Example tests for both Java and Ruby are also provided. A great way to contribute would be to fork the repo and add support for other languages and test frameworks, or to run the script with different Android devices and suggest improvements or report issues. Be sure to check out Part 2 of parallel mobile testing with Appium for iOS!
