Introduction to Load Testing With Artillery
The topic of today’s Tech Bite is going to be load testing using Artillery.
But before we start talking about Artillery, let’s briefly explain what load testing is.
Load testing is a type of performance testing that checks how a system behaves when a large number of concurrent virtual users perform transactions over a certain period of time. In short, it measures how the system handles heavy load.
So What Is Artillery?
Artillery is a modern, powerful, open-source command-line tool written in Node.js that is purpose-built for load testing. It helps you ship scalable applications that stay performant and resilient under heavy load.
Artillery focuses on developer productivity and satisfaction, and it follows the “batteries-included” philosophy (“batteries-included” meaning it is self-sufficient and ready to use out of the box, with everything you need included).
In a nutshell, Artillery is a load generator. It sends a lot of requests, a.k.a. the load, to the server you specify very quickly.
Installing Artillery
Since Artillery is written in Node.js, it is distributed as an npm package and can be installed globally with either npm or yarn:
npm install -g artillery
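If yarn is your package manager of choice, the equivalent global install is:

# same global install, via yarn
yarn global add artillery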
Whether the installation was successful or not can be checked by running the following command:
artillery -V
If everything went well, Artillery will display its version information like this:
[ASCII "Artillery" banner]

VERSION INFO:

Artillery Core: 2.0.0-14
Artillery Pro:  not installed (https://artillery.io/product)

Node.js: v16.14.2
OS:      darwin
Quick Artillery Demo
Now we are ready to start using Artillery. A quick and simple test can be executed by running the following command:
artillery quick --count 10 --num 20 https://artillery.io/
The quick command lets us run a test without creating a config file.
The --count flag sets the number of virtual users we want to simulate, and --num sets the number of requests each virtual user sends.
This results in 10 virtual users sending 20 HTTP GET requests each to the specified URL.
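The same quick mode works against any endpoint you control. For example, against a hypothetical local service (the URL below is just a placeholder):

# 5 virtual users, each sending 10 GET requests to a local endpoint (placeholder URL)
artillery quick --count 5 --num 10 http://localhost:3000/health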
After running that command, this is the result that Artillery should show us:
Phase started: unnamed (index: 0, duration: 1s) 14:07:19(+0200)

Phase completed: unnamed (index: 0, duration: 1s) 14:07:20(+0200)

--------------------------------------
Metrics for period to: 14:07:30(+0200) (width: 2.224s)
--------------------------------------

http.codes.200: ................................................ 55
http.codes.301: ................................................ 156
http.codes.308: ................................................ 125
http.request_rate: ............................................. 194/sec
http.requests: ................................................. 430
http.response_time:
  min: ......................................................... 24
  max: ......................................................... 335
  median: ...................................................... 44.3
  p95: ......................................................... 89.1
  p99: ......................................................... 284.3
http.responses: ................................................ 336
vusers.completed: .............................................. 10
vusers.created: ................................................ 3
vusers.created_by_name.0: ...................................... 3
vusers.failed: ................................................. 0
vusers.session_length:
  min: ......................................................... 694.1
  max: ......................................................... 1177.5
  median: ...................................................... 837.3
  p95: ......................................................... 1176.4
  p99: ......................................................... 1176.4

--------------------------------------
Metrics for period to: 14:07:20(+0200) (width: 0.624s)
--------------------------------------

http.codes.301: ................................................ 44
http.codes.308: ................................................ 16
http.request_rate: ............................................. 111/sec
http.requests: ................................................. 111
http.response_time:
  min: ......................................................... 23
  max: ......................................................... 118
  median: ...................................................... 30.3
  p95: ......................................................... 51.9
  p99: ......................................................... 111.1
http.responses: ................................................ 60
vusers.created: ................................................ 7
vusers.created_by_name.0: ...................................... 7

All VUs finished. Total time: 4 seconds

--------------------------------
Summary report @ 14:07:22(+0200)
--------------------------------

http.codes.200: ................................................ 55
http.codes.301: ................................................ 200
http.codes.308: ................................................ 141
http.request_rate: ............................................. 153/sec
http.requests: ................................................. 541
http.response_time:
  min: ......................................................... 23
  max: ......................................................... 335
  median: ...................................................... 41.7
  p95: ......................................................... 89.1
  p99: ......................................................... 284.3
http.responses: ................................................ 396
vusers.completed: .............................................. 10
vusers.created: ................................................ 10
vusers.created_by_name.0: ...................................... 10
vusers.failed: ................................................. 0
vusers.session_length:
  min: ......................................................... 694.1
  max: ......................................................... 1177.5
  median: ...................................................... 837.3
  p95: ......................................................... 1176.4
  p99: ......................................................... 1176.4
This output shows several details about the test run, such as the number of requests sent, response time (latency) percentiles, virtual user session lengths, and the HTTP status codes returned in the responses. A report like this is printed every ten seconds by default, and a final summary report is printed after the test completes, aggregating the statistics for the whole run.
Request latency and virtual user session length are recorded in milliseconds, and the p95 and p99 values represent the 95th and 99th percentiles. (The 95th percentile means that 95 percent of the measurements fall at or below a given value and 5 percent fall above it; a percentile shows how a result compares to the rest of the data. For example, if you have 100 sorted response times and remove the five largest values, the highest remaining value is the 95th percentile. The same logic applies to the 99th percentile.) NaN is returned if not enough responses are received to perform the calculation.
If there are any errors (such as socket timeouts), those will be printed under Errors in the report as well.
Configuration
Now let’s show how we can run a load test where we provide a scenario in a declarative way (by creating a .yaml file).
First, we need to create a .yaml file where we'll define the base URL for the application we want to test, like this:
config:
  target: "https://artillery.io/"
Next, we need to define the load phases for the test by setting config.phases. For each load phase, we define how many virtual users to generate and how quickly they arrive at our backend. We can set up one or more load phases, each with a different number of users and a different duration.
For this performance test, we'll define three load phases:
phases:
  - duration: 60
    arrivalRate: 5
    name: Warm up
  - duration: 120
    arrivalRate: 5
    rampTo: 50
    name: Ramp up load
  - duration: 600
    arrivalRate: 50
    name: Sustained load
Each load phase definition consists of several settings:
- duration: how long the phase lasts, in seconds;
- arrivalRate: the number of new virtual users created each second;
- rampTo: the arrival rate the load will gradually ramp up to by the end of the phase;
- name: an optional name for the phase.
The first phase is a slow ramp-up to warm up the application. It creates five new virtual users every second for 60 seconds.
The second phase starts at five virtual users per second and gradually increases the arrival rate over the next two minutes, peaking at 50 new virtual users per second by the end of the phase.
The final phase simulates a sustained load of 50 new virtual users arriving every second for the next ten minutes. This phase is meant to stress the backend and check the system's stability over a more extended period.
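Phases support a few more options beyond arrivalRate and rampTo. As a rough sketch (double-check the option names against the Artillery docs for your version): arrivalCount spreads a fixed number of users over a phase, maxVusers caps concurrency, and a pause phase inserts a quiet period between two load phases:

phases:
  - duration: 60
    arrivalCount: 100  # spread 100 new virtual users evenly across the 60 seconds
    maxVusers: 50      # never allow more than 50 concurrent virtual users
    name: Fixed batch
  - pause: 30          # wait 30 seconds before the next phase starts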
The test definition built so far sets up the configuration for the load test. Now it’s time to define the steps we want the virtual users to go through during the test.
In an Artillery test definition, we can define one or more scenarios. Each virtual user picks a scenario and runs through the steps defined in its flow (scenarios.flow), and each flow contains one or more operations.
To begin, we can add the scenarios setting to our test script. scenarios is an array in which we can define one or more scenario definitions. Let's add our scenario definition now:
scenarios:
  - name: "Artillery Test Example"
    flow:
      - get:
          url: "/docs"
          expect:
            - statusCode: 200
            - contentType: json
            - equals:
                - respMessage: "OK"
Each virtual user's steps through the application are defined under scenarios.flow. The first request is GET /docs, which leads to the documentation page, and the expect section underneath it lists all the checks we want to run on the response we get.
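One caveat worth flagging (it's not shown in the file below): the expect assertions come from the artillery-plugin-expect plugin. Depending on your Artillery version, it may need to be installed separately:

npm install artillery-plugin-expect

and enabled in the config section:

config:
  plugins:
    expect: {}  # enables the expect assertions used in the flow above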
Now our my-test.yaml file looks like this:
config:
  target: "https://artillery.io/"
  phases:
    - duration: 60
      arrivalRate: 5
      name: Warm up
    - duration: 120
      arrivalRate: 5
      rampTo: 50
      name: Ramp up load
    - duration: 600
      arrivalRate: 50
      name: Sustained load

scenarios:
  - name: "Artillery Test Example"
    flow:
      - get:
          url: "/docs"
          expect:
            - statusCode: 200
            - contentType: json
            - equals:
                - respMessage: "OK"
And we can run the test with the following command:
artillery run my-test.yaml
This command will launch virtual users, starting with the first phase in our test definition.
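As a side note, if you want to run the same scenarios against a different deployment (say, a staging environment), Artillery supports named environments under config.environments, selected at run time with the -e flag. A minimal sketch, assuming a hypothetical staging URL:

config:
  target: "https://artillery.io/"
  environments:
    staging:
      target: "https://staging.example.com"  # hypothetical staging deployment
      phases:
        - duration: 60
          arrivalRate: 5

With that in place, artillery run -e staging my-test.yaml would run the same scenarios against the staging target instead.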
Artillery will print a report on the console for every 10-second time window as the test runs, which includes performance metrics collected in that time window. At the end of the test run, you’ll receive a complete summary, including the number of virtual users launched and completed, the number of requests completed, response times, status codes, and any errors that may have occurred.
And this is what our summary report looks like:
All VUs finished. Total time: 13 minutes, 5 seconds

--------------------------------
Summary report @ 15:16:51(+0200)
--------------------------------

http.codes.200: ................................................ 33762
http.request_rate: ............................................. 44/sec
http.requests: ................................................. 33762
http.response_time:
  min: ......................................................... 27
  max: ......................................................... 3086
  median: ...................................................... 46.1
  p95: ......................................................... 74.4
  p99: ......................................................... 111.1
http.responses: ................................................ 33762
vusers.completed: .............................................. 33762
vusers.created: ................................................ 33762
vusers.created_by_name.Artillery Test Example: ................. 33762
vusers.failed: ................................................. 0
vusers.session_length:
  min: ......................................................... 78.6
  max: ......................................................... 7111
  median: ...................................................... 108.9
  p95: ......................................................... 147
  p99: ......................................................... 202.4
We can also add --output my-test-report.json to the command above to save the detailed statistics of the test run to a JSON file.
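Depending on your Artillery version, that JSON file can then be turned into a self-contained HTML report with the artillery report command (it has been deprecated in newer releases, so treat this as version-dependent):

artillery run --output my-test-report.json my-test.yaml
artillery report my-test-report.json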
Conclusion
In this Tech Bite, we've talked about how you can set up a load testing workflow for your applications with Artillery. Using Artillery helps ensure that your application's performance stays predictable under various traffic conditions. You'll be able to plan for traffic-heavy periods and avoid downtime, even when faced with a sudden increase in the number of users.
Thanks for reading, and we wish you good testing!
“Load Testing With Artillery” Tech Bite was brought to you by Elmedin Karišik, Junior Test Engineer at Atlantbh.
Tech Bites are tips, tricks, snippets or explanations about various programming technologies and paradigms, which can help engineers with their everyday job.