Web Service Load Testing With Simulated Users


Singing and dancing are an important part of Latvian culture, and every five years we hold a grand Song and Dance Celebration. It is by far the most attended event in our country, and selling tickets to it is a very challenging task.

Tickets to this year's celebration went on sale in March, both online and at ticket shops, and nearly 100 000 tickets to 19 events sold out in a matter of hours. As many people predicted, the ticket sale didn't go as planned due to numerous technical challenges.

We are currently building a web service load testing tool that simulates load generated by real users, and this blog post describes our experience building it. A lot of things can go wrong when load levels rise, but we believe our tool can help you pinpoint issues before they become real problems.

What can go wrong?

Large loads usually bring many challenges that service providers must be ready for, ranging from heavy user traffic to outright attacks on the system. When expecting a large load, the bare minimum to test is whether the servers can handle it at all. High load may also expose race conditions that are never observed under normal conditions, for example in parallel database access, in saturated network bandwidth, or around shared variables; such problems simply do not appear when only a few people use the service simultaneously. Another thing a service provider must be ready for is denial of service (DoS). A load that is significantly higher than usual can easily become an unintentional DoS if the servers are not prepared for it, especially if the network interface can't absorb the additional traffic or the traffic pattern is unexpected.
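
To make the race condition point concrete, here is a minimal sketch of a read-modify-write bug, using a hypothetical asynchronous ticket store (the names and API are illustrative, not taken from any real system). Under light traffic it appears to work; under heavy load, two requests can both read the same count and both sell the last ticket.

// Hypothetical ticket store with async get/set (illustrative only).
async function buyTicket(store, eventId) {
   const ticketsLeft = await store.get(eventId);   // 1. read
   if (ticketsLeft <= 0) {                          // 2. check
      throw new Error('Sold out');
   }
   await store.set(eventId, ticketsLeft - 1);       // 3. write (not atomic!)
   return 'ticket reserved';
}

// With a single user these steps run back to back, but under load two
// concurrent calls can both observe ticketsLeft === 1 between steps 1 and 3,
// overselling the event. The usual fix is an atomic decrement or a database
// transaction, yet the bug only surfaces when many users arrive at once.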

Also, someone may attempt to launch a distributed DoS attack to slow the system down even more. In any case, there should be a mechanism that prevents DoS, for example by distributing load among multiple handlers or by rate limiting requests coming from a single location. Generic security threats must be considered as well, such as cross-site scripting or SQL injection; these attacks could seriously jeopardize data integrity or lead to data leaks.
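
As an illustration of the rate-limiting idea, below is a minimal sketch of per-IP limiting written as Express-style middleware. The window size, request limit and in-memory counter map are assumptions for the example; a production setup would typically keep the counters in a shared store so that every handler sees the same numbers.

// Illustrative fixed-window rate limiter (values are made up for the example).
const WINDOW_MS = 60 * 1000;   // 1-minute window
const MAX_REQUESTS = 100;      // allowed requests per IP per window
const counters = new Map();

function rateLimit(req, res, next) {
   const now = Date.now();
   const entry = counters.get(req.ip) || { count: 0, windowStart: now };
   if (now - entry.windowStart > WINDOW_MS) {
      entry.count = 0;          // start a new window
      entry.windowStart = now;
   }
   entry.count++;
   counters.set(req.ip, entry);
   if (entry.count > MAX_REQUESTS) {
      res.status(429).send('Too many requests');
      return;
   }
   next();
}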

We are working on a solution

For over a year we have been developing a framework that can help identify potential issues through load tests. The idea behind this framework is to simulate as many users as possible without sacrificing realism, including browser-specific behaviour. This means you can create a very realistic setup and monitor service behaviour from the end-user perspective.

To simulate realistic test scenarios while maintaining scalability, we settled on a set of main characteristics:

  • See how your frontend app communicates with backend servers under bad network conditions;
  • Test service availability from different regions across the globe;
  • Support multiple browsers;
  • For more specialized apps, we offer audio and video feeds to simulate a user's webcam and microphone input.

All things considered, there are hundreds of possible configurations you can test under increased load. Coupling all of these options with an automated simulated-user setup results in a powerful and easy-to-set-up load testing tool.

Enough with the theory, let's see an example

For illustrative purposes, let's assume we want to test an online video streaming service. In this case we may want to test a large number of simultaneous users watching videos, as well as a DoS scenario; both are testable using our framework.

We will take several steps to prepare for these use cases. First we define the number of users we want to simulate and divide those users into logical groups. Video streaming services usually offer many videos, so it makes sense to divide the users by the video they are going to watch. This way we can test many users watching one specific video while a few other users watch something else. We will call these groups “rooms”. Once we have defined our groups, the next task is to write a test script. Don’t worry, it’s simple!
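
For illustration, one simple way to assign simulated participants to rooms is by index. The helper below is purely a sketch of the grouping idea, not part of the framework; in the actual test script further down, the room value is supplied by the framework through client.globals.

// Map a zero-based participant index to a room name (illustrative only).
function roomForParticipant(participantIndex, usersPerRoom) {
   return 'room_' + Math.floor(participantIndex / usersPerRoom);
}

// 50 simulated users with 10 per room end up in room_0 .. room_4.
console.log(roomForParticipant(37, 10)); // "room_3"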

The framework supports Selenium scripting with Nightwatch.js syntax. All you need to do is compose the actions you wish the simulated users to perform, as well as some assertions to make during the test.

function(client) {
   // Each simulated user joins a room whose name combines this prefix
   // with a room value supplied by the framework through client.globals.
   var roomName = 'loadero_demo';
   client
      .url('https://appr.tc/r/' + roomName + client.globals.room)
      // Wait up to 10 seconds for the page to render.
      .waitForElementVisible('body', 10 * 1000)
      // Join the call.
      .click('#confirm-join-button')
      // Stay in the call for 30 seconds before checking the remote stream.
      .pause(30 * 1000)
      // Assert that the other participant's video element is active.
      .assert.cssClassPresent('#remote-video', 'active')
      // Keep the call running for another 30 seconds.
      .pause(30 * 1000);
}

This sample script launches a video call between two participants in Google's appr.tc demo.

You can decide at what pace users will join. In common use cases, users join rapidly at first, with the pace slowing near the time of the event. This graph illustrates the potential total user count over time.

To simulate a realistic scenario, we do not want all users to hit the service at exactly the same time, so we need to configure a ramp-up time. We define the ramp-up with two parameters: the total time it takes for all users to join, and a join function. In some use cases you might want users to join at a constant pace, while in others a sudden spike after a few quiet minutes might be exactly what you are looking for.
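
To illustrate what a join function means in practice, here is a small sketch that turns a total ramp-up time and a join function into per-user join offsets. The names and shapes are made up for the example; they are not the framework's actual configuration fields.

// Turn a ramp-up definition into join offsets (in seconds) for each user.
function joinOffsets(userCount, totalRampUpSec, joinFn) {
   const offsets = [];
   for (let i = 0; i < userCount; i++) {
      const position = userCount > 1 ? i / (userCount - 1) : 0; // 0..1
      offsets.push(joinFn(position) * totalRampUpSec);
   }
   return offsets;
}

// Constant pace: users join at evenly spaced intervals.
const constantPace = x => x;

// Sudden spike: nothing happens for the first 80% of the ramp-up,
// then everyone joins during the final 20%.
const lateSpike = x => 0.8 + 0.2 * x;

console.log(joinOffsets(5, 300, constantPace)); // [0, 75, 150, 225, 300]
console.log(joinOffsets(5, 300, lateSpike));    // [240, 255, 270, 285, 300]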

Once the script is done, we need to configure the users. This is done in two steps: define the rooms and specify the user configuration in each room. When creating the script we defined different actions for users belonging to specific rooms, so at this stage we need to define a matching room count. In each room we then define as many users as we want. For example, if our main user base is in Europe, we might want a regional distribution such as: for every 20 users from Europe, seven from the United States and three from Asia. We can mix browsers for these users and set their network conditions to simulate a high-speed connection, 4G, 3G or limited-speed networks.
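
As a sketch of what such a configuration could look like, here is a hypothetical structure (not the framework's real schema) with two rooms, the regional split mentioned above, mixed browsers and different network profiles.

// Hypothetical configuration sketch; field names and values are illustrative.
const testConfig = {
   rooms: [
      {
         name: 'popular_video',
         participants: [
            { count: 20, region: 'eu-west',   browser: 'chrome',  network: '4g' },
            { count: 7,  region: 'us-east',   browser: 'firefox', network: 'high-speed' },
            { count: 3,  region: 'asia-east', browser: 'chrome',  network: '3g' }
         ]
      },
      {
         name: 'long_tail_video',
         participants: [
            { count: 5, region: 'eu-west', browser: 'chrome', network: 'limited' }
         ]
      }
   ]
};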

The hard part is done, and we can run the tests and assert the results. The results contain a lot of useful data that helps us understand what happens on the client side, starting with the basics: the success rate, which shows whether the simulated users could complete their actions. Apart from the general success rate, we can reconstruct the load directed at the web application at any point in time. Additionally, there are more detailed metrics such as user machine statistics (CPU usage, RAM usage, etc.) or WebRTC statistics if those are relevant for the service. Metrics can be asserted and compared in many different ways, such as by region, by browser or by any other shared parameter. Take note: for a fuller picture we advise viewing these metrics together with metrics collected from the application's backend servers.
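
To show what comparing metrics by region or browser could look like, here is a minimal post-processing sketch. It assumes results arrive as a flat array of per-participant records; the field names are illustrative, not the tool's actual report format.

// Illustrative per-participant result records.
const results = [
   { region: 'eu-west', browser: 'chrome',  success: true,  cpuAvg: 41 },
   { region: 'us-east', browser: 'firefox', success: false, cpuAvg: 78 }
   // ...
];

// Success rate grouped by an arbitrary field, e.g. 'region' or 'browser'.
function successRateBy(records, field) {
   const groups = {};
   for (const r of records) {
      const key = r[field];
      groups[key] = groups[key] || { total: 0, ok: 0 };
      groups[key].total++;
      if (r.success) groups[key].ok++;
   }
   return Object.fromEntries(
      Object.entries(groups).map(([key, g]) => [key, g.ok / g.total])
   );
}

console.log(successRateBy(results, 'region'));
// e.g. { 'eu-west': 1, 'us-east': 0 }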

Sample report from Loadero

Loadero – Get your web app ready for production traffic

Our in-house tool provides load testing infrastructure with minimal setup and a customizable single-user configuration. Combined with the expertise of our engineers, the possibilities are endless.

This framework is best suited for new products that are about to enter the market and are unsure about their server capabilities. Alternatively, businesses with existing products can use this solution to load test their infrastructure in preparation for increased load or to test new releases. Additionally, any WebRTC application can use it to test its non-HTTP servers as well.

Want to know more? Contact us!
