Why Is Measuring Application End-to-End Process Response Time Important?


Quality assurance is an evolving process. Where clients previously sought mostly black-and-white answers ("does my app work or not?"), the discussion has become more complicated, and it still warrants clear, objective answers. Most companies include regression testing in their development cycle to make sure that most issues are addressed before they reach customers. Three questions now need to be answered: how can we make it work, how can we make it work faster than the competition, and how can we make it work better than the competition? While "faster" is self-explanatory, "better" can mean multiple things: better-looking features or more features. We have been experts at testing the feature side for a while now, but more and more clients have started approaching us with requests to test their speed, speed that they have no practical way of measuring, especially on mobile platforms.

What do we mean by latency?

Although latency is said to be a synonym for delay, we at TestDevLab would like to differentiate between these two terms.

Latency - the time it takes for an application to respond to input, or to switch between two distinguishable states. For this post, latency is a measure of the time it takes for a single action to complete. If we compare it to a train, latency would be the time it takes for the train to stop once emergency braking is engaged: the train applies its brakes at full force, but once that is done, there is often little to do but wait. Environment, brake wear, and train weight are comparable to the network environment, device age, and the amount of data.

To be clear, throughout this blog post we will use latency as described by the definition above.

Why latency measurements are important

Latency is a vital aspect of quality. Let's mark down key reasons that can make a product owner reconsider the importance of latency:

  • attracting new users and retaining existing ones by keeping load times within expectations;
  • being better and faster than the competition;
  • performance balancing: similar performance across devices, from the iPhone 15 Pro to the Vivo Y12.

Now that the points are marked down, let us explain why we think they make latency important. Only with very niche products do we encounter users who have gotten used to waiting a bit longer for their content to load. The general "deadline" for the main content of a page to load is less than 2 seconds. Two seconds is a conservative estimate, and most users demand more and more as technology improves. After this seemingly arbitrary point, the majority of users stop waiting and move on to other sites or applications. Unfortunately, this deadline also applies when conditions are less than perfect, and it holds not just for text content but also for starting calls, opening a Reel or an application, or performing any other action.

Latency is an effective measurement of how well your product is optimized for the device that helps the user consume it. In our tests, we have seen, for example, that YouTube works really well on Android devices, but on the iOS platform it falls to pieces under network limitations such as 3 Mbps or less. Lower performance can stem from platform limitations, but it is generally bad practice to accept it. Slow or bad performance is a key reason why people stop using websites or applications. One bad regression might be caught a day after release, but for some users it might be enough to switch to a different product for the long term.

Why measuring end-to-end latency is important

End-to-end latency matters whether the action is sending a simple text or voice message, starting a voice or video call and keeping it alive, or playing high-quality videos. What we have found is that the data our clients see in their dashboards gives them a good indication of approximate performance. Still, it falls short of giving precise data for specific actions in real-life situations.

Many variables can impact performance on real devices, including, but not limited to:

  • your CDN network provider and their network density, performance stability, and similarity within regions;
  • device state (count of background applications, battery charge, device temperature, and so on);
  • application state (first-time actions or repeated actions).

It is possible to estimate performance, but only on real devices, during real tests, can one see what kind of performance the device shows from one test execution to another. Even the best estimates or averages will not prepare you for the chaotic variability that can occur in the real world. Only recordings and real evidence can help you debug those situations. The same recordings that let us measure the latency between two action points also serve as our source for analyzing what went wrong. Monitoring tools, statistics, and reports can definitely give you ideas to start with, but when your goal becomes improving from 0.2 seconds to 0.1 seconds, a more precise approach is needed.

What kind of content can we measure?

The answer is that as long as a human is able to read or see any content, our solution will be able to do the equivalent. The brilliant part of our capability here is that we can use somewhat basic yet clever approaches to measure any point-A-to-point-B action.

We can write down some possible scenarios and metrics:

  • application launch time;
  • landing page load time;
  • page-to-page load time;
  • time until a specific element or text becomes visible (element load time);
  • time from field focus to the keyboard opening (time to initiate writing a message);
  • time from pressing send to the message or image being shown as sent (message or image sending time);
  • time from a message or image being sent on device A to it appearing on device B (message or image receiving time);
  • time from a static page to video playback starting (video startup time);
  • time it takes to switch from one active video to another and play it (video-to-video transition time).
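The idea behind all of these metrics is the same: record the screen, find the frame where the action starts, find the first frame where the target state is visible, and convert the frame span into time. The sketch below is a simplified, hypothetical illustration of that principle, not TestDevLab's actual framework; the frame representation and the `detect` callable are placeholders you would replace with real frame data and a real detector.

```python
def latency_ms(frames, action_frame, detect, fps=60):
    """Return latency in ms between the frame where the action was
    triggered and the first frame where `detect` reports the result.

    frames       -- sequence of decoded video frames (any representation)
    action_frame -- index of the frame where the input occurred
    detect       -- callable returning True once the target state is visible
    fps          -- frame rate of the screen recording
    """
    for i in range(action_frame, len(frames)):
        if detect(frames[i]):
            return (i - action_frame) * 1000.0 / fps
    return None  # target state never appeared in the recording


# Toy example: frames are mean-brightness values; "loaded" means bright.
frames = [10, 10, 12, 11, 10, 180, 200, 200]  # state changes at frame 5
print(latency_ms(frames, action_frame=1, detect=lambda f: f > 100))
# roughly 66.7 ms at 60 FPS (4 frames elapsed)
```

In practice the detector would compare pixel regions, match a template, or run OCR on each frame, but the frame-to-milliseconds arithmetic stays the same.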

In our view, the most impressive part of our new latency detection framework is its capability to determine startup times for arbitrary video content. There are, of course, logical limitations, but whatever a human can see and detect, our framework can detect as well, and precisely.

What kind of results are expected and received

Users want responsiveness. Today's consumers demand more and more from their applications, and they compare performance to native applications that enjoy full manufacturer support and integration from the lowest level to the highest. If there is visible latency between input and action in your application and none in your competitor's, that can be a major reason for users, your clients, to switch. Even when the overall experience is not that different, people do not want to wait.

  • We deliver results that represent what real users see in realistic, real-life scenarios.
  • We capture the feel of the application with objective, repeatable measurements by using multiple metrics.
  • Objective means a minimal error margin: the time equivalent of a single frame, if the action was recorded at 60 FPS.
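That one-frame error margin is easy to quantify: at a recording rate of F frames per second, any frame-based measurement is accurate to within 1/F seconds. A quick sketch:

```python
def frame_error_margin_ms(fps):
    """Worst-case measurement error: the duration of one frame, in ms."""
    return 1000.0 / fps

print(frame_error_margin_ms(60))   # about 16.7 ms at 60 FPS
print(frame_error_margin_ms(120))  # about 8.3 ms at 120 FPS
```

Recording at a higher frame rate therefore tightens the error margin proportionally, which matters once the improvements being chased shrink to tenths of a second.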

A simple way to explain the use of multiple measurement metrics is to look at video playback. One important metric is how long the video takes to start at all. Still, it can be just as important to see how quickly the entire video can be viewed, how much buffering occurred, and precisely how long the buffering instances are. If application A launches a video in 2 seconds and buffers for 10 seconds during a 15-second video, it will almost certainly be less enjoyable overall than application B, which launches the same video in 3 seconds yet features no buffering.
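The comparison above can be made concrete with a little arithmetic. Treating total time-to-finish as startup time plus video duration plus buffering (a deliberately simplified model; real quality-of-experience scoring weighs these factors differently):

```python
def time_to_finish_s(startup_s, duration_s, buffering_s):
    """Simplified model: total wall-clock time to watch a video to the end."""
    return startup_s + duration_s + buffering_s

app_a = time_to_finish_s(startup_s=2, duration_s=15, buffering_s=10)  # 27 s
app_b = time_to_finish_s(startup_s=3, duration_s=15, buffering_s=0)   # 18 s
print(app_a, app_b)  # app B finishes 9 seconds sooner despite the slower startup
```

No single metric would have revealed this: judged by startup time alone, application A wins.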

Other Ways to Optimize User Experience

Even if you assume you have done everything you can to make your application as fast as possible, there are ways to make any remaining wait times more interesting for the user, and this can also be tested. Show short, preloaded initial parts of a video as the user opens the page, or show thumbnails before loading the full content. Blur parts of the page, show content at a lower resolution for a short period, or show animations. To make your application play videos better, consider two approaches: use shorter video partitions so playback can start earlier (though this may create more buffering), or use more flexible formats that can be adjusted during playback. Approach text- or image-based pages in a similar way: load only the content the user can see, and only at the quality the user can perceive on their screen. Usually, people cannot tell whether an image is 4K or 1080p on a mobile device screen.

Assumptions are not the best way to approach performance. You should benchmark against your closest competitors to understand where your performance truly lies. If you want to become the best at what you do, contact us for a consultation, and we will show you how we can help you do exactly that.
