

The need for speed - 2: call for metrics

Welcome back! In the first part of this article (here) I discussed the importance of performance, and now I would like to explore the metrics with you.

The metrics for speed (and what to do next)

When we visit a website, we have to wait for it to load. During that time we ask ourselves a few questions, although we don’t think about it much. Here are some of those questions.

1 - Is the website working?

The first moment is when we no longer have a blank screen. This moment is crucial and can determine whether the user stays or leaves. An informal industry standard suggests this should happen under the 3-second mark. The user should be served with some screen paint that tells them something is happening, something is loading. If we look deeper into the correlation between Start Render and bounce rates, we can see how important that first impression is.
 

From SpeedCurve.

Actually, based on our own site monitoring data, we can see that Farfetch customers aren't even willing to wait 2 seconds. I would say that our max Start Render should be 1.5 seconds from a Real User Monitoring (RUM) perspective.

Based on this analysis, we selected Start Render as our metric for measuring the first impression we make on users. It's also a metric on which our benchmarking showed we perform well, and therefore one we should track.

Honourable mention goes to the Paint metrics (First, Contentful and Meaningful). We concluded that the checkpoint captured by all three Paint metrics is more or less the same and fires very close to Start Render, so at first glance they wouldn't give us extra value. There is also a caveat with First Paint (FP) and First Contentful Paint (FCP): more often than not, they are recorded before anything is actually rendered to the screen.
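To illustrate, here is a minimal sketch of how the Paint metrics can be collected in the browser through the standard PerformanceObserver API; the reporting callback is just a placeholder, not part of any particular monitoring product.

```javascript
// Sketch: capturing First Paint and First Contentful Paint in the browser.
// PerformanceObserver and the 'paint' entry type are standard web APIs;
// the `report` callback is an illustrative placeholder.
function observePaintMetrics(report) {
  if (typeof PerformanceObserver === 'undefined' ||
      !PerformanceObserver.supportedEntryTypes.includes('paint')) {
    return null; // paint timing not available in this environment
  }
  const observer = new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      // entry.name is 'first-paint' or 'first-contentful-paint',
      // entry.startTime is milliseconds since navigation start
      report(entry.name, entry.startTime);
    }
  });
  observer.observe({ type: 'paint', buffered: true });
  return observer;
}
```

Note that this reports the browser's paint timestamps, which is exactly where the caveat above bites: the recorded time can precede anything actually reaching the screen.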
 
Selected metric: Start Render

2 - Is it done loading?

The second critical moment for the user is when they perceive the page load to be complete. In other words, all the major components should be visible. This gives the user a sense of reassurance. Content is visible and the user understands what to do next.

For this moment, we selected Speed Index to help us, given its focus on user perception.

Speed Index is defined as "the average time at which visible parts of the page are displayed. It is expressed in milliseconds and dependent on the size of the viewport". There are also newer metrics, such as the "Perceived Speed Index", but we didn't find significant differences. We chose Speed Index instead due to its maturity, as it has been around for several years now.

In order to obtain the Speed Index, we need to calculate how "complete" the page is at various points in time during the page load. WebPageTest has two methods for determining the completeness of what the user sees. One is based on video capture, where a recording is made of the page loading in the browser and each video frame is then inspected. The other is based on paint events. Whichever method is used to determine "completeness", the actual Speed Index calculation is agnostic to it.
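As a rough illustration of that calculation, Speed Index can be sketched as the area above the visual-completeness curve over time. This assumes we already have completeness samples from one of the methods above; the sample format below is an assumption for the sketch, not WebPageTest's actual internals.

```javascript
// Sketch of the Speed Index calculation: integrate (1 - completeness)
// over the load timeline. Lower values mean the page looked complete sooner.
// `samples` is assumed to be [{ time: ms, completeness: 0..1 }],
// sorted by time, starting at t=0 with completeness 0.
function speedIndex(samples) {
  let index = 0;
  for (let i = 1; i < samples.length; i++) {
    const interval = samples[i].time - samples[i - 1].time;
    // area of the "still incomplete" region over this interval
    index += interval * (1 - samples[i - 1].completeness);
  }
  return index;
}

// Example: a page 50% complete at 1s and fully complete at 2s:
// speedIndex([
//   { time: 0, completeness: 0 },
//   { time: 1000, completeness: 0.5 },
//   { time: 2000, completeness: 1 },
// ]) → 1500
```

This makes the intuition visible: two pages can fire the load event at the same time, yet the one that paints most of its content early gets a much lower (better) Speed Index.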
 
Selected metric: Speed Index 

3 - Does it do anything?

Last but surely not least is the moment the user is inspired to interact with the page. They should have enough visual cues to identify the page components they can interact with. This is perhaps the metric users value the most, as a page, even once it has loaded, only has value when the user is in a position to take action.


In terms of interactivity, we want to check whether truly fluid interactivity is possible. This is exactly where Time To Interactive (TTI) comes into play. There are two flavours of TTI: First Interactive and Consistently Interactive. Consistently Interactive, as the name suggests, is when the page responds quickly and consistently to user interactions. First Interactive is the first point in time where responses to interactivity are expected to be quick, although things may change due to content that is still loading.
 
Being interactive means that the browser's main thread is not blocked by any single task for more than 50 milliseconds and is therefore "free" to respond to user input within 50 milliseconds.
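In the browser, these blocking tasks can be observed directly with the Long Tasks API; here is a minimal sketch using the standard PerformanceObserver (the callback is illustrative).

```javascript
// Sketch: detecting main-thread tasks that block for more than 50 ms,
// using the standard Long Tasks API ('longtask' performance entries).
function observeLongTasks(onLongTask) {
  if (typeof PerformanceObserver === 'undefined' ||
      !PerformanceObserver.supportedEntryTypes.includes('longtask')) {
    return null; // Long Tasks API not available in this environment
  }
  const observer = new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      // Every entry here already represents a task that held the main
      // thread for more than 50 ms (that threshold is built into the API).
      onLongTask(entry.startTime, entry.duration);
    }
  });
  observer.observe({ type: 'longtask', buffered: true });
  return observer;
}
```

A stretch of the timeline with no such entries is exactly the kind of "quiet window" that TTI heuristics look for.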



These quiet periods can be found on WebPageTest’s waterfall view (in green). For the page to be considered consistently interactive, the window must last at least five seconds with no task taking longer than 50 milliseconds.
 
Selected metric: Time To Interactive: First Interactive

To sum it up, here are the metrics.
  • Start Render
  • Speed Index
  • Time to Interactive (First Interactive)
  • Page Load Time
  • Page Size
Wait a minute, Page Load Time? Yes, we still think PLT is relevant and it wouldn't be a good idea to remove it. However, with new metrics that aim at giving us a better idea of how users perceive our pages, it's natural that we start focusing on improving those rather than just looking at the PLT.
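For reference, PLT can be read in RUM from the standard Navigation Timing API. Here is a minimal sketch; the field names come from the Navigation Timing Level 2 spec, while the fallback handling is an assumption for the sketch.

```javascript
// Sketch: reading Page Load Time from the Navigation Timing API.
// loadEventEnd is milliseconds from navigation start to the end of the
// window load event, i.e. the classic PLT.
function pageLoadTime() {
  if (typeof performance === 'undefined' ||
      typeof performance.getEntriesByType !== 'function') {
    return null; // not running in an environment with a timeline
  }
  const [nav] = performance.getEntriesByType('navigation');
  if (!nav || nav.loadEventEnd === 0) {
    return null; // no navigation entry, or load event not finished yet
  }
  return nav.loadEventEnd;
}
```

In a RUM beacon this value would typically be sent alongside the perception-oriented metrics above, so both views of the page load are available.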
 
The page size is a bit different and less obvious, but it captures something that is also important: the overall weight of the page. For virtually all websites, this metric keeps growing year after year with the addition of interactivity, richer media, and third-party JavaScript, a trend that services such as the HTTP Archive confirm.



In our marketplace, users request content from different sources. We update editorial content frequently and include content from numerous third parties, supporting functions such as tracking pixels and A/B testing that may pull in even more assets. With that in mind, Page Size could help us identify sudden changes in the overall collection of assets that our users are downloading. That is a concern in its own right: what weight does our site represent on a user's data plan? The chart below illustrates the data cost of our Product page in various countries. These costs are based on data from the ITU and represent a best-case scenario.
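As a rough RUM approximation of Page Size, we could sum transferSize over the standard Resource Timing entries. A sketch, with one important caveat: cached responses and cross-origin responses without a Timing-Allow-Origin header report a transferSize of 0, so this is a lower bound on the real weight.

```javascript
// Sketch: estimating downloaded bytes from Resource Timing entries.
// transferSize is the on-the-wire size (including headers); it is 0 for
// cache hits and opaque cross-origin responses, so treat the total as
// a lower bound.
function pageWeightBytes() {
  if (typeof performance === 'undefined' ||
      typeof performance.getEntriesByType !== 'function') {
    return null; // no performance timeline available
  }
  const resources = performance.getEntriesByType('resource');
  return resources.reduce((total, r) => total + (r.transferSize || 0), 0);
}
```

A sudden jump in this number between releases is exactly the kind of third-party or asset regression that Page Size monitoring is meant to catch.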


Our metrics also bring us closer to so-called perceived performance and reflect our intent to deliver content that appears quickly and is interactive. Above all, the user is often already trying to interact with the page before everything is loaded, so our job is showing something (Start Render), composing something (Speed Index) and unlocking its interactivity (Time To Interactive).

The advantage of having several metrics at our disposal is that together they tell us much more about our application. There are hidden patterns we wouldn't recognise if we were using a single metric. For instance, our TTI comes after the Page Load event, when we would normally expect it to happen before. The number of third-party content sources that need to be downloaded delays our interactivity; that's one reason, but there are also improvements we can make within our own code.

We are using these metrics as beacons for improvements that we are working on to provide a better user experience from end to end.

What comes next?

Performance is never done. All of the metrics discussed so far are generic, which means they apply regardless of the website being measured. However, what can we expect next in terms of metric evolution?

No one knows our business better than us. So, why not create our own custom metrics that are tailored to our needs and bring us a higher business value? I'm now going to propose a new metric and call it "Time to Add to Bag", in short TAB. A bag is Farfetch’s term for the traditional e-commerce shopping cart.

What's our TAB?

This is a proposal for the Farfetch Product Details Page and it would be a RUM metric. Its definition would be: "How fast can I add to the bag?"

This metric would have a set of requirements that must be fulfilled for the metric to fire. The last requirement met would establish the value for the TAB.

The following are my proposed requirements:
  • Product Image is loaded and visible - The user needs to see the product before adding it to the bag. This is inspired by the Hero Rendering Metrics, where we define the product image as our hero image. We may even follow an interesting approach to capture this moment, such as Element Timing. Ever heard of it? It is proposed as an addition to the performance timeline and PerformanceObserver, and there's already an intent to implement Element Timing for img elements in Chromium.
  • Product Data - This is the data about the product. The name and brand, as well as the price. This is crucial because it allows the user to associate this data with the image. The name of the product acts as our own H1 Hero. It's the easiest one to capture because it is already served in the first page render.
  • The Call To Action - This comes in the form of the "Add to Bag" button. Here we are leaving out size selection, which can be a prerequisite for clicking the "Add to Bag" button. Since we are using React, we can use one of its lifecycle hooks to make sure the button is alive in the DOM. For instance, we could use the componentDidMount hook to tap into this moment.
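A minimal sketch of how TAB could be wired up with the standard User Timing API (performance.mark). The mark names and helper functions below are hypothetical, not an existing Farfetch implementation: each requirement records a mark when it is met, and TAB is the latest of the three.

```javascript
// Hypothetical mark names for the three TAB requirements.
const TAB_MARKS = [
  'product-image-visible',   // hero image painted (e.g. via Element Timing)
  'product-data-rendered',   // name, brand and price in the DOM
  'add-to-bag-mounted',      // CTA alive (e.g. from componentDidMount)
];

// Call this when a requirement is met, e.g. from the relevant lifecycle hook.
function markRequirementMet(name) {
  performance.mark(name);
}

// TAB = time of the LAST requirement met; null until all three have fired.
function timeToAddToBag() {
  const times = TAB_MARKS.map((name) => {
    const [mark] = performance.getEntriesByName(name, 'mark');
    return mark ? mark.startTime : null;
  });
  if (times.some((t) => t === null)) return null;
  return Math.max(...times);
}
```

The resulting value could then be beaconed like any other RUM metric, giving the Product Details Page a single number that expresses "how fast could this user have added to the bag?".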


With this metric we can express the purpose of the entire page, which is "adding to the bag". We could optimise our work towards reaching the best possible TAB and manage it under a tight performance budget.

TAB can also account for other variations, such as using external providers like Apple Pay. That is a different business flow, and TAB could be adapted to capture its value.

We could also define custom metrics for other important pages, such as the listing page. For instance, "Time to First Row" would account for the first row of products. On mobile, we will always see at least one row, and that row gets the most clicks and the highest conversions, so it could be an interesting metric to explore as well.
 
All of the metrics above, whether generic or custom, allow us to extract more value from our monitoring tools by focusing our attention on otherwise hidden details of the user’s experience. By doing so we get closer to our applications and, in turn, closer to our users. The ultimate goal of all this realignment is to put our users at the centre of how we design, develop and monitor, so let's all work for a faster web.
By Diogo Gonçalves and Paula Brochado