The Need for Speed - 3: Embracing custom metrics

In the previous posts - Measuring today's web performance and Call for metrics - Manuel Garcia talked about the importance of performance and about which metrics matter to us at FARFETCH. In this post, we are going to talk about measuring the performance of specific parts of the page: parts where you can't take advantage of the typical browser events, because the new content does not require a full page load.


How it started
We are part of the team responsible for everything related to size and fit on the web at farfetch.com. Among other things, we are responsible for redesigning and maintaining the Size Guide.
A Size Guide can take many forms. If you have purchased any item of clothing online and wondered what size you should get, you have encountered one of those many forms. The FARFETCH Size Guide is a simple modal dialog that gives you information about the fit of the product and its measurements, along with a table that helps you convert each size between the many sizing scales.


In order to reduce the negative impact on the Product Detail Page's performance (or on any other part of the website where the Size Guide is used), everything except the must-have information is loaded only after the size guide button is clicked.
With this approach, the Size Guide became its own little page within its modal dialog: it loads JavaScript and CSS, makes requests to a backend, and renders HTML.
It's also important to mention that the Size Guide takes advantage of the dependencies previously loaded by the product page. That way, we can greatly reduce the size of its JavaScript bundle.
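To make the deferred loading concrete, here is a minimal sketch of the pattern using a dynamic import; the button selector, module path and openSizeGuide export are hypothetical placeholders, not our actual code:

// Defer everything but the button itself: the Size Guide module is only
// fetched and executed once the user asks for it.
document.querySelector('.size-guide-button').addEventListener('click', async () => {
  const { openSizeGuide } = await import('./size-guide'); // hypothetical module
  openSizeGuide();
});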
Motivation
Even though this part of the page is not used in most product page hits, it's an incredibly important source of information - and a confidence boost - that assists our clients with their purchase.
Although only 3.5% of our clients interact with the size guide, they are 6.4 times more likely to convert and they have a higher Average Order Value! Our calculations also show that any improvement in performance on high conversion rate interactions, like the size guide, has a huge impact on Gross Merchandise Value (GMV). Furthermore, it guides customers to select the correct size, preventing them from going through the hassle of returns. This reduces CO2 emissions and positively affects our order contribution margin.
After this version of the size guide went live, and lacking the usual browser events for this interaction, we became ever so curious about how it actually performs across a variety of devices and network speeds.
So, we decided to implement the necessary steps to monitor the real performance of the size guide in the wild and in the lab.
Solution
As Manuel Garcia pointed out at the end of The need for speed - 2: call for metrics, no one knows our business better than we do, and that's why custom metrics can be so interesting. On top of that, since generic metrics gave us little support here, the size guide window was a perfect candidate for going down the custom metrics route.
We wanted to capture both visual perception and interactivity, so we took inspiration from other generic metrics:
- Size Guide TTI - the time from the user clicking the button until the Size Guide is interactive.
- Size Guide TTL - the time from the user clicking the button until the Size Guide is fully loaded.
We can also add other meaningful metrics in the future.
Capturing these metrics
To initiate the setup, we used the mark() method from the User Timing API. Each mark records the time elapsed between the start of the page load and the moment it is called.
With this timeline in mind, we created our first marker: performance.mark('sizeguide.start');
This marker is fired the moment the user interacts with the component responsible for triggering the render of the Size Guide. In this instance, as the Size Guide is within a modal dialog, it’s triggered the moment that a page component orders the modal dialog to open.
However, when the modal dialog opens, we still do not have the Size Guide fully loaded. There will be JavaScript and CSS downloaded and executed, a few requests to a backend, and HTML rendered. After all this, we reach the point at which we can consider the Size Guide to be fully loaded. There, we create the last marker: performance.mark('sizeguide.end');
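Put together, the two marks end up in places like these (a simplified sketch of the flow described above; the function names are hypothetical):

// Runs when the user clicks the component that triggers the Size Guide.
function onSizeGuideButtonClick() {
  performance.mark('sizeguide.start');
  openSizeGuideModal(); // hypothetical: opens the modal and kicks off loading
}

// Called once scripts, styles, backend data and HTML are all in place.
function onSizeGuideFullyLoaded() {
  performance.mark('sizeguide.end');
}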
Using the markers
Each mark() created records the time since the beginning of the page load. However, what we really need is the time interval between the two markers, not each timestamp separately.
To illustrate this, let's imagine a situation where the user enters the product page and, 5 seconds (5000ms) later, clicks the Size Guide button. Then, the Size Guide fully loads in 500ms. What happens to our markers would be something like this:
- performance.mark('sizeguide.start'); will record 5000ms, the time the user took to click the button.
- performance.mark('sizeguide.end'); will record 5500ms (5000ms + 500ms).
These two measurements don't mean much on their own; what really matters to us is the time interval between the start and end markers.
To get that interval, we use the performance.measure() method.
Measuring the intervals between marks
Using the measure() method, we give a name to this interval, as well as the start mark and end mark. Something like this:
- performance.measure(name, startMark, endMark);
Which in our case, is as follows:
- performance.measure('sizeguide', 'sizeguide.start', 'sizeguide.end');
For this use case, we will use the duration property, which tells us the time between the two marks. Following the previous example, it will return a value of 500ms: the difference between the 5000ms and the 5500ms of the two markers.
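In code, reading the interval back can look like this (a minimal sketch; in modern browsers measure() returns the PerformanceMeasure entry directly, but older implementations return nothing, so getEntriesByName() is the safe way to read it everywhere):

// Create the measure between the two marks.
performance.measure('sizeguide', 'sizeguide.start', 'sizeguide.end');

// Read it back; 'measure' filters the entries by type.
const [entry] = performance.getEntriesByName('sizeguide', 'measure');
console.log(entry.duration); // e.g. 500 (milliseconds)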
Saving the data
So, now that we have our markers and know the interval between them, we need to send the data somewhere in order to explore the results. We decided to use New Relic for Real User Monitoring, and SpeedCurve for synthetic data, using WebPageTest.
New Relic (Real User Monitoring)
For New Relic, we took advantage of the Browser Agent API to send a log with the necessary information. The Product Detail Page already created a few events through this API, so we only needed to send another Page Action, as follows:
// Read the interval and report it to New Relic as a custom Page Action.
const duration = performance.measure('sizeguide', 'sizeguide.start', 'sizeguide.end').duration;
newrelic.addPageAction('TTSG', { value: duration });
The first parameter of newrelic.addPageAction() is the name of your action, which is what you are going to use in the query for your dashboard.
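On the New Relic side, Page Actions are stored as PageAction events, so a dashboard query can be as simple as something along these lines (an illustrative NRQL sketch, not our exact dashboard query):

SELECT average(value) FROM PageAction WHERE actionName = 'TTSG' TIMESERIES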

Simple dashboard with RUM data
SpeedCurve (Synthetic testing)
For SpeedCurve and WebPageTest, you don't need to add any more code: User Timing marks and measures are captured automatically.
Depending on the page and the interaction, you might need to create different WebPageTest scripts for your tests. See their documentation here.
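For an interaction like ours, a script can be as simple as navigating to the page and clicking the button, something along these lines (a sketch using WebPageTest's navigate and execAndWait commands; the URL and selector are hypothetical):

navigate https://www.farfetch.com/shopping/some-product
execAndWait document.querySelector('.size-guide-button').click()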
Once the tests are running, you need to add the new Custom Metrics to SpeedCurve.

Dashboard with results from various browsers and devices
We can add more details to this dashboard as we work on improvements.
Individual test result

Video sample of the size guide loading
Merging custom metrics with customer metrics
As previously mentioned, we were extremely curious about how this application was behaving in a production environment. This exercise gave us the tools we needed to push the boundaries of our knowledge a bit further, and to start being curious about what improvements we could make in order to have a greater impact on our business.
This set of tools will now help us add detail to each improvement we make, and tell us whether we are going in the right direction. It might seem odd to focus our attention on such a small part of the page, but it's important to remember that users are more demanding than they have ever been. If a customer or potential customer has to wait for a page that is critical to their decision-making process, they may simply leave our website, search for the information somewhere else, and potentially never come back to continue their shopping journey with us. Besides having a huge impact on GMV, performance also has an impact on customer satisfaction and retention.