

Mobile App Launch Performance II

By Gonçalo Alvarez
At Farfetch for 5 years. Has built his career in mobile development with sheer curiosity for the most complex tech matters. Carhartt enthusiast.
How much does a millisecond in app launch cost?

This is the fourth of a series of articles dedicated to mobile app performance at Farfetch and the second of a series zooming in on app launch performance. This time around, the spotlight shines on Mobile App Launch as an engineering business case and factoring it into business decisions. If you have not read the previous article on App Launch, please find it here.

Time is money

“The two most powerful warriors are patience and time”
- Leo Tolstoy, War and Peace

Leo was probably right. Still, had he published War and Peace in the 21st century in the form of an app and flashed that sentence on app launch, he’d better find a way to get the message through fast. As in 400ms fast.

Measuring Is Key

Measure your app performance, don’t estimate it

Solid performance is built upon solid pillars. If there is one thing you take from this article, let it be this: measure. Measuring app launch performance provides you with the data to continuously monitor deviations, predict surges and drops, and make informed technical and product decisions. Not measuring renders you blind: reacting to hunches and outside stimuli, making changes to your code based on other people’s data, and seeking solutions for problems you may not have. You do not want to navigate a script you and your team did not write.

App launch performance is often an afterthought, and that is both a cause and a consequence of poor performance. On the one hand, your performance is poor because you don’t measure it. On the other hand, as your product evolves into an ever more complex compound of app systems, you face an uphill battle to introduce cohesive measuring practices into an increasingly large and complex codebase. Thus, just when you need to be measuring the most, you will be less likely to even start.

The map is not the territory; it is a representation of the terrain

Analysing app launch code gives you a window into the health of your entire app. It provides an accurate projection of how larger app flows may perform, since there is a big overlap between the design patterns, practices and code (networking, parsing, etc.) running in early-stage processes and in overarching app flows.
Make performance a development-time priority

This will help you keep performance as a first-class citizen within your engineering team, thus contributing to richer performance data that powers better engineering decisions. Encourage your teams to leverage developer tools (e.g., Instruments in Xcode for iOS development) and device data-gathering SDKs (e.g., MetricKit).

Keeping close track of your performance makes it easier, faster and simpler to troubleshoot issues in your app. Your team should maintain a holistic understanding of the app, detailed views of its behaviour and continuous monitoring of every change produced. This better enables them to flag alarming differences in performance resulting from any new code committed by the team.
Empower your team with the tools to plan, build and deliver great performance on a daily basis

Within the Farfetch Mobile Domain, we do exactly this: we define the metrics for success, gather data on our customers and calculate the impact of the changes each release brings to our app launch. Strong, active monitoring of your app, feeding data into a reactive and effective alert mechanism, is key to achieving great performance. All the time.
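As a minimal sketch of what such an alert mechanism boils down to, a release’s measured launch time can be checked against a baseline with a tolerance band. The function name, baseline and 5% tolerance below are illustrative, not our actual pipeline:

```python
def regressed(measured_tti_s: float, baseline_tti_s: float,
              tolerance: float = 0.05) -> bool:
    """Flag a release whose measured launch TTI regresses past the tolerance band."""
    return measured_tti_s > baseline_tti_s * (1 + tolerance)

# A 2.2 s measurement against a 2 s baseline trips the alert;
# 2.05 s stays inside the 5% tolerance band.
print(regressed(2.2, 2.0), regressed(2.05, 2.0))
```

In practice the measured value would be an aggregate (e.g., a median over devices) fed by the release’s telemetry rather than a single number.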

Every (Milli)Second Counts

According to a study published by Akamai, for every additional second that the app consumes at app launch, the conversion rate declines by 7%. Furthermore, user frustration escalates as the waiting time grows longer. Another report by KissMetrics reveals that 19% of customers uninstall or stop using an app if it takes more than five seconds to load.

While these numbers take a fairly linear approach, app performance is an ecosystem. Every variable impacts every other variable. They provide solid ground for trade-off conversations about app launch, adoption and success. Understanding our customer base and behaviour is key to surfacing the impact of each extra second in every app release’s bounce and exit rate.

App launch is important. It is a user experience interruption - the first and most defining one - and a centerpiece of the app experience. App launch also happens a lot. In fact, across all iOS devices, the Farfetch iOS app launches more than 1 million (1.000.000) times a day. After some number crunching, we realised that by shaving just one second off each launch we can save up to 12 days of aggregate launch time a day - roughly the time it takes our team to deliver a new version of the Farfetch app to our customers. You could get to the moon and back in 11 days. Twice.
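The arithmetic behind the 12-day figure is simple enough to verify:

```python
# One second saved per launch, across a day's worth of launches.
launches_per_day = 1_000_000
seconds_saved_per_launch = 1
seconds_per_day = 86_400

days_saved_per_day = launches_per_day * seconds_saved_per_launch / seconds_per_day
print(round(days_saved_per_day, 1))  # ~11.6 days of aggregate launch time
```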

First impressions

Farfetch iOS Mobile App’s user base currently sits at around 4.000.000* users, counting an average of 100.000 new download units¹ per release. For the majority of these 100.000 users, this will be their first interaction with Farfetch. A chance for Farfetch to deliver a great first impression, a delightful experience, to build our brand and pave the way for a long-lasting and trusting relationship with new customers.
From the moment a new customer taps the app icon, the operating system triggers a chain of processes to load the app, open it and render the first pixels on the user’s phone. A lot is happening both in the user’s device and in their head. While the OS rushes to bring the app to life on screen, the customer’s fast-paced mind gobbles up their subconscious app launch tolerance budget.
The goal is clear: load the app before the customer hits the threshold of their patience. Failing to do so means failing to meet their expectations, which in turn breeds frustration and distrust. If this happens, the chance to connect is missed. There is an opportunity cost here. Let’s run through the figures.

App launch performance inversely correlates with conversion rate - small values in the former support large values in the latter. These two variables, in tandem with user base distribution and average gross transaction values, line up the equation that leads to a linear extrapolation of app launch business impact.

At Farfetch, we set a baseline target for cold launch TTI² - Time To Interactive - which currently stands at two seconds. This baseline allows us to calculate the opportunity cost for new app users per release. Consider a fortnightly Gross Transaction Value of $50* for each new customer funnelled through the app download³. Should the app launch performance exceed the baseline by 1 second - i.e., a three-second cold app launch - then we’re facing a $350.000 opportunity cost.
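Plugging the (fictional, per the footnote) figures into the linear model makes the $350.000 estimate reproducible:

```python
# Linear extrapolation of cold-launch opportunity cost per release.
new_users_per_release = 100_000          # average new download units per release
fortnightly_gtv_per_user = 50            # fictional $50 GTV per new customer
conversion_drop_per_extra_second = 0.07  # ~7% decline per extra second (Akamai)
extra_seconds = 1                        # a 3 s launch vs the 2 s baseline

opportunity_cost = (new_users_per_release * fortnightly_gtv_per_user
                    * conversion_drop_per_extra_second * extra_seconds)
print(round(opportunity_cost))  # 350000 - the $350.000 figure above
```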

Distrust in apps is expensive to tackle. Adding to this opportunity cost are the marketing costs of restoring brand awareness and costs of having users rekindle their love for our products. Does the business case supporting the extra call for imagery at app launch cover this cost? Maybe it does. The good news is now you are equipped with the right tools to pose that question.

Ongoing Deposits

Besides the hit taken on new customer relationships, poor app performance also takes a toll on an existing customer’s relationship with the brand. Relationships are like bank accounts. Deposits strengthen trust and deepen the level of the relationship, while withdrawals shake their foundation and terms.

Aim high at keeping a healthy balance in your customer relationship bank account. Delightful content, smooth UX and trustworthy transactions are deposits. Poor performance, sluggish interactions and flaky communication are withdrawals.

At Farfetch, we set a baseline target for warm launch TTI which currently stands at 1.5 seconds. Most of our 4 million* users will experience a warm launch as they have opened the app before, which means part of the systems are still loaded in memory, resources lie in cache and the operating system faces a lighter task in lifting the app from inactivity into the screen.
Now, zoom out and consider an estimated yearly GTV of $150* per app user⁴. Should the app launch performance exceed the baseline by 0.1 second - i.e., a 1.6-second warm app launch - then we’re facing a potential yearly cost of $4.200.000. Does the extra call for user behaviour information, made to support UI tweaks based on a user’s segment, cover this cost? Maybe it does. You now have the tools to labour the point.
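The warm-launch figure follows the same linear extrapolation, again with the fictional GTV figures:

```python
# Yearly cost of a 0.1 s warm-launch regression across the whole user base.
active_users = 4_000_000                 # fictional user base
yearly_gtv_per_user = 150                # fictional $150 yearly GTV per user
conversion_drop_per_extra_second = 0.07  # same ~7%-per-second rate as before
extra_seconds = 0.1                      # a 1.6 s launch vs the 1.5 s baseline

yearly_cost = (active_users * yearly_gtv_per_user
               * conversion_drop_per_extra_second * extra_seconds)
print(round(yearly_cost))  # 4200000 - the $4.200.000 figure above
```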

A blink of the eye lasts 0.3 seconds - three times longer than that - and it takes more than 0.1 seconds just to shut your eyelid. On the other side of the equation, 4.200.000 one-dollar bills laid end to end would stretch 648 km - roughly the height of 100 Mount Kilimanjaros stacked on top of each other.

As mentioned above, these figures result from a linearisation of the problem at hand. App adoption, conversion rate and product success are a function of much more than your app’s performance, be it app size or launch. App success is a function of all these variables working together. The figures presented here should be read as a ceiling for potential impact, not an absolute gain or damage to your business as a result of performance improvement or degradation. Also, the cold and warm launch baseline figures presented above are a simplified version of reality, used to explain the business impact of performance. Different device models and OS versions are prone to differ in app launch performance, since older and less capable hardware will not perform as well as the most recent mobile devices running the latest versions of iOS.

Hence the importance of integrating continuous monitoring, measuring and alert mechanisms into our deployment pipelines. Furthermore, by adopting MetricKit and other performance-measuring artefacts (e.g., SignPosts), each running across all targeted devices and OS versions to assess performance on a high sample of devices, we increase the area and spectrum of observability of our apps. This also decreases our team’s reaction times and makes our interventions more precise.

As with app size, this rationale unfolds both ways. First, it helps us frame the business costs of adding more calls and processing on the app launch process. And second, it helps identify additional work we could remove at an early stage of app feature development.

App Launch Performance Budgets

We have talked about balances, deposits and withdrawals. Performance budgets follow the same line of thinking. They consist of a fixed pool of resources we can consume to power our app. Keeping our expenditure within budget is crucial for a healthy performance balance. Keep your deposits frequent and high: measuring, monitoring and making performance a development-time priority. And keep your withdrawals seldom and scarce: avoid bloating app launch with unnecessary processing, manage app launch data responsibly and eschew the performance-as-an-afterthought working mode.

Performance budgets provide decision criteria regarding the quality of the app we are serving to our customers. They provide a common language for discussing the state of app performance, they support trade-off discussions around app launch processing strategy, data consumption and UX, and they provide a basis for alerting that teams can monitor and act upon in case of regression or degradation.

The TTI budget results from the sum of the inner-stage budgets. While app launch assessment is a function of the TTI budget balance, defining inner-stage budgets promotes granular reflection around each step of app launch, defining actions to tackle deviations within each step of the process and establishing contained troubleshooting strategies.
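As an illustration, a cold-launch TTI budget might be decomposed into inner-stage budgets like the ones below. The stage names and values are made up for the example; real stages depend on your app:

```python
# Hypothetical inner-stage budgets summing to a 2 s cold-launch TTI baseline.
cold_launch_budget_ms = {
    "pre_main": 400,      # e.g., dynamic linking, static initialisers
    "app_setup": 500,     # e.g., app delegate work, SDK initialisation
    "networking": 700,    # e.g., first content calls and parsing
    "first_render": 400,  # e.g., layout and first interactive frame
}

tti_budget_ms = sum(cold_launch_budget_ms.values())
print(tti_budget_ms)  # 2000 ms - the 2 s cold-launch baseline
```

A regression in any single stage can then be attributed and troubleshot within that stage’s own budget, rather than against the launch as a whole.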

Working with budgets allows us to derive app launch performance targets - a baseline for calculating potential business impact. Their purpose is not to arbitrate release go/no-go decisions - that is a budget’s mission. Rather, they serve as a baseline to gauge the estimated gains or losses tied to each release’s performance. Exceeding the target leads to losses. Keeping performance under target leads to gains.
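Under the same linear model, a release’s estimated gain or loss against its target can be sketched as follows; the cost-per-second rate plugged in is the illustrative cold-launch figure from earlier:

```python
def release_impact(measured_tti_s: float, target_tti_s: float,
                   cost_per_second: float) -> float:
    """Positive values are estimated losses; negative values are estimated gains."""
    return (measured_tti_s - target_tti_s) * cost_per_second

# A 3 s launch against a 2 s target, at an illustrative $350.000-per-second rate:
print(round(release_impact(3.0, 2.0, 350_000)))   # a 350000 estimated loss
# Landing under target turns the same rate into an estimated gain:
print(round(release_impact(1.8, 2.0, 350_000)))   # a 70000 estimated gain (-70000)
```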

Negotiation is key. You trade, compromise and agree on a strategy based on assessing pros and cons. App launch budgets and targets, quantified in seconds, allow us to unfold a trade-off strategy with product counterparts and peer engineering representatives. Adding extra calls for imagery or user information at the early stages of the app now has a richer foundation for discussion, making the trade-offs clearer to everyone involved: from business to product to engineering. How much of the budget do we have left to consume, and what impact can we make if we consume it?
Final words

App launch, like app size and every other software performance trait, is an engineering team’s responsibility. The shape, form and performance of our app - especially app launch performance - is our organisation’s business card. To champion the performance of our app, we must make room for discussing it, holding the spotlight on the matter and swaying our organisation’s attention to the commonly opaque tidbits surrounding performance. Dress it in a language used across the board, to clearly convey its impact, importance and business cases in terms everyone shares, speaks and relates to.

There is a business impact in every decision we make on a daily basis. It is only fair we make informed choices. Measuring is key.

* GTV and LTV figures presented throughout this article are fictional, used to labour the point of App Launch costs to businesses.
1. Apple defines app installations as the total number of times your app has been installed, including redownloads or restores on the same or a different device, downloads to multiple devices sharing the same Apple ID, and family sharing installations.
2. Time to Interactive - the time elapsed from the moment the user taps the app icon until the app shows its first pixels and is responsive, i.e., the tab bar is actionable.
3. To reach this figure, we calculated the 15-day value of new customers based on their Lifetime Transaction Value - LTV - crossed with the new app downloads within this period
4. This time we calculated the 12-month value of new customers based on their LTV and once again crossed them with the app downloads.