Experimentation at FARFETCH: An Introduction

By Gregory Sherwin
Among the human skills that computers and AI cannot replace is understanding the relationship between cause and effect.

Although its roots go back more than a millennium, the randomized controlled trial (RCT) has been the scientific gold standard for causality since 1948, when the clinical trial work of the British Medical Research Council set the bar for all major medical interventions to come. Any identification of a safe and effective COVID-19 vaccine will undoubtedly be the result of randomized controlled trials.

In our modern era of digital business, however, the experimentation practice of RCTs has extended beyond medical treatments to helping us learn how to make more effective changes to Internet experiences.

The Rise of Digital Experimentation

Some of its earliest pioneers were digital marketers, who frequently created A/B tests to inform the art of Conversion Rate Optimization (CRO). A decade ago, Google famously increased their revenues by $200 million annually — while also alienating designers — by heavily experimenting with 41 shades of blue.

Yet today we use digital experiments for far more than just dithering over a limited set of design or messaging options on a website. Businesses use them to de-risk software deployments and to discover and validate user features. Pursuing innovation and not just incrementalism, they even use experiments to navigate the uncertainties behind greenfield ideas at the margins of Minimum Viable Products and Lean Startups.

If you are investing time and effort to build new experiences with desired outcomes, why wouldn’t you collect data to inform your decisions? The common alternative is what’s often called the HiPPO, or the Highest Paid Person’s Opinion. Given the diversity and complexity of human experience within today’s highly networked world, no one person can know it all anymore.

Causality in Complex Systems Requires Experimentation

This diversity and complexity make cause and effect incredibly difficult to establish, if not outright impossible. This is because social behaviors are multidimensional, adaptive, and emergent. Thus in these complex system environments, safe-to-fail (as opposed to fail-safe) experiments become the primary way to introduce change and measure its system-wide impact.

Exploring the resulting data along multiple dimensions informs us of whether we want "more like this and less like that", thus better steering us towards our strategic objectives. This practice iteratively produces insights, learnings, and a better sense of how we should respond with our next experimental change.

But data alone isn’t the answer either. At FARFETCH we champion being data-informed rather than data-driven. Data is contextual, it is prone to bias, and we can interpret it in many different ways. We also must resist the human impulse to turn uncertainties into certainties.

Hence there needs to be a human in the loop — even better if there are many humans — to ask further hard questions and make sense of what's really going on. This is especially true when your subjects are Internet users, whose behavior calls for qualities that computers and AI are terrible at: empathy, intuition, and an appreciation for human complexity.

It is no coincidence that "Be Human" is one of the core values among Farfetchers.

Enabling Learning Organizations Through Experimentation

"The combination of an exponential increase in data, better tools to mine insights from that data, and a fast-changing business environment means that companies will increasingly need to, and be able to, compete on the rate of learning." -BCG Henderson Institute, Winning in the '20s

More than ever, businesses today must become learning organizations. Learning about our world, our markets, our customers, and how our products and services can best serve them requires a culture of continuous experimentation.

For all the disruption in our modern world, fortunately the best practices of experimentation have changed little since Francis Bacon first formalized the seven steps of the Scientific Method in 1620. They still apply today, and our digital experiments at FARFETCH are no exception.


The third step of the Scientific Method is especially critical: defining a good hypothesis. At FARFETCH we break down our hypotheses into the following components: 
  • Insight - Both qualitative and quantitative, and developed from research or our previous experiment iteration learnings
  • Proposed change - A description of the change that we want to introduce
  • Behavioral change - A description of the change we are expecting to induce on behavior or outcomes
A good hypothesis thus takes the following form:
Based on <insight>, we believe <proposed change> will result in <behavioral change> as measured by <measurable impact>.
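
This template can be made concrete in code. As an illustrative sketch only (not FARFETCH's actual tooling), a small Python dataclass can hold the four components and render the hypothesis statement, so that every experiment proposal is forced to fill in each part:

```python
from dataclasses import dataclass


@dataclass
class Hypothesis:
    """The four components of a well-formed experiment hypothesis."""
    insight: str            # qualitative/quantitative research or prior learnings
    proposed_change: str    # the change we want to introduce
    behavioral_change: str  # the behavior or outcome we expect to induce
    measurable_impact: str  # the metric that will tell us whether it worked

    def statement(self) -> str:
        """Render the hypothesis in the standard template form."""
        return (
            f"Based on {self.insight}, "
            f"we believe {self.proposed_change} "
            f"will result in {self.behavioral_change} "
            f"as measured by {self.measurable_impact}."
        )
```

For example, `Hypothesis("checkout drop-off data", "a one-page checkout", "fewer abandoned carts", "checkout conversion rate").statement()` yields a complete, reviewable hypothesis sentence. The field names and example values here are hypothetical; the value of the structure is that a missing component becomes an error rather than an afterthought.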

A specific type of experiment, the aforementioned randomized controlled trial, turns out to be perfectly suited to digital experiences. RCTs require three core ingredients:
  1. Control groups: a population that is excluded from the treatment introduced by the experiment,
  2. Randomization: a random assignment of users to receive either the control or the treatment experience to eliminate grouping biases, and
  3. Blinding: concealing the group allocation from those involved in the experiment to further minimize bias and maximize the validity of the results.
Digital technologies can easily support each with the right tools, analytics, and reports.
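
As a minimal sketch of the randomization ingredient (an illustrative pattern, not a description of FARFETCH's platform), one common approach is to hash a stable user identifier together with an experiment name, giving each user a deterministic, uniformly distributed bucket. The same user always lands in the same group, while different experiments randomize independently:

```python
import hashlib


def assign_variant(user_id: str, experiment: str, treatment_share: float = 0.5) -> str:
    """Deterministically assign a user to 'control' or 'treatment'.

    Hashing (experiment, user_id) yields a pseudo-random but stable
    bucket in [0, 1]: the same user always sees the same variant of a
    given experiment, and assignments across experiments are independent.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # map first 32 bits to [0, 1]
    return "treatment" if bucket < treatment_share else "control"
```

Because the assignment is a pure function of the identifiers, no per-user state needs to be stored, and blinding falls out naturally: neither the user nor the analyst chooses the group. The function name and parameters here are assumptions for illustration.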

All of which is fine and good for individual experiments, but what about experimenting at scale?

Digital Experimentation at Scale

At FARFETCH I belong to a core experimentation team that operates as a kind of Center of Excellence. We exist to provide the right tools and elevate standard practices to support experimentation across our global company. At scale, I’ve found that the challenges are often more cultural than they are technical.

We help weave experimentation into the mindset of product owners, engineers, data scientists, marketers, designers, product analysts, researchers, and the executive team. Common artifacts and regular rituals additionally become important to empower communities of practice focused on experimental learning throughout the organization. A culture of sharing insights, diverse perspectives, experimentation learnings, and aligned objectives becomes essential for evolution.

Here at FARFETCH, in line with our values, our internal guiding principle is to "Do What's Never Been Done". In that spirit, we are excited to introduce an extensive new 43-page ebook documenting our digital experimentation experiences, practices, tools, techniques, and lessons.

All of which echoes Step 7 of the Scientific Method: communicating our results. We hope you will benefit from our shared learnings and can apply them to your own organizations.



