

Elasticsearch series: Search @ FARFETCH

By Vitor Saraiva
Search is one of the most important features of any e-commerce website (in fact, of any website). It allows end-users to find what they're looking for, providing both free-text search and filtering capabilities. A poor search result falls short of user expectations, and you will probably start losing sales.

Back in 2013, when I joined Farfetch, free-text search, faceting, filtering and sorting were powered by a massive SQL Server materialised view, fed by SQL Server replication directly from the operational database.

Architecture in 2013

Strong coupling between the operational database and the product catalogue database, massive stress on the SQL Server instances while rebuilding the materialised view indexes, replication delays and poor response times were some of the challenges we were dealing with. It had become clear to us that this wouldn't cut it for much longer. Farfetch was growing fast, and we needed to be able to cope with that growth.

Elasticsearch to the rescue

We decided to take the opportunity to introduce a real search-oriented technology that would allow us to lay the foundations for the search capabilities we needed. On top of that, we would be able to scale read requests with better response times, leaving SQL Server behind as a search engine in the process. Elasticsearch had released version 0.90¹ just a few months before. It was the new kid on the block, with amazingly good user feedback and a REST API that fit our service-oriented architecture beautifully.

After getting to know the people behind Elasticsearch better, we decided to give it a try. Apache Solr was also part of our POCs (proofs of concept), but Elasticsearch's smaller learning curve and "zero config" setup convinced us.

New challenges

Moving from a POC to an actual production-grade product is not always as easy as we would like it to be. Text analysers, tokens and inverted indexes were all new concepts to us at the time. We felt like a fish out of water, but the excitement of this new and challenging project gave us the drive we needed for the job.

Define the right mapping

Defining the right mapping is one of the most important tasks. It needs to support all the search requirements and, at the same time, be friendly to writes. Moving from a relational world into a document-oriented one was also very challenging, since we were always reluctant to duplicate data.

We tried out several parent-child models, mainly because parts of the product data, e.g. stock, change a lot faster than others, and we wanted to be able to update them independently and faster.
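For illustration, a stock-as-child model along those lines would look roughly like the sketch below, using the _parent mapping available in the Elasticsearch versions of that era (the index, type and field names are purely illustrative, not our actual schema):

PUT /products
{
   "mappings": {
      "product": {
         "properties": {
            "product_id": { "type": "long" },
            "short_description": { "type": "string" }
         }
      },
      "stock": {
         "_parent": { "type": "product" },
         "properties": {
            "merchant_id": { "type": "integer" },
            "quantity": { "type": "integer" }
         }
      }
   }
}

Stock changes then become small writes to the child documents, without rewriting the much larger product documents.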

The one-to-many relationship between products and merchants, inherent to Farfetch's business model, was also a challenge for us. We were very reluctant to duplicate all the product metadata for each merchant and tried to keep the merchant as a nested object.
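The nested variant looked something like this sketch (again with illustrative field names): a single product document carrying its merchants as a nested array, so merchant-level fields such as price could be queried without mixing values across merchants.

"product": {
   "properties": {
      "product_id": { "type": "long" },
      "short_description": { "type": "string" },
      "merchants": {
         "type": "nested",
         "properties": {
            "merchant_id": { "type": "integer" },
            "price": { "type": "double" }
         }
      }
   }
}

The catch is that every merchant-level change rewrites the whole product document, and every query touching merchant fields needs a dedicated nested query.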

Of course, these first models never saw the light of day. Our performance tests were showing 300ms+ response times, way above our SLA. We tried, and we failed, until we settled on a structure where a document represents a product at a given merchant. This structure allowed us to keep merchant-specific data, such as price and delivery options, flattened, at the cost of duplicating product metadata like images, descriptions, categories and attributes.

We also got ourselves a new problem - duplicate products on the product listing pages (we will be writing a dedicated blog post on this subject).
After settling on a mapping strategy, it is also essential to choose how the indexes are organised - e.g. multiple indexes or one big index. At the time, and due to the country-oriented catalogue rules that define whether a given product can be sold in a specific market, we split our data across several indexes - one for each country Farfetch sells in.
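In practice, that means every query is scoped to the index of the market the user is browsing - something like the sketch below, where the index name is purely illustrative:

// one index per country - the index name is hypothetical
GET /products_us/_search
{
   "query": {
      "match": { "short_description": "low top trainers" }
   }
}

As for the documents themselves, a simplified product-per-merchant document looked roughly like this: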

{
   "product_id" : 13526037,
   "merchant_id" : 1234,
   "price" : 167,
   "categories" : [{
      "id" : 1,
      "description" : "Shoes"
   },
   {
      "id" : 2,
      "description" : "Low-Tops"
   }],
   "short_description": "low top trainers",
   "full_description": "Black leather low top trainers from Puma featuring a pull tab at the rear, a lace-up front fastening and a white rubber sole."
}

This is still roughly the same document structure we use today, which tells us we got something right!
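Filtering and faceting against this flattened structure is then straightforward. The sketch below uses the field names from the document above and the 0.90-era syntax (a filtered query plus a terms facet); in current Elasticsearch versions the same thing would be a bool query with a terms aggregation:

GET /products_us/_search
{
   "query": {
      "filtered": {
         "query": { "match": { "full_description": "leather trainers" } },
         "filter": { "term": { "categories.id": 2 } }
      }
   },
   "facets": {
      "categories": {
         "terms": { "field": "categories.id" }
      }
   }
}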

Text analysers and multi-cultures

Farfetch has been a global company from day one, meaning we need to be able to deliver to every country in the world and adapt to each market.

When we introduced Elasticsearch, we supported three different cultures (English, French and Portuguese), so we needed to make sure we set up the proper analysers.

Since we weren't experts in the matter, we decided to keep it simple and just apply the basic tokenisers and filters, leaving spell checking and other advanced capabilities for a second iteration.

We also created different index-time and search-time analysers, so the user input would follow a different text-processing pipeline.

"en_au_analyzer": {
   "filter": [
       "icu_normalizer",
       "en_possessive_filter",
       "en_stop_filter",
       "en_stem_filter",
       "icu_folding"
   ],
   "type": "custom",
   "tokenizer": "icu_tokenizer"
},
"pt_pt_analyzer": {
    "filter": [
        "icu_normalizer",
        "pt_stop_filter",
        "pt_stem_filter",
        "icu_folding"
    ],
    "type": "custom",
    "tokenizer": "icu_tokenizer"
},
"fr_fr_analyzer": {
    "filter": [
       "icu_normalizer",
       "elision",
       "fr_stop_filter",
       "fr_stem_filter",
       "icu_folding"
    ],
    "type": "custom",
    "tokenizer": "icu_tokenizer"
},
"search_analyzer": {
   "filter": [
       "lowercase",
       "asciifolding",
       "stop",
       "snowball"
   ],
   "type": "custom",
   "tokenizer": "standard"
}
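These analysers only take effect once fields in the mapping point at them. A minimal sketch of how a description field could be wired to different index-time and search-time analysers, assuming the mapping syntax of that era (index_analyzer was later merged into analyzer in newer versions):

"full_description": {
   "type": "string",
   "index_analyzer": "en_au_analyzer",
   "search_analyzer": "search_analyzer"
}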

Setting up the right infrastructure

While we were struggling to find the best mapping and thinking about how we would get the data to Elasticsearch, we were being pushed to answer questions to set up the production infrastructure:
  • How many nodes will we need?
  • What size should the VMs be?

We had no idea how to answer those questions and, the truth is, there is no magic answer. You need to keep testing continually, because each use case is different and may need more CPU or more memory. We ran a few tests and decided to go with six nodes. Again, we tried to keep it simple: no fancy configurations besides the ones recommended by the Elastic team. All nodes had all three roles (master, client and data) and the same size (8 cores and 8 GB of memory for the JVM).

We later changed this setup to have dedicated nodes for each role, as we found it made the cluster more stable and allowed us to optimise the VM sizes for each role.

Taking it live

After some months of work, setup and preparation, we finally had our Elasticsearch cluster ready to serve our end-users. Since the catalogue is mission-critical, we wanted to make sure that, if something went wrong, we were able to minimise the impact on end-users. We implemented a "country-based" toggle to roll out the new version gradually and increase the load on the Elasticsearch cluster in a controlled way. We also generated only the required indexes, which gave us control over the write throughput as well.

During this period, we had a single service version working with both the legacy and the new search systems. This allowed us to learn more about our production environment and user behaviour, adjusting the cluster along the way.

Although this proved to be the right way of rolling out such a major change, it also brought a huge workload to our development team - new business requirements had to be implemented in both versions during the rollout period.

Architecture with Elasticsearch

We kept the replication and its dependency for the ETL process, and added a few table triggers to capture the required record changes and propagate them to Elasticsearch.
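Each batch of captured changes then translates into a bulk request against the affected indexes. A rough sketch, assuming a hypothetical product-merchant _id convention and the illustrative per-country index name used above:

POST /products_us/product/_bulk
{ "index": { "_id": "13526037-1234" } }
{ "product_id": 13526037, "merchant_id": 1234, "price": 167 }
{ "index": { "_id": "13526037-4321" } }
{ "product_id": 13526037, "merchant_id": 4321, "price": 159 }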

The current architecture is a lot different from this one but, although not perfect, it set us on the path we wanted.

Conclusions

Elasticsearch is a great and flexible tool, designed to support a lot of use cases, but you really need to dive into its internals to make the best use of it. Getting things right on the first try is nearly impossible, so do not be afraid to fail and make mistakes. Make sure you never stop iterating on your mapping and design - there is always something to improve.

Stay tuned for the next blog posts in the Elasticsearch series, where we will discuss a more up-to-date architecture, how we tackled the multi-datacenter challenge, and performance tweaks and tips.

¹ Reference here.