

FARFETCH Apps Development: from Sprint to Sprint

By Francisco Medeiros and Sílvia Costa
Our native mobile app development teams have grown 50% in the past year, and our group now consists of 70 engineers. To ensure our product development capacity scales with this growth, we continually refine our processes for faster integration and synchronization between all members.

While many of our ceremonies are standard for Agile teams practising Scrum, our processes have some nuances worth mentioning.

Our sprints consist of a regular, recurring two-week work cycle. Every sprint begins with the sprint planning meeting on a Monday and finishes after the Mobile Review on the Friday of the second week.

During sprint execution, the PODs (small, custom agile teams of four to eight members, each responsible for a single task, requirement, or part of the backlog) create product enhancements with a strong emphasis on both working code and automated tests. Developers use techniques such as unit and snapshot testing, while QA engineers perform functional UX and exploratory tests, all within a continuous integration environment.
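The apps themselves are native, so the real tests live in the platform toolchains; as an illustration of the snapshot-testing technique, here is a minimal, language-agnostic Python sketch, where `render_profile_card` is a hypothetical view under test:

```python
import json
import pathlib

SNAPSHOT_DIR = pathlib.Path("snapshots")

def render_profile_card(user: dict) -> dict:
    # Hypothetical view: returns a serialisable description of the rendered UI.
    return {
        "title": user["name"].upper(),
        "subtitle": f"{user['orders']} orders",
    }

def assert_matches_snapshot(name: str, rendered: dict) -> None:
    """Record the output on first run; fail if a later run differs."""
    SNAPSHOT_DIR.mkdir(exist_ok=True)
    snapshot_file = SNAPSHOT_DIR / f"{name}.json"
    current = json.dumps(rendered, indent=2, sort_keys=True)
    if not snapshot_file.exists():
        snapshot_file.write_text(current)  # first run records the snapshot
        return
    assert snapshot_file.read_text() == current, f"snapshot '{name}' changed"

# First call records the snapshot; the second verifies nothing changed.
card = render_profile_card({"name": "Ada", "orders": 3})
assert_matches_snapshot("profile_card", card)
assert_matches_snapshot("profile_card", card)
```

The value of the technique is that any unintended change to rendered output fails the build, at the cost of reviewing and re-recording snapshots when a change is intentional.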

We maintain a fixed, smoothly operating release cycle every two weeks with a well-defined process. Among the many steps of this process, our QA engineers perform a regression run of the most critical areas of the apps to ensure the quality of the master build for release.

We take a deeper dive into these processes below.


Backlog refinement

The Product Manager and the POD review the backlog to ensure it contains the appropriate epics, user stories or tasks, that their priority is up to date, and that the items at the top of the backlog are ready for a future sprint. Some of the activities that occur during this refinement of the backlog include the following:
  • Removing user stories that are no longer relevant,
  • Creating new user stories in response to newly discovered needs,
  • Re-assessing the relative priority of stories,
  • Assigning estimates to stories where they are missing,
  • Refining estimations in light of recently uncovered information, and
  • Splitting high priority user stories that are too complex to fit in an upcoming iteration (aka, story slicing).
One of the main objectives is to accurately estimate user stories. This enables the POD to better understand the effort to deliver them in future sprints. It also establishes long-term roadmap visibility within the team.

The group checks if all the required information is available, such as designs, acceptance criteria and testing conditions. Once we have addressed any major uncertainties in the user stories, we then proceed to discuss the technical implementation details for each.


Sprint planning

Together with the Product Manager, the POD commits to the product backlog items they will work on during that sprint. We build our capacity plan based on the team's availability and other commitments.

The POD also discusses whether there are any dependencies between the user stories. Then the Engineering Team Leader assigns the initial product backlog items to each member working on the next sprint.
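As a sketch of how such a capacity plan can be derived (the POD composition, the days off, and the focus factor below are all hypothetical, not our actual figures):

```python
def sprint_capacity(members, sprint_days=10, focus_factor=0.7):
    """Estimate a POD's sprint capacity in ideal working days.

    members: list of (name, days_unavailable) tuples for the sprint;
    focus_factor discounts ceremonies, support and other commitments.
    """
    available = sum(max(sprint_days - off, 0) for _, off in members)
    return round(available * focus_factor, 1)

pod = [("dev_a", 0), ("dev_b", 2), ("qa_a", 1)]  # hypothetical POD
print(sprint_capacity(pod))  # 27 available days * 0.7 -> 18.9
```

Comparing this number against the summed estimates of the committed stories gives an early warning that a sprint is overbooked.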


Daily stand-up

Each day at the same time, the POD meets to bring everyone up to date on the information vital for coordination: each member briefly describes any completed contributions, any obstacles that stand in their way, and what they plan to work on that day.

This meeting is timeboxed to a maximum of 15 minutes. To keep it short, we hold it standing up, so that nobody gets too comfortable for too long, and any topic that sparks a discussion is kept brief. If further details are required, they are discussed in greater depth after the meeting by a smaller group of the involved parties.

This ceremony is optional, however. While some PODs do it every day, others prefer to synchronize their teams on an as-needed basis.

Regression test execution

Every two weeks we re-run functional tests as part of our regression testing. This procedure ensures that all previously developed and tested functionality operates as expected before releasing our new changes to a production environment. Our regression tests cover code, design or anything else that affects the overall framework of the system.

As software changes during the sprint, new faults or old defects can emerge. Each POD is responsible for feature testing throughout the sprint, and checking that the new code works with the old is a key measure of quality.

With fixed release cycles every two weeks, all teams know that they must comply with all the defined criteria before a merge from a development branch to the master branch in source control. Our master branch must always be deployable as our production code. 

To ensure all quality standards are met for deployment, and because we humans aren’t always perfect, this master branch goes through a rigorous process of end-to-end testing.
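The exact merge criteria vary by team and aren't enumerated here, but the gate itself can be sketched like this (the criteria names below are illustrative, not our actual checklist):

```python
# Hypothetical criteria a development branch must satisfy before
# merging to master, which must always remain deployable.
MERGE_CRITERIA = ("unit_tests", "snapshot_tests", "code_review", "qa_sign_off")

def can_merge_to_master(checks: dict) -> bool:
    """A branch may merge only when every defined criterion has passed."""
    return all(checks.get(criterion, False) for criterion in MERGE_CRITERIA)

feature_branch = {"unit_tests": True, "snapshot_tests": True,
                  "code_review": True, "qa_sign_off": False}
print(can_merge_to_master(feature_branch))  # False: QA has not signed off yet
```

Treating the gate as a strict conjunction keeps master deployable: a branch missing even one criterion simply waits, rather than degrading the release build.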

The regression test execution consists of a set of reusable, documented, step-by-step manual test cases with extensive coverage of the significant and critical system aspects.

Since we have several teams, in order to facilitate and coordinate the process, we created three roles with distinct responsibilities:
  • Regression Owner: a QA representative who creates the test executions by selecting the appropriate test cases to validate the release, coordinates the QA engineers during execution, manages the discovered defects and approves the build.
  • Release Owner: a development representative who orchestrates the complete release process, from the release train and any fixes through to the released binary.
  • Release Approver: an Engineering Manager who validates all user stories during the sprint/release cycle and ultimately decides whether the new version is eligible to roll out. We can't ship the new release until the approver signs off.
Cross-team meetings

The term kumbaya is rooted in an American spiritual and folk song of the same name. The term refers to moments of or efforts at harmony and unity.

We thought it sounded like the right name for our cross-team meetings because of its symbolic meaning. Our kumbaya sessions are, in essence, a forum to address dependencies, technical details and process improvements. Timeboxed to a one-hour session, the Dev Kumbaya occurs once a month, while the QA Kumbaya occurs once every week. Both serve the demanding purpose of aligning all members of those disciplines.

POD Sync

Held on the first day of the sprint, this one-hour meeting gives all developers visibility into what areas of the system will be worked on during that sprint. It also helps them anticipate possible coding conflicts between teams.

Mobile Review

Held at the end of the sprint, the Mobile Review demonstrates one or more features developed during a sprint by each team. It is an informal meeting, not a status meeting, and we present the new features to elicit feedback and foster collaboration.
This one-hour meeting includes the following elements:
  • Attendees include all Development teams, Product Managers and key stakeholders.
  • Each POD demonstrates the work that it has done and answers questions about the new features.
  • The team collectively celebrates every successfully achieved milestone.

Throughout this post, we've covered the practice behind delivering our native mobile app features successfully: from the planning phase to the development cycle, from sprint to sprint and finally by celebrating every new release. Before the end of the year, we plan to grow the team significantly.

The ability to see a situation from all angles, to understand its system and drivers to find the right solution, has allowed our team to keep growing sustainably.

As we grew from ten people to 70, we faced a few challenges in scaling our plans without losing our culture and identity. We wanted just enough process and structure to enable the team to deliver successfully, but we also wanted to build on the FARFETCH values so that the mobile cluster team feels like it is in a familiar environment.

In our most recent Humu survey, 84% of Farfetchers indicated that they would recommend "Farfetch" as a great place to work. This is possible mostly due to a challenging but friendly working environment and a suitable work/life balance.

As the cluster grew, we realised that small teams were more manageable, improving focus and communication. Hence we subdivided the team into smaller elements.

Team alignment is essential: everyone needs to be on the same page about the mission. We therefore had to define what each individual on the team should do from the first day they join the cluster.

By providing an inclusive onboarding process, new joiners have an engaging kick-start experience that enables fast integration with their role and their team within a few weeks.  

There are three essential elements within our onboarding process:
  • Operational: Make sure that new joiners have the right materials (computer and working software) and knowledge related to processes, technicalities and tools to do their job correctly.
  • Social: Make new team members feel welcome, help build and promote valuable relationships with their colleagues and managers, and make them feel like an essential part of the organisation.
  • Strategic: Ensure that newcomers know the organisation (structure, vision, mission, goals, key developments, culture) and help them identify with it.
From day one, people who join us take ownership of their role and of how it can contribute to their career and to the cluster's success.

Our fast growth from one to 11 teams also provided a challenge in managing our planning across all teams. Currently, we have an initiative to ensure 85% on-time delivery. As we demonstrated above, this is possible by having a well-defined planning process with high-level estimations, capacity planning and a roadmap. 

Releasing regularly every sprint has also enabled us to meet our goals. By establishing a fixed release cycle, we removed extra pressure from the delivery of our features. If for some reason a team can't deliver a feature in a particular sprint, we can deliver it in the next release train without compromising the work of other teams or adding merge complexity by holding up a release.
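The convenience of a fixed cadence is that the next train is always computable in advance. Here is a small sketch under assumed values (the anchor date and the 14-day cycle below are hypothetical, chosen only to illustrate the arithmetic):

```python
from datetime import date, timedelta

def next_release_train(today: date, anchor: date = date(2019, 1, 4),
                       cycle_days: int = 14) -> date:
    """Return the next release date on a fixed two-week cadence.

    'anchor' is a hypothetical past release Friday; a feature that
    misses one train simply ships on the next one.
    """
    remainder = (today - anchor).days % cycle_days
    wait = (cycle_days - remainder) % cycle_days
    return today + timedelta(days=wait)

print(next_release_train(date(2019, 1, 10)))  # 2019-01-18, the next train
```

Because the schedule never moves, a slipped feature costs at most one cycle rather than delaying everyone else's work.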

With clearly defined processes and, more importantly, a group of committed individuals with the drive to achieve excellence, we are able to build great app experiences. This is what we love to do, and this is how we do it!