The UK’s most popular price comparison website broke down its single, monolithic application into microservices and learned some lessons about the benefits and drawbacks of a microservices approach to software development along the way.

Speaking at MongoDB World this week, Matt Collinge, associate director of technology and architecture at the UK insurance price comparison site, explained how his tech team moved from a single, monolithic application to the increasingly popular microservices architecture.

The drivers for change were the sort of problems a single application tends to create: having to coordinate releases amongst teams, long feature cycles, changes negatively impacting other product teams, not being able to expose functionality to partners through APIs, and a single database of more than 200 tables that was both a single point of failure and scalable only vertically.

The solution was to reorganise the internal teams from groups of specialists (UI, database administrators, middleware) into autonomous product teams that could change what they owned through microservices: a set of smaller applications chained together instead of a single, master application.


The benefits

Collinge laid out the five benefits of moving to a microservices approach:

1. More manageable: “By taking a single complicated thing and breaking it into smaller things, they become easier to reason about.” This is the Unix design philosophy of building small things that work well (do one thing and do it well) and chaining them together through well-defined interfaces where appropriate.

2. Flexible tech stack: “We haven’t had to commit to a tech stack, we use the right tools for the problem, e.g. if we have a machine learning problem we use Python, as there is a wealth of open source libraries we can make use of. If we just want to do a RESTful API we use Node.js.”


3. Reduced cost of failure: “Big services tend to fail in a big way. If your monolithic application isn’t working you aren’t making money. In a microservices world, if a service stops working it doesn’t take down the entire business’s ability to make money. It drives innovation and shortens the build, measure, learn cycle, so people feel more empowered to experiment, and it is freeing to build things that don’t have to last ten years.”

4. More specialised: “You can use microservices to specialise what you do. So if you have customer data you can put that into a single microservice, wrap that in layers of additional security and not burden the rest of your services with that same constraint. In a monolithic world there needs to be that highest common denominator.”

5. Independence: “Teams can have their own backlog of change and scale and release independently which allows the organisation as a whole to move faster.”

The drawbacks

Of course there are drawbacks, and the first is the most obvious:

1. Complexity: “By splitting one large thing into small bits you end up having more bits.”

2. Disparate data sources: “You end up with many data sources, so how do you deal with that data?”

3. Creating a distributed monolith: “You have to put some effort into design upfront. Just taking a monolith and breaking it into microservices won’t cut it if those bits were tightly coupled to start with, you will just end up with tightly coupled microservices, which is a distributed monolith, which is the worst of all worlds.”

So, how did they deliver?

The strategy focused on minimising these drawbacks, starting with an investment in automation. As Collinge put it: “Trying to realise continuous delivery by making releases reliable and repeatable and the whole process a non-event, rather than a 4am cold sweat process.”

Next he wanted to ensure that all the monitoring and alerting frameworks looked the same, so “all of our microservices emit a standard set of metrics for latency, throughput etc. All of these metrics have a standardised threshold so when engineers move among teams there is less surprise when it comes to the shape of the applications.” He also pushed service discovery and health checking into the infrastructure itself.
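This “standard set of metrics” idea can be sketched with a minimal stdlib Python decorator; the class and function names here are illustrative assumptions, not taken from the talk or any real framework at the company:

```python
import time
from collections import defaultdict


class Metrics:
    """Collects the standard latency/throughput metrics every service emits."""

    def __init__(self):
        self.request_count = defaultdict(int)    # throughput (calls handled)
        self.total_latency = defaultdict(float)  # summed seconds per endpoint

    def instrument(self, name):
        """Decorator recording count and latency for the named endpoint."""
        def decorator(fn):
            def wrapper(*args, **kwargs):
                start = time.perf_counter()
                try:
                    return fn(*args, **kwargs)
                finally:
                    self.request_count[name] += 1
                    self.total_latency[name] += time.perf_counter() - start
            return wrapper
        return decorator

    def snapshot(self):
        """Emit the same metric shape for every instrumented endpoint."""
        return {
            name: {
                "count": self.request_count[name],
                "avg_latency_ms": 1000 * self.total_latency[name]
                                  / self.request_count[name],
            }
            for name in self.request_count
        }


metrics = Metrics()


@metrics.instrument("get_quote")
def get_quote(customer_id):
    # Hypothetical handler standing in for a real service endpoint.
    return {"customer": customer_id, "premium": 120.50}


get_quote("c-1")
get_quote("c-2")
print(metrics.snapshot())
```

Because every service exposes the same metric shape, dashboards and alert thresholds can be standardised, which is what lets engineers move between teams with “less surprise”.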


Collinge takes a realistic approach to faults and failures: “By accepting the fact that computers break and networks fail we have focused on becoming fault tolerant.” In practice this means that if a non-critical dependency goes down, a service will still respond in a degraded form, and if the failure is in a critical dependency such as the database, it fails fast “so that the client doesn’t expend lots of resources waiting for a response that doesn’t come back,” he said.
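The split between degrading gracefully and failing fast can be sketched as follows; the dependency names and timeout value are hypothetical, chosen only to illustrate the pattern:

```python
CRITICAL_TIMEOUT = 0.5  # seconds; illustrative hard deadline for critical calls


class DependencyDown(Exception):
    """Raised when a downstream service cannot be reached."""


def fetch_reviews():
    # Simulate a non-critical dependency that is currently unavailable.
    raise DependencyDown("reviews service unavailable")


def fetch_policy(timeout=CRITICAL_TIMEOUT):
    # Stand-in for a database read. In a real service this call would carry
    # a hard deadline so the client fails fast instead of waiting for a
    # response that never comes back.
    return {"policy_id": "p-42", "cover": "fully comp"}


def get_quote_page():
    # Critical dependency: let any failure or timeout propagate immediately.
    policy = fetch_policy()
    try:
        reviews = fetch_reviews()  # non-critical dependency
    except DependencyDown:
        reviews = []               # degrade gracefully: respond without reviews
    return {"policy": policy, "reviews": reviews, "degraded": reviews == []}


print(get_quote_page())
```

The key design choice is classifying each dependency up front: a missing reviews panel is an acceptable degraded response, while a missing policy record makes the whole response meaningless, so it should fail fast.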

Lastly Collinge used automation to ensure that all of the documentation was consistent and reflected what was actually in production. This allows for more self-service and easier implementation for clients.

Data in a microservices world

Collinge also spoke about moving from a copy-and-restore approach for supplying data to the business intelligence units to an event-driven architecture for data collection. “Each microservice emits data in the form of a JSON object. This gives us a real-time view of what is happening across our estate and is scalable.”

“It also means that each microservice has a private data store so no one else relies on the structure of the data which aligns with the Mongo no-schema metadata approach. The team can change the structure of the data in that store without having to coordinate.”
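A minimal sketch of this pattern, assuming a service with a private store that publishes JSON events for other consumers; the event bus, service, and field names are illustrative, not the company's internals:

```python
import json

# Stand-in for a real message broker (e.g. a queue or streaming platform).
EVENT_BUS = []


class QuoteService:
    """Owns a private data store; other teams consume events, not the store."""

    def __init__(self):
        # Private store: only this team depends on its structure, so the
        # schema can change without coordinating with other teams.
        self._store = {}

    def save_quote(self, quote_id, premium):
        self._store[quote_id] = {"premium": premium}
        # Publish the change as a JSON object for BI and other consumers.
        EVENT_BUS.append(json.dumps(
            {"event": "quote_saved", "quote_id": quote_id, "premium": premium}
        ))


svc = QuoteService()
svc.save_quote("q-1", 99.0)
print(EVENT_BUS[0])
```

Consumers only depend on the published event shape, so the team can restructure the private store freely, which matches the schema-flexible document model the quote describes.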
