Microservice Design Patterns: Single Database Per Service
By Michael Ryan, Director of Engineering
This is the first of a series of articles on Microservice Design Patterns. The articles center around the growth of a fictional video streaming service, Day After Sports. Day After Sports streams college and professional sports games in the days after they are played — if someone misses a game they can still watch it “the day after” on Day After Sports.
When Day After Sports began with a founder and two engineers, they needed to launch a web portal quickly to get the business up and running. The architecture had to be easy to set up, easy for a small group of engineers to deploy and maintain, and fast to build.
Faced with these factors, they chose a monolithic architecture. Monoliths get a bad name, but they are often a good architecture for a startup: easy to get up and running, and easy for a very small group of engineers to work on, deploy, and maintain. They don’t have the overhead associated with more popular cloud architectures like microservices, and they let a startup prove it has a viable business model before investing in more expensive infrastructure and processes.
But the cat will mew and every dog will have his day. The monolithic architecture that worked well for Day After Sports in their early startup days is now showing wear. As Day After Sports became popular, increased traffic made it difficult and expensive to scale by simply running multiple copies of the monolith. As the monolith grew in complexity, it became increasingly difficult to add and test new features. Deployment frequency decreased. The architecture that helped make Day After Sports a viable company was now an impediment to future growth.
The engineers met with the founder and discussed these limitations. After considering these issues, they decided to migrate from the monolith to a more flexible microservices architecture. When coupled with DevSecOps practices and cloud technologies like auto-scaling, microservices enable an organization to scale its engineering efforts by adding independent teams, while increasing the productivity and agility of every team.
You’ve just been hired by Day After Sports as an architect to guide the migration to a microservice architecture. You’re assigned to a new team that is just beginning to work with microservices. The team is composed of newer employees like yourself along with several engineers who created (and still favor) the monolith. After a week of domain modeling exercises, the team is finally ready to start building microservices. They sit at the conference table and start talking.
Problem #1: Handling Data
Almost immediately the topic of how best to handle data comes up. Sharing data across microservices has some appeal — mostly that data is stored once in a single location and all services have access to a single source of truth. On the other hand, if there are multiple data stores, they might get out of sync with each other — that sounds like chaos! Talk of “eventual consistency” between services just seems like an excuse for sloppy coding. The employees who have been with the company the longest favor a shared data model.
In response you ask what happens if the needs of one microservice evolve differently from the others? And what if changes are made to the central database? Wouldn’t that mean many microservices using that database might have to be updated all at once?
You share your painful experiences of what that kind of coupling can do to a microservice system. You mention the “Independently Deployable Rule” — that at any moment in time each microservice must be deployable by itself, without having to make corresponding deployments of other microservices. This helps ensure microservices can be developed and scaled independently, and that every microservice has just one reason to change.
Some team members think following the Independently Deployable Rule would be a lot of work for little gain. You respond by telling war stories of afternoons spent at a white board disentangling complicated dependency webs before figuring out which services could be deployed in what order.
Although the best solution is not obvious, the team agrees to follow the “Single Database Per Service Pattern” — but with a caveat.
Single Database Per Service Pattern
- Design a single database per service
- The database is private to the microservice. Data is only accessed through the microservice API.
- Variants of this pattern for relational databases include private-tables-per-service, schema-per-service, or database-server-per-service. Regardless of the details, any solution should prevent other services from accessing a service’s tables and keep each service’s data private.
- Keep scalability, high availability and disaster recovery requirements in mind.
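To make the pattern concrete, here is a minimal sketch in Python using SQLite; the service names and fields are hypothetical. Each service owns a private database, and all other code reaches the data only through the service’s public API, never through the database itself.

```python
import sqlite3


class CatalogService:
    """Owns its database; callers use the API, never the database directly."""

    def __init__(self, db_path=":memory:"):
        # Private connection -- no other service ever gets a handle to it.
        self._db = sqlite3.connect(db_path)
        self._db.execute(
            "CREATE TABLE IF NOT EXISTS games (id INTEGER PRIMARY KEY, title TEXT)"
        )

    def add_game(self, title):
        cur = self._db.execute("INSERT INTO games (title) VALUES (?)", (title,))
        self._db.commit()
        return cur.lastrowid

    def get_game(self, game_id):
        row = self._db.execute(
            "SELECT title FROM games WHERE id = ?", (game_id,)
        ).fetchone()
        return row[0] if row else None


class BillingService:
    """A separate service with its own, entirely separate database."""

    def __init__(self, db_path=":memory:"):
        self._db = sqlite3.connect(db_path)
        self._db.execute(
            "CREATE TABLE IF NOT EXISTS invoices (id INTEGER PRIMARY KEY, amount REAL)"
        )
```

Because `BillingService` holds its own connection and schema, a change to the catalog’s `games` table can never break billing, which is exactly the isolation the pattern is after.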
You’re happy the team is following your advice, and even happier they proposed a common sense exception to the Single Database Per Service pattern. The solution helps ensure the success of the microservice architecture while also taking into account the intricacies of moving from a monolithic to microservice architecture. Everyone on the team contributed, and everyone is happy with the outcome. The team is coming together.
The caveat: When it makes sense to use a Shared Database Pattern
In greenfield development, design a single database per service.
When decomposing monoliths:
- It often takes several attempts to find the right level of granularity for microservices.
- Decomposing code is much easier than decomposing data, which can be very difficult. You only want to decompose data once if possible.
- Consider using the Shared Database pattern when decomposing a monolith — but only until the code is stable. When the code is stable, decompose the database and move to a Single Database Per Service pattern.
Adopting a microservice architecture comes with technical and non-technical dangers. The following pro tips provide advice for addressing them:
Pro tip #1: Budget issues. Every company has competing budget priorities. When you deploy your services with a shared database the system will be “working” — and probably working better than the old monolithic architecture did. Business owners will be happy with the improvements and also happy to spend your remaining budget elsewhere before you can decompose your data. You run the risk of ending up with the dreaded Distributed Monolith.
Pro tip #2: Avoid Distributed Monoliths. Microservices are meant to provide a company with the highest degree of flexibility in building new features and responding to customer demand. This is one reason why it is important each service be independently deployable. On the other hand, even the smallest change to a monolith requires testing and deploying the entire codebase — a time consuming and error-prone endeavor.
A Distributed Monolith is created by tight coupling between microservices. Even though code is distributed across multiple services, the system acts like a monolith in that a change to any one service requires deploying all or nearly all the other services. Distributed Monoliths couple the complexity inherent in microservices (different code bases, different build pipelines and deployment models) with the inflexibility of a monolith. A Distributed Monolith is the worst of both worlds.
Pro Tip #3: Use Allow Lists. Microservices interact with each other through defined APIs and communication protocols. As long as a service adheres to these standards, everything else becomes an implementation detail. A team is free to use the language, database, development processes and even DevOps pipelines of their choice.
This freedom is very valuable and a large benefit of microservice architecture. Left unchecked, this freedom can become a maintenance nightmare as each team implements their own favorite solution to every problem.
Many companies solve this problem by maintaining an allowed list of approved tools in each category. For example, MySQL or MariaDB for relational databases, MongoDB or Couchbase for NoSQL solutions, and H2 or Redis for in-memory databases.
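An allow list like this can be enforced automatically rather than by decree. The sketch below, run as a CI step, checks a service’s declared dependencies against the approved tools; the categories echo the article’s examples, while the manifest format is a hypothetical one invented for illustration.

```python
# Approved tools per category, following the article's examples.
ALLOW_LIST = {
    "relational": {"MySQL", "MariaDB"},
    "nosql": {"MongoDB", "Couchbase"},
    "in_memory": {"H2", "Redis"},
}


def check_manifest(manifest):
    """Return a list of violations for a service's declared dependencies.

    `manifest` maps a category to the tool the team wants to use, e.g.
    {"relational": "MySQL"}. Running a check like this in CI keeps the
    architecture from sprawling without slowing down teams that stay
    inside the allow list.
    """
    violations = []
    for category, tool in manifest.items():
        approved = ALLOW_LIST.get(category)
        if approved is None:
            violations.append(f"unknown category: {category}")
        elif tool not in approved:
            violations.append(f"{tool} is not approved for {category}")
    return violations
```

A team that wants a tool outside the list then has a clear path: make the case to add it to the allow list, rather than adopting it quietly.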
Pro Tip #4: Domain Modeling is critical to your success. A single microservice is like a neuron in the human brain — interesting in itself, but the real magic happens when neurons and microservices act in concert to produce thought or a scalable, self-healing application.
Neurons communicate with one another in identifiable patterns that are optimized for human cognition. Like neurons, microservices also have optimal communication patterns that are critical to an application’s performance and success. Services within a single domain often communicate better with synchronous communication patterns like REST. Communication between domains is often better accomplished through asynchronous eventing schemes.
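The two communication styles can be sketched side by side. In this hedged Python sketch the service and event names are invented for illustration: within a domain, one service calls another synchronously and waits for the answer (standing in for a REST call); between domains, a service publishes an event to a bus (standing in for a broker such as Kafka or RabbitMQ) and other domains react on their own schedule.

```python
from collections import defaultdict


# --- Within a domain: synchronous, request/response (REST-like). ---
class GameMetadataService:
    def get_title(self, game_id):
        return {1: "State vs Tech"}.get(game_id)


class PlaybackService:
    def __init__(self, metadata):
        self._metadata = metadata  # in production, an HTTP/REST client

    def describe(self, game_id):
        # Blocks until the sibling service answers, like a REST call.
        return f"Now playing: {self._metadata.get_title(game_id)}"


# --- Between domains: asynchronous eventing. ---
class EventBus:
    """Stand-in for a message broker such as Kafka or RabbitMQ."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, event):
        # Fire-and-forget: the publisher does not wait on subscribers.
        for handler in self._subscribers[topic]:
            handler(event)


# The billing domain reacts to playback events without a direct call.
bus = EventBus()
watched_events = []
bus.subscribe("game.watched", watched_events.append)
bus.publish("game.watched", {"game_id": 1, "user": "fan42"})
```

The key difference: `PlaybackService` cannot proceed until `GameMetadataService` answers, while the publisher of `game.watched` neither knows nor cares which domains are listening.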
Building your microservice architecture and communication patterns to mirror bounded contexts is critical to your success. However, getting domain structures correct can be a difficult process.
Kenzan conducts the domain modeling process as workshops between technical and business teams, based on Vaughn Vernon’s work on domain-driven design.
Stay tuned for our next article: the Strangler Pattern. The Strangler Pattern helps you avoid big bang transitions and reduce risk by gradually evolving your architecture from monolith to microservices.