Making Microservices Work
In our first post in this series, we showed how microservices can help you architect applications with the future in mind.
Knowing what a microservice is, and what purpose it serves, is a big part of building a successful architecture. But to truly fit your business needs and meet your goals, understanding the variations of microservices is key.
While there is no precise way to define the architectural style of microservices, by looking at specific characteristics, we can better understand what makes an application a microservice. In this post, we’ll go into more detail about how microservices work, and how to make them work for you.
How Big is Micro?
You can build what you think is a microservice and end up with a distributed monolith instead: the individual applications become so tightly coupled that we start referring to them collectively as a single noun. In other cases, we see microservices grow so large that they themselves become monolithic, leaving us with an army of monoliths.
The size of a microservice isn’t determined by the number of lines of code or the amount of functionality, but by the amount of volatility a microservice has — the amount of change that is expected to occur over the life of a microservice. If changing one microservice also requires changing another microservice, it means those microservices have been incorrectly decoupled from one another and should instead be combined. On the other hand, if a microservice is composed of features that are fundamentally dissimilar and volatile, it means those features have been incorrectly coupled together, and they should instead be split up.
In a nutshell, features of a microservice should be grouped by similarity and the likeliness of change. If changes to a microservice keep causing backwards incompatibility or constantly require refactoring, it probably needs to be decomposed into multiple microservices.
Life on the Edge (and in the Middle)
Another part of defining a microservice is understanding what the types of microservices are and the rules they should follow. These can vary depending on your architecture. One architectural solution we’ve employed with success defines two types of microservices: data-driven services in the middle and business-driven services on the edge.
Middle Tier: Driving Data
We call data-driven services middle tier services; each one is assigned to, and responsible for, a single data source. In this architecture, the only way to access any given data source is through its corresponding middle tier service. Because each service owns exactly one data source, an outage in one data source won’t take the other services down with it. The only responsibility of a middle tier service is to make its data available to other microservices, which can also mean applying caching or fallback strategies to keep the data flowing.
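To make the idea concrete, here is a minimal sketch of what a middle tier service’s data access might look like. The names (CatalogDataService, ProductRepository, findProduct) are hypothetical, and the example assumes a simple in-memory, last-known-good cache as the fallback when the data source is unavailable; a real service would sit behind an HTTP endpoint and likely use a proper caching layer.

```java
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical middle tier service: the single owner of one data source.
// Other microservices never touch the database directly; they call this service.
public class CatalogDataService {

    // Stand-in for the service's single data source (e.g. a DAO over one database).
    private final ProductRepository repository;

    // Last-known-good cache used as a fallback if the data source is unavailable.
    private final Map<String, Product> cache = new ConcurrentHashMap<>();

    public CatalogDataService(ProductRepository repository) {
        this.repository = repository;
    }

    public Optional<Product> findProduct(String productId) {
        try {
            Product product = repository.findById(productId);
            if (product != null) {
                cache.put(productId, product);   // refresh the fallback copy
                return Optional.of(product);
            }
            return Optional.empty();
        } catch (DataSourceUnavailableException e) {
            // Keep data flowing: serve the cached copy rather than failing outright.
            return Optional.ofNullable(cache.get(productId));
        }
    }
}

// Minimal supporting types so the sketch is self-contained.
record Product(String id, String name) {}

interface ProductRepository {
    Product findById(String id) throws DataSourceUnavailableException;
}

class DataSourceUnavailableException extends Exception {}
```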
Edge Tier: Driving Business
We refer to business-driven services as edge tier services, which drive the business logic of an application. Edge tier services typically exhibit the most amount of volatility-based decomposition, as business logic for each edge tier service can vary greatly depending on the systems it supports.
While choosing when to build a middle tier service is easy (if you have a data source, you need a microservice to go with it), choosing edge tier services requires more attention to the amount of volatility in the business logic. Because edge services drive the application’s functionality, in this type of architecture there will always be more edge tier services than middle tier services. Edge tier services connect with one or more middle tier services, but typically won’t connect with another edge tier service. In this particular solution, proper decomposition of microservices based on volatility shouldn’t require edge services to depend on other edge services, or middle services to depend on more than one data source.
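As a rough illustration of how an edge service fits in, the hypothetical sketch below shows an edge service composing results from two middle tier services, never reaching into a data source directly or calling another edge service. All of the names (CheckoutEdgeService, CatalogClient, PricingClient) are invented for the example.

```java
// Hypothetical edge tier service: owns the business logic for one area of
// functionality and talks only to middle tier services, never to data sources
// or to other edge services.
public class CheckoutEdgeService {

    private final CatalogClient catalog;   // client for the catalog middle tier service
    private final PricingClient pricing;   // client for the pricing middle tier service

    public CheckoutEdgeService(CatalogClient catalog, PricingClient pricing) {
        this.catalog = catalog;
        this.pricing = pricing;
    }

    // Business logic lives here; the middle tier services just supply data.
    public OrderQuote quote(String productId, int quantity) {
        String name = catalog.productName(productId);
        double unitPrice = pricing.unitPrice(productId);
        double total = unitPrice * quantity;
        if (quantity >= 10) {
            total *= 0.95;   // example business rule: bulk discount
        }
        return new OrderQuote(name, quantity, total);
    }
}

// Minimal supporting types so the sketch is self-contained.
interface CatalogClient { String productName(String productId); }
interface PricingClient { double unitPrice(String productId); }
record OrderQuote(String productName, int quantity, double total) {}
```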
While we’ve put this type of architecture to good use, other solutions are certainly possible — more on that in a bit.
Finding Each Other in the Cloud
Deploying microservices to the cloud enables clusters of microservices to be scaled up as load increases, and scaled down after load diminishes. This means that IP addressing of machines running microservices is constantly changing. So the challenge becomes: how do we route fixed traffic to a moving target?
In the past, a common pattern was to reference hosts using domain names rather than IP addresses. When a host’s IP changes, its DNS record is updated with the new address, and tools such as Consul can be employed to propagate those DNS changes. Alternatively, redundantly deployed services can register with a load balancing appliance, and the group of services is then referred to using the address of the load balancer. In a microservices architecture, however, this can rapidly become costly due to the large number of load balancers required.
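For reference, resolving a service by name through DNS can be as simple as the snippet below. The hostname service-a.service.consul is a made-up example of the kind of name a Consul DNS interface typically serves, and would need to match your actual setup.

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class DnsLookupExample {
    public static void main(String[] args) throws UnknownHostException {
        // Ask DNS for every address currently published under the service name.
        // "service-a.service.consul" is a hypothetical name; substitute your own.
        InetAddress[] addresses = InetAddress.getAllByName("service-a.service.consul");
        for (InetAddress address : addresses) {
            System.out.println("service-a is reachable at " + address.getHostAddress());
        }
    }
}
```

Keep in mind that the JVM caches successful lookups, so DNS-based resolution on its own can lag behind a fast-changing environment.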
To better solve this issue, we must get creative and route traffic in a more dynamic way. Service discovery is a pattern that lets us identify a group of microservices by name rather than by IP address. With service discovery, a centralized registry is used to store the locations of all services in the surrounding environment.
The most common pattern we implement is to have microservices push their own information to the registry on startup and say, “Hey, my name is service-a, and here is my IP address and port”. Another service can then ask the registry for all IP addresses registered under the name “service-a”, and decide for itself which of those instances to talk to. This type of push-based discovery pattern can be implemented with Eureka (part of the NetflixOSS stack), and it requires discovery-enabled apps to use a client library to talk to the discovery service.
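As a rough sketch of the lookup side of this pattern, here is roughly what it can look like with Spring Cloud’s discovery client abstraction, which is commonly backed by Eureka. The service name “service-a” and the surrounding class are placeholders; the registration side is handled by the client library on startup based on configuration (for example, the Eureka server URL in eureka.client.serviceUrl.defaultZone).

```java
import java.util.List;
import org.springframework.cloud.client.ServiceInstance;
import org.springframework.cloud.client.discovery.DiscoveryClient;
import org.springframework.stereotype.Service;

// Hypothetical consumer-side lookup. The application itself registers with the
// discovery service on startup; this class only shows the other half of the
// conversation: asking the registry who is currently available.
@Service
public class ServiceALocator {

    private final DiscoveryClient discoveryClient;

    public ServiceALocator(DiscoveryClient discoveryClient) {
        this.discoveryClient = discoveryClient;
    }

    public String pickInstance() {
        // Ask the registry for every instance currently registered as "service-a".
        List<ServiceInstance> instances = discoveryClient.getInstances("service-a");
        if (instances.isEmpty()) {
            throw new IllegalStateException("No instances of service-a are registered");
        }
        // The calling service decides which instance to use; taking the first one
        // is a trivial strategy, and a real client would load-balance instead.
        ServiceInstance chosen = instances.get(0);
        return chosen.getHost() + ":" + chosen.getPort();
    }
}
```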
API Gateways performing reverse proxy operations can also take advantage of service discovery, routing all traffic for a specific path to a group of microservices of the same name. The gateway acts as a router for all inbound HTTP requests, consolidating Internet-facing traffic under a single domain name. It can also apply rules or filters to enforce security for edge services, for example by requiring authentication. This gives you a fine degree of control over how different types of traffic are routed.
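As one hedged example, Spring Cloud Netflix Zuul combines this kind of gateway routing with the same discovery registry. The sketch below shows the shape of such a gateway application; the route mapping (a hypothetical /catalog/** path forwarded to a service named catalog-service) is shown in comments because it normally lives in configuration rather than code.

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.zuul.EnableZuulProxy;

// A gateway like this sits at the edge of the system and reverse-proxies all
// inbound HTTP traffic to microservices looked up by name in the registry.
@SpringBootApplication
@EnableZuulProxy
public class GatewayApplication {

    // Route rules typically live in application.yml rather than code, e.g.:
    //
    //   zuul:
    //     routes:
    //       catalog:
    //         path: /catalog/**
    //         serviceId: catalog-service
    //
    // With that in place, requests to /catalog/** are forwarded to whichever
    // instances are registered under the name "catalog-service". Security rules
    // (such as requiring authentication) can be added as gateway filters.

    public static void main(String[] args) {
        SpringApplication.run(GatewayApplication.class, args);
    }
}
```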
Exploring Alternate Designs
At Kenzan, we are always thinking of ways we can improve on our architectural patterns. Lately we have been exploring alternatives to the edge/middle microservice design pattern described in this post. The details of this alternate design will hopefully make an appearance in a future blog post, but in the meantime here’s a sneak peek.
We like to refer to this design as the one-edge/many-middle pattern. It might even become a three-tiered design as it evolves, with edge services, middle services, and data services. In this scenario, data services provide the data abstraction layer, middle services drive business logic, and edge services drive domain logic. The edge services have similarities to an API gateway, but they also make domain-specific decisions on which middle services to use. The result is a better decoupling of microservices and a clearer design with fewer edge services.
As we are learning, a particular microservice architecture may work for one organization but not for another. We will continue to explore different microservice designs, and evaluate the benefits and challenges of each.
Conclusion
To make microservices work for your business, it’s important to compose them correctly and place the right functionality in the right tier according to your chosen architecture. There are many ways to architect and develop microservices, and we are constantly talking about ways to improve the design.
When starting with a microservice architecture, plan a consistent strategy and define guidelines for the entire organization to use. Share best practices and lessons learned with other teams, and continue to evolve your microservice patterns. Consider how the system will need to scale, and build in capabilities like service discovery.
Going with microservices from the start can be a great approach. But what if you have an existing application you want to migrate to microservices? In our next post, we’ll talk about how to break up a monolith into manageable chunks.