For software developers, factoring an application into component parts is nothing new. Typically, a tiered approach is used, with a back-end store, middle-tier business logic, and a front-end user interface (UI). What has changed over the last few years is that developers are building distributed applications for the cloud.
Here are some changing business needs:
- A service that’s built and operated at scale to reach customers in new geographic regions.
- Faster delivery of features and capabilities to respond to customer demands in an agile way.
- Improved resource utilization to reduce costs.
These business needs are reshaping how we build applications. This article compares the monolithic and microservices approaches in detail.
For decades, the cost, time, and complexity of provisioning new hardware, whether physical or virtual, have strongly influenced application development. These factors are more pronounced when the applications are mission-critical, since high uptime requires highly available infrastructure, including expensive hardware such as SANs and hardware load balancers. Because IT infrastructure was static, applications were written to be statically sized and designed for specific hardware, even when virtualized. Even when applications were decomposed to minimize overall hardware requirements and offer some agility and independent scaling, it was commonly into the classic three-tier model, with web, business logic, and data tiers, as shown in the figure below. However, each tier was still its own monolith, implementing diverse functions combined into a single package deployed onto hardware pre-scaled for peak loads. When load caused an application to outgrow its hardware, the answer was typically to “scale up,” or upgrade the application’s hardware to add capacity, in order to avoid datacenter reconfiguration and software re-architecture.
The monolithic application model was a natural result of limited infrastructure agility, but it led to inefficiencies. Because static infrastructure and long development cycles meant there was little advantage to decomposing applications beyond a few tiers, developers created tight coupling between unrelated application services within tiers. A change to any application service, even a small one, required its entire tier to be retested and redeployed. A simple update could have unforeseen effects on the rest of the tier, making changes risky and lengthening development cycles to allow for more rigorous testing. Dependence on statically assigned resources and highly available hardware also made applications susceptible to variations in load and hardware performance, which could push them outside their standard operating zone and severely degrade their performance. An outright hardware failure could send the entire application into a tailspin.
Finally, another challenge facing monolithic applications that took advantage of a tiered approach was delivering fast performance with data stored in the backend tier. A typical approach was to introduce intermediate caches to buffer against the inefficiencies caused by separating compute and data, but that raised costs by adding unused hardware resources, and it created additional development and update complexities.
While there are simple, limited-scale applications for which a monolithic architecture still makes sense, microservices are a different approach to application development and deployment, one well suited to the agility, scale, and reliability requirements of many modern cloud applications. A microservices application is decomposed into independent components called “microservices” that work in concert to deliver the application’s overall functionality. The term “microservice” emphasizes that applications should be composed of services small enough to reflect truly independent concerns, such that each microservice implements a single function. Moreover, each exposes well-defined API contracts – typically RESTful – through which other microservices communicate and share data with it. Microservices must also be able to version and update independently of each other. This loose coupling is what supports the rapid and reliable evolution of an application. Figure 3 shows how a monolithic application might be broken into different microservices.
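To make the idea of a single-function service with a well-defined contract concrete, here is a minimal sketch of a hypothetical “catalog” microservice. The route shape (`GET /products/<id>`), payload fields, and data are all illustrative assumptions, not any specific product’s API; the point is that the service does one thing and exposes it behind a REST-style contract.

```python
import json

# Hypothetical "catalog" microservice: implements one function
# (product lookup) behind a REST-style contract. The route table,
# payload shape, and sample data below are illustrative only.
_PRODUCTS = {"42": {"id": "42", "name": "widget", "price": 9.99}}

def handle_request(method: str, path: str) -> tuple[int, str]:
    """Dispatch a request the way a minimal REST endpoint would:
    GET /products/<id> -> 200 + JSON body, or 404 if unknown."""
    parts = path.strip("/").split("/")
    if method == "GET" and len(parts) == 2 and parts[0] == "products":
        product = _PRODUCTS.get(parts[1])
        if product is None:
            return 404, json.dumps({"error": "not found"})
        return 200, json.dumps(product)
    return 405, json.dumps({"error": "unsupported"})
```

Because the contract is the only thing other services depend on, the catalog team can change the implementation, storage, or even language behind `handle_request` without coordinating a redeployment with anyone else.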
Microservice-based applications also enable the separation of the application from the underlying infrastructure on which it runs. Unlike monolithic applications, where developers declare resource requirements to IT, microservices declare their resource requirements to a distributed software system known as a cluster manager, which “schedules,” or places, them onto machines assigned to the cluster so as to maximize the cluster’s overall resource utilization while honoring each microservice’s requirements for high availability and data replication, as shown in Figure 4. Because microservices are commonly packaged as containers and many usually fit within a single server or virtual machine, deployment is fast and microservices can be densely packed to minimize the cluster’s scale requirements.
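The placement decision a cluster manager makes can be sketched as a toy bin-packing problem: each service declares a resource requirement, and the scheduler places it on the node with the most spare capacity. This is a deliberately simplified assumption-laden model; real cluster managers (Kubernetes, Service Fabric, and others) also weigh affinity, fault domains, and replica spread.

```python
# Toy "cluster manager" placement sketch: put each microservice on
# the node with the most free CPU, honoring its declared requirement.
# Real schedulers consider far more: affinity, fault domains, replicas.

def schedule(services, nodes):
    """services: list of (name, cpu_needed); nodes: dict name -> free cpu.
    Returns {service_name: node_name}; raises if a service cannot fit."""
    placement = {}
    free = dict(nodes)
    for name, cpu in sorted(services, key=lambda s: -s[1]):  # biggest first
        node = max(free, key=free.get)       # node with most spare capacity
        if free[node] < cpu:
            raise RuntimeError(f"no capacity for {name}")
        free[node] -= cpu
        placement[name] = node
    return placement
```

Scheduling the largest services first is a common greedy heuristic that reduces the chance a big service is left with only fragmented capacity.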
With this model, microservice scale-out can be nearly instantaneous, allowing an application to adapt to changing loads. Loose coupling also means that microservices can scale independently. For example, the public endpoint’s HTTP listener, one microservice within an application’s web-facing functionality, might be the only microservice that needs to scale out to handle additional incoming traffic.
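Independent scaling means each service can derive its own instance count from its own load signal. The rule below is a minimal illustrative sketch, with made-up capacity and bounds, of how one service (say, the HTTP listener) might compute its target instance count without involving any other service.

```python
# Toy per-service scale-out rule: derive an instance count from this
# service's own load, independent of every other service. The capacity
# and bounds are illustrative assumptions, not recommendations.

def desired_instances(requests_per_sec, per_instance_capacity=100,
                      min_instances=1, max_instances=20):
    """Ceiling division of load by per-instance capacity, clamped."""
    needed = -(-requests_per_sec // per_instance_capacity)  # ceil division
    return max(min_instances, min(max_instances, needed))
```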
The independent, distributed nature of microservice-based applications also enables rolling updates, where only a subset of the instances of a single microservice will update at any given time. If a problem is detected, a buggy update can be “rolled back,” or undone, before all instances update with the faulty code or configuration. If the update system is automated, integration with Continuous Integration (CI) and Continuous Delivery (CD) pipelines allows developers to safely and frequently evolve the application without fear of impacting availability.
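The batch-then-check-then-roll-back loop can be sketched in a few lines. The batch size and health-check interface here are illustrative assumptions; production systems use richer health signals and staged traffic shifting.

```python
# Sketch of a rolling update across instances of one microservice:
# update a batch, health-check it, and roll back everything updated
# so far if any check fails. Batch size and health check are toy stand-ins.

def rolling_update(instances, new_version, healthy, batch_size=2):
    """instances: dict name -> version (mutated in place).
    healthy(name): True if the updated instance passes its check.
    Returns True if the rollout completed, False if it was rolled back."""
    old_versions = dict(instances)
    names = list(instances)
    for i in range(0, len(names), batch_size):
        batch = names[i:i + batch_size]
        for name in batch:
            instances[name] = new_version
        if not all(healthy(n) for n in batch):
            for name in names[:i + batch_size]:  # undo everything updated so far
                instances[name] = old_versions[name]
            return False
    return True
```

Because only one batch is mid-update at a time, the remaining instances keep serving traffic on the known-good version throughout the rollout.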
While the classic model for application scalability is a load-balanced, stateless tier with a shared external datastore or database for persistent state, stateful microservices can achieve higher performance, lower latency, and massive scale while maintaining developer agility for service updates. Stateful microservices manage persistent data, usually storing it locally on the servers on which they are placed, to avoid the overhead of network access and the complexity of cross-service operations. This enables the fastest possible processing and can eliminate the need for caches. Further, to manage data sizes and transfer throughputs beyond what a single server can support, scalable stateful microservices partition data among their instances and implement schema versioning so that clients see a consistent version even during updates, regardless of which microservice instance they communicate with.
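Partitioning data among instances typically means hashing each key to the partition that owns it, so every instance stores only its slice locally. The sketch below assumes a fixed partition count and plain hash-mod placement for clarity; real stateful platforms use schemes (such as hash ranges or consistent hashing) that also handle repartitioning and replication.

```python
import hashlib

# Sketch of data partitioning in a stateful microservice: each key
# hashes deterministically to one of N partitions, and each partition
# holds only its own keys locally. Fixed partition count and hash-mod
# placement are simplifying assumptions for illustration.

def partition_for(key: str, num_partitions: int) -> int:
    """Map a key deterministically to a partition index."""
    digest = hashlib.sha256(key.encode()).digest()
    return int.from_bytes(digest[:8], "big") % num_partitions

class PartitionedStore:
    """num_partitions instances, each holding only the keys it owns."""
    def __init__(self, num_partitions: int):
        self.partitions = [dict() for _ in range(num_partitions)]

    def put(self, key, value):
        self.partitions[partition_for(key, len(self.partitions))][key] = value

    def get(self, key):
        return self.partitions[partition_for(key, len(self.partitions))].get(key)
```

Because `partition_for` is deterministic, any client (or routing layer) can compute which instance owns a key without a central lookup.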
There are benefits to a monolithic design approach. Monolithic applications are often simpler to design, and calls between components are faster because they happen in process, or at worst over local interprocess communication (IPC), rather than across a network. Also, everyone tests a single product, which tends to be a more efficient use of human resources. The downside is that there’s tight coupling between tiered layers, and you can’t scale individual components. If you need to do fixes or upgrades, you have to wait for others to finish their testing. It’s harder to be agile.
Microservices address these downsides and more closely align with the business needs described earlier, but they bring their own tradeoffs. The benefit is that each microservice typically encapsulates simpler business functionality, which you can scale out or in, test, deploy, and manage independently. Another important benefit of a microservices approach is that teams are driven more by business scenarios than by technology: smaller teams develop a microservice around a customer scenario and use whatever technologies they choose.
In other words, the organization doesn’t need to standardize on one technology stack to maintain microservice applications. Individual teams that own services can do what makes sense for them, based on team expertise or what’s most appropriate to solve the problem. In practice, though, a set of recommended technologies, such as a particular NoSQL store or web application framework, is advisable.
The downside of microservices is that you have to manage more separate entities and deal with more complex deployments and versioning. Network traffic between the microservices increases, as do the corresponding network latencies. Lots of chatty, granular services can cause a performance nightmare. Without tools to help you view these dependencies, it’s hard to see the whole system.
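The cost of chattiness can be estimated with a back-of-the-envelope latency model: a workflow that issues N sequential cross-service requests pays N network round trips, while a single batched call pays one. The numbers below are purely illustrative.

```python
# Toy latency model contrasting "chatty" sequential microservice calls
# with a single batched call. All figures are illustrative assumptions.

def sequential_latency_ms(num_calls, round_trip_ms, service_time_ms):
    """Each call pays a full network round trip plus service time."""
    return num_calls * (round_trip_ms + service_time_ms)

def batched_latency_ms(num_calls, round_trip_ms, service_time_ms):
    """One round trip; the server still does the per-item work."""
    return round_trip_ms + num_calls * service_time_ms
```

With, say, 20 items at a 2 ms round trip and 0.5 ms of service time each, the sequential path costs 50 ms against 12 ms batched, which is why coarse-grained contracts and request batching are common remedies for chatty designs.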