Revamping and modernising applications brings its own obstacles: the more you update them, the more complexity you can introduce. Getting applications to run on a container platform, and getting them to discover and talk to each other, is essential to a flexible microservices architecture. But that flexibility can itself breed complications. This is where the need for a service mesh arises.
By providing a centralised control plane and enabling agile, cloud-based application development, service meshes prove to be a remarkable solution. A service mesh provides authentication, authorisation, security and performance services, and offers a central point for applying them.
The New Stack traces the history of the service mesh back to 2010, when the three-tiered model of application architecture was on the rise, powering an abundance of applications on the web. But that model began to break down under heavy load. Big organisations like Google, Facebook, Netflix and Twitter started breaking the monolith into independently running systems; thus, the rise of microservices. Subsequently, the cloud-native world took shape, and eventually the service mesh emerged to bring sanity to runtime operations.
Uncloaking service mesh
A service mesh is an approach to operating a microservices architecture safely, quickly and reliably. NGINX states that “a service mesh is a configurable, low‑latency infrastructure layer designed to handle a high volume of network‑based interprocess communication among application infrastructure services using application programming interfaces (APIs).” It makes it easier to adopt microservices at scale, providing discovery, security, tracing, monitoring and failure handling. It offers these cross-cutting features without relying on a shared asset such as an API gateway, and without baking libraries into every service.
Each part of an app, called a “service,” relies on other services to give users what they want. If a user of an online retail app wants to buy something, they need to know if the item is in stock. So, the service that communicates with the company's inventory database needs to communicate with the product webpage, which itself needs to communicate with the user’s online shopping cart. To add business value, this retailer might eventually build a service that gives users in-app product recommendations. This new service will communicate with a database of product tags to make recommendations, but it also needs to communicate with the same inventory database that the product page needed—it’s a lot of reusable, moving parts.
Most modern applications, with fewer and fewer exceptions, are hosted in a data centre or on a cloud platform, and communicate with users via the Internet. For decades, some portion of the server-side logic -- often large chunks of it -- has been provided by reusable code, packaged into components called libraries. The C programming language pioneered the linking of common libraries; more recently, operating systems such as Microsoft Windows provided dynamic link libraries (DLLs), which are patched into applications at run time.
So obviously you've seen services at work, and they're nothing new in themselves. Yet there is something relatively new called microservices, which, as we've explained here in some depth, are code components designed not only to be patched into multiple applications on demand, but also to scale out. This is how an application supports multiple users simultaneously without replicating itself in its entirety -- or, even less efficiently, replicating the virtual server in which it may be installed, which is how load balancing worked during the first era of virtualization.
A service mesh is an effort to keep microservices in touch with one another, as well as the broader application, as all this scaling up and down is going on. It is the most liberal, spare-no-effort, pull-out-all-the-stops approach to enabling a microservices architecture for a server-side application, with the aim of guaranteeing connectivity, availability, and low latency.
Think of a service mesh as software-defined networking (SDN) at the level of executable code. In an environment where all microservices are addressable by way of a network, a service mesh redefines the rules of the network. It takes the application's control plane -- its network of contact points, like its nerve center -- and reroutes its connections through a kind of dynamic traffic management complex. This hub is made up of several components that monitor the nature of traffic in the network, and adapt the connections in the control plane to best suit it.
Service-to-service communication is what makes microservices possible. The logic governing communication can be coded into each service without a service mesh layer -- but as communication gets more complex, a service mesh becomes more valuable. For cloud-native apps built in a microservices architecture, a service mesh is a way to compose a large number of discrete services into a functional application.
A service mesh doesn’t introduce new functionality to an app’s runtime environment—apps in any architecture have always needed rules to specify how requests get from point A to point B. What’s different about a service mesh is that it takes the logic governing service-to-service communication out of individual services and abstracts it to a layer of infrastructure.
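The abstraction described above can be sketched in a few lines. This is a minimal illustration, not any real mesh's API: the service function holds only business logic, while a hypothetical `MeshProxy` class stands in for the per-service proxy that owns the communication policy (here, a simple retry rule).

```python
def inventory_service(item_id):
    """Business logic only: no retry, routing or TLS code lives here."""
    return {"item": item_id, "in_stock": True}

class MeshProxy:
    """Hypothetical stand-in for a mesh's per-service proxy: it owns the
    communication policy (here, just retries) so the service doesn't."""
    def __init__(self, target, retries=2):
        self.target = target
        self.retries = retries

    def request(self, *args):
        last_error = None
        for _ in range(self.retries + 1):
            try:
                return self.target(*args)
            except ConnectionError as e:
                last_error = e
        raise last_error

proxy = MeshProxy(inventory_service)
print(proxy.request("sku-42"))  # {'item': 'sku-42', 'in_stock': True}
```

Swapping the retry policy, or adding tracing, changes only the proxy; the service code is untouched, which is the point of pushing this logic into an infrastructure layer.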
Reliably delivering requests in a cloud native application can be incredibly complex. A service mesh like Linkerd manages this complexity with a wide array of powerful techniques: circuit-breaking, latency-aware load balancing, eventually consistent (“advisory”) service discovery, retries, and deadlines.
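Two of those techniques, retries bounded by a deadline and circuit-breaking, can be sketched together. This is a toy model under assumed semantics (a breaker that opens after a fixed number of consecutive failures), not Linkerd's actual implementation:

```python
import time

class CircuitBreaker:
    """Toy circuit breaker: opens after max_failures consecutive errors,
    so a persistently failing service stops receiving traffic."""
    def __init__(self, max_failures=3):
        self.max_failures = max_failures
        self.failures = 0

    @property
    def open(self):
        return self.failures >= self.max_failures

    def call(self, fn, retries=2, deadline_s=1.0):
        """Invoke fn with retries, an overall deadline, and the breaker check."""
        start = time.monotonic()
        for _ in range(retries + 1):
            if self.open:
                raise RuntimeError("circuit open: skipping downstream call")
            if time.monotonic() - start > deadline_s:
                raise TimeoutError("deadline exceeded")
            try:
                result = fn()
                self.failures = 0  # success resets the breaker
                return result
            except ConnectionError:
                self.failures += 1
        raise RuntimeError("retries exhausted")
```

In a real mesh this logic runs in the proxy for every request, so every service gets it without writing a line of resilience code.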
A service mesh is like a city’s network of water pipelines. Your team controls the pipes, connects them as desired, and sets all of their flow controls. Data can pass through your systems, no matter the type or purpose, regardless of the ever-changing needs of the applications supported by the service mesh. The service mesh is platform-independent, thanks to the fact that private and public cloud providers have settled on the de facto standard of Docker containers and Kubernetes orchestration. With these tools, building a service mesh in AWS does not preclude moving the system to Microsoft Azure, or forming a mesh within a vSphere private cloud. Service meshes offer intelligent traffic routing that automatically recovers from network or service failures. This allows for full-stack problem tracing, and even for tracing interservice disruptions.
When your team rolls out a new version of an application, or moves a cluster for application hosting to a new datacenter, security teams generally need to reissue certificates and authorize new servers in the system. This can take time and effort, serving as a roadblock to pushing changes to production. With a service mesh, the security around service-to-service communication is handled by the mesh, abstracting those concerns away from the application itself. The service mesh handles all of the restrictions on which services can talk to each other, which systems have access to which services, and which users can get through to which services. Thus, upgrading an application inside the mesh doesn’t require reallocation of security assets.

A service mesh comes with one (large) potential drawback. It adds additional containers. In fact, it doubles them. Most service mesh implementations use a sidecar proxy, coupling one proxy instance with each container-bound microservice. The benefits far outweigh the operating costs, but it means service meshes are often overkill for small environments.
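The "which services can talk to each other" restriction boils down to a policy check applied by the sidecar on every request. A minimal sketch, with hypothetical service names taken from the earlier retail example:

```python
# Hypothetical allow-list: (source, destination) pairs the mesh permits.
# In a real mesh this would be declared as policy, not hard-coded.
ALLOWED = {
    ("web-frontend", "inventory"),
    ("web-frontend", "cart"),
    ("recommendations", "inventory"),
}

def authorize(source, destination):
    """Sidecar-style check run before forwarding a service-to-service request."""
    return (source, destination) in ALLOWED

assert authorize("web-frontend", "inventory")       # permitted path
assert not authorize("cart", "recommendations")      # everything else is denied
```

Because the check lives in the mesh rather than in application code, tightening or relaxing the policy never requires redeploying the services themselves.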
What lies ahead
The adoption of service mesh in the cloud-native ecosystem is growing quickly. The requirements of serverless computing fit neatly into the service mesh's model of naming and linking. Service mesh is also expected to play a vital role in service identity and access policy in cloud-native environments.
Service mesh is an important component of the cloud native stack. Major reports on microservices by Gartner, IDC, and 451 point towards service mesh becoming mandatory by 2020 for organisations running microservices in production. With a service mesh, digital firms can focus on building business value instead of connecting services.