The rise of containers has led to the microservices paradigm, in which software is developed and deployed as a set of fine-grained services. Multiple services work in tandem to deliver the application's expected functionality.
Microservices are managed by container orchestration platforms such as Kubernetes and Mesosphere DC/OS. Developers encapsulate their code in container images, which DevOps teams then deploy. A container orchestration platform manages the lifecycle of an application and each of its associated microservices. It does most of the heavy lifting by abstracting the compute, network, and storage layers, enabling developers to focus on business logic while operators manage the availability of the application.
There are many challenges involved in developing, deploying, and managing microservices-based applications. Even though the container management platform handles most of the orchestration process, developers and operators still have to implement multiple techniques to maintain visibility into the current state of a deployment.
Lack of visibility into the execution environment is a critical issue faced by developers and operators. A microservices-based application is composed of dozens of fine-grained services, and at any given point in time the application may be running multiple instances of each service simultaneously. This execution model makes it hard for developers to understand the chain of events that takes place at runtime. When an application slows down because of a rogue service, it is not easy to identify the culprit and isolate the problem quickly.
One option to overcome this problem is to build a custom software layer that is embedded in each microservice. This layer intercepts inbound and outbound traffic, and the telemetry data it collects is aggregated by a central service that offers insight into the environment.
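To make the idea concrete, here is a minimal sketch of such an interception layer in Python. The decorator, the `TELEMETRY` list, and the `billing` service are all illustrative stand-ins; in a real deployment the wrapper would sit on the network path and report to a remote collector rather than an in-process list.

```python
import time
from functools import wraps

# Stand-in for the central aggregation service described above;
# a real system would ship these records to a remote collector.
TELEMETRY = []

def intercept(service_name):
    """Wrap a service handler so every call is timed and reported."""
    def decorator(handler):
        @wraps(handler)
        def wrapper(*args, **kwargs):
            start = time.monotonic()
            try:
                return handler(*args, **kwargs)
            finally:
                # Record telemetry whether the call succeeds or fails.
                TELEMETRY.append({
                    "service": service_name,
                    "latency_ms": (time.monotonic() - start) * 1000,
                })
        return wrapper
    return decorator

@intercept("billing")
def charge(amount):
    # The business logic is unaware of the interception layer.
    return {"charged": amount}

charge(42)
print(TELEMETRY[0]["service"])  # prints "billing"
```

The drawback, as the following paragraphs explain, is that this layer has to be rebuilt for every language and framework in use.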
But it is not viable for an organization to build custom software to handle the communication flow. Microservice developers choose the language best suited to each service's functionality, so an application may be assembled from services written in a variety of languages. It's not uncommon to find an application that uses Python, PHP, Ruby, Go, and C#.
This heterogeneous approach to software development makes it extremely hard to build a standard interception layer for each service. It's cumbersome and expensive to invest in agents and SDKs for every language.
Another challenge is the lack of standardization across frameworks and toolkits. Each team working on a service may adopt a different framework or toolkit for service-to-service communication, which makes it difficult to monitor the state of the application's execution environment.
Apart from mixing and matching languages, frameworks, and toolkits, microservices developers may use a diverse set of protocols for communication. They may rely on HTTP, gRPC, or message queues to send and receive messages from other services.
Load balancing is an integral part of distributed software deployment. Since existing load balancers are not optimized for microservices, there is a need for a lightweight, container-native load balancer to manage internal and external traffic.
Finally, the lack of integrated security and network policies hinders the adoption of microservices. DevOps teams should be able to apply rules that govern communication and traffic flow between microservices. When a newly deployed version of a service fails, they should be able to route traffic back to an older version until the new one is fixed.
The challenges involved in developing and deploying microservices led to the evolution of a new infrastructure layer called the service mesh, designed to address precisely the issues described above.
The service mesh acts as an abstraction layer that provides the required plumbing for service-to-service communication and manages the network traffic. The mesh is transparent to the microservices themselves: it attaches to every service without requiring any change to the code.
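The transparent-attachment idea can be sketched as a proxy that wraps an unmodified service handler. This is a simplified in-process model, not how a real mesh is implemented (a real mesh runs the proxy as a separate sidecar process and redirects network traffic through it); the `SidecarProxy` class and `inventory_service` names are invented for illustration.

```python
class SidecarProxy:
    """Sits in front of a service; the service code is unchanged."""

    def __init__(self, service, retries=2):
        self.service = service    # the unmodified microservice handler
        self.retries = retries
        self.requests_seen = 0    # visibility: traffic is observable here

    def __call__(self, request):
        self.requests_seen += 1
        last_error = None
        # Resilience: retry transient failures on the service's behalf.
        for _ in range(self.retries + 1):
            try:
                return self.service(request)
            except ConnectionError as exc:
                last_error = exc
        raise last_error

# The service itself knows nothing about the proxy in front of it.
def inventory_service(request):
    return {"stock": 7, "item": request["item"]}

proxied = SidecarProxy(inventory_service)
print(proxied({"item": "widget"}))
```

Because every request flows through the proxy, features such as metrics, retries, and routing live in the mesh rather than in each service's code.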
A service mesh is an efficient, lightweight networking layer highly optimized for modern workloads. It is designed to be a language-, framework-, and toolkit-agnostic layer, enabling developers to focus on what they do best: writing code. The service mesh relieves the organization of developing and maintaining separate SDKs and agents for each language and platform. Developers get a consistent piece of software that can be attached to a service written in any language.
A service mesh works with any protocol and any deployment target. Developers can integrate it with HTTP, HTTP/2, gRPC, and other protocols, and the mesh behaves consistently across development, staging, and production environments running on IaaS, PaaS, or CaaS deployment targets.
So, to put it simply, a service mesh is a standard piece of software that attaches to each service in a transparent, non-intrusive manner to provide deep visibility and insight into the microservices execution environment.
Istio is one of the most popular service mesh implementations. It is based on Envoy, an open source proxy originally developed by Lyft. Envoy runs as a sidecar agent alongside every microservice, while Istio manages the fleet of Envoy proxies by defining their security, routing, and communication policies.
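As a concrete illustration of such a policy, Istio expresses routing rules as Kubernetes custom resources. The fragment below is a hypothetical VirtualService that shifts 10% of traffic to a new version of a service named `reviews` while keeping 90% on the stable version; the service name and the `v1`/`v2` subsets are illustrative, and the subsets would be defined in a companion DestinationRule.

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews
  http:
    - route:
        - destination:
            host: reviews
            subset: v1   # stable version keeps most of the traffic
          weight: 90
        - destination:
            host: reviews
            subset: v2   # canary version receives a small share
          weight: 10
```

Adjusting the weights (or removing the `v2` route entirely) is how operators roll traffic back to an older version when a new deployment misbehaves.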
Linkerd is another well-known open source service mesh developed by Buoyant. The Cloud Native Computing Foundation currently manages both Envoy and Linkerd.
The service mesh is fast becoming an essential component of microservices infrastructure. If your organization is planning to embrace the microservices paradigm, consider adopting a service mesh.