Implementing a Service Mesh in Kubernetes


A microservices architecture makes it possible to deploy an application as a collection of loosely coupled services. This means that an error on one service is unlikely to crash the entire application. However, using microservices also means that services need to be able to communicate with each other.

In Kubernetes, traffic flows north-south from external users to the cluster via an ingress. Traffic within the cluster, for example from service to service, flows east-west. A service mesh provides an effective and secure way to handle this east-west traffic, which is how the services of a microservice application communicate with each other in a Kubernetes deployment.

In this post, you'll learn how to implement a service mesh in Kubernetes. Before we dive deep into the topic, let's take a look at what a service mesh is.

What Is a Service Mesh?

Simply put, a service mesh is a layer of infrastructure that handles service-to-service communication. Without a service mesh, developers usually have to code the functionality that handles service-to-service communication directly into the business logic of their microservices. A service mesh therefore eliminates the need to keep extra code that's not part of your core application inside your services.

What Does a Service Mesh Do?

A service mesh uses a proxy (or sidecar) that sits alongside each service. This sidecar is responsible for routing requests from its service to the sidecar of another service. Because all traffic passes through these proxies, a service mesh also makes it easier to collect metrics and detect issues.

The following list shows some of the things you can use a service mesh for in your application:

1. Collecting Metrics: A service mesh can collect metrics and logs about the traffic that flows between services and containers.

2. Reducing Downtime: You can use a service mesh to reduce downtime in different ways. For example, a service mesh can route requests away from failing services to increase uptime and stability.

3. Added Security: A service mesh can offer security features like encryption, authentication, and authorization outside of a microservice application's logic, as the sketch below illustrates.
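
As an illustration, here's a minimal sketch of a mesh-level security policy, assuming Istio (covered below) as the mesh. Saving the manifest below to a file (for example, mtls-policy.yaml, a name chosen just for this sketch) and applying it with kubectl apply -f mtls-policy.yaml asks Istio's PeerAuthentication API to require mutual TLS between all workloads in the default namespace, without touching application code:

    apiVersion: security.istio.io/v1beta1
    kind: PeerAuthentication
    metadata:
      name: default
      namespace: default
    spec:
      mtls:
        mode: STRICT   # reject any plain-text (non-mTLS) traffic to these workloads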

Service Mesh Options

Istio

Istio provides an open-source service mesh layer that improves security, service-to-service connectivity, monitoring, and load balancing.

Pros

  • Istio provides several add-ons for metrics and data visualization.
  • It has a good community of users, and finding resources for getting started is easy.
  • It's free and open source.
Cons

  • The initial setup can be difficult for engineers, depending on their level of experience.
  • It adds extra load and complexity to the Kubernetes deployment.

Linkerd

Linkerd is a lightweight service mesh implementation that offers security and observability features. It also aims to reduce the complexity of both service mesh implementation and service-to-service communication.

Pros

  • Linkerd is lightweight.
  • It provides performance metrics out of the box without extra configuration.

Cons

  • Linkerd adds more complexity to the deployment because of the extra layer of infrastructure.

Consul

Consul on Kubernetes offers a service mesh that enables secure connections between the services in a microservice application. In addition, Consul supports multiple Kubernetes platforms, including GKE, EKS, and AKS.

Pros

  • Consul includes multi-platform and Kubernetes distribution support.
  • It makes it easier to set up security policies at the service mesh level instead of within the application code.

Cons

  • Consul requires extra work and adds complexity to the deployment process.
  • It uses more resources on the Kubernetes cluster.

How to Implement a Service Mesh in Kubernetes

    Now that you know what a service mesh is and what it does in a microservice application deployment, let's walk through how to implement one in Kubernetes. For this example, we'll be using a local Kubernetes instance powered by minikube.

    Step 1: Deploy a Microservice

If you don't have a microservice application in your Kubernetes cluster already, your first step will be to deploy one. For this example, we'll deploy the microservices demo application from the Google Cloud Platform GitHub repo.

To deploy the microservice application, download or copy the content of kubernetes-manifests.yaml from the demo repo to your computer. Next, apply the YAML configuration by running the following command from the directory where you saved the file:

    kubectl apply -f kubernetes-manifests.yaml
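
If you'd like to confirm the deployment before moving on, you can list the pods and wait for each of them to reach the Running state:

    kubectl get pods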
    

    Step 2: Download a Service Mesh Implementation Option

For this tutorial, we'll use Istio to implement the service mesh. To get Istio, head to the official website and download the latest release for your operating system.

Once you have the file, unzip it to your current directory. In the next step, you'll learn how to set up Istio using the contents of the extracted folder.
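
As an alternative to downloading the archive manually, the Istio project also provides a download script that fetches the latest release and extracts it into the current directory in one step (shown here as an option for Linux and macOS):

    curl -L https://istio.io/downloadIstio | sh -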

    Step 3: Configure Istioctl

    Next, configure your terminal so that it can recognize the istioctl command. To do this, first print the current content of your $PATH variable by running this command:

    echo $PATH
    

    Then run the following command to add the Istio bin from the folder you extracted in the previous step to $PATH:

    export PATH=$PATH:[/home/ea/Development/DevOps/kube_service_mesh/istio-1.17.0/bin]
    

Replace [/home/ea/Development/DevOps/kube_service_mesh/istio-1.17.0/bin] with the actual path to the Istio bin folder on your computer.

    To verify that your configuration works, simply run the following command:

    istioctl
    

    If everything is OK, you should see a list of commands that istioctl supports.
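
You can also run the version subcommand, which prints the istioctl client version and, once Istio is installed in the cluster, the control plane version as well:

    istioctl version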

    Step 4: Install Istio

Next, install Istio on your Kubernetes cluster. This step is simple thanks to the istioctl tool.

    Simply run the following command to start the installation process:

    istioctl install
    

    The install command does a few things in the background. For example, it creates a new istio-system namespace for Istio. Then it creates two pods under that namespace for istio-ingressgateway and istiod.
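
By default, istioctl install uses Istio's default configuration profile, which is what the rest of this tutorial assumes. If you'd rather skip the confirmation prompt or experiment with a different profile (for example, the demo profile, which enables additional features for testing), you can pass flags like these:

    istioctl install --set profile=demo -y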

    You can run the following command to verify that the namespace and pods were created successfully:

    kubectl get pods -n istio-system
    

    The output should look like this:

    NAME                                    READY   STATUS    RESTARTS   AGE
    istio-ingressgateway-866497d58d-94s66   1/1     Running   0          4m3s
    istiod-58bd97fff4-gkx67                 1/1     Running   0          4m53s
    

    Step 5: Configure Istio to Inject Proxy

    At this point, you have Istio set up, and your microservice application is deployed, but the two components are not working together. In other words, there are no Istio containers in the deployment pods.

    When you run kubectl get pods, you should see a list of pods with each pod having one active container. Once Istio is active, you should see two containers per pod.

    Now you need a way to inject Istio into the microservices. You can do this by applying a label to the namespace that hosts your microservice application.

    Run the following command to enable Istio injection on the default namespace:

    kubectl label namespace default istio-injection=enabled
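
To confirm that the label was applied, you can ask kubectl to display the istio-injection label as a column:

    kubectl get namespace default -L istio-injection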
    

    Step 6: Recreate Pods

    In this step, you'll redeploy your microservice application so that the Istio containers are created during the pod creation phase.

    First, delete all pods related to the previous deployment by running the following command:

    kubectl delete -f kubernetes-manifests.yaml
    

    After running the above command, wait for all pods to terminate. You can run kubectl get pods to confirm that the pods are deleted.

    Next, apply the configuration again to recreate the pods by running the following command:

    kubectl apply -f kubernetes-manifests.yaml
    

Again, wait for the deployment to complete, then run kubectl get pods. This time, the READY column should show 2/2 for each pod, confirming that the Istio sidecar is running alongside each service container.
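
If you want to see the container names explicitly, you can print them with a jsonpath query. Assuming the demo's frontend pods carry an app=frontend label (which is the case in the manifest at the time of writing, but worth verifying), a command like this should list both the application container and the injected istio-proxy sidecar:

    kubectl get pods -l app=frontend -o jsonpath='{.items[0].spec.containers[*].name}'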

    You've successfully implemented a service mesh for the demo microservice application.

The Istio release you downloaded earlier also ships with optional metrics and monitoring add-ons. You can install them to access metrics and logs via a web-based dashboard.
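
For example (a sketch based on the layout of recent Istio releases; exact paths may vary by version), the samples/addons directory in the extracted Istio folder contains manifests for Prometheus, Grafana, and Kiali, and istioctl can open the Kiali dashboard for you:

    kubectl apply -f samples/addons
    istioctl dashboard kiali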

    Summing Things Up

    In this post, we walked through what a service mesh is. We defined a service mesh as an infrastructure layer that enables service-to-service communication in microservices.

Next, we looked at some of the things you can do with a service mesh, such as improving security, stability, and observability. We also covered several options for implementing a service mesh, including Istio, Consul, and Linkerd.

Finally, we walked through the steps for deploying a microservice application and implementing a service mesh using Istio.

    Check out our blog if you want to learn more about Kubernetes.

    This post was written by Pius Aboyi. Pius is a mobile and web developer with over 4 years of experience building for the Android platform. He writes code in Java, Kotlin, and PHP. He loves writing about tech and creating how-to tutorials for developers.
