How to Set up Metrics Server: An Easy Tutorial for K8s Users


As an advanced Kubernetes user, you need to monitor the performance of your pods and nodes to determine resource usage. Tracking performance allows you to identify potential bottlenecks and troubleshoot issues like resource contention, which helps you avoid outages, slowdowns, and other problems.

By tracking, you gain insights into optimizing your cluster's capacity and ensuring efficient autoscaling. However, you'll need to use a metrics server to gain this information.

This post will show you how to set up the Kubernetes Metrics Server. With it in place, you'll be able to collect cluster-level information such as CPU utilization, memory usage, and more across all nodes in your Kubernetes cluster, helping you ensure everything is running smoothly without impacting your applications or workloads. You'll also learn how to troubleshoot your metrics server.

What Is the Metrics Server?

The metrics server is a Kubernetes tool that gathers data on resource usage for the pods and nodes in your Kubernetes cluster. It implements the Kubernetes Metrics API, so you can use it to retrieve metrics such as CPU and memory usage.

You can then use these metrics to decide how to scale your cluster and allocate resources. The metrics server is deployed as a pod and collects its data from the kubelet on each node, which in turn gathers container statistics from cAdvisor; the results are exposed through the Kubernetes API.
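
Because the metrics server registers the Metrics API with the API server, you can also query it directly once it's installed (installation is covered below). For example, this returns the raw node metrics as JSON:

kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes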

But there are instances when you shouldn't use the metrics server: on clusters that aren't Kubernetes, or when you want to horizontally autoscale on resources other than CPU and memory, which requires a custom or external metrics pipeline instead.

Why Is the Metrics Server Not Installed by Default?

Despite the metrics server's usefulness, it's not installed by default in Kubernetes. Here are some of the reasons why:

  • The metrics server is a relatively new component that was introduced after the initial release of Kubernetes.
  • Some users may prefer to use other third-party monitoring solutions, like Prometheus, to track resource usage in their clusters.
  • Each metrics server release is designed to work with a specific range of Kubernetes versions, so it may not work as expected on an outdated cluster. For example, metrics server 0.6.x only supports Kubernetes 1.19 and later.
  • Some managed Kubernetes services already ship their own metrics server or monitoring integration with the cluster; Google Kubernetes Engine (GKE), for instance, deploys the metrics server by default.

How Do You Set Up the Metrics Server?

    Before we begin, there are a few prerequisites that you'll need to have in place.

    Prerequisites

  • A K8s cluster up and running. If you don't have a cluster already set up, you can use tools like minikube or kind to spin up a local cluster for testing purposes (see the example after this list).
  • The kubectl command line tool installed on your local machine and configured to communicate with your K8s cluster.
  • The aggregation layer enabled on the Kubernetes API server so that the metrics server can register the Metrics API.
  • The kubelet on each node needs to use a certificate signed by your cluster's certificate authority. If it doesn't, you can add the --kubelet-insecure-tls flag to the metrics server to skip certificate verification (acceptable for local testing, not recommended in production).
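
If you need a throwaway cluster for this tutorial, minikube is a quick option, and it even ships the metrics server as a built-in addon:

# Start a local test cluster (assumes minikube is installed)
minikube start

# Optionally, enable the bundled metrics-server addon instead of installing it manually
minikube addons enable metrics-server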

Installation

Installing the metrics server on your K8s cluster is a relatively straightforward process. You can use the Helm chart, but in our case, we'll apply the official manifests directly with kubectl. Each release of the kubernetes-sigs/metrics-server project on GitHub publishes a components.yaml file that bundles everything the metrics server needs; apply it to your cluster:

kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml


    This will create several resources within your cluster, including a deployment, a service, and a cluster role binding. You can verify that the installation was successful by checking the status of the deployment:

    kubectl get deployments metrics-server -n kube-system
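
If the installation succeeded, the deployment will report ready replicas, with output roughly like this (the counts and age will vary):

NAME             READY   UP-TO-DATE   AVAILABLE   AGE
metrics-server   1/1     1            1           2m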
    

By default, the metrics server collects resource usage data for all pods and nodes in your cluster, and it doesn't support narrowing collection to particular namespaces or pod labels (the --source flag seen in older tutorials belonged to the retired Heapster project, not the metrics server). What you can customize is how the metrics server scrapes the kubelets, by adjusting the flags passed to its container in the deployment manifest.

For example, to add the --kubelet-insecure-tls flag mentioned in the prerequisites, you can extend the container arguments like this (the other flags shown come from the default manifest):

containers:
  - name: metrics-server
    args:
      - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
      - --kubelet-use-node-status-port
      - --metric-resolution=15s
      - --kubelet-insecure-tls
    

    Note that you'll need to apply the modified deployment manifest to your cluster for the changes to take effect.
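
Alternatively, you can edit the live deployment in place; saving your changes triggers a rolling restart of the metrics server:

kubectl edit deployment metrics-server -n kube-system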

    Once the metrics server is up and running, you can query it for resource usage data using the kubectl top command. For example, to see the resource usage for all pods in the default namespace, you can run the following command:

    kubectl top pods --namespace default
    

This will display a table with each pod's name and its current CPU and memory usage.
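
The same command works at the node level. The output below is illustrative; your node names and numbers will differ:

kubectl top nodes

NAME       CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
minikube   283m         14%    1122Mi          29%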

    How Do You Enable Horizontal Pod Autoscaling (HPA)?

You can use the metrics server to enable horizontal pod autoscaling (HPA), which automatically increases or decreases the number of pods in a deployment based on resource utilization. To enable HPA, you'll need to create an HPA resource that specifies the resource type and target utilization to use for scaling.

    Run the following command to create an HPA resource for a deployment:

    kubectl autoscale deployment <deployment-name> --cpu-percent=50 --min=1 --max=10
    

Replace <deployment-name> with the name of the deployment you want to scale. The --cpu-percent=50 flag tells the HPA to target an average CPU utilization of 50 percent across the pods, while the --min=1 and --max=10 flags set the minimum and maximum number of replicas the HPA should maintain.
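
If you prefer a declarative setup, the equivalent manifest looks roughly like this, using the autoscaling/v2 API (the deployment name myapp is a placeholder):

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50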

    When you want to check the list of the autoscalers, you can run the following command:

    kubectl get hpa
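
The TARGETS column compares current utilization against the target you set. Output along these lines is typical (the values here are illustrative):

NAME    REFERENCE          TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
myapp   Deployment/myapp   23%/50%   1         10        2          5m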
    

    You can use the command below to describe the autoscaler in detail:

    kubectl describe hpa
    

Deleting the autoscaler is simple, and you can accomplish this with the following command (kubectl autoscale names the HPA after the deployment it scales):

kubectl delete hpa <deployment-name>
    

    Troubleshooting the Metrics Server

You may run into trouble with the metrics server, for example while setting up a new application or service. Here are some steps you can take to troubleshoot the metrics server in Kubernetes:

  • Check the logs of the metrics server pod to see if there are any error messages. This can help you identify issues such as misconfigurations or connectivity problems.
  • Verify that the metrics server is running and that all its components are healthy. You can use kubectl to check the status of the metrics server pod (see the commands after this list).
  • Ensure that the metrics server has the permissions it needs to reach the Kubernetes API server. The default manifests bind it to a dedicated system:metrics-server cluster role with narrowly scoped permissions; it doesn't need, and shouldn't be given, the cluster-admin role.
  • Confirm that you deployed the metrics server to the correct namespace (the default manifests install it into kube-system). If not, move the metrics server to the correct namespace.
  • Verify that the system resources are not exhausted. The metrics server might not work correctly if the underlying system lacks resources.
  • Check for compatibility issues. If the version of the metrics server is not compatible with the K8s version you're running, it may not function correctly.
  • If all else fails, you can try uninstalling and reinstalling the metrics server. This can help fix an incorrect update or installation of the metrics server.
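
Here are the commands for those first checks, assuming the metrics server lives in kube-system with the default k8s-app=metrics-server label:

# Check that the metrics server pod is running and healthy
kubectl get pods -n kube-system -l k8s-app=metrics-server

# Inspect its logs for error messages
kubectl logs -n kube-system -l k8s-app=metrics-server

# Confirm the Metrics API is registered and marked Available
kubectl get apiservice v1beta1.metrics.k8s.io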

Conclusion

Setting up the metrics server allows you to track the CPU and memory usage of the pods and nodes in your cluster, which in turn enables features like horizontal pod autoscaling (HPA). Depending on your app's demand, the HPA adjusts the number of pods to keep it running efficiently. Remember, you'll need to install and configure the metrics server yourself, because it isn't installed by default with Kubernetes.

    Now that you know how to set up the metrics server, you also need to consider ways to optimize your Kubernetes cluster. Loft gives you the flexibility to scale your Kubernetes infrastructure efficiently. Loft ensures that clients' clusters are constantly operating at peak performance while helping them save time and money. Install Loft on your Kubernetes cluster today!

    This post was written by Mercy Kibet. Mercy is a full-stack developer with a knack for learning and writing about new and intriguing tech stacks.
