Kubernetes Logging: A Comprehensive Guide

Lukas Gentele
Loft Team
7 min read

Kubernetes, the open-source container orchestration platform originally developed at Google, is designed to help you deploy and manage applications. It features a robust infrastructure that enables you to deploy and scale software easily. However, you still have to monitor your applications by looking at logs. So, how do you do that?

This post will be a comprehensive guide to logging in Kubernetes. We’ll look at how to retrieve logs for different artifacts, how Kubernetes handles logs, and the complementary tools you can use for Kubernetes logging. Further, we’ll look at the challenges, benefits, and best practices of Kubernetes logging.

#What Is Kubernetes Logging?

Kubernetes logging involves collecting data from resources in the Kubernetes cluster and storing them in a central location for monitoring and analysis.

Logs in Kubernetes are a collection of time-series data that gives you a snapshot of what's happening in your cluster. Kubernetes favors capturing the standard output (stdout) and standard error (stderr) streams of containers; the container runtime writes these streams from the containers on a node to log files.

Besides being useful in monitoring and analyzing the behavior of your applications, logs can also help you identify potential issues that could affect the operations of your infrastructure. For instance, if there are issues with the authentication system, the logs can help you identify this issue before it becomes a bigger problem.

Kubernetes environments produce three broad categories of logs: application, system, and infrastructure logs. Application logs contain information about your application's behavior. System logs provide insight into the cluster's own components, capturing events such as updates to node status and API requests. Infrastructure logs contain data from the underlying storage and network infrastructure.

#How Do You Handle Logging in Kubernetes?

There are two tiers to logging in Kubernetes: logs from the control plane (master) node, which we'll refer to as cluster-level logs, and logs from the worker nodes, known as node-level logs.

#Node-Level Logging

This encompasses logs in the worker node, including logs from pods and specific containers in the pods. Containers write their logs as stdout or stderr streams, and the container runtime stores them on the node under /var/log/containers/ (which symlinks into /var/log/pods/).

The kubelet exposes these logs through the Kubernetes API server, and you can access them with the kubectl command-line tool.

You can also configure the kubelet to enforce log rotation by setting limits on log file size and the number of retained files, preventing the logs from consuming a lot of disk space.
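As a sketch, these limits live in the kubelet's configuration file. The two fields below are standard KubeletConfiguration fields; the values and file location are illustrative and vary by distribution:

```yaml
# Fragment of the kubelet config file (often /var/lib/kubelet/config.yaml).
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerLogMaxSize: 10Mi   # rotate a container's log once it reaches 10 MiB
containerLogMaxFiles: 5     # keep at most 5 log files per container
```

After changing the file, restart the kubelet for the new limits to take effect.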

#Cluster-Level Logging

Cluster-level logs come from control-plane components such as the API server, the scheduler, and the controller manager. Kubernetes doesn't offer native cluster-level logging capabilities, but you can use alternative methods. These methods may include a sidecar pattern or a node-level agent configured as a DaemonSet.

With the sidecar pattern, you provision a secondary container with a logging agent to aggregate and forward logs to a centralized logging system. For example, you can use Fluentd as the logging agent in a sidecar container. It can collect logs from multiple application containers in a pod and forward them to a centralized logging system such as Elasticsearch. You can configure the Fluentd container using Kubernetes ConfigMaps and mounted volumes to ensure consistency across your pods.
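As a minimal sketch of this pattern, the pod below runs a hypothetical application that writes log files to a shared emptyDir volume, alongside a Fluentd sidecar that reads the same volume. The image tags and the fluentd-sidecar-config ConfigMap (which would hold the fluent.conf forwarding rules) are assumptions for illustration:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-logging-sidecar
spec:
  containers:
  - name: app
    image: my-app:latest            # hypothetical application image
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app       # the app writes its log files here
  - name: fluentd
    image: fluent/fluentd:v1.16-1   # illustrative tag; check for the current one
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app       # the sidecar tails the same files
      readOnly: true
    - name: fluentd-config
      mountPath: /fluentd/etc       # Fluentd loads fluent.conf from here
  volumes:
  - name: app-logs
    emptyDir: {}                    # shared scratch volume, lives with the pod
  - name: fluentd-config
    configMap:
      name: fluentd-sidecar-config  # hypothetical ConfigMap holding fluent.conf
```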

Alternatively, you can have a node-level agent configured as a DaemonSet. In this scenario, the DaemonSet ensures that a copy of the logging-agent pod runs on every node in the cluster, collecting logs from all containers on that node.
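A minimal sketch of a node-level agent, again assuming Fluentd as the agent: the DaemonSet mounts the node's log directory read-only so that the one agent pod per node can pick up every container's log files. The image tag is illustrative:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-agent
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: log-agent
  template:
    metadata:
      labels:
        app: log-agent
    spec:
      containers:
      - name: fluentd
        image: fluent/fluentd:v1.16-1   # illustrative tag
        volumeMounts:
        - name: varlog
          mountPath: /var/log           # the node's container log files
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log                # read logs straight off the node
```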

#How Do You Check Logs in Kubernetes?

When you want to check logs of different artifacts in your cluster, you can use the kubectl tool.

For example, if you want to check the logs of a pod running in your cluster, you can run kubectl logs <pod-name>. You should note that if your pod has several containers, you'll also have to pass the container name with the -c flag.
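For instance, here are a few common variations (these assume a running cluster; my-pod, my-container, and app=my-app are placeholder names):

```bash
# Logs from a single-container pod
kubectl logs my-pod

# Pod with several containers: name the one you want with -c
kubectl logs my-pod -c my-container

# Stream logs as they are written
kubectl logs -f my-pod

# Logs from the previous, crashed instance of a container
kubectl logs my-pod --previous

# Logs from all pods matching a label selector
kubectl logs -l app=my-app
```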

#What Are the Kubernetes Logging Tools?

We've seen above that you can use the kubectl command to check the logs of different resources in your Kubernetes cluster, but that's not all you need. You also need complementary tools to analyze, monitor, and display your logs. Once you get the logs, you can open them in one of the many Kubernetes logging tools available today. These are some of the top tools:


#Fluentd

Fluentd is an open-source log collector that can gather and process logs from your Kubernetes cluster. Depending on your use case, you can forward these logs to destinations like Amazon S3, Kafka, and Elasticsearch.

#Fluent Bit

Fluent Bit is a lightweight, multi-platform log processor and forwarder that can collect and send logs and data from various sources. It's compatible with Kubernetes and Docker environments, and its built-in pluggable architecture allows users to create custom plugins for different functions, such as monitoring and logging.

#Elastic Stack

The Elastic Stack is one of the most popular platforms for analyzing and monitoring logs in Kubernetes. It includes tools such as Logstash, Elasticsearch, and Kibana. For example, you can ingest and transform logs with Logstash, store and index them in Elasticsearch, and then visualize them with Kibana.

#Grafana Loki

Another popular tool is Grafana Loki, a log aggregation system that can collect, store, and query logs from Kubernetes clusters. Unlike full-text indexers, Loki indexes only a small set of labels per log stream, which keeps it lightweight, and its horizontal scalability makes it a good fit for long-term storage.

#Benefits of Kubernetes Logging

Kubernetes logging offers many benefits. Consider:

**Security:** Using the logs, security teams can quickly identify and respond to incidents such as unauthorized access and data breaches.

**Effective Debugging and Troubleshooting:** Kubernetes logs give you a window into what's happening in your application. By monitoring and analyzing logs across your infrastructure and application, you can quickly identify the root cause of an issue and resolve it.

**Regulatory Compliance:** With Kubernetes logs, you can more easily meet the requirements of regulations such as HIPAA, PCI DSS, and GDPR. You can also provide evidence of your efforts to maintain a compliant environment.

**Performance Optimization:** By analyzing logs, you can identify bottlenecks and address areas of concern that affect the performance of your applications and systems. This process can help you implement fixes and improve the efficiency of your operations.

#What Are the Challenges of Kubernetes Logging?

Despite its usefulness, Kubernetes logging has challenges:

**Limited Scalability:** Managing large volumes of log data becomes difficult as a Kubernetes cluster grows. Many organizations find it easier to ship logs out of the cluster and integrate them into their existing logging and monitoring solutions.

**Security and Compliance:** Besides general security concerns, Kubernetes logging must adhere to strict security and compliance requirements to ensure the confidentiality, integrity, and availability of data. Without proper security and compliance measures, sensitive information could be exposed.

#Kubernetes Logging Best Practices

Here are some best practices to follow when logging in Kubernetes:

**Perform Log Aggregation:** Aggregate logs from all the nodes in the cluster and store them in a centralized location. This approach makes it easier to analyze and troubleshoot issues across the entire cluster.

**Use Standard Logging Libraries:** Use standard logging libraries such as Log4j, Logback, or Logrus in your applications. These libraries provide a consistent way of writing and formatting logs, making it easier to analyze logs across different applications.

**Limit Log Volume:** Logs can quickly consume a lot of storage space, leading to performance issues and increased storage costs. Configure log rotation and retention policies to limit this volume.

#Why Kubernetes Logging Is Important

Logging is a crucial aspect of monitoring and managing Kubernetes clusters. Kubernetes provides a variety of logging mechanisms that you can use to collect and analyze logs from containers running on nodes. These include built-in tools like kubectl logs and third-party solutions like Fluentd, Elasticsearch, and Grafana. By collecting and analyzing logs, you can gain valuable insights into the health and performance of Kubernetes clusters and quickly identify and troubleshoot issues.

Visit Loft.sh if you want to know more about efficiently scaling your Kubernetes infrastructure.

This post was written by Mercy Kibet. Mercy is a full-stack developer with a knack for learning and writing about new and intriguing tech stacks.
