6 Kubernetes Logging Best Practices

Keshav Malik

Suppose you're responsible for managing a Kubernetes environment with multiple applications running at once. As you navigate through the complex system, it becomes clear that monitoring all these applications simultaneously can be daunting. But fear not: logging is here to save the day!

Logging is vital in monitoring application behavior, diagnosing issues, and ensuring compliance. Effective logging can help quickly identify and resolve problems while providing detailed records of system activity to comply with regulations. Today, we'll explore best practices for logging in Kubernetes, including log collection, storage, access control, and SIGTERM signal management.

By implementing these Kubernetes logging best practices, organizations can ensure proper logging in their Kubernetes environments, improving application performance, better troubleshooting, and easier compliance management. So let's dive in and learn how to log like a pro in Kubernetes!

The Importance of Proper Logging in Kubernetes

Logging is a critical tool for monitoring application behavior in the dynamic world of Kubernetes, where containers are short lived. With proper logging mechanisms, developers and operators can track key metrics, monitor application performance, and troubleshoot issues more efficiently.

Businesses in heavily regulated industries such as finance, healthcare, and government must keep detailed logs of system activity, including application logs. Noncompliance can result in legal and financial penalties and harm an organization's reputation. Best practices for logging in Kubernetes can help ensure that logs are collected and stored in a compliant manner.

Best Practices for Kubernetes Logging

Effective monitoring and troubleshooting of applications in Kubernetes require proper logging. Inadequate logging can result in challenges in identifying issues quickly and optimizing system performance. Organizations can enhance their monitoring and troubleshooting capabilities by adhering to logging best practices. These include utilizing a logging agent, centralizing logs, and managing log formats.

To optimize logging in Kubernetes, deploying a logging agent is crucial. The agent can collect, filter, and format logs from diverse sources, including application containers, and transmit them to a centralized location for analysis. This approach streamlines gathering and examining logs, reducing the time to resolve issues. Centralized logs can also be accessed by all members of the development and operations teams, which can enhance collaborative troubleshooting.

Managing log formats is equally vital in effective logging in Kubernetes. Using a consistent log format that's easy to search and analyze across applications and services is essential. Standardizing log formats improves the efficiency of log analysis and lowers the risk of errors resulting from inconsistent log formats.
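For example, if your applications emit structured JSON, a Fluentd-style agent (covered below) can parse each container's log line into consistent, queryable fields. Here's a minimal sketch using Fluentd's built-in parser filter; the kubernetes.** tag and the log field name are assumptions that depend on how your pipeline ingests container logs:

<filter kubernetes.**>
  @type parser
  key_name log          # parse the raw "log" field captured from the container
  reserve_data true     # keep the existing metadata fields on each record
  <parse>
    @type json          # assumes applications write one JSON object per line
  </parse>
</filter>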

Use an Agent to Collect Logs

A recommended best practice for logging in Kubernetes is deploying a logging agent to each node. A logging agent is a lightweight daemon that runs on every node in the Kubernetes cluster and collects and forwards logs to a centralized location. By deploying a logging agent to each node, you can ensure comprehensive logging, regardless of the node that hosts the application.

Many developers use logging agents such as Fluentd and Fluent Bit in Kubernetes. Fluentd is a sturdy, open-source data collector that gathers logs from numerous sources, parses them, and forwards them to various destinations. Fluent Bit serves the same purpose but is a lighter-weight alternative designed with containerized environments in mind. Both offer high configurability and can be tailored to your application's specific needs.

Getting Started with Code Snippets

Here's an example config for sending logs to Splunk using Fluentd in Kubernetes.

Let's look at the Fluentd configuration file:

<match kubernetes.**>
  @type splunk_hec
  host splunk.evilcorp.com
  port 8088
  protocol https
  token YOUR_ACCESS_TOKEN_HERE
  sourcetype fluentd
  <buffer>
    @type file
    path /var/log/fluentd-buffers/kubernetes.buffer
    flush_interval 5s
  </buffer>
</match>

This Fluentd configuration forwards logs from Kubernetes to a Splunk instance hosted at splunk.evilcorp.com via the Splunk HTTP Event Collector (HEC) protocol. The logs are buffered in a file at /var/log/fluentd-buffers/kubernetes.buffer and flushed every five seconds. The configuration sets the source type for the logs to fluentd, and the Splunk HEC access token must be supplied in place of YOUR_ACCESS_TOKEN_HERE.

Next, we have a ConfigMap to store the Fluentd configuration.

apiVersion: v1
kind: ConfigMap
metadata:
  name: fluentd-config
data:
  fluent.conf: |
    <match kubernetes.**>
      @type splunk_hec
      host splunk.evilcorp.com
      port 8088
      protocol https
      token YOUR_ACCESS_TOKEN_HERE
      sourcetype fluentd
      <buffer>
        @type file
        path /var/log/fluentd-buffers/kubernetes.buffer
        flush_interval 5s
      </buffer>
    </match>

This is a Kubernetes manifest that defines a ConfigMap named fluentd-config. The ConfigMap holds the Fluentd configuration under the fluent.conf key.

Final Step

Finally, we have a Fluentd DaemonSet that deploys Fluentd as a node-level logging agent, running one Fluentd pod on every node in the cluster.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      containers:
      - name: fluentd
        image: fluent/fluentd:v1.11.2-debian-1.0
        volumeMounts:
        - name: fluentd-config
          mountPath: /fluentd/etc/fluent.conf
          subPath: fluent.conf
        - name: buffer
          mountPath: /var/log/fluentd-buffers
      volumes:
      - name: fluentd-config
        configMap:
          name: fluentd-config
      - name: buffer
        emptyDir: {}

This DaemonSet ensures that one Fluentd pod runs on each node in the Kubernetes cluster. The configuration also mounts the fluentd-config ConfigMap at /fluentd/etc/fluent.conf in the container and an emptyDir volume at /var/log/fluentd-buffers for buffering logs. With this configuration in place, Fluentd collects logs from your application pods and sends them to Splunk for storage and analysis.

Send Logs to a Centralized Location for Debugging

When managing multiple applications in Kubernetes, sending logs to a centralized location is a key best practice. This lets you quickly access logs from all nodes in the cluster and troubleshoot issues effectively. Logging platforms such as ELK or Splunk offer scalable ways to store, search, and analyze logs from multiple sources, making maintaining and monitoring logs across the entire system easier.

However, ensuring that logs are transmitted securely to the chosen logging platform is also important. To do this, you can configure the logging agent to send logs via a secure connection using Transport Layer Security (TLS) protocol. This encrypts log data and authenticates the logging agent with the logging platform, providing a safe and secure way to transmit sensitive data while protecting it from unauthorized access.
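For example, the splunk_hec output shown earlier already uses protocol https. If your agent instead ships logs to an intermediate Fluentd aggregator, the built-in forward output can be wrapped in TLS. Here's a minimal sketch, where the aggregator host and certificate path are illustrative assumptions:

<match kubernetes.**>
  @type forward
  transport tls                               # encrypt log traffic in transit
  tls_cert_path /etc/fluentd/certs/ca.crt     # CA certificate used to verify the aggregator
  tls_verify_hostname true                    # reject certificates that don't match the hostname
  <server>
    host logs-aggregator.example.com          # illustrative aggregator endpoint
    port 24224
  </server>
</match>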

Manage SIGTERM Signals in Pods to Push Logs

In Kubernetes, managing SIGTERM signals in pods is a critical best practice for logging. If a pod is terminated abruptly, you lose any logs not yet written to disk or sent to a centralized location. When Kubernetes terminates a pod, it sends a SIGTERM signal to its containers and waits for the termination grace period before forcing a shutdown. Handling this signal lets pods shut down gracefully and flush any remaining logs to disk or send them to a centralized location.

To ensure that all logs are sent to a centralized location before a pod is terminated, organizations can use a preStop life cycle hook. This hook is a Kubernetes feature that lets containers perform a specific action before they are terminated.

Code Snippet

Here's an example Kubernetes YAML configuration that demonstrates how to use a preStop life cycle hook to push logs to a central location before a pod is terminated.

apiVersion: v1
kind: Pod
metadata:
  name: example-app
spec:
  containers:
  - name: example-app
    image: some-example-app-image
    lifecycle:
      preStop:
        exec:
          command: ["/bin/sh", "-c", "curl -X POST -d @/var/log/exampleapp.log https://splunk.evilcorp.com/api/v1/logs"]
    volumeMounts:
    - name: logs-volume
      mountPath: /var/log
  volumes:
  - name: logs-volume
    emptyDir: {}

This example shows an application running inside a pod with a preStop hook that executes a curl command to transmit the application's log file to a Splunk server via its API endpoint. Kubernetes runs the preStop hook before sending the SIGTERM signal to the container, which gives the application a chance to push any remaining logs to the Splunk server before the pod shuts down.

Managing SIGTERM signals correctly in pods helps you collect critical log data for better troubleshooting and analysis in Kubernetes. This means you can detect and resolve issues more efficiently, creating a more stable and reliable environment.

Control Access to Logs with RBAC

Organizations can use RBAC in Kubernetes to control access to logs and restrict sensitive log data to authorized personnel. This involves defining roles and permissions for different types of personnel, such as logging and monitoring personnel. By doing so, organizations can manage who can access logs and what actions they can perform with them.

To implement RBAC, organizations can use Kubernetes' custom roles to give users defined permissions. For instance, organizations can create a "logging" role that permits personnel to access log data but prevents them from modifying system or application settings. This way, logging personnel can efficiently perform their tasks without causing unintended changes to the system.
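As a sketch of what such a role might look like, here's a namespaced Role that grants read-only access to pods and their logs, bound to a group of logging personnel; the role name, namespace, and group name are illustrative assumptions:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: logging                       # illustrative role name
  namespace: production               # assumed namespace
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log"]     # allow reading pods and their logs
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: logging-team
  namespace: production
subjects:
- kind: Group
  name: logging-team                  # assumed group of logging personnel
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: logging
  apiGroup: rbac.authorization.k8s.io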

RBAC helps ensure that log data is used only for legitimate purposes, like resolving issues and troubleshooting. Additionally, defining roles and permissions lets logging and monitoring personnel access the specific log data required to perform their jobs effectively.

Use Sidecar Containers with Logging Agents

Deploying logging agents using sidecar containers is a common approach for Kubernetes logging. A sidecar container is a secondary container that runs alongside the primary application container within a pod. In the context of logging, a sidecar container can run a logging agent, configuring it to gather and send logs to a central location.

Using a sidecar container with a logging agent helps ensure logs accumulate and flow to a central location without changing the primary application container. This approach allows organizations to utilize ready-to-use application containers without worrying about their logging capabilities. The logging agent container collects and transmits logs to a central location, eliminating the need for the application container to manage log collection and transmission.

To configure the logging agent container to send logs to a central location, organizations can employ the same methods used when deploying logging agents directly to nodes: configure the agent to use a consistent logging format, collect system and application logs, and transmit logs to a centralized platform.
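For instance, a Fluentd sidecar could tail the application's log files from a shared volume and forward them with the same Splunk HEC output shown earlier. Here's a minimal sketch, assuming the application writes its logs to /var/log/myapp (matching the pod example below):

<source>
  @type tail                                   # tail application log files from the shared volume
  path /var/log/myapp/*.log
  pos_file /var/log/fluentd-buffers/myapp.pos  # remember read positions across restarts
  tag myapp.*
  <parse>
    @type none                                 # forward raw lines; use json if the app logs JSON
  </parse>
</source>

<match myapp.**>
  @type splunk_hec          # same Splunk HEC output as the node-level agent above
  host splunk.evilcorp.com
  port 8088
  protocol https
  token YOUR_ACCESS_TOKEN_HERE
</match>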

Code Snippet

Here's an example Kubernetes YAML configuration for a pod that includes a sidecar container running a logging agent.

apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
  # main application container configuration
  - name: main-app-container
    image: example-image:latest
    volumeMounts:
    - name: log-volume
      mountPath: /var/log/myapp   # the app writes its log files here
  # logging agent container configuration
  - name: logging-agent
    image: logging-agent-image:latest
    volumeMounts:
    - name: log-volume
      mountPath: /var/log/myapp   # the agent reads the same files from the shared volume
  volumes:
  - name: log-volume
    emptyDir: {}

In the above example, the pod consists of two containers: main-app-container and logging-agent. The logging-agent container is a sidecar that runs the logging agent, configured to collect logs from the /var/log/myapp directory, which both containers mount from the shared log-volume volume. This approach simplifies logging management because logs are gathered and sent to a central location by the logging agent container without requiring modifications to the main application container.

Deploying this pod configuration is an easy way to ensure logs are gathered and sent securely to a centralized logging platform. Additionally, this makes managing and troubleshooting problems simpler. Using sidecar containers in this manner can improve application performance and reliability while streamlining log management.

Conclusion

Logging is a vital part of managing applications in Kubernetes environments. With the complexity of modern applications and the dynamic nature of Kubernetes, having visibility into application behavior is critical for troubleshooting issues, ensuring compliance, and improving performance. In this post, we explored some best practices for Kubernetes logging to help organizations ensure proper logging in their environments.

Remember, these best practices may need to be customized based on your specific needs. Regardless, following them can improve the quality of your logs, making it easier to manage and troubleshoot issues, maintain compliance, and improve overall application performance. We hope these best practices make that easier for you.

Additionally, check out Loft if you want to learn more about efficiently scaling your Kubernetes infrastructure.

This post was written by Keshav Malik, a highly skilled and enthusiastic Security Engineer. Keshav has a passion for automation, hacking, and exploring different tools and technologies. With a love for finding innovative solutions to complex problems, Keshav is constantly seeking new opportunities to grow and improve as a professional. He is dedicated to staying ahead of the curve and is always on the lookout for the latest and greatest tools and technologies.
