Managing Access to Kubernetes Clusters for Engineering Teams

Daniel Olaogun

Kubernetes is a container orchestration tool for managing, deploying, and scaling containerized applications. It helps engineering teams deploy and manage applications across multiple servers with less complexity. Some of its best-known features include:

  • Self-healing, which automatically restarts your application container if it crashes
  • Horizontal scaling, which scales your application up or down as the traffic load increases or decreases
  • Automatic rollouts and rollbacks for gradually deploying an updated version of your application or quickly rolling back to the previous version if you detect an issue

    In Kubernetes, your containerized application is abstracted by a pod that can be replicated on a single node or across many nodes. Nodes run your containerized applications and can hold one or more pods, depending on the node's resources. A set of nodes is called a cluster.

    As your Kubernetes cluster grows, you may need help managing it. This means you’ll need the ability to add users to your cluster and provide the required permissions, among other tasks. In this article, you’ll learn how to manage access to your Kubernetes cluster as well as how to manage the users who are given this access.

    Why You Need Kubernetes Clusters

    A Kubernetes cluster contains control plane and worker nodes that work together to handle and distribute traffic from within and outside the cluster. Worker nodes are a set of virtual or physical machines that run your containerized applications, while the control plane node controls the worker nodes. The control plane node manages and maintains the desired state of the cluster, scheduling pods to worker nodes based on available resources. It also provides the API endpoint that users interact with.
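
    To see this structure from the API side, here is a small sketch using the official Kubernetes Python client (assuming the `kubernetes` package is installed and a kubeconfig for the cluster is available). It asks the control plane's API endpoint for the cluster's nodes and the pods scheduled onto each one:

```python
from collections import defaultdict
from kubernetes import client, config

# Authenticate against the API endpoint exposed by the control plane.
config.load_kube_config()
core = client.CoreV1Api()

# Group pods by the worker node the scheduler placed them on.
pods_by_node = defaultdict(list)
for pod in core.list_pod_for_all_namespaces().items:
    pods_by_node[pod.spec.node_name].append(f"{pod.metadata.namespace}/{pod.metadata.name}")

for node in core.list_node().items:
    roles = [label.split("/")[-1] for label in node.metadata.labels
             if label.startswith("node-role.kubernetes.io/")]
    print(node.metadata.name, roles or ["worker"])
    for pod_name in pods_by_node.get(node.metadata.name, []):
        print("   ", pod_name)
```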

    The following are some important use cases for Kubernetes clusters.

    Provisioning of Multiple Environments

    During the development and release of your application, you need environments for development and testing as well as a separate production environment for the application release. A Kubernetes cluster provides these environments using namespaces.

    Previously, managing multiple environments for a single application meant spinning up separate virtual machines for each environment. This was tedious, and managing the differences between the environments could be difficult. Kubernetes clusters simplify the process.
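
    As a rough sketch (assuming the `kubernetes` Python client is installed and your kubeconfig allows namespace creation; the environment names are placeholders), the namespaces for those environments can be created programmatically:

```python
from kubernetes import client, config
from kubernetes.client.rest import ApiException

config.load_kube_config()
core = client.CoreV1Api()

# One namespace per environment, labeled so policies can target them later.
for env in ["development", "testing", "production"]:
    namespace = client.V1Namespace(
        metadata=client.V1ObjectMeta(name=env, labels={"environment": env})
    )
    try:
        core.create_namespace(namespace)
        print(f"created namespace {env}")
    except ApiException as err:
        if err.status == 409:  # namespace already exists
            print(f"namespace {env} already exists")
        else:
            raise
```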

    Running Multiple Deployments

    Kubernetes clusters allow you to run multiple deployments of your applications, for example development, testing, and production deployments of the same application, in one cluster. You can also deploy multiple applications with different functionalities. However, these deployments will share the resources provided in the cluster.
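
    For example, the sketch below (hypothetical names and image, same Python client assumptions as above) creates separate deployments of the same application in the development and production namespaces of one cluster:

```python
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

def make_deployment(name: str) -> client.V1Deployment:
    labels = {"app": name}
    container = client.V1Container(
        name=name,
        image="nginx:1.25",  # placeholder image
        ports=[client.V1ContainerPort(container_port=80)],
    )
    return client.V1Deployment(
        metadata=client.V1ObjectMeta(name=name, labels=labels),
        spec=client.V1DeploymentSpec(
            replicas=1,
            selector=client.V1LabelSelector(match_labels=labels),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels=labels),
                spec=client.V1PodSpec(containers=[container]),
            ),
        ),
    )

# The same application, deployed separately per environment, sharing one cluster's resources.
for namespace in ["development", "production"]:
    apps.create_namespaced_deployment(namespace=namespace, body=make_deployment("web"))
```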

    Easy Scaling of Deployments

    As your application traffic increases, the application consumes more resources. Increasing its resources ensures that the increased traffic won’t cause downtime. With Kubernetes, you can scale your application deployments by replicating them on multiple nodes in your cluster. This distributes the incoming traffic load across all the nodes running your application in the cluster.
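
    A minimal sketch of scaling an existing deployment by patching its replica count (the deployment name, namespace, and replica count are illustrative):

```python
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

# Raise the replica count so incoming traffic is spread across more pods;
# the scheduler places the new pods on nodes with available resources.
apps.patch_namespaced_deployment_scale(
    name="web",
    namespace="production",
    body={"spec": {"replicas": 5}},
)
```

    In practice, you can also define a HorizontalPodAutoscaler so Kubernetes adjusts the replica count automatically as load changes.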

    Managing Access in Your Kubernetes Cluster

    As your organization grows and the number of deployments in your Kubernetes cluster skyrockets, you need more users to help manage the cluster. However, you should also ensure that you can effectively manage the access of those users.

Access to the Kubernetes API is managed through authentication, authorization, and admission control. When a user makes a request through the API using a client such as kubectl, Kubernetes first checks the user's identity. If the user can't be verified, the request is rejected; if it can, the request moves to authorization, which confirms that the user has the permissions required for that request. If they don't, the request is rejected; if they do, the request moves to the final check, admission control, which validates the request itself (for example, verifying that the container image you want to deploy meets your security requirements).
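
    You can probe the authorization step yourself. The sketch below (same Python client assumptions as above; the namespace, verb, and resource are illustrative) asks the API server whether the already-authenticated current user may create deployments, which is the check `kubectl auth can-i create deployments` performs:

```python
from kubernetes import client, config

config.load_kube_config()
authz = client.AuthorizationV1Api()

# "May the current user create deployments in the development namespace?"
review = client.V1SelfSubjectAccessReview(
    spec=client.V1SelfSubjectAccessReviewSpec(
        resource_attributes=client.V1ResourceAttributes(
            namespace="development",
            verb="create",
            group="apps",
            resource="deployments",
        )
    )
)

result = authz.create_self_subject_access_review(review)
print("allowed" if result.status.allowed else "denied")
```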

    Granting Users Access

Users must be recognized by Kubernetes before they can connect to your cluster. However, Kubernetes does not provide user management out of the box. Users are generally managed outside of Kubernetes using services like Microsoft Active Directory, Okta (via OpenID Connect), AWS Identity and Access Management (IAM), and Loft.

Managed Kubernetes services such as Azure Kubernetes Service (AKS), Amazon Elastic Kubernetes Service (Amazon EKS), and Google Kubernetes Engine (GKE) integrate their providers' identity management systems with Kubernetes to manage and authenticate users.

    If you have a self-managed Kubernetes cluster, there are multiple services available to manage user access:

  • Single sign-on with Loft
  • OpenID Connect with Okta
  • Kubernetes authentication through Dex
  • Access management using client certificates signed with OpenSSL (a sketch of this flow follows below); note that you should send the resulting kubeconfig file to your authenticated users in a secure, encrypted manner

    You can also use the above options to handle user access for managed Kubernetes services. The Kubernetes documentation describes additional strategies for authenticating users in your cluster.
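
    As a sketch of the OpenSSL-based option, the flow below submits a user's certificate signing request to the Kubernetes certificates API and leaves approval to an admin; the username, group, and file names are hypothetical:

```python
import base64

from kubernetes import client, config

config.load_kube_config()
certs = client.CertificatesV1Api()

# Assumes the user generated a key and CSR locally, for example:
#   openssl genrsa -out jane.key 2048
#   openssl req -new -key jane.key -out jane.csr -subj "/CN=jane/O=dev-team"
with open("jane.csr", "rb") as f:
    csr_base64 = base64.b64encode(f.read()).decode()

csr = client.V1CertificateSigningRequest(
    metadata=client.V1ObjectMeta(name="jane"),
    spec=client.V1CertificateSigningRequestSpec(
        request=csr_base64,
        signer_name="kubernetes.io/kube-apiserver-client",
        usages=["client auth"],
    ),
)
certs.create_certificate_signing_request(csr)

# After an admin approves the request (`kubectl certificate approve jane`), the
# signed certificate from the CSR status plus the private key go into the user's
# kubeconfig, which should be delivered over an encrypted channel.
```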

    Managing Access

    Once users have been granted access to your Kubernetes cluster, there are several strategies to best manage that access.

    Providing Only Needed Permissions

Once you have successfully authenticated the users your cluster requires, give them just enough permissions to perform their duties. Depending on your team structure, it is rarely a good idea for every user to have the same high-level access: a user might perform an operation without understanding its consequences, or a malicious user could change your privileges and lock you out of your own admin rights.

Kubernetes supports several authorization modes for access control, including role-based access control (RBAC), attribute-based access control (ABAC), node authorization, and webhook mode. RBAC is the mode most commonly used to implement user roles and permissions. For more information on how to implement RBAC in your Kubernetes cluster, check out the official documentation.
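
    A minimal RBAC sketch is shown below (the user, namespace, and role names are hypothetical, and plain dictionaries are used as request bodies so the example doesn't depend on a specific client version's model classes). It grants the user jane read-only access to pods in the development namespace and nothing more:

```python
from kubernetes import client, config

config.load_kube_config()
rbac = client.RbacAuthorizationV1Api()

# A Role that only allows reading pods in the "development" namespace.
rbac.create_namespaced_role(
    namespace="development",
    body={
        "apiVersion": "rbac.authorization.k8s.io/v1",
        "kind": "Role",
        "metadata": {"name": "pod-reader"},
        "rules": [
            {"apiGroups": [""], "resources": ["pods"], "verbs": ["get", "list", "watch"]}
        ],
    },
)

# Bind that Role to the user "jane" so she gets those permissions and nothing more.
rbac.create_namespaced_role_binding(
    namespace="development",
    body={
        "apiVersion": "rbac.authorization.k8s.io/v1",
        "kind": "RoleBinding",
        "metadata": {"name": "jane-pod-reader"},
        "subjects": [
            {"kind": "User", "name": "jane", "apiGroup": "rbac.authorization.k8s.io"}
        ],
        "roleRef": {
            "kind": "Role",
            "name": "pod-reader",
            "apiGroup": "rbac.authorization.k8s.io",
        },
    },
)
```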

    Decommissioning Users as Needed

    When a user is no longer a part of your cluster team, you should delete the user from the cluster. This prevents the user from continuing to access the cluster and performing unauthorized activities.

    Most of the user management platforms noted above allow you to remove users with ease.
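
    For RBAC-based access, part of that cleanup can be scripted; the sketch below (user and namespace names are hypothetical) deletes any RoleBindings that still reference the departing user:

```python
from kubernetes import client, config

config.load_kube_config()
rbac = client.RbacAuthorizationV1Api()

departing_user = "jane"
namespace = "development"

# Delete every RoleBinding in the namespace that still references the user.
for binding in rbac.list_namespaced_role_binding(namespace).items:
    subjects = binding.subjects or []
    if any(s.kind == "User" and s.name == departing_user for s in subjects):
        rbac.delete_namespaced_role_binding(binding.metadata.name, namespace)
        print(f"removed {binding.metadata.name}")
```

    ClusterRoleBindings and any identities issued outside Kubernetes (OIDC accounts, client certificates) need the same cleanup in their own systems.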

    Enabling Auditing

Kubernetes auditing records all actions performed in your cluster in chronological order. Auditing your cluster answers questions such as:

  • What happened?
  • When did it happen?
  • Who initiated it?
  • On what did it happen?
  • Where was it observed?
  • From where was it initiated?
  • Where was it going?

    When something unexpected happens in your Kubernetes cluster, the logs generated by the audit will guide you in getting to the root cause.

Kubernetes does not enable auditing by default; you must enable it yourself. Whether you do so is up to you and your team, but for the security of your Kubernetes cluster, enabling auditing is recommended.
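
    Auditing is enabled on the API server, for example with the --audit-policy-file and --audit-log-path flags. Once a JSON log backend is writing events, even a short script can answer the questions listed above; the log path below is an assumption:

```python
import json

# Path set by the API server's --audit-log-path flag; this value is an assumption.
AUDIT_LOG = "/var/log/kubernetes/audit.log"

with open(AUDIT_LOG) as log:
    for line in log:
        event = json.loads(line)
        user = event.get("user", {}).get("username", "unknown")
        verb = event.get("verb", "")
        obj = event.get("objectRef", {})
        when = event.get("requestReceivedTimestamp", "")
        source = ",".join(event.get("sourceIPs", []))
        # Who did what, to which resource, when, and from where.
        print(f"{when} {user} {verb} {obj.get('resource')}/{obj.get('name')} from {source}")
```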

    Using Loft for Access Control

    Loft is a platform built on top of Kubernetes that adds multitenancy and self-service capabilities, enabling you to control and manage access in your Kubernetes cluster. As previously noted, you can integrate Loft into your cluster to handle authentication and access control. Loft also integrates with many single sign-on (SSO) providers that you can use with your cluster.

In Kubernetes, non-admin users don't have the privileges to list, create, or delete namespaces in a shared cluster. However, Loft offers a feature called spaces, a virtual abstraction of a Kubernetes namespace that users can create on a self-service basis. Once a space is created, a corresponding namespace is created, and if the space is deleted, the namespace is deleted as well.

    Loft also offers an auditing section similar to Kubernetes auditing, which records all operations and actions performed by users and applications using the Loft API in your Kubernetes cluster.

    Best Practices

    Remember that when configuring access in your Kubernetes cluster, there are some best practices you should follow:

    1. Follow the principle of least privilege
    2. Enable auditing in your cluster
    3. Routinely check roles and permissions assigned to users
    4. Remove users that are no longer relevant in your cluster

    Conclusion

    As you’ve learned, you have multiple options for granting and managing user access in your Kubernetes cluster, whether your cluster is self-managed or managed by a cloud provider. It’s important that you provide the right level of access to your different users and revoke that access when necessary. This way, you can ensure that your cluster is safe from misuse as you scale up your Kubernetes workflow.

    If you need a third-party solution for managing access control and self-service in your Kubernetes cluster, consider Loft. It integrates well with cloud-native tools and can be used with kubectl or GitOps. Loft is easy to implement and offers several cost optimization features. You can request a demo to see what Loft can do for you.

