Kubectl Login: Solving Authentication For Kubernetes
Kubernetes authentication means validating the identity of whoever or whatever is sending a request to the Kubernetes API server. A request can originate from a pod inside the cluster, from an external process, or from a human user. Authentication is the first step in securing a cluster: before the API server can authorize a request, it has to establish who made it.
Kubernetes does not offer any native implementation for creating and managing users; there are no API objects for users or groups. This allows admins to integrate their organization's identity provider, and it allows cloud providers to integrate their own identity offerings, such as Google Cloud Identity and Microsoft Active Directory, eliminating the need to recreate users or manage them twice.
Authenticating users with Google Kubernetes Engine (GKE) or Amazon Elastic Kubernetes Service (Amazon EKS) is not the most straightforward process, but this article lays out the groundwork for integrating authentication for your users.
First, let’s go over your options for authentication.
# Dex

Dex is an OpenID Connect (OIDC) provider for Kubernetes with OIDC endpoints for multiple identity providers. Dex acts as a middleman in the authentication chain between the client app (kubectl) and the upstream identity provider. Dex becomes the identity provider and issuer of ID tokens for Kubernetes by pulling users' information from third-party upstream identity providers such as GitHub, Microsoft, or SAML-based services.
The workflow for setting up authentication on Kubernetes using Dex goes like this: the user initiates a login request to Dex, which redirects to GitHub. GitHub returns relevant encrypted information such as ID token, access token, and refresh token back to Dex. Dex forwards this information to kubectl, which passes it to the Kubernetes API server for authentication. Once authentication is complete, the output is sent back to the user.
Before this workflow takes place, Dex needs to be configured on your Kubernetes cluster. This includes setting up the Kubernetes cluster with the appropriate flags and CA volume mount, creating authentication secrets for TLS and GitHub OAuth2 client credentials, and deploying Dex to the cluster.
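As an illustration of the "appropriate flags" step, configuring the API server to trust Dex as its OIDC issuer comes down to a handful of kube-apiserver OIDC flags. The issuer URL, client ID, and CA file path below are placeholders for your own setup:

```shell
# Flags passed to kube-apiserver so it accepts ID tokens issued by Dex.
# dex.example.com, example-app, and the CA path are placeholders.
kube-apiserver \
  --oidc-issuer-url=https://dex.example.com:32000 \
  --oidc-client-id=example-app \
  --oidc-ca-file=/etc/kubernetes/ssl/openid-ca.pem \
  --oidc-username-claim=email \
  --oidc-groups-claim=groups
```

The username and groups claims let Kubernetes RBAC rules match the identities Dex issues.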
Dex is challenging to set up, especially on an existing cluster. There's also no built-in SSO experience, and compared to some commercial services, there's little vendor support when problems arise.
# gcloud or AWS CLI Access
The Kubernetes API server in GKE can be accessed using gcloud. OAuth is recommended for cluster authentication and is automatically configured by GKE. The gcloud tool logs users into Google Cloud, obtains an OAuth access token for the cluster, keeps that token up to date, and sets up the kubeconfig.
By default, only one user has access to the GKE resources. GKE supports multiple options for managing access to a project and its clusters using role-based access control (RBAC). Google Cloud IAM and GKE are integrated, meaning you can employ the same Google account for both. Make sure you've enabled the GKE API in the Google Cloud Console, that the whole Google Cloud SDK is installed on your machine, and that the gcloud initial defaults have been configured.
There are two options for authenticating with the gcloud command: a user account or a service account. Using a user account launches a browser window to start the familiar Google authentication flow.

First, the user signs in to the gcloud tool using their Google credentials, which opens a web browser to complete the Google Cloud authentication process:
gcloud auth login
If the cluster does not exist yet, create a GKE cluster:
gcloud container clusters create CLUSTER_NAME
Then, create a kubeconfig containing the configuration to access the newly created cluster. By default, this is stored in the ~/.kube/config file:
gcloud container clusters get-credentials CLUSTER_NAME --zone=COMPUTE_ZONE
The Kubernetes credentials can be viewed by running:
kubectl config view
Verify that you are authenticated to the cluster by running:

kubectl get nodes

Alternatively, you can set up a service account within your Google Cloud project and use it to create the cluster credentials.
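As a sketch, the service account route uses the account's JSON key file instead of a browser flow; the key file name below is a placeholder:

```shell
# Authenticate gcloud as a service account using its JSON key file
gcloud auth activate-service-account --key-file=sa-key.json

# Fetch cluster credentials as that service account
gcloud container clusters get-credentials CLUSTER_NAME --zone=COMPUTE_ZONE
```

This is useful for CI pipelines and other non-interactive environments where a browser login isn't possible.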
gcloud authentication does have a downside: if you can't install the gcloud tool in your environment, each user has to create a kubeconfig file by hand to authenticate, which is far from straightforward.
Amazon EKS uses AWS Identity and Access Management (IAM) to provide authentication to your Kubernetes cluster via the aws eks get-token command. Ensure you have version 1.16.156 or later of the AWS CLI, or use the AWS IAM Authenticator for Kubernetes.
Next, create the Kubernetes configuration file, the kubeconfig, using the update-kubeconfig command. This command creates or updates the kubeconfig for your cluster.
Before running the command below, make sure you have permission to use the eks:DescribeCluster API action with the specified cluster. See the Amazon EKS identity-based policy examples:
aws eks --region <region-code> update-kubeconfig --name <cluster_name>
The config gets created in the .kube/config path. Test that access to the cluster is granted without errors:
kubectl get svc
The output should resemble the following:
NAME             TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
svc/kubernetes   ClusterIP   10.101.0.2   <none>        443/TCP   5m
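For reference, the user entry that update-kubeconfig writes typically delegates token fetching to the AWS CLI, so kubectl gets a fresh IAM-signed token on demand. The account ID, region, and cluster name below are placeholders:

```yaml
# Excerpt from ~/.kube/config: kubectl shells out to the AWS CLI
# for a short-lived token on each request.
users:
- name: arn:aws:eks:us-west-2:111122223333:cluster/my-cluster
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: aws
      args:
        - eks
        - get-token
        - --cluster-name
        - my-cluster
```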
# Teleport

Teleport is an open-source identity-aware tool that provides multi-protocol access to applications and servers using SSH, HTTPS, the Kubernetes API, or the MySQL and PostgreSQL wire protocols.
Teleport provides a simple way to access all environments via a single access point, using the same identity provider. This allows the same RBAC rules to govern both Kubernetes and SSH, improving the user experience. Teleport provides a command-line tool, tsh, which allows for SSH access to Kubernetes.

On the client side, users must authenticate using the tsh login command; this opens a web browser if SSO authentication is configured. Upon successful authentication, tsh automatically updates the ~/.kube/config file for all Kubernetes tools. Users can also switch between multiple clusters using tsh kube login. For production use, Teleport is usually integrated with enterprise SSO based on Okta, GitHub, Google Apps, Active Directory, or other identity providers.
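The client-side flow above can be sketched with tsh; the proxy address and cluster name here are placeholders for your own Teleport deployment:

```shell
# Log in to the Teleport proxy (opens a browser if SSO is configured)
tsh login --proxy=teleport.example.com

# List the Kubernetes clusters available through Teleport
tsh kube ls

# Switch kubectl to a specific cluster; updates ~/.kube/config
tsh kube login example-cluster
```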
To set up Teleport authentication in a Kubernetes cluster, you need to install Teleport in your cluster and create DNS records on either GKE or AWS. Next, you have to set up single sign-on for Kubernetes using GitHub or Okta. For a more in-depth guide to Kubernetes authentication with Teleport on GKE or EKS clusters, visit the Teleport documentation.
Teleport is very useful when you want to enable SSH access for your prospective Kubernetes users. The ease of setup depends on the cluster and Single Sign-On platform you choose to integrate with, such as Okta or Active Directory.
# Loft

Software engineers need direct access to Kubernetes to move faster and build high-quality cloud-native services. With Loft, it's easier and cheaper to give your engineering teams full access to Kubernetes clusters. Loft works with any Kubernetes cluster to provide a self-service system that lets engineers create namespaces whenever they need them.
Using Loft is simple. Once you have created your Kubernetes cluster in the cloud and switched to the right Kubernetes context, you're ready to download the Loft CLI.
To download the Loft CLI, run the following:
curl -s -L "https://github.com/loft-sh/loft/releases/latest" | sed -nE 's!.*"([^"]*loft-darwin-amd64)".*!https://github.com\1!p' | xargs -n 1 curl -L -o loft && chmod +x loft; sudo mv loft /usr/local/bin;
Then use loft start to log in to the Loft UI.
Loft will follow up with some configuration questions.
Next, the Loft CLI will open a URL in your browser that looks something like https://localhost:9955. Now you can log in with the username and password shown in the loft start output.
You can also connect to an existing cluster from the Loft UI by using the Connect Cluster button on the Clusters page.
Adding authentication to your Kubernetes cluster is easy. In Loft, access to a cluster is determined by a cluster account. Each user can have one or more accounts for one or more of the clusters they have connected. Each of these clusters can run in a different zone or even on a different cloud platform. You can connect EKS clusters from AWS as well as GKE clusters, then give users access according to what they need to work with.
To do this, create a user via the Loft UI for username-and-password-based authentication. Once logged in, select Users in the sidebar and click Add User. Fill out the username and user details; you can also create a password if you wish. Creating a new user generates a login link, which that user can then use to log in to the Loft UI and connect to the Kubernetes cluster.
Instead of using the Loft UI, you can also do everything via Loft CLI and kubectl. Loft also integrates with other auth providers such as GitHub and Okta for Single Sign-On via the OpenID Connect protocol. Visit the Loft Documentation to learn more about how to integrate other authentication options with Loft for your Kubernetes cluster.
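As a rough sketch of the CLI route; the Loft URL and space name below are placeholders:

```shell
# Log in to a running Loft instance from the CLI
loft login https://loft.example.com

# Create a self-service namespace (a "space") in a connected cluster
loft create space my-space

# Point kubectl at that space
loft use space my-space
```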
Loft is very easy to set up compared to the alternatives outlined in this article. Programmatic setup is minimal since adding users can be done directly via the user interface.
Loft not only increases productivity, it also saves you money. Loft provides a feature called Sleep Mode for Namespaces, which automatically puts Kubernetes namespaces to sleep after a certain period of inactivity. If Loft records no Kubernetes API server requests within a particular namespace, sleep mode kicks in. This saves computing costs when the namespace isn't being used, such as after hours or on the weekend.
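Sleeping can also be triggered manually from the CLI; this assumes the loft sleep and loft wakeup subcommands and a placeholder space name:

```shell
# Manually put a space to sleep, scaling its workloads down
loft sleep my-space

# Wake it back up when work resumes
loft wakeup my-space
```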
# Conclusion

In this article, you saw several ways to authenticate to Kubernetes: Dex, Teleport, gcloud for GKE, the AWS CLI for EKS, and Loft. When reviewing your options for Kubernetes authentication, it's essential to weigh the pros and cons of each, including the ease of creating and managing user access to your Kubernetes resources. Loft offers many features alongside ease of use, and is specialized for use on Kubernetes clusters.