When you're first starting out with Kubernetes (k8s), the care and feeding of your containers often seems complicated. The terminology is different, even from Docker, and the commands look opaque. How do you know where your application is running? What happens if you want to start, stop, or restart it?
This article will show you how you can use kubectl to restart pods. We'll cover why you may need to restart a pod, how to list pods, how to manage pods in namespaces, and how to stop and start replica sets and individual pods.
If you want to follow the examples in this article, you'll need a system running Docker with Minikube installed and started.
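If Minikube isn't running yet, starting a local cluster looks roughly like this; the --driver=docker flag is an assumption based on a Docker-backed setup, so adjust it for your environment:
% minikube start --driver=docker
% minikube status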
Kubectl Series
- Kubectl Rollout Restart: 3 Ways to Use It
- Kubectl Get Context: Its Uses and How to Get Started
- Kubectl Get Nodes: Why and How to Use It
- Kubectl Proxy: When and How to Use it to Access the Kubernetes API
- Kubectl Patch: What You Can Use It for and How to Do It
- How to Restart Pods in Kubectl: A Tutorial With Examples
- Kubectl Login: Solving Authentication For Kubernetes
- Kubectl Exec: Everything You Need to Know
- Installing and Managing kubectl Plugins with Krew
What is Kubectl?
Kubectl is your command-line interface for Kubernetes clusters. It's useful for viewing cluster status, applying manifest files, viewing and modifying resources, and, as we'll see, starting and stopping pods. It communicates with your cluster via the Kubernetes API.
Depending on what system you're using, you might have to install kubectl separately.
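A quick sanity check that kubectl is installed and can reach your cluster is to print the client version and the cluster endpoints:
% kubectl version --client
% kubectl cluster-info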
Why Restart a Pod?
One of Kubernetes' primary strengths is that it manages resources for you automatically. So why would you want to restart a pod by hand?
- Resources - if a pod uses more memory than it's allowed, Kubernetes will terminate it with an out-of-memory (OOM) error. You need to update its resource requirements and restart it.
- Unrecoverable errors - your application may hit an error that it can't recover from. While you likely have a new bug to fix, you need to restart the current instance for now.
- Stuck in an inactive state - similar to an unrecoverable error, pods can get stuck in a pending or inactive state. Only a restart will clear the problem. (You can spot these conditions with kubectl describe, as shown below.)
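Before restarting anything, it's worth confirming which of these conditions you're actually dealing with. A minimal check is to list the pods and then describe the unhealthy one (the pod name below is a placeholder):
% kubectl get pods
% kubectl describe pod <pod-name>
In the describe output, look for a status like CrashLoopBackOff or a last-state reason like OOMKilled.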
Kubectl and Namespaces
Before we see how to restart a pod, we need to talk about Kubernetes and namespaces. Kubernetes uses namespaces to divide a cluster into logical groups of resources. This makes the cluster easier to administer by giving you a way to place related pods and services into separate groups.
Let's look at namespaces in action. Open a shell on your Minikube system and get a list of namespaces:
% kubectl get namespace
NAME              STATUS   AGE
default           Active   5d22h
kube-node-lease   Active   5d22h
kube-public       Active   5d22h
kube-system       Active   5d22h
This cluster has four namespaces: default, kube-node-lease, kube-public, and kube-system.
Let's get a list of pods.
% kubectl get pods
No resources found in default namespace.
It's an empty list because kubectl only retrieved the list from the default namespace. If we want to see what's running in another namespace, we'll need to specify one with the --namespace command line option:
% kubectl --namespace=kube-system get pods
NAME                               READY   STATUS    RESTARTS          AGE
coredns-6d4b75cb6d-xsqnx           1/1     Running   0                 5d22h
etcd-minikube                      1/1     Running   0                 5d22h
kube-apiserver-minikube            1/1     Running   0                 5d22h
kube-controller-manager-minikube   1/1     Running   0                 5d22h
kube-proxy-hqxbp                   1/1     Running   0                 5d22h
kube-scheduler-minikube            1/1     Running   0                 5d22h
storage-provisioner                1/1     Running   287 (4d20h ago)   5d22h
That's better. We're looking at a list of the pods Minikube uses to keep the cluster up and running.
If you don't specify a namespace to kubectl, it will assume you're referring to the default namespace. This is true whether you're starting, stopping, listing, or deleting resources. For a small cluster that's only running a single set of services, default may be the only namespace you need to worry about.
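If you'd rather see everything at once instead of switching namespaces, kubectl can also list pods across all namespaces:
% kubectl get pods --all-namespaces
The shorthand -A does the same thing.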
But when you're working on more complex systems, you may need to use different namespaces, and specifying the --namespace option can quickly grow tedious. Let's look at how you can override the default.
First, let's create a new namespace:
% kubectl create namespace nginx
namespace/nginx created
% kubectl get namespace
NAME              STATUS   AGE
default           Active   5d22h
kube-node-lease   Active   5d22h
kube-public       Active   5d22h
kube-system       Active   5d22h
nginx             Active   9s
We're going to use this namespace for the rest of the tutorial. Fortunately, kubectl has an option for setting the default namespace.
% kubectl get pods
No resources found in default namespace.
% kubectl config set-context --current --namespace=nginx
Context "minikube" modified.
% kubectl get pods
No resources found in nginx namespace.
That's better! The set-context subcommand lets us override the default namespace for the current context.
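If you ever need to confirm which namespace your current context points at, you can pull it out of your kubeconfig:
% kubectl config view --minify --output 'jsonpath={..namespace}'
After the change above, this should print nginx.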
Let's create a pod and look at how to stop, start, and restart it.
Namespaces in Kubernetes are an excellent way to manage your clusters. This is especially true when you have different teams working on a project. While namespaces are a good tool for some light isolation, they are not complete Kubernetes clusters on their own, which means you accept some serious risks and limitations when you rely on namespace-based isolation. Learn about Kubernetes namespaces vs. virtual clusters and why Kubernetes namespaces are not good enough.
Restarting Pods
We're already in our new nginx namespace. Let's create a deployment and apply it there.
Here's the description of our Nginx deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
Save this in a file named nginx_deployment.yaml and apply it with kubectl:
% kubectl apply -f nginx_deployment.yaml
deployment.apps/nginx-deployment created
% kubectl get pods -n nginx
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-544dc8b7c4-4gcdd   1/1     Running   0          5s
nginx-deployment-544dc8b7c4-9k8zb   1/1     Running   0          5s
% kubectl expose deployment nginx-deployment --port=8080 --target-port=80
service/nginx-deployment exposed
Now we have two Nginx pods running, and they're exposed as a service.
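If you want to check that the service actually responds, one option is to forward a local port to it and hit it with curl from a second terminal. This sketch assumes port 8080 is free on your machine:
% kubectl port-forward service/nginx-deployment 8080:8080 -n nginx
% curl http://localhost:8080
You should get the default Nginx welcome page back.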
Let's imagine that we discovered an issue with the Nginx configuration, and we need to force a restart so the pods reread it. What are our options?
Kubectl Scale
One way to force a restart is to use kubectl to stop the current instances and start a new set. You do this by manipulating the scale of your deployment. It's easier to show the steps than to describe them:
% kubectl get pods -n nginx
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-544dc8b7c4-4gcdd   1/1     Running   0          9m2s
nginx-deployment-544dc8b7c4-9k8zb   1/1     Running   0          9m2s
% kubectl scale deployment nginx-deployment --replicas=0 -n nginx
deployment.apps/nginx-deployment scaled
% kubectl get pods -n nginx
No resources found in nginx namespace.
% kubectl scale deployment nginx-deployment --replicas=2 -n nginx
deployment.apps/nginx-deployment scaled
% kubectl get pods -n nginx
NAME                                READY   STATUS              RESTARTS   AGE
nginx-deployment-544dc8b7c4-2sfwl   0/1     ContainerCreating   0          2s
nginx-deployment-544dc8b7c4-6nh57   0/1     ContainerCreating   0          2s
First, we listed the pods in the nginx namespace. Just as we specified in the YAML above, there are two.
Then we used kubectl scale to set the number of replicas to zero. This command is a mouthful because it requires both the deployment name and the namespace.
kubectl scale deployment <deployment name> --replicas=<num> -n <namespace>
When we get the pods a second time, they're all gone.
So, we set the scale back to two, and look again. We have two new pods running!
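You can also confirm the recovery at the deployment level instead of listing pods; the READY column should return to 2/2 once both replicas are back:
% kubectl get deployment nginx-deployment -n nginx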
Kubectl Rollout Restart
You can also force a restart by having kubectl do a rolling restart. This is a relatively new capability that restarts each pod in your deployment, one by one.
% kubectl rollout restart deployment nginx-deployment -n nginx
deployment.apps/nginx-deployment restarted
% kubectl get pods -n nginx
NAME                                READY   STATUS              RESTARTS   AGE
nginx-deployment-544dc8b7c4-2sfwl   1/1     Running             0          9m13s
nginx-deployment-544dc8b7c4-6nh57   1/1     Running             0          9m13s
nginx-deployment-999b88dbb-zsndq    0/1     ContainerCreating   0          2s
% kubectl get pods -n nginx
NAME                               READY   STATUS    RESTARTS   AGE
nginx-deployment-999b88dbb-tgv6c   1/1     Running   0          7s
nginx-deployment-999b88dbb-zsndq   1/1     Running   0          9s
If you run get pods right away, you may catch the restarts in action, as shown above.
Here again, the command is rather long.
kubectl rollout restart deployment <deployment name> -n <namespace>
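If you want to wait for the restart to finish, especially in a script, kubectl can watch the rollout and return once it's complete:
% kubectl rollout status deployment nginx-deployment -n nginx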
Kubectl Delete
Our deployment YAML specified a replica count:
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
So, Kubernetes created a replicaset for us:
% kubectl get replicaset
NAME                         DESIRED   CURRENT   READY   AGE
nginx-deployment-999b88dbb   2         2         2       2m33s
% kubectl get pods
NAME                               READY   STATUS    RESTARTS   AGE
nginx-deployment-999b88dbb-8c45h   1/1     Running   0          2m36s
nginx-deployment-999b88dbb-prqwz   1/1     Running   0          2m36s
It's named nginx-deployment-999b88dbb and has two pods in it. Their names begin with their replicaset name.
As a result, you can delete the pods in your deployment using kubectl delete replicaset.
% kubectl delete replicaset nginx-deployment-999b88dbb
replicaset.apps "nginx-deployment-999b88dbb" deleted
% kubectl get pods
NAME                               READY   STATUS    RESTARTS   AGE
nginx-deployment-999b88dbb-9lccf   1/1     Running   0          5s
nginx-deployment-999b88dbb-pdvxr   1/1     Running   0          5s
When you delete a replicaset, Kubernetes automatically creates a new one, so it restarts all your pods!
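Note that the new replicaset keeps the old name because the name is built from the deployment name and the pod template hash, and the template didn't change. To confirm it really was recreated, check its age:
% kubectl get replicaset
The AGE column should have reset to just a few seconds.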
Kubectl Get Pods
Finally, you can replace pods individually with a combination of kubectl get pods and kubectl replace.
One of kubectl's useful abilities is to generate YAML representations of Kubernetes objects:
% kubectl get pods
NAME                               READY   STATUS    RESTARTS   AGE
nginx-deployment-999b88dbb-8zpbx   1/1     Running   0          4m46s
nginx-deployment-999b88dbb-pdvxr   1/1     Running   0          8m30s
% kubectl get pod nginx-deployment-999b88dbb-pdvxr -o yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubectl.kubernetes.io/restartedAt: "2022-07-08T13:57:23-04:00"
  creationTimestamp: "2022-07-08T18:14:06Z"
(trimmed)
This YAML is in the same format as the one used in Kubernetes configuration files, so by piping this command's output into kubectl replace, you can replace the pod.
% kubectl get pod nginx-deployment-999b88dbb-9lccf -o yaml | kubectl replace --force -f -
pod "nginx-deployment-999b88dbb-9lccf" deleted
pod/nginx-deployment-999b88dbb-9lccf replaced
The -f option tells kubectl replace to expect a file. The single dash tells it that the file is standard input.
This command is useful if you only want to replace a single pod. It's also useful for pods that are not part of a replicaset or service.
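If you'd rather inspect or tweak the YAML before replacing the pod, you can write it to a file first instead of piping it directly; the file name here is arbitrary:
% kubectl get pod nginx-deployment-999b88dbb-pdvxr -o yaml > pod.yaml
% kubectl replace --force -f pod.yaml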
Kubectl Restart Pod
We've covered several approaches to restarting Kubernetes pods with kubectl. Each one has its advantages and disadvantages. Depending on why you need to restart the pods and what kind of configuration you have, manipulating a deployment's scaling factor, performing a rolling restart, or replacing a specific pod will make the most sense.
Either way, you now know how to use kubectl to restart pods! Put this new knowledge to use today!
This post was written by Eric Goebelbecker. Eric has worked in the financial markets in New York City for 25 years, developing infrastructure for market data and financial information exchange (FIX) protocol networks. He loves to talk about what makes teams effective (or not so effective!).