The popularity of Kubernetes and its ecosystem grows like a snowball rolling down Mount Everest. Countless design patterns, workload requirements, workload types, and behaviors fuel the development of Kubernetes. Every release goes a long way toward satisfying requested enhancements, and each enhancement opens new possibilities.
One of these gems is the init containers feature. It opens up many possibilities in the initialization phase of workloads.
If you want some hands-on time with init containers, here is a k8s Init Containers git repo. With this repo, you can deploy init containers in a few minutes.
What is an init container?
An init container is a type of container with a few modified operational behaviors and rules. One of the most distinctive features is that init containers are started and terminated before application containers, and they must run to completion successfully. They exist specifically to initialize the workload environment.
Let's say that we have an application at hand that requires some setup at deploy time. Completing the setup tasks in an init container ensures the environment is ready before the application starts and also achieves our goal of keeping the application container image lean.
A different way to perform these setup tasks is to use some tooling in the application container, such as a shell script. The script would ensure resources are created and available, perform disk operations, read ConfigMaps, and so on, before running the application itself.
Motivations for using init containers
Using distroless container images as base images almost always means using init containers if the application requires tasks to be run during initialization. Since we can't install tooling into the application container image, we need a separate image, used by the init container, just for initialization.
Other reasons to use init containers include security improvements, such as running privileged tasks before the app runs, generating configurations using secrets that would not be available at runtime, resource availability checks, database migrations, and so on. Essentially, any task that needs to be completed before the app runs falls into this category.
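As a minimal sketch of this pattern (the app image and the postgresql Service name are illustrative, not from a real deployment), a lightweight init container waits for a database service to become resolvable, so the distroless app image needs no tooling of its own:

apiVersion: v1
kind: Pod
metadata:
  name: app-with-init
spec:
  initContainers:
    # Blocks until the "postgresql" Service resolves in cluster DNS.
    - name: init-wait-for-db
      image: busybox:1.36
      command: ['sh', '-c', 'until nslookup postgresql; do echo waiting for postgresql; sleep 2; done']
  containers:
    # Hypothetical distroless application image; no shell or tooling inside.
    - name: app
      image: example/app:distroless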
Lifecycle of init containers
Here is an overview of the lifecycle, from applying manifests via kubectl to the pod receiving traffic: the pod is scheduled to a node, the kubelet sets up storage and networking, the init containers run sequentially to completion, the regular containers start, and once the readiness checks pass, the pod receives traffic.
Init containers' features and behaviors
An init container is a container type with specific differences geared toward the initialization of a pod. The kubelet executes init containers before any other containers in the pod, after ensuring storage and network services are up. If there is more than one init container, they run sequentially in the order they appear in the configuration.
The kubelet expects each init container to run to completion; before it starts the next init container, the current one must complete successfully. The kubelet restarts a failing init container until it succeeds.
Running sequentially
Each init container must complete successfully with a zero exit code before the next init container, if any is defined, can run. If an init container returns a non-zero exit code, the kubelet restarts it. Once all init containers have run to completion successfully (zero exit code), the kubelet starts the regular containers in parallel.
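Here is a minimal sketch of this ordering (pod name, images, and commands are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: init-order-demo
spec:
  initContainers:
    # Runs first; must exit with code zero before init-step-2 starts.
    - name: init-step-1
      image: busybox:1.36
      command: ['sh', '-c', 'echo step 1']
    # Runs second, only after init-step-1 completes successfully.
    - name: init-step-2
      image: busybox:1.36
      command: ['sh', '-c', 'echo step 2']
  containers:
    # Started once all init containers have succeeded.
    - name: app
      image: busybox:1.36
      command: ['sh', '-c', 'echo app running; sleep 3600']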
Restart policy
A non-zero exit code triggers a restart of the init container, depending on pod.spec.restartPolicy. If the pod's restartPolicy is set to "Always," init containers are restarted as if the restartPolicy were set to "OnFailure." If the pod's restartPolicy is set to "Never," however, init containers aren't restarted on failure, and the pod is deemed to have failed permanently.
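A sketch of the "Never" case (names and commands are illustrative): the failing init container is not retried, and the pod ends up in a failed state.

apiVersion: v1
kind: Pod
metadata:
  name: restart-policy-demo
spec:
  # With "Always" (the default), the failing init container below would be
  # restarted as if the policy were "OnFailure"; with "Never," it isn't.
  restartPolicy: Never
  initContainers:
    - name: init-fails
      image: busybox:1.36
      command: ['sh', '-c', 'exit 1']   # non-zero exit code; the pod fails permanently
  containers:
    - name: app
      image: busybox:1.36
      command: ['sh', '-c', 'sleep 3600']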
Updating init container images
A change in the image used for an init container doesn't warrant a restart of or change in the pod. The kubelet uses the updated image in the next cycle of the pod.
Fields that aren't supported
Although init containers and regular containers share the same Container object spec, some fields aren't allowed to be set for init containers: lifecycle, livenessProbe, readinessProbe, and startupProbe.
These fields enable advanced lifecycle state transitions, which don't apply to init containers, since their success is defined by exit code only. If these fields are set, validation of the PodSpec fails.
Requests and limits calculation
Resource management is one of the most important jobs of a cluster. Init containers require resources to run to completion, so setting resource requests and limits for them makes sense.
The actual resource requests and limits used during the scheduling of a pod are called the effective resource requests/limits, and they are calculated slightly differently when init containers exist in the pod.
For init containers, the scheduler traverses all init containers and finds the highest request or limit among them. This highest value is the effective init request/limit.
Of course, a pod is the smallest unit that Kubernetes deploys, so we need to think about the pod's effective requests/limits. A pod's effective request/limit is calculated by comparing the effective init request/limit with the sum of requests/limits of all regular containers. The higher of the two for a particular resource is selected and used in pod scheduling.
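Here is a sketch of the calculation with illustrative CPU requests: the effective init request is max(200m, 500m) = 500m, the regular containers sum to 300m + 400m = 700m, so the pod's effective CPU request used for scheduling is max(500m, 700m) = 700m.

apiVersion: v1
kind: Pod
metadata:
  name: effective-resources-demo
spec:
  initContainers:
    - name: init-a
      image: busybox:1.36
      command: ['sh', '-c', 'true']
      resources:
        requests:
          cpu: 200m
    - name: init-b
      image: busybox:1.36
      command: ['sh', '-c', 'true']
      resources:
        requests:
          cpu: 500m   # effective init request: max(200m, 500m) = 500m
  containers:
    - name: app-1
      image: busybox:1.36
      command: ['sh', '-c', 'sleep 3600']
      resources:
        requests:
          cpu: 300m
    - name: app-2
      image: busybox:1.36
      command: ['sh', '-c', 'sleep 3600']
      resources:
        requests:
          cpu: 400m   # regular containers sum: 300m + 400m = 700m
  # Pod's effective CPU request: max(500m, 700m) = 700m, used for scheduling.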
This behavior allows us to use different resource requests/limits for the init containers, which is excellent. At the same time, though, the two are tangled: we need to bear in mind that we allocate resources for containers that only run at pod start, which might prevent the scheduler from finding an available node.
Separation of concerns
Using the init container approach means that we have at least two different container images: one for the init container, which includes all the tooling for setup, and another for the application itself. Having separate images allows us to distribute ownership of the images to different teams.
Security
Using init containers can bring security benefits. Let's assume that the app requires some assets from a git repository or cloud storage. Instead of giving access to the secrets/credentials to the app container, where an unauthorized party might access them, we can provide access only to the init container. Reducing access this way means exposure of the secrets/credentials is short-lived and harder to exploit.
Another aspect is that now we don't need to install tools that would only be used when the pod starts. Reducing the number of packages and tools reduces the attack surface. By moving the tools required at startup to init containers, you can use distroless images for your apps and let the init containers work on those tasks.
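As a sketch of this pattern (the image names, the fetch-assets.sh script, and the Secret name are hypothetical), the credentials are mounted only into the init container, while the app container sees only the fetched assets:

apiVersion: v1
kind: Pod
metadata:
  name: secret-scope-demo
spec:
  initContainers:
    # Only this container can read the credentials; it fetches assets into
    # the shared volume and exits.
    - name: init-fetch-assets
      image: example/app-init:latest                      # hypothetical init image with fetch tooling
      command: ['sh', '-c', 'fetch-assets.sh /assets']    # hypothetical script
      volumeMounts:
        - name: assets
          mountPath: /assets
        - name: repo-credentials
          mountPath: /secrets
          readOnly: true
  containers:
    # The app container mounts only the fetched assets, never the credentials.
    - name: app
      image: example/app:latest                           # hypothetical distroless app image
      volumeMounts:
        - name: assets
          mountPath: /assets
          readOnly: true
  volumes:
    - name: assets
      emptyDir: {}
    - name: repo-credentials
      secret:
        secretName: repo-credentials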
Configuring init containers
Init containers are defined in the pod.spec.initContainers array, whereas regular containers are defined in the pod.spec.containers array. Both hold Container objects.
pod.spec is defined in the Kubernetes source code excerpted below; we can see that InitContainers and Containers are arrays of the Container type.
// File: https://github.com/kubernetes/kubernetes/blob/e6c093d87ea4cbb530a7b2ae91e54c0842d8308a/pkg/apis/core/types.go#L2813
...
// PodSpec is a description of a pod
type PodSpec struct {
    Volumes []Volume
    // List of initialization containers belonging to the pod.
    InitContainers []Container
    // List of containers belonging to the pod.
    Containers []Container
    ...
From a configuration point of view, there is one slight difference: as mentioned above, the lifecycle, livenessProbe, readinessProbe, and startupProbe fields are not supported for init containers.
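For instance, a spec like the following sketch (names are illustrative) is rejected by the API server because the init container sets a readinessProbe:

apiVersion: v1
kind: Pod
metadata:
  name: invalid-init-probe
spec:
  initContainers:
    - name: init-bad
      image: busybox:1.36
      command: ['sh', '-c', 'true']
      readinessProbe:        # not allowed on init containers; PodSpec validation fails
        exec:
          command: ['true']
  containers:
    - name: app
      image: busybox:1.36
      command: ['sh', '-c', 'sleep 3600']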
Debugging and troubleshooting of init containers
We can gather information regarding init containers directly and indirectly. Before we dive into the individual init container statuses, let's look at pod-wide indicators.
Pod status
During the pod's initialization, some init-container-related statuses are displayed in the pod status. These valuable outputs are shown with a prefix of "Init:"
Pod Status                  Description
Init:N/M                    N init containers have executed to completion successfully so far, out of a total of M init containers.
Init:Error                  An init container has failed to execute to completion.
Init:CrashLoopBackOff       An init container is failing repeatedly, and the kubelet is applying its back-off algorithm.
Pending                     Init container execution hasn't started yet.
PodInitializing or Running  All init containers executed to completion with a zero exit code.
Let's see these states in a couple of examples.
kubectl get pods
NAME READY STATUS RESTARTS AGE
...
k8s-init-containers-668b46c54d-kg4qm 0/1 Init:1/2 1 8s
The Init:1/2 status tells us there are two init containers, and one of them has run to completion.
kubectl get pods
NAME READY STATUS RESTARTS AGE
...
k8s-init-containers-668b46c54d-kg4qm 0/1 Init:CrashLoopBackOff 5 4m12s
An Init:CrashLoopBackOff status points out that one of the init containers is failing repeatedly. At this point, the kubelet restarts the failed init container until it reaches successful completion.
Let's use the kubectl get pods command with the --watch flag to see the pod status changes more clearly. This flag enables the command to watch the objects and output the changes.
kubectl get pods --watch
NAME READY STATUS RESTARTS AGE
k8s-init-containers-64f984c8d7-tdjrg 0/1 Pending 0 0s
k8s-init-containers-64f984c8d7-tdjrg 0/1 Pending 0 4s
k8s-init-containers-64f984c8d7-tdjrg 0/1 Init:0/4 0 4s
k8s-init-containers-64f984c8d7-tdjrg 0/1 Init:0/4 0 6s
k8s-init-containers-64f984c8d7-tdjrg 0/1 Init:Error 0 7s
k8s-init-containers-64f984c8d7-tdjrg 0/1 Init:1/4 1 8s
k8s-init-containers-64f984c8d7-tdjrg 0/1 Init:1/4 1 9s
k8s-init-containers-64f984c8d7-tdjrg 0/1 Init:2/4 1 43s
k8s-init-containers-64f984c8d7-tdjrg 0/1 Init:3/4 1 44s
k8s-init-containers-64f984c8d7-tdjrg 0/1 PodInitializing 0 45s
k8s-init-containers-64f984c8d7-tdjrg 1/1 Running 0 46s
At first, the pod is in the Pending state: the cluster has accepted the pod, but it hasn't been scheduled yet. We can see that there are four init containers (Init:x/4); next, the init containers are started sequentially. One of the init containers has an error (Init:Error), and it is restarted (RESTARTS: 1). After the restart, the init container executes to completion successfully. The next init container is started, and the rest of the init containers also run to successful completion. The pod status moves from the PodInitializing state to the Running state.
Pod conditions
The pod conditions are high-level status views of the pod and a good place to start. The kubelet populates the pod.status.conditions array with PodCondition objects.
There are four condition types (PodScheduled, Initialized, ContainersReady, and Ready), and each of them has a status field.
The status field's value can be True, False, or Unknown, indicating whether the pod passed the condition's requirements or not.
The most relevant condition type from the perspective of init containers is "Initialized." Once every init container runs to completion with success, the pod's "Initialized" condition's status is set to True, signaling that initialization finished successfully.
...
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
...
Here is a Conditions array when there is a failure in the init containers. As the initialization hasn't finished, the rest of the containers are not run, and the kubelet restarts the failed init container.
...
Conditions:
Type Status
Initialized False
Ready False
ContainersReady False
PodScheduled True
...
For programmatic access, you can gather a list of pod names and the status of their Initialized condition with the following JSONPath expression.
kubectl get pods -o=jsonpath='{"Pod Name, Condition Initialized"}{"\n"}{range .items[*]}{.metadata.name},{@.status.conditions[?(@.type=="Initialized")].status}{"\n"}{end}'
Pod Name, Condition Initialized
buildkit-builder0-76784c68c7-c52nw,True
k8s-init-containers-b4ddb8ffc-2sh4m,True
postgresql-postgresql-0,True
redis-master-0,True
ContainerStatus object (in pod.status.initContainerStatuses[])
The status of the init containers can be found under the pod.status.initContainerStatuses array, which holds a ContainerStatus object for each init container.
kubectl get pods k8s-init-containers-64f984c8d7-tdjrg -o yaml
...
initContainerStatuses:
- containerID: containerd://b60870084a2065d…
image: docker.io/loftsh/app-init:FLiBwNW
imageID: docker.io/loftsh/app-init@sha256:e46e2…
lastState: {}
name: init-fetch-files
ready: true
restartCount: 2
state:
terminated:
containerID: containerd://b60870084a2065def2d289f7…
exitCode: 0
finishedAt: "2022-02-27T02:09:37Z"
reason: Completed
startedAt: "2022-02-27T02:09:37Z"
...
We can see that the "init-fetch-files" init container has been restarted two times; on the last restart, it ran to completion, exiting with a zero exit code.
Container state field
The state field we've seen in ContainerStatus is also available in the kubectl describe pods <pod-name> command output. Let's examine a pod that has multiple init containers: we use a pod with two init containers and an app container for this example.
To gather further information, let's examine the kubectl describe pod <pod-name> command's output.
kubectl describe pod k8s-init-containers-668b46c54d-kg4qm
Some of the output is cut for brevity; code blocks have ellipses to indicate the cuts.
Look at the output for the first init container, "init-fetch-files." The container's state is Terminated, the reason is Completed, and the Exit Code is zero. The container was executed and exited with a zero code on the first try (Restart Count is zero).
…
Init Containers:
init-fetch-files:
Container ID: containerd://31ac3…
Image: loftsh/app-init:rQyZJBK
Image ID: docker.io/loftsh/app-init@sha256:79ce…
Port: <none>
Host Port: <none>
Command:
fetchFiles.sh
State: Terminated
Reason: Completed
Exit Code: 0
Started: Tue, 15 Mar 2022 13:54:39 +0000
Finished: Tue, 15 Mar 2022 13:54:39 +0000
Ready: True
Restart Count: 0
Environment:
PUBLIC_ROOT: /data/public
Mounts:
/data from app-data (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-44vbg (ro)
…
Let's examine the second init container, "init-check-services": the state is Waiting, the reason is CrashLoopBackOff, and the Exit Code is one, a non-zero value indicating an error. The Restart Count field shows that the container has been restarted 223 times; clearly, there are problems.
…
Init Containers:
…
init-check-services:
Container ID: containerd://eb1dbae93999a63fed7…
Image: loftsh/app-init:kxPrHjW
Image ID: docker.io/loftsh/app-init@sha256:5…
Port: <none>
Host Port: <none>
Command:
checkServices.sh
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Tue, 15 Mar 2022 14:11:23 +0000
Finished: Tue, 15 Mar 2022 14:11:23 +0000
Ready: False
Restart Count: 223
Environment:
APP_NAME: app
SERVICES: redis-master,postgresql
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ftmj6 (ro)
Any idea what the next container's state would be?
Lastly, let's look at the app container's state. The state is Waiting, and the reason is set to PodInitializing; since not all init containers have run to completion successfully, the pod is still in the initializing state.
…
Containers:
app:
Container ID:
Image: loftsh/app:sNKfQGb
Image ID:
Port: <none>
Host Port: <none>
State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Environment:
APP_NAME: app
Mounts:
/data from app-data (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-44vbg (ro)
The pod conditions are also set appropriately.
…
Conditions:
Type Status
Initialized False
Ready False
ContainersReady False
PodScheduled True
…
You might have noticed that the State field's output differs for each container state; there are three types of container state, each with its own fields: Waiting, Running, and Terminated.
If multiple states are recorded, then a Last State field is added, which aids in troubleshooting. An example of Last State is as follows:
...
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Fri, 04 Mar 2022 07:16:21 +0000
Finished: Fri, 04 Mar 2022 07:16:21 +0000
...
The output shows that the container terminated with a non-zero exit code (1).
Container logs
You can access an init container's logs the same way you do a regular container's, selecting the container with the -c flag.
kubectl logs k8s-init-containers-668b46c54d-kg4qm -c init-check-services
At least one (app-redis) of the services (app-redis app-postgresql app-mongodb) is not available yet.
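If an init container keeps crashing, the --previous (or -p) flag of kubectl logs prints the logs of the last terminated instance of the container, which often contains the actual error:

kubectl logs k8s-init-containers-668b46c54d-kg4qm -c init-check-services --previous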
Events
Kubernetes events are also a good source of information.
kubectl get events
LAST SEEN TYPE REASON OBJECT MESSAGE
81s Warning BackOff pod/k8s-init-containers-5c694cd678-gr8zg Back-off restarting the failed container
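To narrow the output down to a single pod, you can filter events with a field selector; for example, for the pod above:

kubectl get events --field-selector involvedObject.kind=Pod,involvedObject.name=k8s-init-containers-5c694cd678-gr8zg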
Conclusion
Init containers bring a different mindset to the initialization phase of an app or service. As we've seen in the earlier sections, there are many benefits to using init containers, ranging from enhanced security and separation of ownership to keeping app containers as lean as possible. Configuration, troubleshooting, and monitoring of init containers aren't much different from regular containers, apart from the few behavioral differences you need to consider.