Managing Pods in Kubernetes

As mentioned before, a Pod is Kubernetes’ most atomic deployable unit. Containers must always be placed within a Pod. A Pod always runs on exactly one node (which in turn lives in a particular region or data center). Pods have a life cycle with the phases Pending, Running, Succeeded, Failed and Unknown. Pods live and die but never come back to life. Pods can be (and usually are) scaled horizontally by being cloned/replicated in what is called a ReplicaSet.
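
The STATUS column of kubectl get pods usually tells you enough about the life cycle, but you can also query the phase directly (using the nginx Pod that is created further below as an example):

# show the current life cycle phase of a Pod
kubectl get pod nginx -o jsonpath='{.status.phase}'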

Technically speaking, a Pod is a shared execution environment: all containers within a Pod share the same network namespace (IP address and ports), IPC and volumes. Unfortunately, this also makes it tempting to couple containers more tightly than they need to be. Although there are use cases where it makes sense to have multiple containers in one Pod, it is generally advised to design your code so that a single container runs in its own Pod. Keep in mind that “scaling your application with Kubernetes” means scaling your Pods. That’s why it is important to keep containers loosely coupled.
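
As a minimal sketch of such a shared execution environment, here is a two-container Pod (names are hypothetical) in which a sidecar writes into a volume that the nginx container serves; both containers can also reach each other via localhost:

apiVersion: v1
kind: Pod
metadata:
  name: shared-demo            # hypothetical name
spec:
  volumes:
  - name: html                 # shared scratch space, lives as long as the Pod
    emptyDir: {}
  containers:
  - name: web
    image: nginx:alpine
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html
  - name: content-writer       # hypothetical sidecar
    image: busybox
    command: ["sh", "-c", "while true; do date > /html/index.html; sleep 5; done"]
    volumeMounts:
    - name: html
      mountPath: /html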

A Pod also adds features on top of the plain container, such as health checks that tell Kubernetes whether your container is reachable (Probes), whether the Pod should prefer a specific node when it is scheduled (Affinity), and how the Pod should behave when it restarts or terminates (restart policies and termination control).
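
A sketch of how restart policy and (preferred) node affinity are expressed in a Pod spec; the label key and value are assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: policy-demo            # hypothetical name
spec:
  restartPolicy: OnFailure     # Always (default), OnFailure or Never
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:   # "prefer", don't "require"
      - weight: 1
        preference:
          matchExpressions:
          - key: disktype
            operator: In
            values:
            - ssd
  containers:
  - name: app
    image: nginx:alpine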

You can and should configure resource requests and limits for your Pods, for example to cap the maximum amount of CPU a container is allowed to use.
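
A minimal sketch of requests and limits in a Pod spec (the concrete values are just placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: limited-nginx          # hypothetical name
spec:
  containers:
  - name: nginx
    image: nginx:alpine
    resources:
      requests:                # guaranteed minimum, used for scheduling decisions
        cpu: 100m
        memory: 128Mi
      limits:                  # hard ceiling the container may not exceed
        cpu: 500m
        memory: 256Mi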

If your containerized app needs access to the Kubernetes API server, it uses a Service Account; a default Service Account token is automatically mounted into each Pod.
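
If the default Service Account is not enough, you can reference your own. A sketch, assuming a Service Account named my-app-sa already exists in the namespace:

apiVersion: v1
kind: Pod
metadata:
  name: api-client             # hypothetical name
spec:
  serviceAccountName: my-app-sa          # falls back to "default" if omitted
  automountServiceAccountToken: true     # token is mounted under /var/run/secrets/kubernetes.io/serviceaccount
  containers:
  - name: app
    image: curlimages/curl
    command: ["sleep", "3600"]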

Pods can be created declaratively (preferred) by writing a yml file first and then applying it, or imperatively by running:

# create a Pod from the nginx:alpine image (recent kubectl versions create only a Pod here, no ReplicaSet)
kubectl run <pod-name> --image=nginx:alpine

Pods are defined declaratively via a pod.yml. For example:

# file.pod.yml
apiVersion: v1
kind: Pod
metadata:
  name: nginx  # name of the Pod
  labels:
    env: test
spec:          # the Pod specification: containers, scheduling, volumes etc.
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80 # port the container listens on
    imagePullPolicy: IfNotPresent
  nodeSelector:         # only schedule onto nodes labeled disktype=ssd
    disktype: ssd

After writing the yml file you can give it a trial run with kubectl create -f file.pod.yml --dry-run=client --validate=true. Add --save-config if you want to use kubectl apply in the future. Even better: skip kubectl create entirely and always create your Pod spec with kubectl apply -f file.pod.yml. That way you can create the Pod and use the same command to update its specification later. Be aware, though, that most fields of a running Pod’s spec (such as ports) are immutable and cannot be changed using apply.

Next, leave the dry-run and validate parameters out to actually create the Pod.
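
Put together, the workflow could look like this:

# trial run: validate the manifest without touching the cluster
kubectl create -f file.pod.yml --dry-run=client --validate=true

# actually create the Pod (apply also records the config for later updates)
kubectl apply -f file.pod.yml

# after editing file.pod.yml, the same command updates the Pod
kubectl apply -f file.pod.yml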

What happens: the yml file is posted to the API server, the request is authenticated and authorized, the configuration is persisted to the cluster store (etcd) on the control plane (master) node, and the scheduler assigns the Pod to a node.

In-place/non-disruptive changes can also be made to a Pod using kubectl edit or kubectl patch (for a particular property).
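
For example (the image tag is just an illustration; the container image is one of the few fields that can be changed on a running Pod):

# open the live Pod spec in your default editor
kubectl edit pod nginx

# patch a single property in place, here the container image
kubectl patch pod nginx -p '{"spec":{"containers":[{"name":"nginx","image":"nginx:1.25-alpine"}]}}'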

Get a list of pods with kubectl get pods --watch. Add -o wide for useful additional columns. Get detailed info about a pod with kubectl describe pods <pod-name>.

The IP address of a Pod is only reachable from within the cluster (with default settings). You can use kubectl port-forward pod/<pod-name> 8080:80 (local:pod) to make it available outside the cluster, e.g. from your developer machine.
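
For example:

# forward local port 8080 to port 80 of the Pod (runs until you press Ctrl+C)
kubectl port-forward pod/nginx 8080:80

# in a second terminal, reach the Pod from your machine
curl http://localhost:8080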

You can kubectl delete pod <pod-name> or kubectl delete -f file.pod.yml. Be aware that if the Pod is managed by a Deployment (or its ReplicaSet), Kubernetes will spin up a replacement Pod a few seconds after your deletion. So don’t be fooled when kubectl get pods shows a Pod with the same name but a different ID. If you want to get rid of such a Pod and not have it respawn, you have to kubectl delete deployment <deployment-name>.

A Probe is a diagnostic performed periodically by the kubelet on a container, returning either Success, Failure or Unknown. A Liveness Probe checks whether a container is alive, healthy and running as expected (ask: “When should a container restart?”). You can check the health via ExecAction (exec), TCPSocketAction (tcpSocket) or HTTPGetAction (httpGet), as in this example:

apiVersion: v1
kind: Pod
metadata:
  name: lr-nginx
  labels:
    app: nginx
    rel: stable
spec:
  containers:
    - name: lr-nginx
      image: nginx:alpine
      ports:
        - containerPort: 80
      livenessProbe:
        httpGet:
          path: /index.html
          port: 80
        initialDelaySeconds: 15
        timeoutSeconds: 2
        periodSeconds: 5 # check every 5 seconds
        failureThreshold: 1 # restart the container after 1 failed probe
      readinessProbe:
        httpGet:
          path: /index.html
          port: 80
        initialDelaySeconds: 15
        timeoutSeconds: 2
        periodSeconds: 5 # check every 5 seconds
        failureThreshold: 1 # mark the Pod NotReady after 1 failed probe

A Readiness Probe can be used to determine if a Pod should receive requests (Ask: “When should a container start receiving traffic?”).
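
The other two probe mechanisms look like this inside a container spec (a sketch; /tmp/healthy is a hypothetical file your app would create and refresh):

      livenessProbe:
        exec:                  # run a command in the container; exit code 0 means Success
          command: ["cat", "/tmp/healthy"]
      readinessProbe:
        tcpSocket:             # Success if a TCP connection to the port can be opened
          port: 80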
