Pods are the smallest, most basic deployable objects in Kubernetes. A pod represents a single instance of a running process in your cluster. You can think of a pod as a self-contained, isolated “logical host” that holds everything the application it serves needs to run.

In this article, you will learn about Kubernetes pods, their use cases, and their lifecycle, as well as how to use pods to deploy an application.

What Is A K8s Pod?

Among all the objects in Kubernetes, the pod is the smallest building block. Within a cluster, a pod represents a running process. A pod can contain one or more containers, and the containers within a single pod share:

  • A unique network IP
  • The same network namespace and ports
  • Storage
  • Any additional specifications you’ve applied to the pod

An alternative way to think of a pod is as a “logical host” that is specific to your application and holds one or more tightly coupled containers. For example, suppose we have an app container and a logging container in one pod. The only responsibility of the logging container is to pull logs from the app container. Placing these containers in one pod eliminates extra communication setup: because they’re co-located, everything is local and they share all their resources. This is similar to running both processes on the same physical server in a pre-container world.

There are other things you can do with pods as well. You might have an init container that prepares the environment for a second container; the init container runs first and must complete its job, and only then does the second container start up and begin serving.
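To make this concrete, here is a minimal sketch of what such a pod could look like (the names, image, and commands are placeholders for illustration, not part of the example app later in this article). The init container is declared under initContainers and must run to completion before the main container starts.

apiVersion: v1
kind: Pod
metadata:
  name: init-demo                  # hypothetical pod name for illustration
spec:
  initContainers:
  - name: setup                    # runs first and must exit successfully
    image: busybox
    command: ["sh", "-c", "echo preparing the environment && sleep 5"]
  containers:
  - name: app                      # starts only after the init container has completed
    image: nginx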

Pod Model Types

You can create two types of pod models, which are as follows:
One-container-per-pod: This is the most common model. The pod acts as a “wrapper” for a single container. Because the pod is the smallest object Kubernetes knows about, Kubernetes manages pods rather than managing the containers directly.
Multi-container-pod: In this model, a pod holds multiple co-located containers that are tightly coupled and share resources. These containers work as one cohesive unit of service. The pod wraps these containers, together with their storage resources, into a single unit. Example use cases include sidecars, proxies, and logging.

Each pod runs a single instance of your application. If you need to scale the app horizontally (for example, running several replicas), you use one pod per instance. This is different from running multiple containers of the same app within a single pod.

It is important to highlight that pods are not intended to be durable entities. If a node fails, or if you take a node down for maintenance, its pods do not survive. To overcome this, Kubernetes has controllers: in general, you create pods through a controller rather than directly, and the controller recreates them when they are lost.
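As a rough sketch of that idea (the names and replica count here are illustrative assumptions), a Deployment controller creates and replaces pods from a template instead of you managing each pod yourself:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment           # hypothetical name for illustration
spec:
  replicas: 3                      # the controller keeps three pod replicas running at all times
  selector:
    matchLabels:
      app: myapp
  template:                        # pod template used to (re)create pods when they are lost
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: nginx

If a node goes down, the controller schedules replacement pods on the remaining nodes.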

Pod Lifecycle Phases

A pod’s status tells us where the pod is in its lifecycle. It is only meant to give a rough idea, not a precise state, so it is good practice to debug further if the pod does not come up cleanly. The five phases of a pod’s lifecycle are:

Pending: The pod has been accepted by the cluster, but at least one of its container images has not been created yet.
Running: The pod has been bound to a node and all of its containers have been created. At least one container is running, or is in the process of starting or restarting.
Succeeded: All containers in the pod terminated successfully and will not be restarted.
Failed: All containers in the pod have terminated, and at least one container failed, meaning it exited with a non-zero status.
Unknown: The state of the pod could not be obtained.
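If you want to read the phase directly, kubectl can print it from the pod’s status field (assuming a pod named firstpod, like the one created later in this article):

kubectl get pod firstpod -o jsonpath='{.status.phase}'
# prints one of: Pending, Running, Succeeded, Failed, Unknown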

Pods In Practice
Now that you know what a pod is in theory, it is time to see how it looks in practice. First, we will go over a simple pod manifest; then, we will deploy an example app to show how you can work with it.

The Manifest (YAML)

The manifest can be broken down into four parts:

apiVersion– The version of the Kubernetes API you are using.
kind– The kind of object you wish to create.
metadata– Information that uniquely identifies the object, such as its name or namespace.
spec– The desired configuration of the pod, for example, the image name, container name, volumes, etc.

The apiVersion, kind, and metadata fields are compulsory and apply to all Kubernetes objects, not just pods. The spec is also required, but its layout varies across object types. The example manifest below shows what a single-container pod spec looks like.

apiVersion: "api version"              (1)
kind: "object to create"                  (2)
Metadata:                   (3)
  Name: "Pod name"
  labels:
    App: "label value"
Spec:                       (4)
  containers:
  - name: "container name"
    image: "image to use for container"

Now that you have seen what the manifest looks like, let’s go through both models for creating a pod.

Single Container Pod

The pod-1.yaml file is the manifest for our single-container pod. It runs an nginx container that serves something for us.

apiVersion: v1
kind: Pod
metadata:
  name: firstpod
  labels:
    app: myapp
spec:
  containers:
  - name: my-first-pod
    image: nginx

Moving forward, we deploy this manifest into our local Kubernetes cluster by running kubectl create -f pod-1.yaml. Afterwards, we run kubectl get pods to confirm that our pod is running as expected.

kubectl get pod
NAME                                          READY     STATUS    RESTARTS   AGE
firstpod                                      1/1       Running   0          45s

You can see that it is now running. To confirm that nginx is actually serving inside it, run kubectl exec firstpod --kubeconfig=kubeconfig -- service nginx status. This runs a command inside our pod by passing in -- service nginx status. (Note: this is similar to running docker exec.)

kubectl exec firstpod -- service nginx status
nginx is running.

Now, we’ll clean up by running kubectl delete pod firstpod.

kubectl delete pod firstpod
pod "firstpod" deleted

Multi-Container Manifest

In this example, we will deploy something more useful: a pod with multiple containers that work as a single entity. One container writes the current date to a file every 10 seconds, while the other container serves those logs to us.

Go ahead and deploy the pod-2.yaml manifest with kubectl create -f pod-2.yaml.

apiVersion: v1
kind: Pod
metadata:
  name: multi-container-pod # Name of our pod
spec:
  volumes:
  - name: shared-date-logs  # Creating a shared volume for my containers
    emptyDir: {}
  containers:
  - name: container-writing-dates # Name of first container
    image: alpine # Image to use for first container
    command: ["/bin/sh"]
    args: ["-c", "while true; do date >> /var/log/output.txt; sleep 10;done"] # writing date every 10secs
    volumeMounts:
    - name: shared-date-logs
      mountPath: /var/log # Mounting log dir so app can write to it.
  - name: container-serving-dates # Name of second container
    image: nginx:1.7.9 # Image for second container
    ports:
      - containerPort: 80 # Defining what port to use.
    volumeMounts:
    - name: shared-date-logs
      mountPath: /usr/share/nginx/html # Where nginx will serve the written file

Here, take a brief pause to understand volumes in pods. In the example above, the volume provides a way for the containers to communicate during the pod’s lifetime. If you delete the pod and recreate it, any data stored in the shared volume is lost. PersistentVolume objects solve this problem, so that your data survives beyond the loss of a pod. This multi-container example is used not only to demonstrate how to create a two-container pod, but also to show how both containers share resources.
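As a brief sketch of that alternative (the claim name and storage size here are assumptions, not part of this example), you would define a PersistentVolumeClaim and reference it from the pod’s volumes section instead of emptyDir:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-date-logs-pvc       # hypothetical claim name for illustration
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi                 # storage requested from the cluster

The pod would then use "persistentVolumeClaim: {claimName: shared-date-logs-pvc}" in place of "emptyDir: {}", and the written dates would survive pod deletion.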

kubectl create -f pod-2.yaml
pod "multi-container-pod" created

Afterward, we run kubectl get pods again to confirm that it is actually deployed.

Once you can see that it is running, it is time to make sure that our second container is serving the dates.

Check that both containers are in our pod by running kubectl describe pod "pod name". This command shows what the created object looks like.

Containers:
  container-writing-dates:
    Container ID:  docker://e5274fb901cf276ed5d94b625b36f240e3ca7f1a89cbe74b3c492347e98c7a5b
    Image:         alpine
    Image ID:      docker-pullable://alpine@sha256:621c2f39f8133acb8e64023a94dbdf0d5ca81896102b9e57c0dc184cadaf5528
    Port:          
    Host Port:     
    Command:
      /bin/sh
    Args:
      -c
      while true; do date >> /var/log/output.txt; sleep 10;done
    State:          Running
      Started:      Fri, 16 Nov 2018 11:31:44 -0700
    Ready:          True
    Restart Count:  0
    Environment:    
    Mounts:
      /var/log from shared-date-logs (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-8dl5j (ro)
  container-serving-dates:
    Container ID:   docker://f9c85f3fe398c3197644fb117dc1681635268903b3bba43aa0a1d151fab6ad22
    Image:          nginx:1.7.9
    Image ID:       docker-pullable://nginx@sha256:e3456c851a152494c3e4ff5fcc26f240206abac0c9d794affb40e0714846c451
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Fri, 16 Nov 2018 11:31:44 -0700
    Ready:          True
    Restart Count:  0
    Environment:    
    Mounts:
      /usr/share/nginx/html from shared-date-logs (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-8dl5j (ro)

Now that both containers are running, it is time to make sure each is doing its assigned job. Connect to the serving container by running kubectl exec -ti multi-container-pod -c container-serving-dates --kubeconfig=kubeconfig -- bash. Now we are inside the container.

Finally, we run curl 'http://localhost:80/output.txt' inside the container and it should serve our file. (If you don't have curl installed in the container, first run apt-get update && apt-get install curl, then run curl 'http://localhost:80/output.txt' again.)

curl 'http://localhost:80/output.txt'
Fri Nov 16 18:31:44 UTC 2018
Fri Nov 16 18:31:54 UTC 2018
Fri Nov 16 18:32:04 UTC 2018
Fri Nov 16 18:32:14 UTC 2018
Fri Nov 16 18:32:24 UTC 2018
Fri Nov 16 18:32:34 UTC 2018
Fri Nov 16 18:32:44 UTC 2018
Fri Nov 16 18:32:54 UTC 2018
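As a quicker check that skips the interactive shell, you could also read the file straight from the writing container (assuming the pod and container names used above):

kubectl exec multi-container-pod -c container-writing-dates -- cat /var/log/output.txt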
Team IdeaClan

A martech company with 10+ years in the market, applying technology and media-buying skills to transform the face of digital marketing.