
Kubernetes Liveness Probe vs Readiness Probe

Overview:

In this article, I would like to show you the difference between the Kubernetes liveness probe and the readiness probe, which we configure in the Pod deployment yaml to monitor the health of the pods in the Kubernetes cluster.

Need For Probes:

A Pod is a collection of one or more docker containers. It is the atomic unit of scaling in Kubernetes. A Pod has a life-cycle with multiple phases. For example, when we deploy a pod in the Kubernetes cluster, Kubernetes starts by scheduling the pod on one of the nodes in the cluster, pulling the docker image, starting the containers, and ensuring that the containers are ready to serve traffic.

As a Pod is a collection of docker containers, all the containers must be ready before the pod can accept any incoming request. This takes some time – usually under a minute, but it mostly depends on the application. So, as soon as we send a deployment request, our application is not yet ready to serve! Also, we know that software will eventually fail; anything could happen. For example, a memory leak could lead to an OOM error in a few hours or days. In that case, Kubernetes has to kill the pod that is not working as expected and reschedule another pod to handle the load on the cluster.

The Kubernetes liveness probe and readiness probe are the tools to monitor the pods and their health, and to take appropriate action in case of failure.

These probes are optional for a Pod deployment and, if required, should be configured under the containers section of the deployment file.

To demonstrate these probes' behavior, I am going to create a very simple Spring Boot application with a single controller.

Sample Application:

I am creating a very simple Spring Boot application to show how things work.

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class ProbesDemoApplication {

    public static void main(String[] args) {
        SpringApplication.run(ProbesDemoApplication.class, args);
    }

}
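The controller below exposes a /health endpoint, which the probes will call, and a /processing-time endpoint to change the simulated processing time at runtime: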
import org.springframework.beans.factory.annotation.Value;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class ProbeController {

    private static final String MESSAGE = "Slept for %d ms";

    // Simulated processing time (in ms) for the /health endpoint
    @Value("${request.processing.time}")
    private long requestProcessingTime;

    // Changes the processing time at runtime so we can simulate a slow/unhealthy pod
    @GetMapping("/processing-time/{time}")
    public void setRequestProcessingTime(@PathVariable long time){
        this.requestProcessingTime = time;
    }

    // Health endpoint used by the probes
    @GetMapping("/health")
    public ResponseEntity<String> health() throws InterruptedException {
        //sleep to simulate processing time
        Thread.sleep(requestProcessingTime);
        return ResponseEntity.ok(String.format(MESSAGE, requestProcessingTime));
    }

}
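The @Value injection above expects the request.processing.time property to be defined; assuming a standard Spring Boot layout, it can be set in src/main/resources/application.properties:

# default processing time for the /health endpoint, in milliseconds
request.processing.time=0

Next, the Dockerfile. Note the optional START_DELAY environment variable, which simulates a slow application start-up: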
# Use JRE8 slim
FROM openjdk:8u191-jre-alpine3.8

# Copy the app jar
COPY target/*.jar app.jar

# Sleep for START_DELAY seconds (default 0) before starting, to simulate a slow start-up
ENTRYPOINT sleep ${START_DELAY:-0} && java -jar app.jar
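The image can be built and pushed along these lines (a sketch, assuming a standard Maven build; vinsdocker/probe-demo is the image used throughout this article):

mvn clean package
docker build -t vinsdocker/probe-demo .
docker push vinsdocker/probe-demo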

Pod Deployment:

Let's deploy the above app. I am creating 2 replicas and passing the start delay as 60 seconds; the request processing time is 0 seconds.

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: probe-demo
  name: probe-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      run: probe-demo
  template:
    metadata:
      labels:
        run: probe-demo
    spec:
      containers:
      - image: vinsdocker/probe-demo
        name: probe-demo
        env:
        - name: START_DELAY
          value: "60"        
        ports:
        - containerPort: 8080
kubectl apply -f pod-deployment.yaml
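If we watch the pods right after applying, the containers are marked as ready almost immediately, even though the application sleeps for 60 seconds before it can actually serve traffic; without probes, Kubernetes has no way to know the difference:

kubectl get pods -w

Now I expose the deployment with a NodePort service: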
apiVersion: v1
kind: Service
metadata:
  labels:
    app: probe-demo
  name: probe-demo
spec:
  selector:
    run: probe-demo
  type: NodePort
  ports:
  - name: 8080-8080
    port: 8080
    protocol: TCP
    targetPort: 8080
    nodePort: 30001
kubectl apply -f service.yaml
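To test the service from inside the cluster, we can start a temporary pod and use the service name; this is a minimal approach, assuming the busybox image is available:

kubectl run tmp --rm -it --image=busybox:1.28 --restart=Never -- sh

From the shell inside that pod: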

wget -qO- http://probe-demo:8080/health

Readiness Probe:

I am including the readiness probe in the container section of the pod-deployment yaml as shown below.

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: probe-demo
  name: probe-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      run: probe-demo
  template:
    metadata:
      labels:
        run: probe-demo
    spec:
      containers:
      - image: vinsdocker/probe-demo
        name: probe-demo
        env:
        - name: START_DELAY
          value: "60"        
        ports:
        - containerPort: 8080
        readinessProbe:
          httpGet:
            path: /health
            port: 8080
          initialDelaySeconds: 60
          periodSeconds: 5
          successThreshold: 1
          failureThreshold: 3
          timeoutSeconds: 1
kubectl delete -f pod-deployment.yaml
kubectl apply -f pod-deployment.yaml
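This time, watching the pods shows them staying in a 0/1 READY state for about 60 seconds (the initialDelaySeconds) until the readiness probe starts succeeding; only then does the service route traffic to them:

kubectl get pods -w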

Now I increase the request processing time to 30000 ms; the service forwards this request to one of the two pods, making that replica slow. A /health call that lands on the slow pod now takes 30 seconds to respond, which is well beyond the probe's 1-second timeout:

wget -qO- http://probe-demo:8080/processing-time/30000
wget -qO- http://probe-demo:8080/health

After three consecutive probe failures (failureThreshold: 3 at periodSeconds: 5, roughly 15 seconds), Kubernetes marks the slow pod as not ready and removes it from the service, so subsequent requests are served only by the healthy replica:

wget -qO- http://probe-demo:8080/health
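We can confirm this by checking the service's endpoints; the slow pod's IP should no longer be listed:

kubectl get endpoints probe-demo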

Note: The readiness probe is used to remove the pod from the service, but it does not kill the pod. The pod continues to be in Running status even though it is not responding as we expect.

But we might want to take some action for pods that misbehave like this. In this case, I might want to restart the pod.

Liveness Probe:

Let's see what happens if I configure the liveness probe instead of the readiness probe, as shown here.

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: probe-demo
  name: probe-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      run: probe-demo
  template:
    metadata:
      labels:
        run: probe-demo
    spec:
      containers:
      - image: vinsdocker/probe-demo
        name: probe-demo
        env:
        - name: START_DELAY
          value: "60"
        ports:
        - containerPort: 8080
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
          initialDelaySeconds: 60
          periodSeconds: 5
          successThreshold: 1
          failureThreshold: 3
          timeoutSeconds: 1
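After updating the yaml, I delete and re-apply the deployment as before:

kubectl delete -f pod-deployment.yaml
kubectl apply -f pod-deployment.yaml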

Note: The liveness probe can NOT be used to make a pod unavailable to the service in case of issues. Its only corrective action is to restart the failing container.

Again, I make one of the pods slow by increasing its processing time to 30 seconds:

wget -qO- http://probe-demo:8080/processing-time/30000
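Once the liveness probe fails three times in a row, the kubelet restarts the container inside the pod. The restart shows up in the RESTARTS column (run this in a separate terminal to watch it happen); and since the processing time is held in memory, the restarted container goes back to its configured value and responds instantly again:

kubectl get pods -w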


Note: Both the readiness probe and the liveness probe seem to have the same behavior; they perform the same type of checks. But the action they take in case of failure is different. The readiness probe shuts down traffic from the service, so that the service always sends requests only to healthy pods, whereas the liveness probe restarts the pod in case of failure. It does not do anything for the service; the service continues to send requests to the pod as usual as long as the pod is in 'available' status.

Summary:

After seeing the readiness probe and liveness probe behaviors, it is recommended to use both probes in the deployment yaml as shown below. The reason: in case of any pod failure or during a deployment, with the help of the probes, we send requests only to healthy pods, and failing pods are restarted immediately.

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: probe-demo
  name: probe-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      run: probe-demo
  template:
    metadata:
      labels:
        run: probe-demo
    spec:
      containers:
      - image: vinsdocker/probe-demo
        name: probe-demo
        env:
        - name: START_DELAY
          value: "60"        
        ports:
        - containerPort: 8080
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
          initialDelaySeconds: 60
          periodSeconds: 5
          successThreshold: 1
          failureThreshold: 3
          timeoutSeconds: 1  
        readinessProbe:
          httpGet:
            path: /health
            port: 8080
          initialDelaySeconds: 60
          periodSeconds: 5
          successThreshold: 1
          failureThreshold: 3
          timeoutSeconds: 1
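Apply the combined deployment and, if you like, wait for the rollout to complete:

kubectl apply -f pod-deployment.yaml
kubectl rollout status deployment/probe-demo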


