Part 4

Topics

  • Kubernetes Probes (Container Health check)
    • Startup Probe
    • Liveness Probe
    • Readiness Probe
  • Kubernetes Storage
    • Secret
    • ConfigMap
    • emptyDir
    • hostPath
  • StatefulSet
  • Kubernetes IngressController
  • Kubectl Set
  • Kubectl Patch

Probes

The kubelet uses liveness probes to know when to restart a container.
Liveness probes can be a powerful way to recover from application failures, but they should be used with caution.

Liveness Probe

  • It checks whether the application running inside the container is still alive and functioning properly.

Readiness Probe:

It determines whether the application inside the container is ready to accept traffic or requests. When a readiness probe fails, Kubernetes stops sending traffic to the container until it passes the probe. This is useful during application startup or when the application needs some time to initialize before serving traffic.

  • A failing readiness probe removes the Pod from the Service endpoints, so it stops receiving traffic until it becomes ready again.

Startup Probe (introduced in Kubernetes 1.16):

It runs only during the initial startup of a container and tells the kubelet when the application inside has started. Its purpose is to differentiate between a container that is still starting up and one that is in a crashed state, so that slow-starting applications are not killed prematurely. Once the startup probe succeeds, it is disabled, and the liveness and readiness probes take over.

  • Create a pod with a startup probe
  • Test it and monitor the pods and their logs
  • Create a pod with a liveness probe
  • Create a pod with a readiness probe

Probes have a number of fields that you can use to more precisely control the behavior of startup, liveness and readiness checks:

  • initialDelaySeconds: Number of seconds after the container has started before startup, liveness or readiness probes are initiated. Defaults to 0 seconds. Minimum value is 0.
  • periodSeconds: How often (in seconds) to perform the probe. Defaults to 10 seconds. The minimum value is 1.
  • timeoutSeconds: Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1.
  • successThreshold: Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup Probes. Minimum value is 1.
  • failureThreshold: After a probe fails failureThreshold times in a row, Kubernetes considers that the overall check has failed. For a liveness or startup probe this causes the container to be restarted; for a readiness probe the Pod is marked not ready. Defaults to 3. Minimum value is 1.
  • terminationGracePeriodSeconds: configure a grace period for the kubelet to wait between triggering a shut down of the failed container, and then forcing the container runtime to stop that container. The default is to inherit the Pod-level value for terminationGracePeriodSeconds (30 seconds if not specified), and the minimum value is 1. See probe-level terminationGracePeriodSeconds for more detail.

A probe can be defined in several ways (a tcpSocket sketch follows after this list):

  • TCP
  • gRPC
  • httpGet
  • exec
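
The examples in this part use exec and httpGet. As a minimal sketch of the tcpSocket form (assuming the container listens on port 80, as nginx does):

livenessProbe:
  tcpSocket:
    port: 80
  initialDelaySeconds: 10
  periodSeconds: 10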

A Pod with a Startup Probe

kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: mycontainer
        image: nginx
        ports:
        - containerPort: 80
        startupProbe:
          exec:
            command:
            - /bin/sh
            - -c
            - "ls -l /usr/share/nginx/html"
          initialDelaySeconds: 15
          periodSeconds: 10
          failureThreshold: 30
          timeoutSeconds: 5
        readinessProbe:
          exec:
            command:
            - /bin/sh
            - -c
            - "ls -l /usr/share/nginx/html"
          initialDelaySeconds: 20
          periodSeconds: 5
        livenessProbe:
          exec:
            command:
            - /bin/sh
            - -c
            - "ls -l /usr/share/nginx/html"
          initialDelaySeconds: 30
          periodSeconds: 10
EOF
  • Example with command exec
startupProbe:
  exec:
    command:
    - /bin/sh
    - -c
    - cat /tmp/healthy
  initialDelaySeconds: 10
  periodSeconds: 5
  failureThreshold: 6
  timeoutSeconds: 1
  • Create a Deployment (instead of a single Pod) with 3 replicas
  • Create a Service for it
  • Delete the index.html file inside a container, or modify it
  • Now try to access the app (a sketch of these steps follows below)
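
A possible sequence of commands for this test (a sketch; it assumes the myapp-deployment example above, and the Service and tester pod names are arbitrary):

kubectl scale deployment myapp-deployment --replicas=3
kubectl expose deployment myapp-deployment --name myapp-svc --port=80 --target-port=80
kubectl exec deploy/myapp-deployment -- rm /usr/share/nginx/html/index.html   # remove the page in one pod
kubectl get pods                                            # the exec probes above only list the directory, so they keep passing
kubectl run tester --rm -it --image=busybox:1.28 --restart=Never -- wget -qO- myapp-svc

To make the probes themselves fail when the file is removed, point the exec command at the file (for example cat /usr/share/nginx/html/index.html), as in the cat /tmp/healthy example above.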

A Pod with a Liveness Probe

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-liveness-probe
spec:
  containers:
  - name: nginx-container
    image: nginx:latest
    livenessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 10
      periodSeconds: 5
EOF

A Pod with a Readiness Probe

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-readiness-probe
spec:
  containers:
  - name: nginx-container
    image: nginx:latest
    readinessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 15
      periodSeconds: 5
EOF

Key Points about Kubernetes Probes

  • initialDelaySeconds: 60, for example, delays the first probe until 60 seconds after the container starts.
  • periodSeconds: 10 means the probe is executed every 10 seconds.
  • failureThreshold: 3 means the container is marked as unhealthy after three consecutive failed probes.

LAB 1

  • Create pods with each type of probe and observe how they behave
  • Check what happens if the startup probe fails
  • Check what happens if the liveness probe fails
  • Try different probe options

Kubernetes Storage

Container Storage Interface (CSI) Overview

What is CSI?

The Container Storage Interface (CSI) is a standard interface developed to allow container orchestration systems to manage storage. It provides a unified API for interacting with various storage systems, enabling consistent storage management across different platforms and storage backends.

Key Components of CSI Drivers

1. CSI Controller Service

  • Role: Manages the lifecycle of volumes, including creation, deletion, and listing.
  • Functions:
    • Volume Provisioning: Handles dynamic volume provisioning and ensures volumes are created and deleted as requested.
    • Volume Snapshots: Manages volume snapshots and clones if supported.
    • Volume Attachment: Manages the attachment of volumes to nodes.

2. CSI Node Service

  • Role: Runs on each node and handles mounting and unmounting volumes to/from containers.
  • Functions:
    • Volume Mounting: Mounts volumes on the node where the pod is scheduled.
    • Volume Unmounting: Handles clean-up and unmounting of volumes.
    • Volume Attach/Detach: Manages attachment and detachment of volumes.

3. CSI Driver DaemonSet (or Deployment)

  • Role: Deploys the CSI Node Service across the Kubernetes cluster.
  • Functions:
    • Deployment: Ensures the CSI Node Service is running on all nodes.
    • Configuration: Configures the driver with access credentials and storage backend details.

4. CSI Controller Pod

  • Role: Runs the CSI Controller Service.
  • Functions:
    • Provisioning Requests: Handles provisioning requests and interacts with the storage backend.
    • Management Operations: Manages volume-related operations such as resizing, snapshotting, and cloning.

5. CSI Plugin

  • Role: Contains the core logic for interacting with the storage backend.
  • Functions:
    • Driver Code: Interfaces with the storage backend’s API.
    • Configuration Files: Configures the driver with details like endpoints and credentials.

6. CSI Provisioner

  • Role: Listens for PVCs and handles volume provisioning according to StorageClass configuration.
  • Functions:
    • Dynamic Provisioning: Automatically provisions storage volumes.

7. CSI Snapshot Controller

  • Role: Manages volume snapshots and is often part of the CSI Controller Service.
  • Functions:
    • Snapshot Operations: Handles creation, deletion, and listing of volume snapshots.

8. CSI Node Plugin

  • Role: Provides the interface between the Kubernetes Node Service and the CSI driver.
  • Functions:
    • Node Communication: Ensures communication between the Node Service and the CSI driver.

9. CSI Volume Driver

  • Role: The actual driver interacting directly with the storage backend’s API.
  • Functions:
    • Storage Operations: Handles volume creation, deletion, attachment, and mounting.

10. Kubernetes Resource Definitions

  • StorageClass: Defines parameters for volume provisioning, including which CSI driver to use.
  • PersistentVolume (PV): Represents a storage resource and is created by the CSI Controller Service.
  • PersistentVolumeClaim (PVC): Requests storage from a StorageClass and uses the CSI driver to provision it.

What Happens Without a CSI Driver?

If Kubernetes does not have a CSI driver installed or configured, the following issues will arise:

1. Inability to Provision Dynamic Volumes

  • Issue: Kubernetes will not be able to dynamically provision storage volumes as specified by PersistentVolumeClaims (PVCs).
  • Consequence: Users will need to manually create PersistentVolumes (PVs) and bind them to PVCs, which is less flexible and scalable.

2. No Support for Advanced Storage Features

  • Issue: Features like volume snapshots, cloning, and resizing require CSI support.
  • Consequence: Advanced storage functionalities will not be available, limiting the storage capabilities within the cluster.

3. Manual Storage Management

  • Issue: Without a CSI driver, storage management becomes more manual and error-prone.
  • Consequence: Increased operational overhead and potential for misconfiguration or inconsistencies.

4. Limited Integration with Storage Backends

  • Issue: Kubernetes will not be able to integrate with various storage backends that rely on CSI.
  • Consequence: You may be restricted to using only static storage configurations or outdated integrations that are less feature-rich.

5. Incompatibility with Storage Classes

  • Issue: StorageClass resources, which are used to define storage policies and configurations, depend on CSI drivers to be effective.
  • Consequence: Dynamic provisioning based on StorageClass configurations will fail, and pods requiring specific storage classes will not receive the appropriate storage.

CSI Driver Examples

1. NetApp ONTAP CSI Driver

  • Driver Name: ontap.csi.netapp.com
  • Features:
    • Dynamic provisioning of NFS and iSCSI volumes.
    • Snapshots and clones.
    • Volume resizing.

2. NetApp SolidFire CSI Driver

  • Driver Name: solidfire.csi.netapp.com
  • Features:
    • Dynamic provisioning for SolidFire arrays.
    • Block storage with QoS features.

3. VMware vSphere CSI Driver

  • Driver Name: csi.vsphere.vmware.com
  • Features:
    • Dynamic provisioning for vSAN and VMFS/NFS datastores.
    • Persistent volume management.

Installation and Configuration

Steps to Install a CSI Driver

  1. Choose a CSI Driver:
    • Select a CSI driver suitable for your storage solution.
  2. Install the Driver:
    • Deploy the CSI driver using Helm charts or Kubernetes manifests.
  3. Configure the Driver:
    • Provide necessary credentials and storage backend details.
  4. Create StorageClasses:
    • Define StorageClass resources that use the CSI driver (a sketch follows after this list).
  5. Create PersistentVolumeClaims (PVCs):
    • Define PVCs that request storage from the StorageClass.
  6. Use PVCs in Pods:
    • Mount PVCs in your Pods to use the allocated storage.
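
As a minimal sketch of step 4, a StorageClass that references one of the CSI drivers listed above (the vSphere driver); the class name is an arbitrary example and backend-specific parameters are omitted:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vsphere-csi-sc            # example name
provisioner: csi.vsphere.vmware.com
reclaimPolicy: Delete
allowVolumeExpansion: true
# backend-specific settings (datastore, storage policy, ...) would go under parameters: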

Summary

The Container Storage Interface (CSI) provides a standardized way for Kubernetes to interact with various storage systems, allowing for flexible, consistent, and scalable storage management. Without a CSI driver, Kubernetes will face limitations in dynamic volume provisioning, advanced storage features, and integration with modern storage backends.

  • Set up NFS storage
git clone https://gitlab.com/container-and-kubernetes/kubernetes-2024.git
cd kubernetes-2024/nfs-subdir-external-provisioner
  • Run the script on all nodes
#!/usr/bin/env bash

export DEBIAN_FRONTEND=noninteractive

readonly NFS_SHARE="/srv/nfs/kubedata"

echo "[TASK 1] apt update"
sudo apt-get update -qq >/dev/null

if [[ $HOSTNAME == "kmaster" ]]; then
  echo "[TASK 2] install nfs server"
  sudo -E apt-get install -y -qq nfs-kernel-server >/dev/null
  echo "[TASK 3] creating nfs exports"
  sudo mkdir -p $NFS_SHARE
  sudo chown nobody:nogroup $NFS_SHARE
  echo "$NFS_SHARE *(rw,sync,no_subtree_check)" | sudo tee /etc/exports >/dev/null
  sudo systemctl restart nfs-kernel-server
else
  echo "[TASK 2] install nfs common"
  sudo -E apt-get install -y -qq nfs-common >/dev/null
fi
  • Now deploy the NFS provisioner on Kubernetes
kubectl apply -f 01-setup-nfs-provisioner.yaml
  • Create a pvc using 02-test-claim.yaml
kubectl apply -f 02-test-claim.yaml
  • Check newly created pvc
kubectl get pvc

Storage Classes

  • A StorageClass provides a way for administrators to describe the classes of storage they offer.
  • Different classes might map to quality-of-service levels, or to backup policies, or to arbitrary policies determined by the cluster administrators.
  • You can define a default StorageClass.
  • You can also decide whether a PVC can be expanded.

Example of storage class

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "false"
  • Check available storageClass
 kubectl get sc
  • Enable expansion of PVCs on the default StorageClass
query='{.items[?(@.metadata.annotations.storageclass\.kubernetes\.io/is-default-class=="true")].metadata.name}'
default_sc=$(kubectl get sc -o=jsonpath="$query")

echo patching storage class "[$default_sc]"

kubectl patch storageclass $default_sc -p '{"allowVolumeExpansion": true}'

A PersistentVolume

  • A PersistentVolume (PV) is a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using Storage Classes.
  • It is a resource in the cluster just like a node is a cluster resource.

A PersistentVolumeClaim (PVC)

  • A PersistentVolumeClaim (PVC) is a request for storage by a user.
  • It is similar to a Pod. Pods consume node resources and PVCs consume PV resources.

Lifecycle of a volume and claim

  • PVs are resources in the cluster.
  • PVCs are requests for those resources and also act as claim checks to the resource. The interaction between PVs and PVCs follows this lifecycle:

Provisioning

There are two ways PVs may be provisioned: statically or dynamically.

Static

A cluster administrator creates a number of PVs. They carry the details of the real storage, which is available for use by cluster users. They exist in the Kubernetes API and are available for consumption.
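
As a minimal sketch, a statically provisioned PV backed by the NFS export created earlier in this part (/srv/nfs/kubedata); the PV name and the server placeholder are illustrative:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: static-nfs-pv             # example name
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: <NFS-SERVER-IP>       # e.g. the kmaster node running nfs-kernel-server
    path: /srv/nfs/kubedata       # the export created by the setup script above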

Dynamic

  • When none of the static PVs the administrator created match a user’s PersistentVolumeClaim, the cluster may try to dynamically provision a volume specially for the PVC.
  • This provisioning is based on StorageClasses: the PVC must request a storage class.
  • The administrator must have created and configured that class for dynamic provisioning to occur.

Access Modes

The access modes are:

ReadWriteOnce

the volume can be mounted as read-write by a single node. The ReadWriteOnce access mode can still allow multiple pods to access the volume when the pods are running on the same node. For single-pod access, see ReadWriteOncePod.

ReadOnlyMany

the volume can be mounted as read-only by many nodes.

ReadWriteMany

the volume can be mounted as read-write by many nodes.

ReadWriteOncePod

the volume can be mounted as read-write by a single Pod. Use ReadWriteOncePod access mode if you want to ensure that only one pod across the whole cluster can read that PVC or write to it.

In the CLI, the access modes are abbreviated to:

  • RWO - ReadWriteOnce

  • ROX - ReadOnlyMany

  • RWX - ReadWriteMany

  • RWOP - ReadWriteOncePod

  • Example of a pvc

kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim
spec:
  storageClassName: managed-nfs-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
EOF
  • Create a pod and use the pvc there
apiVersion: v1
kind: Pod
metadata:
  name: task-pv-pod
spec:
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: task-pv-claim
  containers:
    - name: task-pv-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: task-pv-storage
  • Test pod and pvc
  • Also check the storage class
kubectl get sc

LAB 2

  • Install the NFS server on the master node and the NFS client on the worker nodes
  • Create a StorageClass
  • Create a PVC
  • Check whether the PV is also created
  • Create a pod that uses the newly created PVC
  • See if the same PVC can be attached to another pod as well

emptyDir

  • An emptyDir volume is first created when a Pod is assigned to a node, and it is initially empty.
  • It lasts for the life of the Pod: if a container in the Pod crashes or restarts, the emptyDir content is unaffected, but when the Pod is removed from the node for any reason, the data in the emptyDir is deleted forever.
  • All containers in the Pod share the emptyDir volume and can read and write the same files; each container can mount it at the same or at a different path.
  • The kubelet creates the directory on the node for the Pod; no external storage is mounted.
  • By default, emptyDir volumes are stored on whatever medium is backing the node – that might be disk, SSD, or network storage.
  • You can set the emptyDir.medium field to "Memory" to tell Kubernetes to mount a tmpfs (RAM-backed filesystem) instead (a sketch follows after the example below).
  • On the node where the Pod is running, the emptyDir directory is located under /var/lib/kubelet/pods/{podid}/volumes/kubernetes.io~empty-dir/.
  • Example:
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: nginx
    name: test-container
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  - image: redis
    name: redis
    volumeMounts:
    - mountPath: /data
      name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir: {}
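
As noted above, setting emptyDir.medium to Memory makes the volume a tmpfs. A minimal sketch of the volumes section only (the sizeLimit value is an arbitrary example):

  volumes:
  - name: cache-volume
    emptyDir:
      medium: Memory     # tmpfs, backed by node RAM
      sizeLimit: 256Mi   # optional cap on the volume size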

hostPath:

  • A hostPath volume mounts a resource from the host node’s filesystem into your Pod; the resource can be a directory, a file, a socket, a character device, or a block device.
  • A hostPath PersistentVolume should be used only in a single-node cluster (for example for development and testing); the data is local to one node, so it is not suitable for multi-node clusters.
  • The directories created on the underlying host are only writable by root. You either need to run your process as root in a privileged container or modify the file permissions on the host to be able to write to a hostPath volume.
  • Example:
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-pd
      name: test-volume
  volumes:
  - name: test-volume
    hostPath:
      # directory location on host
      path: /data
      # this field is optional
      type: DirectoryOrCreate

LAB 3

  • Create a pod with an emptyDir volume
  • Create a pod with a hostPath volume
  • Check whether the data persists after the pod is deleted

Kubernetes Secrets

Kubernetes Secrets are used to manage sensitive information, such as passwords, OAuth tokens, SSH keys, and other confidential data. Secrets are intended to hold sensitive data securely and to make it available to applications in a controlled manner.

Key Concepts

1. What is a Secret?

A Secret is a Kubernetes resource that stores sensitive information. Secrets are encoded in Base64 and can be used to provide confidential data to Pods and other Kubernetes resources in a secure manner.

2. Key Features of Secrets

  • Secure Storage: Secrets are stored in etcd in a base64-encoded form; enable encryption at rest if they must be stored encrypted.
  • Controlled Access: Access to Secrets is controlled through Kubernetes RBAC (Role-Based Access Control).
  • Environment Variables: Secrets can be injected into Pods as environment variables.
  • Volume Mounts: Secrets can be mounted as files inside Pods.

3. Secret Manifest

A Secret is defined using a YAML manifest. Here’s a basic example of a Secret configuration:

1. Basic Secret

This example demonstrates creating a Secret with username and password:

apiVersion: v1
kind: Secret
metadata:
  name: my-secret
type: Opaque
data:
  username: dXNlcg==   # Base64 encoded value of "user"
  password: cGFzc3dvcmQ= # Base64 encoded value of "password"
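
The base64 values used above can be produced and checked on the command line (-n prevents a trailing newline from being encoded):

echo -n 'user' | base64                # dXNlcg==
echo -n 'password' | base64            # cGFzc3dvcmQ=
echo 'dXNlcg==' | base64 --decode      # user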

Secret

  • To create a secret, you can use the kubectl create secret command.
kubectl create secret generic my-secret --from-literal=MYSQL_ROOT_PASSWORD=root 
  • To create secret of type TLS
 kubectl create secret tls my-tls-secret --cert=path/to/tls.crt --key=path/to/tls.key
  • Check secret
kubectl get secrets
  • Use the secret in a pod as an environment variable
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: mycontainer
    image: mysql
    env:
    - name: MYSQL_ROOT_PASSWORD
      valueFrom:
        secretKeyRef:
          name: my-secret
          key: MYSQL_ROOT_PASSWORD
  • Use secret as a file
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: mycontainer
    image: myimage
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secrets
  volumes:
  - name: secret-volume
    secret:
      secretName: my-secret
  • How to extract the value from a secret
kubectl get secret my-secret -o jsonpath='{.data.username}' | base64 --decode > username.txt
kubectl get secret my-secret -o jsonpath='{.data.password}' | base64 --decode > password.txt

Kubernetes ConfigMap

A Kubernetes ConfigMap is a resource used to store non-sensitive configuration data in key-value pairs. ConfigMaps are intended to provide configuration settings to applications running in Kubernetes Pods. They help decouple configuration from application code, making applications easier to manage and deploy.

Key Concepts

1. What is a ConfigMap?

A ConfigMap is a Kubernetes resource that allows you to store configuration data separately from application code. This data can be injected into Pods as environment variables, command-line arguments, or mounted as files.

2. Key Features of ConfigMaps

  • Decoupling Configuration: Separates configuration from application code.
  • Flexible Usage: Provides configuration data as environment variables, command-line arguments, or file mounts.
  • Dynamic Updates: Can be updated independently of the application.

3. ConfigMap Manifest

A ConfigMap is defined using a YAML manifest. Here’s a basic example of a ConfigMap configuration:

1. Basic ConfigMap

This example demonstrates creating a ConfigMap with a few key-value pairs:

apiVersion: v1
kind: ConfigMap
metadata:
  name: my-configmap
data:
  APP_MODE: production
  LOG_LEVEL: debug
  • Create a configmap
kubectl create configmap my-configmap --from-literal=key1=value1
  • Create a ConfigMap with YAML
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-configmap
data:
  key1: value1
  key2: value2
  • Create a ConfigMap from a config file
kubectl create configmap my-configmap --from-file=config-file.txt
  • Use a ConfigMap in a pod, either as environment variables (first example) or mounted as files (second example)
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: mycontainer
    image: myimage
    envFrom:
    - configMapRef:
        name: my-configmap
---
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: mycontainer
    image: myimage
    volumeMounts:
    - name: config-volume
      mountPath: /etc/config
  volumes:
  - name: config-volume
    configMap:
      name: my-configmap

LAB 4

  • Create a secret
  • Check the value of a secret
  • Use jsonpath to fetch the value
  • Use a secret in a pod as an environment variable
  • Use a secret as a file in a pod
  • Create a secret of type tls
  • Create a ConfigMap
  • Use the ConfigMap as a file in a pod

IngressController

Documentation Referred:

https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/

https://kubernetes.github.io/ingress-nginx/deploy/

Ingress Controller

What is an Ingress Controller?

An Ingress Controller is the component that implements the Ingress API and manages access to services in a Kubernetes cluster from outside the cluster. It acts as a bridge between external traffic and the services running within the cluster, often providing features like load balancing, SSL termination, and name-based virtual hosting.

Key Concepts

  • Ingress Resource: A Kubernetes API object that defines rules for accessing services. It specifies how to route external requests to different services based on the request’s host and path.
  • Ingress Controller: A controller that watches Ingress Resources and implements their rules. It typically runs as a pod within the cluster and configures a reverse proxy or load balancer based on the Ingress rules.

Benefits

  • Centralized Management: Manage access rules for multiple services from a single point.
  • Load Balancing: Distribute incoming traffic across multiple instances of a service.
  • SSL Termination: Handle SSL/TLS encryption and decryption at the edge, offloading this task from your application services.
  • Path-Based Routing: Route traffic to different services based on the request URL path.

Common Ingress Controllers

  1. NGINX Ingress Controller: Widely used, supports many features and is highly configurable.
  2. Traefik: Provides a dynamic and modern approach with automatic configuration and integration with other components.
  3. HAProxy: Known for high performance and advanced routing capabilities.
  4. Istio IngressGateway: Part of the Istio service mesh, offering advanced traffic management and observability features.

Basic Usage

  1. Install an Ingress Controller: You need to deploy an Ingress Controller in your cluster. For example, to install the NGINX Ingress Controller:
  • Step 1: Install Nginx Ingress Controller:
helm upgrade --install ingress-nginx ingress-nginx --repo https://kubernetes.github.io/ingress-nginx --namespace ingress-nginx --create-namespace
  • Step 2: Verify the Ingress Controller Resource
helm list --all-namespaces
kubectl get ingressclass
kubectl get service -n ingress-nginx
  • check if the IngressClass is available
kubectl get ingressclass
  • Add the application hostnames to your hosts file in case you are not using a DNS service
nano /etc/hosts    # on Windows the file is C:\Windows\System32\drivers\etc\hosts
<ADD-LOAD-BALANCER-IP> website01.example.internal website02.example.internal
  • Step 3: Create two Pods
kubectl run service-pod-1 --image=nginx
kubectl run service-pod-2 --image=nginx
  • Step 4: Create a Service for each of the Pods created above
kubectl expose pod service-pod-1 --name service1 --port=80 --target-port=80
kubectl expose pod service-pod-2 --name service2 --port=80 --target-port=80
kubectl get services
  • Step 5: Verify Service-to-Pod connectivity
kubectl run frontend-pod --image=ubuntu --command -- sleep 36000
kubectl exec -it frontend-pod -- bash
apt-get update && apt-get -y install curl nano
curl <SERVICE-1-IP>
curl <SERVICE-2-IP>
  • Check if the application is reachable
  • Step 6: Change the Default Nginx Page for Each Service
kubectl exec -it service-pod-1 -- bash
cd /usr/share/nginx/html
echo "This is Website 1" > index.html
kubectl exec -it service-pod-2 -- bash
cd /usr/share/nginx/html
echo "This is Website 2" > index.html
  • Step 7: Verification
kubectl exec -it frontend-pod -- bash
curl service1
curl service2

pathType in Kubernetes Ingress

In Kubernetes, the pathType field in an Ingress resource determines how the path specified in the ingress rules is matched. The pathType can have the following values:

1. Prefix

  • Description: Matches paths based on a prefix. For example, if the path is /foo, then /foo, /foo/bar, /foo/baz, etc., will be matched.
  • Use Case: Suitable for routing traffic for a base path and all its subpaths to a backend service.

2. Exact

  • Description: The path must match the request path exactly. For instance, if the path is /foo, only /foo will be matched, not /foo/anything or /bar/foo.
  • Use Case: Ideal when you need an exact match for routing.

3. ImplementationSpecific

  • Description: The interpretation of the path is dependent on the ingress controller. Different ingress controllers might handle this path type in various ways.
  • Use Case: Useful when the specific behavior is determined by the ingress controller being used.
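
A minimal sketch combining Prefix and Exact matching on a single host. It reuses the service1 and service2 Services created earlier; the Ingress name, host, and paths are illustrative:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: path-based-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: website01.example.internal
    http:
      paths:
      - path: /app1              # matches /app1, /app1/anything, ...
        pathType: Prefix
        backend:
          service:
            name: service1
            port:
              number: 80
      - path: /status            # matches only /status
        pathType: Exact
        backend:
          service:
            name: service2
            port:
              number: 80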

Documentation Referred:

Learn about name-based virtual hosting

  • Step 8: Create Ingress Resource

kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: name-virtual-host-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: website01.example.internal
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: service1
            port:
              number: 80
  - host: website02.example.internal
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: service2
            port:
              number: 80
EOF

  • Check the newly created ingress rules

    kubectl get ingress
    
  • Check more details of the ingress

    kubectl describe ingress name-virtual-host-ingress
    
  • Now check whether the application opens in the browser

LAB 5

  • Install the helm CLI
  • Install the Ingress Controller using helm
  • Create two pods and a Service for each of them
  • Create an ingress rule for these two services
  • Map the hostnames in your hosts file
  • Access your application through the Ingress Controller

Kubernetes Patch command

Description

The kubectl patch command updates specific fields of an existing resource in place, without replacing the whole object. By default it applies a strategic merge patch; the --type flag can select a JSON merge patch or a JSON patch instead.

Examples using kubectl patch

  • Patch a Deployment (Update Replicas)
kubectl patch deployment <deployment-name> -p '{"spec": {"replicas": 3}}'
  • Patch a Pod (Update Container Image)
kubectl patch pod <pod-name> -p '{"spec": {"containers": [{"name": "container-name", "image": "new-image:tag"}]}}'
  • Update configmap
kubectl patch configmap <configmap-name> -p '{"data": {"new-key": "new-value"}}'
  • Patch a Service (Update Service Type)
kubectl patch service <service-name> -p '{"spec": {"type": "LoadBalancer"}}'
  • Patch a PVC (Update Storage Size)
kubectl patch pvc <pvc-name> -p '{"spec": {"resources": {"requests": {"storage": "5Gi"}}}}'
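
The examples above use the default strategic merge patch. A JSON patch (--type=json) addresses a single field by its path instead – a sketch with placeholder names:

kubectl patch deployment <deployment-name> --type='json' \
  -p '[{"op": "replace", "path": "/spec/template/spec/containers/0/image", "value": "nginx:1.25"}]'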

StatefulSets

  • StatefulSet is the workload API object used to manage stateful applications.
  • Like a Deployment, a StatefulSet manages Pods that are based on an identical container spec.
  • Unlike a Deployment, a StatefulSet maintains a sticky identity for each of its Pods.

Using StatefulSets

StatefulSets are valuable for applications that require one or more of the following.
  • Stable, unique network identifiers.
  • Stable, persistent storage.
  • Ordered, graceful deployment and scaling.
  • Ordered, automated rolling updates.

Example of statefulset

kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx # has to match .spec.template.metadata.labels
  serviceName: "nginx"
  replicas: 3 # by default is 1
  minReadySeconds: 10 # by default is 0
  template:
    metadata:
      labels:
        app: nginx # has to match .spec.selector.matchLabels
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx
        image: registry.k8s.io/nginx-slim:0.8
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "my-storage-class"   # change this to match your StorageClass
      resources:
        requests:
          storage: 1Gi
EOF
  • check statefulset
kubectl get sts
  • Also check for service type
kubectl get svc
  • Check for pvc
kubectl get pvc
  • Check pods
kubectl get pods
  • Delete a pod and check whether it comes back with the same name (a sketch follows below)
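
A sketch of that test, using the StatefulSet web defined above (the PVC name follows the <claim-template>-<statefulset>-<ordinal> pattern):

kubectl delete pod web-0
kubectl get pods -w        # web-0 is recreated with the same name
kubectl get pvc            # www-web-0 is still bound and is reused by the new pod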

How to Access StatefulSet Pods

DNS Names: Each Pod in a StatefulSet gets a stable DNS name of the form <pod-name>.<service-name>.<namespace>.svc.cluster.local. For the StatefulSet web above (headless Service nginx, 3 replicas), the Pods are reachable at:

web-0.nginx.default.svc.cluster.local
web-1.nginx.default.svc.cluster.local
web-2.nginx.default.svc.cluster.local

Direct Pod Access: You can also access Pods directly via their DNS names without using the Service. This is useful for applications where Pods need to communicate with each other directly and rely on their stable network identity.
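
A quick way to verify the per-Pod DNS records from inside the cluster (a sketch; the busybox image tag and the temporary pod name are arbitrary):

kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup web-0.nginx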

LAB 6

  • Create a StatefulSet
  • Check whether the Pods are created in parallel or sequentially
  • Create a headless Service (clusterIP: None) for the StatefulSet