Part 3
Topics
- Service
- ClusterIP
- NodePort
- LoadBalancer
- Headless Service
- MetalLB
- DNS Troubleshooting
- Deployment
- ReplicaSet
- Daemonset
- Labels and Selector
- Types of Selectors
- Equality-based selector
- Set-based selector
- Job
- CronJob
- Metric Server
- HPA (Pod Autoscaling)
- Helm Overview
- RBAC
Service
A Service is a method for exposing a network application that is running as one or more Pods in your cluster.
- A key aim of Services in Kubernetes is that you don’t need to modify your existing application to use an unfamiliar service discovery mechanism.
- A Kubernetes service is a logical abstraction for a deployed group of pods in a cluster.
What are the types of Kubernetes services?
- ClusterIP: Exposes a service which is only accessible from within the cluster.
- NodePort: Exposes a service via a static port on each node’s IP. NodePorts are in the 30000-32767 range by default.
- LoadBalancer: Exposes the service via the cloud provider’s load balancer.
- ExternalName: Maps a service to a predefined externalName field by returning a value for the CNAME record.
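As a sketch, an ExternalName service looks like this (the service name and DNS name here are hypothetical examples):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-external-db          # hypothetical name
spec:
  type: ExternalName
  externalName: db.example.com  # DNS lookups for this service return a CNAME to this name
```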
Create a Pod
kubectl run myapp --image=nginx
Create a service of type ClusterIP
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  selector:
    run: myapp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
EOF
Create the same ClusterIP service imperatively
kubectl expose pod/myapp --port 80 --target-port=80 --name cip
Log in to a pod and try to access the app
kubectl exec -it myapp -- sh
curl cip
Create a svc of type NodePort
kubectl expose pod/myapp --port 80 --target-port=80 --name myapp-np --type=NodePort
Access your app via any node IP and the assigned NodePort
172.16.16.100:<nodePort>
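Equivalently, a NodePort service can be written declaratively; pinning `nodePort` is optional, and 30080 below is just an arbitrary value from the default range:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-np
spec:
  type: NodePort
  selector:
    run: myapp          # pods created by `kubectl run myapp` carry this label
  ports:
  - port: 80            # service port inside the cluster
    targetPort: 80      # container port
    nodePort: 30080     # static port opened on every node (30000-32767 by default)
```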
Create a service of type LoadBalancer
kubectl expose pod/myapp --port=80 --type=LoadBalancer --name=lb
- Check the endpoints
kubectl get ep
- Achieve blue-green deployment
- Delete a service
- Swap the service selector between versions
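The selector swap for blue-green can be sketched as follows: run a blue and a green version of the app with a `version` label, and repoint the service by editing its selector (the names and labels here are assumptions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
    version: blue   # change this to "green" and re-apply to shift traffic
  ports:
  - port: 80
    targetPort: 80
```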
LAB 1
- Create a pod
- Create a service for a pod
- try to access from the pod
- try to access the svc from a different pod
- try to access svc from your laptop
- Check what happens if the container and svc ports are different.
- test changing the label
- Check the endpoint of a svc
- try to create a svc type LoadBalancer (it should be in Pending state)
- Also create a svc nodePort and access from your browser.
MetalLB: A LoadBalancer Solution for On-Prem Kubernetes
Deploying MetalLB
Steps:
git clone https://gitlab.com/container-and-kubernetes/kubernetes-2024.git
cd kubernetes-2024/
cd metallb
kubectl apply -f 01_metallb.yaml
kubectl apply -f 02_metallb-config.yaml
- Test it by creating a service
kubectl apply -f 03_test-load-balancer.yaml
- Check the service
kubectl get svc
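The config file typically defines an address pool and an L2 advertisement. A minimal MetalLB configuration looks like the sketch below; the IP range is an assumption and must match free addresses on your node network, and the exact contents of 02_metallb-config.yaml in the repo may differ:

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-system
spec:
  addresses:
  - 172.16.16.150-172.16.16.160   # free IPs on the node network (assumption)
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default-l2
  namespace: metallb-system
spec:
  ipAddressPools:
  - default-pool
```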
LAB 2
- Deploy MetalLB
- Check the status of the speaker pods
- Check for any errors in the pods related to MetalLB
- Create a svc of type LoadBalancer (should be successful this time)
- Access your app using this LoadBalancer ip
Metric Server
Steps
git clone https://gitlab.com/container-and-kubernetes/kubernetes-2024.git
cd kubernetes-2024
cd metricserver
kubectl apply -f .
- Wait a few minutes and check that the metric server pods are up
kubectl top nodes
kubectl top pods
LAB 3
- Deploy the metric server.
- Check the resource utilization for your nodes and pods
HPA (Pod Autoscaling)
Steps
Prerequisites
- Resources (requests/limits) should be defined for the pods
- The Metrics API has to be available.
git clone https://gitlab.com/container-and-kubernetes/kubernetes-2024.git
cd kubernetes-2024
cd hpa
kubectl apply -f deployment.yaml
kubectl apply -f service.yaml
kubectl apply -f hpa.yaml
- check hpa
kubectl get hpa
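The hpa.yaml applied above is typically along these lines; the target deployment name and thresholds here are assumptions, and the repo's file may differ:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp            # deployment to scale (assumption)
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50   # scale out when average CPU exceeds 50% of requests
```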
Kubernetes Deployment
A Kubernetes Deployment is a resource object used to manage a set of replicas of a Pod. It provides declarative updates to Pods and ReplicaSets, ensuring that a specified number of Pods are running at any given time. Deployments are a key abstraction in Kubernetes that allows you to manage stateless applications.
Key Concepts
1. What is a Deployment?
A Deployment is a Kubernetes resource that describes the desired state of your application. It ensures that the correct number of Pods are running and handles updates to the Pods without downtime.
2. Key Features of Deployments
- Declarative Updates: Define the desired state of your Pods and the Deployment handles the updates.
- Scaling: Easily scale the number of Pods up or down.
- Rolling Updates: Automatically updates Pods with new versions of the application with zero downtime.
- Rollback: Revert to a previous version if something goes wrong with the new version.
- Replica Management: Ensures that the specified number of Pod replicas are running.
3. Deployment Manifest
A Deployment is defined using a YAML manifest. Here’s a basic example of a Deployment configuration:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
  labels:
    app: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-container
        image: nginx:latest
        ports:
        - containerPort: 80
- Create a deployment
kubectl create deployment myapp-deployment --image=nginx
- Create a Svc for above Deployment
kubectl expose deployment/myapp-deployment --port 80 --type=LoadBalancer
- Deployment with Yaml file
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
spec:
  replicas: 3 # Number of desired replicas
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: nginx-container
        image: nginx:latest
        ports:
        - containerPort: 80
EOF
- Perform a rollout (update the image)
kubectl set image deployment/myapp-deployment nginx=httpd
- Check rollout status
kubectl rollout status deployment myapp-deployment
- Check the rollout history
kubectl rollout history deployment myapp-deployment
- Perform Rollback
kubectl rollout undo deployment myapp-deployment
- Perform scale up/down
kubectl scale deployment myapp-deployment --replicas=5
- Set Environment Variables in Deployment
kubectl set env deployment/myapp-deployment KEY=VALUE
- Set Resources
kubectl set resources deployment/myapp-deployment --limits=cpu=200m,memory=512Mi --requests=cpu=100m,memory=256Mi
- Update SA for a deployment
kubectl set serviceaccount deployment/myapp-deployment my-service-account
- Change the revision history limit
spec:
  revisionHistoryLimit: 20
  replicas: 3
  selector:
    matchLabels:
      app: your-app
  template:
    metadata:
      labels:
        app: your-app
    spec:
      containers:
      - name: your-container
        image: your-image
- How to check if the change has been accepted
kubectl get deployment <deployment-name> -o=jsonpath='{.spec.revisionHistoryLimit}'
- How to use the --record option with a deployment rollback (note: --record is deprecated in newer kubectl versions)
kubectl rollout undo deployment/<deployment-name> --to-revision=<revision-number> --record
LAB 4
- Create Deployment
- Update secret for a deployment
- Set resource block for a deployment
- Set Environment Variables in Deployment
- Perform Rollout
- Perform Rollback
- Roll back to a specific revision
- Check the maximum number of old ReplicaSets kept for a deployment
- Explore deployment strategies
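The deployment strategy mentioned in the lab is set in the Deployment spec; a rolling-update sketch with illustrative values:

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most one pod below the desired count during the update
      maxSurge: 1         # at most one extra pod above the desired count
```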
Job
Overview
A Kubernetes Job creates one or more Pods and ensures that a specified number of them successfully terminate. As pods successfully complete, the Job tracks the successful completions. When a specified number of successful completions is reached, the Job itself is complete.
Example Job YAML
Below is an example YAML file for a Kubernetes Job that runs a simple task.
apiVersion: batch/v1
kind: Job
metadata:
  name: example-job
spec:
  template:
    spec:
      containers:
      - name: example
        image: busybox
        command: ["sh", "-c", "echo Hello, Kubernetes! && sleep 30"]
      restartPolicy: Never
  backoffLimit: 4
- Create a job
- Create a cronjob
- Clean up finished jobs automatically
- ttlSecondsAfterFinished: 100
- Check the options a CronJob shares with a Job, such as the number of retries
- backoffLimit
Maximum number of pod retries in case of failure
- completions
How many pods of the job must complete successfully (by default they run one after another)
- parallelism
It defines how many pods will be created at once
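Putting these fields together, a Job that needs 4 successful completions, running 2 pods at a time (the name is hypothetical):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: batch-demo               # hypothetical name
spec:
  completions: 4                 # job is complete after 4 pods succeed
  parallelism: 2                 # run up to 2 pods at the same time
  backoffLimit: 3                # give up after 3 failed retries
  ttlSecondsAfterFinished: 100   # auto-delete the finished job after 100 seconds
  template:
    spec:
      containers:
      - name: worker
        image: busybox
        command: ["sh", "-c", "echo working && sleep 5"]
      restartPolicy: Never
```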
Job creation YAML
kubectl apply -f - <<EOF
apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    spec:
      containers:
      - name: pi
        image: busybox
        command: ["/bin/echo", "Hello World"]
      restartPolicy: Never
  backoffLimit: 4
EOF
CronJob
Overview
A Kubernetes CronJob creates Jobs on a repeating schedule. The CronJob resource is like a Job, but it is run periodically based on a specified schedule.
Example CronJob YAML
Below is an example YAML file for a Kubernetes CronJob that runs a simple task every minute.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: example-cronjob
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: example
            image: busybox
            command: ["sh", "-c", "echo Hello, Kubernetes! && sleep 30"]
          restartPolicy: OnFailure
- Create a cronjob
kubectl apply -f - <<EOF
apiVersion: batch/v1
kind: CronJob
metadata:
  name: simple-cronjob
spec:
  schedule: "*/1 * * * *" # Run every minute
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: simple-cronjob-container
            image: busybox
          restartPolicy: OnFailure
EOF
- Example 2
kubectl apply -f - <<EOF
apiVersion: batch/v1
kind: CronJob
metadata:
  name: simple-cronjob
spec:
  schedule: "*/1 * * * *" # Run every minute
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: simple-cronjob-container
            image: busybox
            command: ["echo", "Hello, Kubernetes!"]
          restartPolicy: OnFailure
EOF
- Example 3
kubectl apply -f - <<EOF
apiVersion: batch/v1
kind: CronJob
metadata:
  name: parallel-cronjob
spec:
  schedule: "0 */6 * * *" # Run every 6 hours
  jobTemplate:
    spec:
      completions: 2
      parallelism: 1
      template:
        spec:
          containers:
          - name: parallel-cronjob-container
            image: busybox
            command: ["echo", "Running parallel cronjob"]
          restartPolicy: OnFailure
EOF
- Example 4
kubectl apply -f - <<EOF
apiVersion: batch/v1
kind: CronJob
metadata:
  name: cronjob-volume-mounts
spec:
  schedule: "*/5 * * * *" # Run every 5 minutes
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: cronjob-volume-mounts-container
            image: busybox
            command: ["/bin/sh", "-c"]
            args: ["echo Hello > /data/hello.txt"]
            volumeMounts:
            - name: data-volume
              mountPath: /data
          restartPolicy: OnFailure
          volumes:
          - name: data-volume
            emptyDir: {}
EOF
- Create a Job/CronJob imperatively
kubectl create job myjob --image=busybox -- echo "Hello, Kubernetes!"
kubectl create cronjob mycronjob --image=busybox --schedule="*/5 * * * *" -- echo "Scheduled Job"
- Suspend an active Job:
kubectl patch job/myjob --type=strategic --patch '{"spec":{"suspend":true}}'
- Resume a suspended Job:
kubectl patch job/myjob --type=strategic --patch '{"spec":{"suspend":false}}'
LAB 5
- Create a job
- Create a job with completions
- Create a job with parallelism:
- Create a cronjob that runs daily
- Create a cronjob with backoffLimit
Kubernetes ReplicaSet Resource
A Kubernetes ReplicaSet is a resource that ensures a specified number of Pod replicas are running at any given time. ReplicaSets are used to maintain a stable set of replica Pods running at any given time, ensuring high availability and redundancy of applications.
Key Concepts
1. What is a ReplicaSet?
A ReplicaSet is a Kubernetes resource that ensures a certain number of identical Pods are running at all times. It is used to maintain a stable set of Pods and handle scaling up or down.
2. Key Features of ReplicaSets
- Pod Replication: Ensures that a specified number of Pods are running and available.
- Scaling: Adjust the number of Pods based on demand.
- Self-Healing: Automatically replaces Pods that are deleted or fail.
3. ReplicaSet Manifest
A ReplicaSet is defined using a YAML manifest. Here’s a basic example of a ReplicaSet configuration:
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-replicaset
  labels:
    app: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-container
        image: nginx:latest
        ports:
        - containerPort: 80
- View ReplicaSets
kubectl get replicasets
kubectl get rs
- Scale up/down a replicaset
kubectl scale replicaset my-replicaset --replicas=5
- Create a service for Replicaset
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app   # must match the ReplicaSet's pod labels
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: LoadBalancer
EOF
- Configure port forwarding for a ReplicaSet
kubectl port-forward replicaset/my-replicaset 8080:80
LAB 6
- Create a replicaset
- Scale Replicaset
- Check the image used for replicaset
- Change the image for a rs
- Create a service for rs
- Forward the port of a rs
- Test if the app is opening
Daemonset
A DaemonSet ensures that all (or some) Nodes run a copy of a Pod. As nodes are added to the cluster, Pods are added to them. As nodes are removed from the cluster, those Pods are garbage collected. Deleting a DaemonSet will clean up the Pods it created.
- Some typical uses of a DaemonSet are:
- running a cluster storage daemon on every node
- running a logs collection daemon on every node
- running a node monitoring daemon on every node
- Create a daemonset
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
  labels:
    k8s-app: fluentd-logging
spec:
  selector:
    matchLabels:
      name: fluentd-elasticsearch
  template:
    metadata:
      labels:
        name: fluentd-elasticsearch
    spec:
      tolerations:
      # these tolerations are to have the daemonset runnable on control plane nodes
      # remove them if your control plane nodes should not run pods
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
        effect: NoSchedule
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      containers:
      - name: fluentd-elasticsearch
        image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
      # it may be desirable to set a high priority class to ensure that a DaemonSet Pod
      # preempts running Pods
      # priorityClassName: important
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
EOF
- Check the daemonset
kubectl get ds
- Test that the daemonset pods are running as expected (the example above is created in the default namespace)
kubectl get pods -o wide | grep -i fluentd-elasticsearch
- Check that the pods are created on all nodes
LAB 7
- Create a ds
- Check the pod on all nodes
DNS Troubleshooting:
Follow this for DNS Troubleshooting
- Create a simple Pod to use as a test environment
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: dnsutils
spec:
  containers:
  - name: dnsutils
    image: registry.k8s.io/e2e-test-images/jessie-dnsutils:1.3
    command:
    - sleep
    - "infinity"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
EOF
- Run nslookup for the Kubernetes cluster service
kubectl exec -i -t dnsutils -- nslookup kubernetes.default
- Check the local DNS configuration first
kubectl exec -ti dnsutils -- cat /etc/resolv.conf
- if you get some error Check if the DNS pod is running
kubectl get pods --namespace=kube-system -l k8s-app=kube-dns
- Check for errors in the DNS pod
kubectl logs --namespace=kube-system -l k8s-app=kube-dns
- Is DNS service up?
kubectl get svc --namespace=kube-system
- Are DNS endpoints exposed?
kubectl get endpoints kube-dns --namespace=kube-system
DNS for Services and Pods
- Kubernetes creates DNS records for Services and Pods.
- You can contact Services with consistent DNS names instead of IP addresses.
- Services defined in the cluster are assigned DNS names.
- By default, a client Pod’s DNS search list includes the Pod’s own namespace and the cluster’s default domain.
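For example, assuming the `myapp-service` created earlier lives in the `default` namespace, all of these names resolve to the same service from within the cluster:

```
myapp-service                            # short name, works from the same namespace
myapp-service.default                    # <service>.<namespace>
myapp-service.default.svc.cluster.local  # fully qualified name, works from any namespace
```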
LAB 8
- Complete all above steps
- Create 2 pods in different ns
- Create two svc in two different ns
- Try to connect svc from pod in a different ns
- Check the DNS pods
- Check the endpoints associated with the DNS svc available in the Kubernetes cluster.
Kubernetes Node Commands
This document provides a list of kubectl commands for managing and inspecting nodes in a Kubernetes cluster.
List Nodes
To list all nodes in the cluster with basic information:
kubectl get nodes
Describe a Node
kubectl describe node <node-name>
Get Node Resource Usage
kubectl top nodes
Get Node Labels
kubectl get nodes --show-labels
Add a Label to a Node
kubectl label nodes <node-name> <label-key>=<label-value>
Remove a Label from a Node
kubectl label nodes <node-name> <label-key>-
Cordon a Node
- To mark a node as unschedulable, preventing new pods from being scheduled on it:
kubectl cordon <node-name>
Uncordon a Node
- To mark a node as schedulable, allowing new pods to be scheduled on it:
kubectl uncordon <node-name>
Drain a Node
- To safely evict all pods from a node (use with caution as it will terminate the pods):
kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data
- --ignore-daemonsets: Proceeds even though daemonset-managed pods cannot be evicted.
- --delete-emptydir-data: Allows eviction of pods using emptyDir local storage (the data is deleted).
Delete a Node
- To delete a node from the cluster:
kubectl delete node <node-name>