Common issues and solutions related to Kubernetes
Even after running kubectl delete ingress <ingress_name> --force --grace-period=0, the Ingress is not removed.
Solution:
This usually happens because a finalizer on the Ingress is blocking deletion. Remove the finalizers by patching the object:
kubectl patch ingress <name-of-the-ingress> -n <your-namespace> -p '{"metadata":{"finalizers":[]}}' --type=merge
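To confirm that a lingering finalizer is the culprit before patching, you can inspect the object's metadata (same placeholders as above):
kubectl get ingress <name-of-the-ingress> -n <your-namespace> -o jsonpath='{.metadata.finalizers}'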
You need a throwaway Pod with curl available to test connectivity from inside the cluster.
Solution:
You can create a new Pod with the curl image and open a shell in it using the following command:
kubectl run curlpod --image=curlimages/curl -i --tty -- sh
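Inside the shell you can now run your tests; for example, a connectivity check against an in-cluster Service (my-service and default are placeholders for your own Service name and namespace):
curl -v http://my-service.default.svc.cluster.local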
Once you’ve finished testing, press Ctrl+D to exit the shell. Because a Pod created with kubectl run defaults to the restart policy Always, the container simply restarts and the Pod keeps running (hence the RESTARTS count below). You can check the status of the Pod using:
$ kubectl get pod curlpod
NAME      READY   STATUS    RESTARTS   AGE
curlpod   1/1     Running   1          72s
The Pod is still there!
You can re-enter the Pod using the kubectl exec command:
kubectl exec -it curlpod -- sh
Or reattach to the container's running process with kubectl attach:
kubectl attach curlpod -c curlpod -i -t
Or, you can delete the Pod with the kubectl delete pod command:
kubectl delete po curlpod
You need to remove a node from the cluster safely.
Solution:
Cordon the node so that no new Pods are scheduled on it, drain it to evict the running Pods, and then delete it:
kubectl cordon <node_name>
kubectl drain <node_name>
kubectl delete node <node_name>
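In practice, kubectl drain often refuses to proceed because of DaemonSet-managed Pods or Pods using emptyDir volumes; the usual invocation adds flags to handle both (on older clients the second flag is spelled --delete-local-data):
kubectl drain <node_name> --ignore-daemonsets --delete-emptydir-data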
You want to temporarily take a node out of scheduling, for example for maintenance, without removing it.
Solution:
Mark the node as unschedulable and verify its status:
kubectl cordon <node-name>
kubectl get nodes
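While the node is cordoned, it is reported as SchedulingDisabled; illustrative output (names and versions will differ in your cluster):
NAME     STATUS                     ROLES    AGE   VERSION
node-1   Ready,SchedulingDisabled   <none>   90d   v1.28.2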
Once the maintenance is done, make the node schedulable again so that Pods can be placed on it:
kubectl uncordon <node-name>
Pods cannot resolve DNS names.
Solution:
Check the Pod's DNS configuration and test name resolution against the in-cluster API server Service:
kubectl exec -i -t <pod_name> -- cat /etc/resolv.conf
kubectl exec -i -t <pod_name> -- nslookup kubernetes.default
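If DNS is healthy, the lookup resolves to the cluster's API server Service; typical output looks like this (the addresses are cluster-specific):
Server:    10.96.0.10
Address:   10.96.0.10#53

Name:      kubernetes.default.svc.cluster.local
Address:   10.96.0.1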
You want to find which Pods consume the most memory.
Solution:
Use kubectl top (this requires the metrics-server to be running in the cluster):
kubectl top pods -A --sort-by=memory
kubectl top pods -n <namespace> --selector=app=<app-label> --sort-by=memory
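If these commands return an error about metrics not being available, check whether metrics-server is deployed (it commonly lives in kube-system, though the namespace can vary by distribution):
kubectl get deployment metrics-server -n kube-system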
You need to scale a workload up or down.
Solution:
kubectl scale deployments <deployment-name> --replicas=<new-replicas>
kubectl scale statefulsets <stateful-set-name> --replicas=<new-replicas>
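For example, with a hypothetical Deployment called web, scaling to three replicas and checking the result would look like:
kubectl scale deployment web --replicas=3
kubectl get deployment web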
You want to list Pods sorted by age.
Solution:
kubectl get po -A --sort-by=.metadata.creationTimestamp
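kubectl only sorts in ascending order, so the oldest Pods come first; a common workaround on Linux is to reverse the lines with tac (the header row then ends up at the bottom):
kubectl get po -A --sort-by=.metadata.creationTimestamp | tac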
You want to attach an ephemeral debug container to a running Pod.
Solution:
On older clusters this feature was still in alpha:
kubectl alpha debug -it <podname> --image=busybox --target=containername
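On Kubernetes 1.20 and later the command moved out of alpha, so the equivalent is simply:
kubectl debug -it <podname> --image=busybox --target=containername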
You need to troubleshoot a Pod without disturbing the original.
Solution:
kubectl debug provides a way to create a temporary duplicate of a Pod and replace its containers with debug versions, or add new troubleshooting tools, without affecting the original Pod. This is incredibly useful for debugging issues in a live environment without impacting the running state of your application:
kubectl debug pod/myapp-pod -it --copy-to=myapp-debug --container=myapp-container --image=busybox
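The copy keeps running after you exit the session, so remember to clean it up when you are done:
kubectl delete pod myapp-debug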
You want to query the Kubernetes API directly, without kubectl.
Solution:
curl -X GET https://<kubernetes-api-server>/api/v1/namespaces/default/pods \
-H "Authorization: Bearer <your-access-token>" \
-H 'Accept: application/json'
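Note that curl also needs to trust the API server's TLS certificate (via --cacert or, for quick tests only, -k). From inside a Pod, the service account token and CA bundle are mounted automatically, so a self-contained version looks like this:
APISERVER=https://kubernetes.default.svc
SA=/var/run/secrets/kubernetes.io/serviceaccount
TOKEN=$(cat ${SA}/token)
curl --cacert ${SA}/ca.crt -H "Authorization: Bearer ${TOKEN}" \
  ${APISERVER}/api/v1/namespaces/default/pods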
You want to create a ConfigMap from a file, stored under a custom key.
Solution:
Create a file value.txt containing the text:
Hello world
Then create the ConfigMap, using greeting as the key:
kubectl create configmap test-cm --from-file=greeting=value.txt --namespace test
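You can verify the result; the data section should carry the file contents under the chosen key:
$ kubectl get configmap test-cm -n test -o jsonpath='{.data.greeting}'
Hello world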
You want to generate a Deployment manifest without actually creating the resource.
Solution:
kubectl create deployment sample-deployment --image=busybox -n test --replicas=5 --dry-run=client -o yaml > sample-deployment.yaml
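After adjusting the generated file to your needs, create the Deployment from it:
kubectl apply -f sample-deployment.yaml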
You want to generate a Pod manifest without actually creating the Pod.
Solution:
kubectl run nginx --image=nginx --restart=Never --port=80 --dry-run=client -o yaml
You want to change the image of a running Pod's container.
Solution:
kubectl set image pod/nginx nginx=nginx:1.15-alpine
You want to read a Pod's container image from the command line.
Solution:
kubectl get po nginx -o jsonpath='{.spec.containers[*].image}{"\n"}'
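After the image update from the previous tip, this would print:
nginx:1.15-alpine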
You need a throwaway Pod that stays up long enough to work in.
Solution:
kubectl run busybox --image=busybox --restart=Never -- /bin/sh -c "sleep 3600"
You want a one-off Pod that is cleaned up automatically after its command finishes.
Solution:
Use the --rm flag:
kubectl run busybox --image=nginx --restart=Never -it --rm -- echo "How are you"
You want to see what API calls kubectl makes under the hood.
Solution:
Use the --v flag to raise the log verbosity: at --v=7 kubectl logs the HTTP request headers, at --v=8 the request contents as well, and at --v=9 the full request contents without truncation.
kubectl get po nginx --v=7
kubectl get po nginx --v=8
kubectl get po nginx --v=9
You want a custom tabular view of Pod names and container states.
Solution:
kubectl get po -o custom-columns='POD_NAME:.metadata.name,POD_STATUS:.status.containerStatuses[*].state'
A container keeps crashing and you need the logs from its previous run.
Solution:
kubectl logs busybox -c busybox2 --previous
You want to keep ordinary workloads off certain nodes (for example, GPU nodes) and admit only Pods that explicitly tolerate them.
Solution:
Taint the node:
kubectl taint nodes <node-name> gpu=true:NoSchedule
This taint prevents any new Pods from being scheduled on the node unless they tolerate the gpu=true taint.
Tolerations are applied to Pods and indicate that the Pod may be scheduled on nodes with matching taints. Note that a toleration only permits scheduling on a tainted node; it does not force the Pod onto one. If you need certain Pods to land specifically on those nodes, combine the toleration with node affinity or a nodeSelector.
apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod
spec:
  containers:
  - name: my-app
    image: my-app-image
  tolerations:
  - key: "gpu"
    operator: "Exists"
    effect: "NoSchedule"
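You can confirm which taints are in effect on the node with a quick describe (same placeholder node name as above):
kubectl describe node <node-name> | grep Taints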
You want per-container resource usage rather than per-Pod totals.
Solution:
kubectl top pod --containers
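Illustrative output, assuming the curlpod from earlier (values vary):
POD       NAME      CPU(cores)   MEMORY(bytes)
curlpod   curlpod   1m           1Mi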
You want kubectl apply to delete live resources that have been removed from your manifests.
Solution:
kubectl apply -f <directory> --prune -l app=<label>
Be careful: --prune deletes every object that matches the selector but is no longer present in the supplied files.