CKAD Questions

Kubernetes practice questions

Core Concepts

  1. Create a namespace called 'mynamespace' and a pod with image nginx called nginx in this namespace
  2. Create the pod that was just described using YAML
  3. Create a busybox pod (using kubectl command) that runs the command "env". Run it and see the output
  4. Create a busybox pod (using YAML) that runs the command "env". Run it and see the output
  5. Get the YAML for a new namespace called 'myns' without creating it
  6. Create the YAML for a new ResourceQuota called 'myrq' with hard limits of 1 CPU, 1G memory and 2 pods without creating it
  7. Get pods on all namespaces
  8. Create a pod with image nginx called nginx and expose traffic on port 80
  9. Change the pod's image to nginx:1.24.0. Observe that the container will be restarted as soon as the image gets pulled
  10. Get the IP of the nginx pod created in the previous step, then use a temp busybox pod to wget its '/'
  11. Get pod's YAML
  12. Get information about the pod, including details about potential issues (e.g. pod hasn't started)
  13. Get pod logs
  14. If pod crashed and restarted, get logs about the previous instance
  15. Execute a simple shell on the nginx pod
  16. Create a busybox pod that echoes 'hello world' and then exits
  17. Do the same, but have the pod deleted automatically when it's completed
  18. Create an nginx pod and set an env value as 'var1=val1'. Check the env value existence within the pod
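A few hedged one-liners for the items above (pod and namespace names follow the question text; not every item is covered):

kubectl create namespace mynamespace
kubectl run nginx --image=nginx -n mynamespace
kubectl run busybox --image=busybox --restart=Never -it --rm -- env
kubectl create namespace myns --dry-run=client -o yaml
kubectl create quota myrq --hard=cpu=1,memory=1G,pods=2 --dry-run=client -o yaml
kubectl get pods --all-namespaces
kubectl set image pod/nginx nginx=nginx:1.24.0
kubectl logs nginx -p
kubectl exec -it nginx -- /bin/sh
kubectl run busybox --image=busybox -it --rm --restart=Never -- sh -c 'echo hello world'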

Multi-container pods

  1. Create a Pod with two containers, both with image busybox and command "echo hello; sleep 3600". Connect to the second container and run 'ls'
  2. Create a pod with an nginx container exposed on port 80. Add a busybox init container which creates a page using 'echo "Test" > /work-dir/index.html'. Make a volume of type emptyDir and mount it in both containers. For the nginx container, mount it on "/usr/share/nginx/html" and for the init container, mount it on "/work-dir". When done, get the IP of the created pod, then create a busybox pod and run "wget -O- IP"
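A minimal sketch for item 1, assuming container names busybox1 and busybox2 (any names work):

apiVersion: v1
kind: Pod
metadata:
  name: busybox
spec:
  containers:
  - name: busybox1
    image: busybox
    args: ["sh", "-c", "echo hello; sleep 3600"]
  - name: busybox2
    image: busybox
    args: ["sh", "-c", "echo hello; sleep 3600"]

kubectl exec -it busybox -c busybox2 -- ls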

Pod Design

  1. Create 3 pods with names nginx1,nginx2,nginx3. All of them should have the label app=v1
  2. Show all labels of the pods
  3. Change the labels of pod 'nginx2' to be app=v2
  4. Get the label 'app' for the pods (show a column with APP labels)
  5. Get only the 'app=v2' pods
  6. Add a new label tier=web to all pods having 'app=v2' or 'app=v1' labels
  7. Add an annotation 'owner: marketing' to all pods having 'app=v2' label
  8. Check the annotations for pod nginx1
  9. Remove the annotations for these three pods
  10. Remove these pods to have a clean state in your cluster (commands sketched below)
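Possible answers using standard kubectl label/annotate syntax:

kubectl run nginx1 --image=nginx --labels=app=v1
kubectl run nginx2 --image=nginx --labels=app=v1
kubectl run nginx3 --image=nginx --labels=app=v1
kubectl get pods --show-labels
kubectl label pod nginx2 app=v2 --overwrite
kubectl get pods -L app
kubectl get pods -l app=v2
kubectl label pod -l 'app in (v1,v2)' tier=web
kubectl annotate pod -l app=v2 owner=marketing
kubectl describe pod nginx1 | grep -i annotations
kubectl annotate pod nginx1 nginx2 nginx3 owner-
kubectl delete pod nginx1 nginx2 nginx3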

Pod Placement

  1. Create a pod that will be deployed to a Node that has the label 'accelerator=nvidia-tesla-p100'
  2. Taint a node with key tier and value frontend with the effect NoSchedule. Then, create a pod that tolerates this taint.
  3. Create a pod that will be placed on node controlplane. Use nodeSelector and tolerations.
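Sketches for these items; the node name node01 is an assumption, and the YAML fragments go inside the pod spec:

kubectl taint node node01 tier=frontend:NoSchedule

# item 1, in the pod spec:
nodeSelector:
  accelerator: nvidia-tesla-p100

# item 2, in the pod spec:
tolerations:
- key: tier
  operator: Equal
  value: frontend
  effect: NoSchedule

# item 3: combine a nodeSelector on kubernetes.io/hostname: controlplane with a
# toleration for the node-role.kubernetes.io/control-plane:NoSchedule taint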

Deployments

kubernetes.io > Documentation > Concepts > Workloads > Workload Resources > Deployments

  1. Create a deployment with image nginx:1.18.0, called nginx, having 2 replicas, defining port 80 as the port that this container exposes (don't create a service for this deployment)
  2. View the YAML of this deployment
  3. View the YAML of the replica set that was created by this deployment
  4. Get the YAML for one of the pods
  5. Check how the deployment rollout is going
  6. Update the nginx image to nginx:1.19.8
  7. Check the rollout history and confirm that the replicas are OK
  8. Undo the latest rollout and verify that new pods have the old image (nginx:1.18.0)
  9. Do an on-purpose update of the deployment with a wrong image nginx:1.91
  10. Verify that something's wrong with the rollout
  11. Return the deployment to the second revision (number 2) and verify the image is nginx:1.19.8
  12. Check the details of the fourth revision (number 4)
  13. Scale the deployment to 5 replicas
  14. Autoscale the deployment, pods between 5 and 10, targeting CPU utilization at 80%
  15. Pause the rollout of the deployment
  16. Update the image to nginx:1.19.9 and check that there's nothing going on, since we paused the rollout
  17. Resume the rollout and check that the nginx:1.19.9 image has been applied
  18. Delete the deployment and the horizontal pod autoscaler you created
  19. Implement a canary deployment by running two instances of nginx labeled version=v1 and version=v2, so that the load is balanced at a 75%-25% ratio (see the sketch below)
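One-liners for most items (for item 19, the usual approach is two deployments whose pods share a service selector, sized 3:1 for the 75%-25% split):

kubectl create deployment nginx --image=nginx:1.18.0 --replicas=2 --port=80
kubectl rollout status deployment nginx
kubectl set image deployment nginx nginx=nginx:1.19.8
kubectl rollout history deployment nginx
kubectl rollout undo deployment nginx
kubectl rollout undo deployment nginx --to-revision=2
kubectl rollout history deployment nginx --revision=4
kubectl scale deployment nginx --replicas=5
kubectl autoscale deployment nginx --min=5 --max=10 --cpu-percent=80
kubectl rollout pause deployment nginx
kubectl rollout resume deployment nginx
kubectl delete deployment nginx && kubectl delete hpa nginx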

Jobs

  1. Create a job named pi with image perl:5.34 that runs the command with arguments "perl -Mbignum=bpi -wle 'print bpi(2000)'"
  2. Wait till it's done, get the output
  3. Create a job with the image busybox that executes the command 'echo hello;sleep 30;echo world'
  4. Follow the logs for the pod (you'll wait for 30 seconds)
  5. See the status of the job, describe it and see the logs
  6. Delete the job
  7. Create a job but ensure that it will be automatically terminated by kubernetes if it takes more than 30 seconds to execute
  8. Create the same job, make it run 5 times, one after the other. Verify its status and delete it
  9. Create the same job, but make it run 5 parallel times
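Sketches for items 1-6; items 7-9 are YAML edits noted in the comments:

kubectl create job pi --image=perl:5.34 -- perl -Mbignum=bpi -wle 'print bpi(2000)'
kubectl wait --for=condition=complete job/pi && kubectl logs job/pi
kubectl create job busybox --image=busybox -- /bin/sh -c 'echo hello;sleep 30;echo world'
kubectl logs -f job/busybox
kubectl describe job busybox
kubectl delete job busybox
# items 7-9: generate the job YAML with --dry-run=client -o yaml, then set
#   spec.activeDeadlineSeconds: 30   (item 7)
#   spec.completions: 5              (item 8)
#   spec.parallelism: 5              (item 9)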

Cron jobs

kubernetes.io > Documentation > Tasks > Run Jobs > Running Automated Tasks with a CronJob

  1. Create a cron job with image busybox that runs on a schedule of "*/1 * * * *" and writes 'date; echo Hello from the Kubernetes cluster' to standard output
  2. See its logs and delete it
  3. Create the same cron job again, and watch its status. Once it has run, check which job was created by the cron job. Check the job's log, and delete the cron job
  4. Create a cron job with image busybox that runs every minute and writes 'date; echo Hello from the Kubernetes cluster' to standard output. The cron job should be terminated if it takes more than 17 seconds to start execution after its scheduled time (i.e. the job missed its scheduled time).
  5. Create a cron job with image busybox that runs every minute and writes 'date; echo Hello from the Kubernetes cluster' to standard output. The cron job should be terminated if it successfully starts but takes more than 12 seconds to complete execution.
  6. Create a job from the cron job
Tip: export do="--dry-run=client -o yaml" lets you append $do to kubectl commands when generating manifests.
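Sketches; items 4 and 5 are YAML edits noted in the comments:

kubectl create cronjob busybox --image=busybox --schedule="*/1 * * * *" -- /bin/sh -c 'date; echo Hello from the Kubernetes cluster'
kubectl get cronjob,job --watch
kubectl create job myjob --from=cronjob/busybox   # item 6; the job name is arbitrary
# item 4: set spec.startingDeadlineSeconds: 17 in the cron job spec
# item 5: set spec.jobTemplate.spec.activeDeadlineSeconds: 12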

ConfigMaps

kubernetes.io > Documentation > Tasks > Configure Pods and Containers > Configure a Pod to Use a ConfigMap

  1. Create a configmap named config with values foo=lala,foo2=lolo
  2. Display its values
  3. Create and display a configmap from a file. Create the file with:
echo -e "foo3=lili\nfoo4=lele" > config.txt
  4. Create and display a configmap from a .env file. Create the file with:
echo -e "var1=val1\n# this is a comment\n\nvar2=val2\n#anothercomment" > config.env
  5. Create and display a configmap from a file, giving the key 'special'. Create the file with:
echo -e "var3=val3\nvar4=val4" > config4.txt
  6. Create a configMap called 'options' with the value var5=val5. Create a new nginx pod that loads the value from variable 'var5' in an env variable called 'option'
  7. Create a configMap 'anotherone' with values 'var6=val6', 'var7=val7'. Load this configMap as env variables into a new nginx pod
  8. Create a configMap 'cmvolume' with values 'var8=val8', 'var9=val9'. Load this as a volume inside an nginx pod on path '/etc/lala'. Create the pod and 'ls' into the '/etc/lala' directory.
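Sketches (the configmap2/3/4 names are assumptions; items 6-8 also need pod YAML edits noted in the comments):

kubectl create configmap config --from-literal=foo=lala --from-literal=foo2=lolo
kubectl get configmap config -o yaml
kubectl create configmap configmap2 --from-file=config.txt
kubectl create configmap configmap3 --from-env-file=config.env
kubectl create configmap configmap4 --from-file=special=config4.txt
kubectl create configmap options --from-literal=var5=val5
# item 6: in the container spec, use env with valueFrom.configMapKeyRef (name: options, key: var5)
# item 7: use envFrom with configMapRef (name: anotherone)
# item 8: mount a configMap volume (name: cmvolume) at mountPath /etc/lala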

SecurityContext

kubernetes.io > Documentation > Tasks > Configure Pods and Containers > Configure a Security Context for a Pod or Container

  1. Create the YAML for an nginx pod that runs with the user ID 101. No need to create the pod
  2. Create the YAML for an nginx pod that has the capabilities "NET_ADMIN", "SYS_TIME" added to its single container
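A minimal sketch covering both items (generate the skeleton with kubectl run nginx --image=nginx --dry-run=client -o yaml, then edit):

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  securityContext:
    runAsUser: 101                       # item 1: pod-level user ID
  containers:
  - name: nginx
    image: nginx
    securityContext:
      capabilities:
        add: ["NET_ADMIN", "SYS_TIME"]   # item 2: container-level capabilities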

Resource requests and limits

kubernetes.io > Documentation > Tasks > Configure Pods and Containers > Assign CPU Resources to Containers and Pods

  1. Create an nginx pod with requests cpu=100m,memory=256Mi and limits cpu=200m,memory=512Mi
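One way to do it, assuming a recent kubectl (newer versions dropped the old --requests/--limits flags on kubectl run): generate the YAML, then add a resources block under the container.

kubectl run nginx --image=nginx --dry-run=client -o yaml > pod.yaml
# then, under the container:
resources:
  requests:
    cpu: 100m
    memory: 256Mi
  limits:
    cpu: 200m
    memory: 512Mi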

Limit Ranges

kubernetes.io > Documentation > Concepts > Policies > Limit Ranges (https://kubernetes.io/docs/concepts/policy/limit-range/)

  1. Create a namespace named limitrange with a LimitRange that limits pod memory to a max of 500Mi and min of 100Mi
  2. Describe the namespace limitrange
  3. Create an nginx pod that requests 250Mi of memory in the limitrange namespace
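A sketch, assuming a LimitRange named memory-limit (the name is arbitrary):

kubectl create namespace limitrange

# limitrange.yaml (apply with kubectl apply -f limitrange.yaml):
apiVersion: v1
kind: LimitRange
metadata:
  name: memory-limit
  namespace: limitrange
spec:
  limits:
  - type: Pod
    max:
      memory: 500Mi
    min:
      memory: 100Mi

kubectl describe namespace limitrange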

Resource Quotas

kubernetes.io > Documentation > Concepts > Policies > Resource Quotas (https://kubernetes.io/docs/concepts/policy/resource-quotas/)

  1. Create ResourceQuota in namespace one with hard requests cpu=1, memory=1Gi and hard limits cpu=2, memory=2Gi.
  2. Attempt to create a pod with resource requests cpu=2, memory=3Gi and limits cpu=3, memory=4Gi in namespace one
  3. Create a pod with resource requests cpu=0.5, memory=1Gi and limits cpu=1, memory=2Gi in namespace one
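A sketch for items 1 and the verification step (the quota name myquota is an assumption):

kubectl create namespace one
kubectl create quota myquota -n one --hard=requests.cpu=1,requests.memory=1Gi,limits.cpu=2,limits.memory=2Gi
kubectl describe quota -n one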

Secrets

kubernetes.io > Documentation > Concepts > Configuration > Secrets | kubernetes.io > Documentation > Tasks > Inject Data Into Applications > Distribute Credentials Securely Using Secrets

  1. Create a secret called mysecret with the values password=mypass
  2. Create a secret called mysecret2 that gets its key/value from a file. Create the file called username with the value admin:
echo -n admin > username
  3. Get the value of mysecret2
  4. Create an nginx pod that mounts the secret mysecret2 in a volume on path /etc/foo
  5. Delete the pod you just created and mount the variable 'username' from secret mysecret2 onto a new nginx pod in an env variable called 'USERNAME'
  6. Create a Secret named 'ext-service-secret' in the namespace 'secret-ops'. Then, provide the key-value pair API_KEY=LmLHbYhsgWZwNifiqaRorH8T as a literal.
  7. Create a Pod named 'consumer' with the image 'nginx' in the namespace 'secret-ops' and consume the Secret as an environment variable. Then, open an interactive shell to the Pod, and print all environment variables.
  8. Create a Secret named 'my-secret' of type 'kubernetes.io/ssh-auth' in the namespace 'secret-ops'. Define a single key named 'ssh-privatekey', and point it to the file 'id_rsa' in this directory.
  9. Create a Pod named 'consumer' with the image 'nginx' in the namespace 'secret-ops', and consume the Secret as a Volume. Mount the Secret as a Volume to the path /var/app with read-only access. Open an interactive shell to the Pod, and render the contents of the file.
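Sketches for the creation/read items; the consumption items need pod YAML (env valueFrom.secretKeyRef, or a secret volume with readOnly: true):

kubectl create secret generic mysecret --from-literal=password=mypass
echo -n admin > username
kubectl create secret generic mysecret2 --from-file=username
kubectl get secret mysecret2 -o jsonpath='{.data.username}' | base64 -d
kubectl create secret generic ext-service-secret -n secret-ops --from-literal=API_KEY=LmLHbYhsgWZwNifiqaRorH8T
kubectl create secret generic my-secret -n secret-ops --type=kubernetes.io/ssh-auth --from-file=ssh-privatekey=id_rsa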

ServiceAccounts

kubernetes.io > Documentation > Tasks > Configure Pods and Containers > Configure Service Accounts for Pods

  1. See all the service accounts of the cluster in all namespaces
  2. Create a new serviceaccount called 'myuser'
  3. Create an nginx pod that uses 'myuser' as a service account
  4. Generate an API token for the service account 'myuser'
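Possible answers (kubectl create token requires kubectl and a cluster at 1.24 or newer):

kubectl get serviceaccounts --all-namespaces
kubectl create serviceaccount myuser
# item 3: in the pod spec, set serviceAccountName: myuser
kubectl create token myuser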

Observability (18%)

Liveness, readiness and startup probes

kubernetes.io > Documentation > Tasks > Configure Pods and Containers > Configure Liveness, Readiness and Startup Probes

  1. Create an nginx pod with a liveness probe that just runs the command 'ls'. Save its YAML in pod.yaml. Run it, check its probe status, delete it.
  2. Modify the pod.yaml file so that liveness probe starts kicking in after 5 seconds whereas the interval between probes would be 5 seconds. Run it, check the probe, delete it.
  3. Create an nginx pod (that includes port 80) with an HTTP readinessProbe on path '/' on port 80. Again, run it, check the readinessProbe, delete it.
  4. Lots of pods are running in the qa, alan, test and production namespaces. All of these pods are configured with a liveness probe. List all pods whose liveness probes have failed, in the format <namespace>/<pod name>, one per line.
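Container-spec fragments for items 1-3 (add them to YAML generated with --dry-run=client -o yaml):

livenessProbe:
  exec:
    command: ["ls"]
  initialDelaySeconds: 5   # item 2: start probing after 5 seconds
  periodSeconds: 5         # item 2: probe every 5 seconds
readinessProbe:            # item 3
  httpGet:
    path: /
    port: 80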

Logging

  1. Create a busybox pod that runs i=0; while true; do echo "$i: $(date)"; i=$((i+1)); sleep 1; done. Check its logs
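One way, assuming the pod is simply named busybox:

kubectl run busybox --image=busybox -- sh -c 'i=0; while true; do echo "$i: $(date)"; i=$((i+1)); sleep 1; done'
kubectl logs busybox -f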

Debugging

  1. Create a busybox pod that runs 'ls /notexist'. Determine if there's an error (of course there is), see it. In the end, delete the pod
  2. Create a busybox pod that runs 'notexist'. Determine if there's an error (of course there is), see it. In the end, delete the pod forcefully with a 0 grace period
  3. Get CPU/memory utilization for nodes (metrics-server must be running)
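A hedged sketch for all three items:

kubectl run busybox --image=busybox --restart=Never -- ls /notexist
kubectl logs busybox                       # shows the error
kubectl describe pod busybox
kubectl delete pod busybox --force --grace-period=0
kubectl top nodes                          # item 3; needs metrics-server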

Services and Networking (13%)

  1. Create a pod with image nginx called nginx and expose its port 80
  2. Confirm that ClusterIP has been created. Also check endpoints
  3. Get service's ClusterIP, create a temp busybox pod and 'hit' that IP with wget
  4. Convert the ClusterIP to NodePort for the same service and find the NodePort port. Hit service using Node's IP. Delete the service and the pod at the end.
  5. Create a deployment called foo using image 'dgkanatsios/simpleapp' (a simple server that returns hostname) and 3 replicas. Label it as 'app=foo'. Declare that containers in this pod will accept traffic on port 8080 (do NOT create a service yet)
  6. Get the pod IPs. Create a temp busybox pod and try hitting them on port 8080
  7. Create a service that exposes the deployment on port 6262. Verify its existence, check the endpoints
  8. Create a temp busybox pod and connect via wget to foo service. Verify that each time there's a different hostname returned. Delete deployment and services to cleanup the cluster
  9. Create an nginx deployment of 2 replicas, expose it via a ClusterIP service on port 80. Create a NetworkPolicy so that only pods with labels 'access: granted' can access the pods in this deployment and apply it

kubernetes.io > Documentation > Concepts > Services, Load Balancing, and Networking > Network Policies

Note that network policies may not be enforced by default, depending on your k8s implementation. E.g. Azure AKS by default won't have policy enforcement; the cluster must be created with explicit support for network policies: https://docs.microsoft.com/en-us/azure/aks/use-network-policies#overview-of-network-policy
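Hedged sketches for several of the items above (the pod selectors in the NetworkPolicy fragment are assumptions based on the default app label):

kubectl run nginx --image=nginx --port=80 --expose
kubectl get svc,endpoints nginx
kubectl edit svc nginx             # item 4: change spec.type to NodePort
kubectl create deployment foo --image=dgkanatsios/simpleapp --replicas=3 --port=8080
kubectl expose deployment foo --port=6262 --target-port=8080
kubectl run busybox --rm -it --image=busybox -- wget -O- foo:6262

# item 9, NetworkPolicy spec:
spec:
  podSelector:
    matchLabels:
      app: nginx
  ingress:
  - from:
    - podSelector:
        matchLabels:
          access: granted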

State Persistence (8%)

kubernetes.io > Documentation > Tasks > Configure Pods and Containers > Configure a Pod to Use a Volume for Storage
kubernetes.io > Documentation > Tasks > Configure Pods and Containers > Configure a Pod to Use a PersistentVolume for Storage

Define volumes

  1. Create a busybox pod with two containers, each using the image busybox and running the 'sleep 3600' command. Make both containers mount an emptyDir at '/etc/foo'. Connect to the second busybox, write the first column of the '/etc/passwd' file to '/etc/foo/passwd'. Connect to the first busybox and write the '/etc/foo/passwd' file to standard output. Delete the pod.
  2. Create a PersistentVolume of 10Gi, called 'myvolume'. Make it have accessMode of 'ReadWriteOnce' and 'ReadWriteMany', storageClassName 'normal', mounted on hostPath '/etc/foo'. Save it on pv.yaml, add it to the cluster. Show the PersistentVolumes that exist on the cluster
  3. Create a PersistentVolumeClaim for this PersistentVolume, called 'mypvc', a request of 4Gi and an accessMode of ReadWriteOnce, with the storageClassName of normal, and save it on pvc.yaml. Create it on the cluster. Show the PersistentVolumeClaims of the cluster. Show the PersistentVolumes of the cluster
  4. Create a busybox pod with command 'sleep 3600', save it on pod.yaml. Mount the PersistentVolumeClaim to '/etc/foo'. Connect to the 'busybox' pod, and copy the '/etc/passwd' file to '/etc/foo/passwd'
  5. Create a second pod which is identical with the one you just created (you can easily do it by changing the 'name' property on pod.yaml). Connect to it and verify that '/etc/foo' contains the 'passwd' file. Delete pods to cleanup. Note: If you can't see the file from the second pod, can you figure out why? What would you do to fix that?
  6. Create a busybox pod with 'sleep 3600' as arguments. Copy '/etc/passwd' from the pod to your local folder
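Sketches for items 2, 3 and 6 (pv.yaml and pvc.yaml per the question text):

# pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: myvolume
spec:
  capacity:
    storage: 10Gi
  accessModes: ["ReadWriteOnce", "ReadWriteMany"]
  storageClassName: normal
  hostPath:
    path: /etc/foo

# pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: normal
  resources:
    requests:
      storage: 4Gi

kubectl cp busybox:/etc/passwd ./passwd   # item 6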

Managing Kubernetes with Helm

Note: Helm is part of the new CKAD syllabus. Here are a few examples of using Helm to manage Kubernetes.

Helm in K8s

  1. Create a basic Helm chart
  2. Run a Helm chart
  3. Find pending Helm deployments on all namespaces
  4. Uninstall a Helm release
  5. Upgrade a Helm chart
  6. Use a Helm repo
  7. Download a Helm chart from a repository
  8. Add the Bitnami repo at https://charts.bitnami.com/bitnami to Helm
  9. Write the contents of the values.yaml file of the bitnami/node chart to standard output
  10. Install the bitnami/node chart setting the number of replicas to 5
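Possible commands (the chart/release names mychart, myrelease and mynode are assumptions; for item 10, check the chart's values.yaml for the exact replica key, assumed here to be replicaCount):

helm create mychart
helm install myrelease ./mychart
helm list --all-namespaces --pending
helm uninstall myrelease
helm upgrade myrelease ./mychart
helm repo add bitnami https://charts.bitnami.com/bitnami
helm pull bitnami/node
helm show values bitnami/node
helm install mynode bitnami/node --set replicaCount=5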

Extend the Kubernetes API with CRD (CustomResourceDefinition)

Note: CRD is part of the new CKAD syllabus. Here are a few examples of installing a custom resource into the Kubernetes API by creating a CRD.

CRD in K8s

  1. Create a CustomResourceDefinition manifest file for an Operator with the following specifications:
Name: operators.stable.example.com
Group: stable.example.com
Schema: <email: string><name: string><age: integer>
Scope: Namespaced
Names: <plural: operators><singular: operator><shortNames: op>
Kind: Operator
  2. Create the CRD resource in the K8S API
  3. Create a custom object from the CRD:
Name: operator-sample
Kind: Operator
Spec:
email: [email protected]
name: operator sample
age: 30
  4. List the operators
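A sketch of the CRD manifest and the follow-up commands (the file name operator-crd.yaml is an assumption):

# operator-crd.yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: operators.stable.example.com
spec:
  group: stable.example.com
  scope: Namespaced
  names:
    plural: operators
    singular: operator
    shortNames: ["op"]
    kind: Operator
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              email:
                type: string
              name:
                type: string
              age:
                type: integer

kubectl apply -f operator-crd.yaml
kubectl get operators        # or: kubectl get op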

Define, build and modify container images

Note: This topic is part of the new CKAD syllabus. Here are a few examples of using podman to manage the life cycle of container images. Docker was the industry standard for many years, but large companies like Red Hat have moved to a new suite of open source tools: podman, skopeo and buildah, and Kubernetes has moved in the same direction. In particular, podman is meant to replace the docker command, so it makes sense to get familiar with it, although the two are largely interchangeable since they share the same syntax.

Podman basics

  1. Create a Dockerfile to deploy an Apache HTTP Server which hosts a custom main page
  2. Build and see how many layers the image consists of
  3. Run the image locally, inspect its status and logs, finally test that it responds as expected
  4. Run a command inside the pod to print out the index.html file
  5. Tag the image with the IP and port of a private local registry and then push the image to this registry
  6. Verify that the registry contains the pushed image and that you can pull it
  7. Create a container without running/starting it
  8. Export a container to output.tar file
  9. Run a pod with the image pushed to the registry
  10. Log into a remote registry server and then read the credentials from the default file
  11. Create a secret both from existing login credentials and from the CLI
  12. Create the manifest for a Pod that uses one of the two secrets just created to pull an image hosted on the relative private remote registry
  13. Clean up all the images and containers
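A rough walkthrough of the basics (the image name myhttpd and the registry address 10.0.0.5:5000 are assumptions; substitute your own):

cat > Dockerfile <<'EOF'
FROM httpd:2.4
COPY index.html /usr/local/apache2/htdocs/
EOF
podman build -t myhttpd .
podman history myhttpd                   # one line per layer
podman run -d --name web -p 8080:80 myhttpd
podman ps; podman logs web
curl http://localhost:8080
podman exec web cat /usr/local/apache2/htdocs/index.html
podman tag myhttpd 10.0.0.5:5000/myhttpd
podman push 10.0.0.5:5000/myhttpd
podman create --name staged myhttpd      # created, not started
podman export staged -o output.tar
podman login 10.0.0.5:5000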