- Create a namespace called 'mynamespace' and a pod with image nginx called nginx on this namespace
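One possible imperative approach (a sketch; verify flags against your kubectl version):

```bash
kubectl create namespace mynamespace
kubectl run nginx --image=nginx --restart=Never -n mynamespace
```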
- Create the pod that was just described using YAML
- Create a busybox pod (using kubectl command) that runs the command "env". Run it and see the output
- Create a busybox pod (using YAML) that runs the command "env". Run it and see the output
- Get the YAML for a new namespace called 'myns' without creating it
- Create the YAML for a new ResourceQuota called 'myrq' with hard limits of 1 CPU, 1G memory and 2 pods without creating it
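One way to generate the manifest without creating the object (sketch):

```bash
kubectl create quota myrq --hard=cpu=1,memory=1G,pods=2 --dry-run=client -o yaml
```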
- Get pods on all namespaces
- Create a pod with image nginx called nginx and expose traffic on port 80
- Change pod's image to nginx:1.24.0. Observe that the container will be restarted as soon as the image gets pulled
- Get the IP of the nginx pod created in the previous step, and use a temp busybox pod to wget its '/'
- Get pod's YAML
- Get information about the pod, including details about potential issues (e.g. pod hasn't started)
- Get pod logs
- If pod crashed and restarted, get logs about the previous instance
- Execute a simple shell on the nginx pod
- Create a busybox pod that echoes 'hello world' and then exits
- Do the same, but have the pod deleted automatically when it's completed
- Create an nginx pod and set an env value as 'var1=val1'. Check the env value existence within the pod
- Create a Pod with two containers, both with image busybox and command "echo hello; sleep 3600". Connect to the second container and run 'ls'
- Create a pod with an nginx container exposed on port 80. Add a busybox init container which downloads a page using 'echo "Test" > /work-dir/index.html'. Make a volume of type emptyDir and mount it in both containers. For the nginx container, mount it on "/usr/share/nginx/html" and for the initcontainer, mount it on "/work-dir". When done, get the IP of the created pod and create a busybox pod and run "wget -O- IP"
- Create 3 pods with names nginx1,nginx2,nginx3. All of them should have the label app=v1
- Show all labels of the pods
- Change the labels of pod 'nginx2' to be app=v2
- Get the label 'app' for the pods (show a column with APP labels)
- Get only the 'app=v2' pods
- Add a new label tier=web to all pods having 'app=v2' or 'app=v1' labels
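A possible one-liner using a set-based label selector (sketch):

```bash
kubectl label pods -l 'app in (v1,v2)' tier=web
```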
- Add an annotation 'owner: marketing' to all pods having 'app=v2' label
- Check the annotations for pod nginx1
- Remove the annotations for these three pods
- Remove these pods to have a clean state in your cluster
- Create a pod that will be deployed to a Node that has the label 'accelerator=nvidia-tesla-p100'
- Taint a node with key tier and value frontend with the effect NoSchedule. Then, create a pod that tolerates this taint.
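A rough sketch; 'node1' and the pod name 'frontend' are placeholders:

```bash
kubectl taint nodes node1 tier=frontend:NoSchedule

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: frontend          # hypothetical pod name
spec:
  containers:
  - name: nginx
    image: nginx
  tolerations:
  - key: "tier"
    operator: "Equal"
    value: "frontend"
    effect: "NoSchedule"
EOF
```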
- Create a pod that will be placed on node controlplane. Use nodeSelector and tolerations.
kubernetes.io > Documentation > Concepts > Workloads > Workload Resources > Deployments
- Create a deployment with image nginx:1.18.0, called nginx, having 2 replicas, defining port 80 as the port that this container exposes (don't create a service for this deployment)
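A possible imperative command (the --port and --replicas flags require a reasonably recent kubectl):

```bash
kubectl create deployment nginx --image=nginx:1.18.0 --replicas=2 --port=80
```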
- View the YAML of this deployment
- View the YAML of the replica set that was created by this deployment
- Get the YAML for one of the pods
- Check how the deployment rollout is going
- Update the nginx image to nginx:1.19.8
- Check the rollout history and confirm that the replicas are OK
- Undo the latest rollout and verify that new pods have the old image (nginx:1.18.0)
- Do an on-purpose update of the deployment with a wrong image nginx:1.91
- Verify that something's wrong with the rollout
- Return the deployment to the second revision (number 2) and verify the image is nginx:1.19.8
- Check the details of the fourth revision (number 4)
- Scale the deployment to 5 replicas
- Autoscale the deployment, pods between 5 and 10, targeting CPU utilization at 80%
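One possible command (sketch):

```bash
kubectl autoscale deployment nginx --min=5 --max=10 --cpu-percent=80
```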
- Pause the rollout of the deployment
- Update the image to nginx:1.19.9 and check that there's nothing going on, since we paused the rollout
- Resume the rollout and check that the nginx:1.19.9 image has been applied
- Delete the deployment and the horizontal pod autoscaler you created
- Implement canary deployment by running two instances of nginx marked as version=v1 and version=v2 so that the load is balanced at 75%-25% ratio
- Create a job named pi with image perl:5.34 that runs the command with arguments "perl -Mbignum=bpi -wle 'print bpi(2000)'"
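A sketch for creating the job and, once it has completed, reading its output:

```bash
kubectl create job pi --image=perl:5.34 -- perl -Mbignum=bpi -wle 'print bpi(2000)'
kubectl logs job/pi          # after the job has completed
```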
- Wait till it's done, get the output
- Create a job with the image busybox that executes the command 'echo hello;sleep 30;echo world'
- Follow the logs for the pod (you'll wait for 30 seconds)
- See the status of the job, describe it and see the logs
- Delete the job
- Create a job but ensure that it will be automatically terminated by Kubernetes if it takes more than 30 seconds to execute
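A sketch: generate the Job manifest, add activeDeadlineSeconds, then apply it:

```bash
kubectl create job busybox --image=busybox --dry-run=client -o yaml \
  -- /bin/sh -c 'echo hello;sleep 30;echo world' > job.yaml
# edit job.yaml and add "activeDeadlineSeconds: 30" under the Job's spec (same level as template)
kubectl apply -f job.yaml
```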
- Create the same job, make it run 5 times, one after the other. Verify its status and delete it
- Create the same job, but make it run 5 parallel times
kubernetes.io > Documentation > Tasks > Run Jobs > Running Automated Tasks with a CronJob
- Create a cron job with image busybox that runs on a schedule of "*/1 * * * *" and writes 'date; echo Hello from the Kubernetes cluster' to standard output
- See its logs and delete it
- Create the same cron job again, and watch the status. Once it ran, check which job ran by the created cron job. Check the log, and delete the cron job
- Create a cron job with image busybox that runs every minute and writes 'date; echo Hello from the Kubernetes cluster' to standard output. The cron job should be terminated if it takes more than 17 seconds to start execution after its scheduled time (i.e. the job missed its scheduled time).
- Create a cron job with image busybox that runs every minute and writes 'date; echo Hello from the Kubernetes cluster' to standard output. The cron job should be terminated if it successfully starts but takes more than 12 seconds to complete execution.
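A sketch covering both deadline exercises above; the exact placement of the added fields should be checked against the generated manifest:

```bash
kubectl create cronjob busybox --image=busybox --schedule="*/1 * * * *" --dry-run=client -o yaml \
  -- /bin/sh -c 'date; echo Hello from the Kubernetes cluster' > cronjob.yaml
# for the first case add "startingDeadlineSeconds: 17" under spec;
# for the second case add "activeDeadlineSeconds: 12" under spec.jobTemplate.spec
kubectl apply -f cronjob.yaml
```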
- Create a job from cronjob.
export do="--dry-run=client -oyaml"
kubernetes.io > Documentation > Tasks > Configure Pods and Containers > Configure a Pod to Use a ConfigMap
- Create a configmap named config with values foo=lala,foo2=lolo
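A possible command (sketch):

```bash
kubectl create configmap config --from-literal=foo=lala --from-literal=foo2=lolo
kubectl get configmap config -o yaml   # display its values
```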
- Display its values
- Create and display a configmap from a file
- Create the file with
echo -e "foo3=lili\nfoo4=lele" > config.txt
- Create and display a configmap from a .env file
- Create the file with the command
echo -e "var1=val1\n# this is a comment\n\nvar2=val2\n#anothercomment" > config.env
- Create and display a configmap from a file, giving the key 'special'
- Create the file with
echo -e "var3=val3\nvar4=val4" > config4.txt
- Create a configMap called 'options' with the value var5=val5. Create a new nginx pod that loads the value from variable 'var5' in an env variable called 'option'
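A rough sketch using configMapKeyRef:

```bash
kubectl create configmap options --from-literal=var5=val5

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    env:
    - name: option
      valueFrom:
        configMapKeyRef:
          name: options
          key: var5
EOF

kubectl exec nginx -- env | grep option
```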
- Create a configMap 'anotherone' with values 'var6=val6', 'var7=val7'. Load this configMap as env variables into a new nginx pod
- Create a configMap 'cmvolume' with values 'var8=val8', 'var9=val9'. Load this as a volume inside an nginx pod on path '/etc/lala'. Create the pod and 'ls' into the '/etc/lala' directory.
kubernetes.io > Documentation > Tasks > Configure Pods and Containers > Configure a Security Context for a Pod or Container
- Create the YAML for an nginx pod that runs with the user ID 101. No need to create the pod
- Create the YAML for an nginx pod that has the capabilities "NET_ADMIN", "SYS_TIME" added to its single container
kubernetes.io > Documentation > Tasks > Configure Pods and Containers > Assign CPU Resources to Containers and Pods
- Create an nginx pod with requests cpu=100m,memory=256Mi and limits cpu=200m,memory=512Mi
kubernetes.io > Documentation > Concepts > Policies > Limit Ranges (https://kubernetes.io/docs/concepts/policy/limit-range/)
- Create a namespace named limitrange with a LimitRange that limits pod memory to a max of 500Mi and min of 100Mi
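A sketch using a Container-level constraint (a type: Pod constraint is also possible); the LimitRange name is an assumption:

```bash
kubectl create namespace limitrange

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: LimitRange
metadata:
  name: limitrange          # hypothetical name
  namespace: limitrange
spec:
  limits:
  - type: Container
    max:
      memory: 500Mi
    min:
      memory: 100Mi
EOF
```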
- Describe the namespace limitrange
- Create an nginx pod that requests 250Mi of memory in the limitrange namespace
kubernetes.io > Documentation > Concepts > Policies > Resource Quotas (https://kubernetes.io/docs/concepts/policy/resource-quotas/)
- Create ResourceQuota in namespace one with hard requests cpu=1, memory=1Gi and hard limits cpu=2, memory=2Gi.
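One possible command; 'myquota' is an assumed name and the namespace 'one' must already exist:

```bash
kubectl create quota myquota -n one \
  --hard=requests.cpu=1,requests.memory=1Gi,limits.cpu=2,limits.memory=2Gi
```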
- Attempt to create a pod with resource requests cpu=2, memory=3Gi and limits cpu=3, memory=4Gi in namespace one
- Create a pod with resource requests cpu=0.5, memory=1Gi and limits cpu=1, memory=2Gi in namespace one
kubernetes.io > Documentation > Concepts > Configuration > Secrets | kubernetes.io > Documentation > Tasks > Inject Data Into Applications > Distribute Credentials Securely Using Secrets
- Create a secret called mysecret with the value password=mypass
- Create a secret called mysecret2 that gets key/value from a file
- Create a file called username with the value admin:
echo -n admin > username
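A sketch for creating the secret from that file and reading the value back:

```bash
kubectl create secret generic mysecret2 --from-file=username
kubectl get secret mysecret2 -o jsonpath='{.data.username}' | base64 -d
```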
- Get the value of mysecret2
- Create an nginx pod that mounts the secret mysecret2 in a volume on path /etc/foo
- Delete the pod you just created and mount the variable 'username' from secret mysecret2 onto a new nginx pod in env variable called 'USERNAME'
- Create a Secret named 'ext-service-secret' in the namespace 'secret-ops'. Then, provide the key-value pair API_KEY=LmLHbYhsgWZwNifiqaRorH8T as a literal.
- Consume the Secret: create a Pod named 'consumer' with the image 'nginx' in the namespace 'secret-ops' that reads the Secret as an environment variable. Then open an interactive shell to the Pod and print all environment variables.
- Create a Secret named 'my-secret' of type 'kubernetes.io/ssh-auth' in the namespace 'secret-ops'. Define a single key named 'ssh-privatekey', and point it to the file 'id_rsa' in this directory.
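A possible command (sketch):

```bash
kubectl create secret generic my-secret -n secret-ops \
  --type=kubernetes.io/ssh-auth --from-file=ssh-privatekey=id_rsa
```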
- Create a Pod named 'consumer' with the image 'nginx' in the namespace 'secret-ops', and consume the Secret as Volume. Mount the Secret as Volume to the path /var/app with read-only access. Open an interactive shell to the Pod, and render the contents of the file.
kubernetes.io > Documentation > Tasks > Configure Pods and Containers > Configure Service Accounts for Pods
- See all the service accounts of the cluster in all namespaces
- Create a new serviceaccount called 'myuser'
- Create an nginx pod that uses 'myuser' as a service account
- Generate an API token for the service account 'myuser'
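On clusters and kubectl 1.24+ a token can be requested directly (sketch):

```bash
kubectl create token myuser
```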
Liveness, readiness and startup probes
kubernetes.io > Documentation > Tasks > Configure Pods and Containers > Configure Liveness, Readiness and Startup Probes
- Create an nginx pod with a liveness probe that just runs the command 'ls'. Save its YAML in pod.yaml. Run it, check its probe status, delete it.
- Modify the pod.yaml file so that liveness probe starts kicking in after 5 seconds whereas the interval between probes would be 5 seconds. Run it, check the probe, delete it.
- Create an nginx pod (that includes port 80) with an HTTP readinessProbe on path '/' on port 80. Again, run it, check the readinessProbe, delete it.
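A rough sketch of the readiness-probe pod:

```bash
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
    readinessProbe:
      httpGet:
        path: /
        port: 80
EOF

kubectl describe pod nginx | grep -i readiness
```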
- Lots of pods are running in the qa, alan, test and production namespaces. All of these pods are configured with liveness probes. List all pods whose liveness probes have failed, in the format '<namespace>/<pod name>', one per line.
- Create a busybox pod that runs 'i=0; while true; do echo "$i: $(date)"; i=$((i+1)); sleep 1; done'. Check its logs
- Create a busybox pod that runs 'ls /notexist'. Determine if there's an error (of course there is), see it. In the end, delete the pod
- Create a busybox pod that runs 'notexist'. Determine if there's an error (of course there is), see it. In the end, delete the pod forcefully with a 0 grace period
- Get CPU/memory utilization for nodes (metrics-server must be running)
- Create a pod with image nginx called nginx and expose its port 80
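A possible one-liner; --expose creates a ClusterIP service alongside the pod:

```bash
kubectl run nginx --image=nginx --port=80 --expose
kubectl get svc,endpoints nginx       # verification, also useful for the next item
```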
- Confirm that ClusterIP has been created. Also check endpoints
- Get service's ClusterIP, create a temp busybox pod and 'hit' that IP with wget
- Convert the ClusterIP to NodePort for the same service and find the NodePort port. Hit service using Node's IP. Delete the service and the pod at the end.
- Create a deployment called foo using image 'dgkanatsios/simpleapp' (a simple server that returns hostname) and 3 replicas. Label it as 'app=foo'. Declare that containers in this pod will accept traffic on port 8080 (do NOT create a service yet)
- Get the pod IPs. Create a temp busybox pod and try hitting them on port 8080
- Create a service that exposes the deployment on port 6262. Verify its existence, check the endpoints
- Create a temp busybox pod and connect via wget to foo service. Verify that each time there's a different hostname returned. Delete deployment and services to cleanup the cluster
- Create an nginx deployment of 2 replicas, expose it via a ClusterIP service on port 80. Create a NetworkPolicy so that only pods with labels 'access: granted' can access the pods in this deployment and apply it
kubernetes.io > Documentation > Concepts > Services, Load Balancing, and Networking > Network Policies
Note that network policies may not be enforced by default, depending on your k8s implementation. E.g. Azure AKS does not enforce policies by default; the cluster must be created with explicit support for network policies: https://docs.microsoft.com/en-us/azure/aks/use-network-policies#overview-of-network-policy
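A rough sketch for the NetworkPolicy exercise; 'access-nginx' is a hypothetical policy name, and kubectl create deployment labels the pods with app=nginx:

```bash
kubectl create deployment nginx --image=nginx --replicas=2
kubectl expose deployment nginx --port=80

cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: access-nginx
spec:
  podSelector:
    matchLabels:
      app: nginx
  ingress:
  - from:
    - podSelector:
        matchLabels:
          access: granted
EOF
```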
kubernetes.io > Documentation > Tasks > Configure Pods and Containers > Configure a Pod to Use a Volume for Storage
kubernetes.io > Documentation > Tasks > Configure Pods and Containers > Configure a Pod to Use a PersistentVolume for Storage
- Create busybox pod with two containers, each one will have the image busybox and will run the 'sleep 3600' command. Make both containers mount an emptyDir at '/etc/foo'. Connect to the second busybox, write the first column of '/etc/passwd' file to '/etc/foo/passwd'. Connect to the first busybox and write '/etc/foo/passwd' file to standard output. Delete pod.
- Create a PersistentVolume of 10Gi, called 'myvolume'. Make it have accessMode of 'ReadWriteOnce' and 'ReadWriteMany', storageClassName 'normal', mounted on hostPath '/etc/foo'. Save it on pv.yaml, add it to the cluster. Show the PersistentVolumes that exist on the cluster
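A sketch of pv.yaml:

```bash
cat <<EOF > pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: myvolume
spec:
  storageClassName: normal
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  - ReadWriteMany
  hostPath:
    path: /etc/foo
EOF

kubectl apply -f pv.yaml
kubectl get pv
```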
- Create a PersistentVolumeClaim for this PersistentVolume, called 'mypvc', a request of 4Gi and an accessMode of ReadWriteOnce, with the storageClassName of normal, and save it on pvc.yaml. Create it on the cluster. Show the PersistentVolumeClaims of the cluster. Show the PersistentVolumes of the cluster
- Create a busybox pod with command 'sleep 3600', save it on pod.yaml. Mount the PersistentVolumeClaim to '/etc/foo'. Connect to the 'busybox' pod, and copy the '/etc/passwd' file to '/etc/foo/passwd'
- Create a second pod which is identical with the one you just created (you can easily do it by changing the 'name' property on pod.yaml). Connect to it and verify that '/etc/foo' contains the 'passwd' file. Delete pods to cleanup. Note: If you can't see the file from the second pod, can you figure out why? What would you do to fix that?
- Create a busybox pod with 'sleep 3600' as arguments. Copy '/etc/passwd' from the pod to your local folder
Note: Helm is part of the new CKAD syllabus. Here are a few examples of using Helm to manage Kubernetes.
- Creating a basic Helm chart
- Running a Helm chart
- Find pending Helm deployments on all namespaces
- Uninstall a Helm release
- Upgrading a Helm chart
- Using Helm repo
- Download a Helm chart from a repository
- Add the Bitnami repo at https://charts.bitnami.com/bitnami to Helm
- Write the contents of the values.yaml file of the bitnami/node chart to standard output
- Install the bitnami/node chart setting the number of replicas to 5
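A sketch covering the three repo-related items above; 'my-node' is an assumed release name, and the replica value key should be confirmed against the chart's values output:

```bash
helm repo add bitnami https://charts.bitnami.com/bitnami
helm show values bitnami/node
helm install my-node bitnami/node --set replicaCount=5
```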
Note: CRD is part of the new CKAD syllabus. Here are a few examples of installing custom resource into the Kubernetes API by creating a CRD.
- Create a CustomResourceDefinition manifest file for an Operator with the following specifications:
Name : operators.stable.example.com
Group : stable.example.com
Schema: <email: string><name: string><age: integer>
Scope: Namespaced
Names: <plural: operators><singular: operator><shortNames: op>
Kind: Operator
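A rough sketch of the manifest; the version name 'v1' is an assumption, and applying it covers the next item:

```bash
cat <<EOF > operator-crd.yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: operators.stable.example.com
spec:
  group: stable.example.com
  scope: Namespaced
  names:
    plural: operators
    singular: operator
    kind: Operator
    shortNames:
    - op
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              email:
                type: string
              name:
                type: string
              age:
                type: integer
EOF

kubectl apply -f operator-crd.yaml
```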
- Create the CRD resource in the K8S API
- Create custom object from the CRD
Name : operator-sample
Kind: Operator
Spec:
email: [email protected]
name: operator sample
age: 30
- Listing operator
Note: The topic is part of the new CKAD syllabus. Here are a few examples of using podman to manage the life cycle of container images. Docker was the industry standard for many years, but large companies like Red Hat are now moving to a new suite of open-source tools: podman, skopeo and buildah. Kubernetes has also moved in this direction. In particular, podman is meant to replace the docker command, so it makes sense to get familiar with it, although the two are largely interchangeable since they share the same syntax.
- Create a Dockerfile to deploy an Apache HTTP Server which hosts a custom main page
- Build and see how many layers the image consists of
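A sketch; 'simpleapache' is a hypothetical image tag:

```bash
podman build -t simpleapache .
podman history simpleapache     # lists the image's layers
```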
- Run the image locally, inspect its status and logs, finally test that it responds as expected
- Run a command inside the pod to print out the index.html file
- Tag the image with ip and port of a private local registry and then push the image to this registry
- Verify that the registry contains the pushed image and that you can pull it
- Create a container without running/starting it
- Export a container to output.tar file
- Run a pod with the image pushed to the registry
- Log into a remote registry server and then read the credentials from the default file
- Create a secret both from existing login credentials and from the CLI
- Create the manifest for a Pod that uses one of the two secrets just created to pull an image hosted on the relative private remote registry
- Clean up all the images and containers