kubectl completion bash | sudo tee /etc/bash_completion.d/kubectl
alias k=kubectl
kubeadm simplifies the installation of a Kubernetes cluster
sudo tee -a /etc/modules-load.d/containerd.conf <<EOF
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system
sudo apt-get update && sudo apt-get install -y containerd
sudo mkdir -p /etc/containerd/
containerd config default | sudo tee /etc/containerd/config.toml
sudo swapoff -a
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
sudo apt-get install -y apt-transport-https curl
sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet=1.20.1-00 kubeadm=1.20.1-00 kubectl=1.20.1-00
sudo apt-mark hold kubelet kubeadm kubectl
Master
kubeadm init --pod-network-cidr=192.168.0.0/16
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
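After init, set up kubectl access for your user (these steps are printed by kubeadm init itself):
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config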
kubeadm token create --print-join-command
Which flag can you use with kubeadm to supply a custom configuration file? --config
What is a Namespace? - A virtual Kubernetes cluster.
Stacked etcd = etcd running on the same node as control plane
Draining a node:
kubectl drain <node-name> --ignore-daemonsets
kubectl uncordon <node-name>
Upgrade cluster (on the control plane node) - upgrade kubeadm first, then the cluster, then kubelet/kubectl:
sudo apt-get install -y --allow-change-held-packages kubeadm=1.20.2-00
kubeadm upgrade plan
sudo kubeadm upgrade apply v1.20.2
sudo apt-get install -y --allow-change-held-packages kubelet=1.20.2-00 kubectl=1.20.2-00
kubeadm upgrade node (run on worker nodes instead of kubeadm upgrade apply)
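Worker-node upgrade sequence (a sketch; versions as above, <node-name> is a placeholder):
kubectl drain <node-name> --ignore-daemonsets   # from the control plane
sudo apt-get install -y --allow-change-held-packages kubeadm=1.20.2-00
sudo kubeadm upgrade node
sudo apt-get install -y --allow-change-held-packages kubelet=1.20.2-00
sudo systemctl daemon-reload && sudo systemctl restart kubelet
kubectl uncordon <node-name>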
Backing up etcd with etcdctl
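A snapshot-save sketch (endpoint and cert paths assumed, matching the verify command below):
ETCDCTL_API=3 etcdctl snapshot save backup.db \
  --endpoints=https://10.0.1.101:2379 \
  --cacert=/home/cloud_user/etcd-certs/etcd-ca.pem \
  --cert=/home/cloud_user/etcd-certs/etcd-server.crt \
  --key=/home/cloud_user/etcd-certs/etcd-server.key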
etcdctl snapshot restore - creates a new logical cluster
Verify connectivity:
ETCDCTL_API=3 etcdctl get cluster.name \
  --endpoints=https://10.0.1.101:2379 \
  --cacert=/home/cloud_user/etcd-certs/etcd-ca.pem \
  --cert=/home/cloud_user/etcd-certs/etcd-server.crt \
  --key=/home/cloud_user/etcd-certs/etcd-server.key
ETCDCTL_API=3 etcdctl snapshot restore backup.db \
  --initial-cluster="etcd-restore=https://10.0.1.101:2380" \
  --initial-advertise-peer-urls https://10.0.1.101:2380 \
  --name etcd-restore \
  --data-dir /var/lib/etcd
chown -R etcd:etcd /var/lib/etcd/
Quick creation of YAML:
kubectl create deployment my-dep --image=nginx --dry-run=client -o yaml
--record flag stores the kubectl command used as an annotation on the object
--- RBAC
Role / ClusterRole = objects defining a set of permissions
RoleBinding / ClusterRoleBinding = objects binding a Role/ClusterRole to users, groups, or service accounts
service account = account used by container processes within pods to authenticate with the k8s API. Service accounts can be bound to ClusterRoles via ClusterRoleBindings.
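A quick sketch (the names my-sa / my-sa-view are illustrative; view is a built-in ClusterRole):
kubectl create serviceaccount my-sa
kubectl create clusterrolebinding my-sa-view --clusterrole=view --serviceaccount=default:my-sa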
-- Kubernetes Metrics Server is an optional addon
kubectl top pod --sort-by xxx --selector xxx
kubectl top pod --sort-by cpu
kubectl top node
Raw access:
kubectl get --raw /apis/metrics.k8s.io
ConfigMaps and Secrets can be passed to containers as environment variables or as a configuration volume; in the volume case, each top-level key appears as a file containing the values below that key (see the volume sketch after the example below).
apiVersion: v1
kind: Pod
metadata:
  name: env-pod
spec:
  containers:
  - name: busybox
    image: busybox
    command: ['sh', '-c', 'echo "configmap: $CONFIGMAPVAR secret: $SECRETVAR"']
    env:
    - name: CONFIGMAPVAR
      valueFrom:
        configMapKeyRef:
          name: my-configmap
          key: key1
    - name: SECRETVAR
      valueFrom:
        secretKeyRef:
          name: my-secret
          key: secretkey1
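For the volume case, a minimal sketch mounting the same ConfigMap (names taken from the example above; the mount path is an assumption). Each key of my-configmap appears as a file under /config:
apiVersion: v1
kind: Pod
metadata:
  name: volume-pod
spec:
  containers:
  - name: busybox
    image: busybox
    command: ['sh', '-c', 'cat /config/key1']
    volumeMounts:
    - name: config-vol
      mountPath: /config
  volumes:
  - name: config-vol
    configMap:
      name: my-configmap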
Resource requests allow you to define the amount of resources (CPU/memory) you expect a container to use. The scheduler uses that information to avoid scheduling on nodes that do not have enough available resources. Requests ONLY affect scheduling; limits are enforced at runtime. CPU is expressed in thousandths of a CPU: 250m = 1/4 CPU.
containers:
- name: nginx
  resources:
    requests:
      xxx
    limits:
      cpu: 250m
      memory: "128Mi"
Probes: livenessProbe (container is restarted if it fails), readinessProbe (pod is removed from Service endpoints while it fails), startupProbe (holds off the other probes until the app has started).
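A minimal livenessProbe sketch (path/port are assumptions for an nginx container):
containers:
- name: nginx
  image: nginx
  livenessProbe:
    httpGet:
      path: /
      port: 80
    initialDelaySeconds: 5
    periodSeconds: 10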
-- nodeSelector - match node labels
spec:
  nodeSelector:
    keylabel: "value"
spec:
  nodeName: "nodename"
static pod = pod created automatically by the kubelet from YAML manifest files located in the node's manifest path (typically /etc/kubernetes/manifests on kubeadm clusters). mirror pod = the kubelet creates a mirror pod for each static pod so its status can be seen via the API, but mirror pods cannot be managed through the API; static pods are managed directly through the kubelet (by editing the manifest files).
Scaling a deployment:
- change replica attribute in the yaml
- kubectl scale deployment.v1.apps/my-deployment --replicas=5
To check the status of a deployment: kubectl rollout status deployment/my-deployment
You can change the image with kubectl set image, e.g. kubectl set image deployment/my-deployment nginx=nginx:1.21
network policy = an object that allows you to control the flow of network communication to and from pods. It can be applied to ingress and/or egress
By default, pods are wide open; but as soon as any policy selects a pod, it becomes isolated and only whitelisted traffic is allowed.
Available selectors:
- podSelector
- namespaceSelector
- ipBlock
- ports (not a selector; restricts a rule to specific ports/protocols)
kubectl label namespace np-test team=tmp-test
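A minimal NetworkPolicy sketch (the app=db label and port 5432 are assumptions): allow ingress to pods labeled app=db in np-test only from the namespace labeled team=tmp-test above:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-team
  namespace: np-test
spec:
  podSelector:
    matchLabels:
      app: db
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          team: tmp-test
    ports:
    - protocol: TCP
      port: 5432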
Pod domain names are of the form pod-ip-address.namespace-name.pod.cluster.local, with the dots in the IP replaced by dashes (e.g. 192-168-10-10.default.pod.cluster.local).
A Service is an abstraction layer that lets clients interact with the application without needing to know anything about the underlying pods. A Service routes traffic across its pods in a load-balanced manner. Endpoints are the backend entities to which Services route traffic; there is one endpoint for each pod.
Service Types:
- ClusterIP - exposes the application inside the cluster network
- NodePort - exposes the application outside the cluster network via a static port on each node
- LoadBalancer - exposes the application externally through a cloud load balancer
- ExternalName (*not in CKA)
Service FQDN DNS: service-name.namespace.svc.cluster-domain.example. This FQDN can be used from any namespace; within the same namespace you can simply use the short service name.
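A quick check (service name my-svc is an assumption; the default cluster domain is usually cluster.local):
kubectl expose deployment my-deployment --port 80 --name my-svc
kubectl run tmp --rm -it --image=busybox -- nslookup my-svc.default.svc.cluster.local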
Volume types
- hostPath
- emptyDir
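A minimal sketch combining both volume types (mount paths and the hostPath directory are assumptions):
apiVersion: v1
kind: Pod
metadata:
  name: volume-pod-test
spec:
  containers:
  - name: busybox
    image: busybox
    command: ['sh', '-c', 'ls /host-data /scratch && sleep 3600']
    volumeMounts:
    - name: host-data
      mountPath: /host-data
    - name: scratch
      mountPath: /scratch
  volumes:
  - name: host-data
    hostPath:
      path: /etc/hostdata
      type: Directory
  - name: scratch
    emptyDir: {}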
#########################
kubectl run nginx --image=nginx --restart=Never
kubectl delete po nginx --grace-period=0 --force
k get po redis -w # watch for changes
kubectl get po nginx -o jsonpath='{.spec.containers[*].image}{"\n"}'
kubectl run busybox --image=busybox --restart=Never -- ls
kubectl logs busybox -p # previous logs
k run --image busybox busybox --restart=Never -- sleep 3600
kubectl get pods --sort-by=.metadata.name
kubectl exec busybox -c busybox3 -- ls # exec in a specific container of a multi-container pod
kubectl get pods --show-labels
kubectl get pods -l env=dev
kubectl get pods -l 'env in (dev,prod)'
k create deploy deploy1 --image=nginx -oyaml --dry-run=client
k run tmp --rm --image=busybox -it -- wget -O- google.com