Kubernetes Cheat Sheet

How to get Kubernetes secrets decoded right away

kubectl get secret my-secret-name -o json | jq '.data|map_values(@base64d)'
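
To decode just a single key instead of the whole secret, jsonpath plus base64 works as well (the key name password is only an example):

kubectl get secret my-secret-name -o jsonpath='{.data.password}' | base64 -d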

How to install Rancher Local-Path Provisioner

What does this do? If, for instance, a Bitnami Helm chart installs a database pod, that pod needs storage. With the Local-Path Provisioner in place, persistent volumes are created on the fly instead of having to be set up manually.

https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml

Here is the content of the above manifest:

apiVersion: v1
kind: Namespace
metadata:
  name: local-path-storage

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: local-path-provisioner-service-account
  namespace: local-path-storage

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: local-path-provisioner-role
rules:
  - apiGroups: [ "" ]
    resources: [ "nodes", "persistentvolumeclaims", "configmaps" ]
    verbs: [ "get", "list", "watch" ]
  - apiGroups: [ "" ]
    resources: [ "endpoints", "persistentvolumes", "pods" ]
    verbs: [ "*" ]
  - apiGroups: [ "" ]
    resources: [ "events" ]
    verbs: [ "create", "patch" ]
  - apiGroups: [ "storage.k8s.io" ]
    resources: [ "storageclasses" ]
    verbs: [ "get", "list", "watch" ]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: local-path-provisioner-bind
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: local-path-provisioner-role
subjects:
  - kind: ServiceAccount
    name: local-path-provisioner-service-account
    namespace: local-path-storage

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: local-path-provisioner
  namespace: local-path-storage
spec:
  replicas: 1
  selector:
    matchLabels:
      app: local-path-provisioner
  template:
    metadata:
      labels:
        app: local-path-provisioner
    spec:
      serviceAccountName: local-path-provisioner-service-account
      containers:
        - name: local-path-provisioner
          image: rancher/local-path-provisioner:master-head
          imagePullPolicy: IfNotPresent
          command:
            - local-path-provisioner
            - --debug
            - start
            - --config
            - /etc/config/config.json
          volumeMounts:
            - name: config-volume
              mountPath: /etc/config/
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
      volumes:
        - name: config-volume
          configMap:
            name: local-path-config

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path
provisioner: rancher.io/local-path
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: local-path-config
  namespace: local-path-storage
data:
  config.json: |-
    {
      "nodePathMap": [
        {
          "node": "DEFAULT_PATH_FOR_NON_LISTED_NODES",
          "paths": ["/opt/local-path-provisioner"]
        }
      ]
    }
  setup: |-
    #!/bin/sh
    set -eu
    mkdir -m 0777 -p "$VOL_DIR"
  teardown: |-
    #!/bin/sh
    set -eu
    rm -rf "$VOL_DIR"
  helperPod.yaml: |-
    apiVersion: v1
    kind: Pod
    metadata:
      name: helper-pod
    spec:
      containers:
      - name: helper-pod
        image: busybox
        imagePullPolicy: IfNotPresent

To deploy this manifest, run:

kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml
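
If everything worked, the provisioner pod should come up shortly (the pod name suffix will differ):

kubectl -n local-path-storage get pods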

Now set local-path as the default StorageClass:

kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
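
You can confirm the change; local-path should now be marked as (default):

kubectl get storageclass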

Finally, create a PVC that suits your needs. Since local-path is now the default StorageClass, no explicit storageClassName is needed:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
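
Note that the StorageClass uses volumeBindingMode: WaitForFirstConsumer, so the volume is only provisioned once a pod actually references the claim. A minimal test pod to trigger the binding (pod name and image are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: volume-test
spec:
  containers:
    - name: volume-test
      image: busybox
      command: ["sh", "-c", "echo hello > /data/test.txt && sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: local-pvc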

How to generate a tar from an image in a Kubernetes cluster and send it to other nodes

This might come in handy in a small setup when pulling from a remote registry does not work.

# On the source machine
sudo /var/lib/rancher/rke2/bin/ctr -n k8s.io -a /run/k3s/containerd/containerd.sock image export ~/schema.tar docker.io/confluentinc/cp-schema-registry:7.5.0

scp ~/schema.tar user@destination-ip:/home/destination

# On the destination machine
sudo /var/lib/rancher/rke2/bin/ctr -n k8s.io -a /run/k3s/containerd/containerd.sock image import /home/destination/schema.tar
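
To confirm the import on the destination node, list the images (the grep pattern is just an example):

sudo /var/lib/rancher/rke2/bin/ctr -n k8s.io -a /run/k3s/containerd/containerd.sock image ls | grep cp-schema-registry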

How to set up a container to use Kubernetes-native log aggregation

RUN ln -sf /dev/stdout /var/log/app/access.log
RUN ln -sf /dev/stderr /var/log/app/error.log
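
Kubernetes captures whatever a container writes to stdout and stderr, so with these symlinks in the Dockerfile the application's log files become visible through the normal tooling (pod name illustrative):

kubectl logs my-app-pod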

How to find resources that are stuck on deletion

Helpful in cases where, e.g., a namespace is stuck in the Terminating state.

kubectl api-resources --verbs=list --namespaced -o name | xargs -n 1 kubectl get --show-kind --ignore-not-found -n <NAMESPACE>
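
Once the offending resource is identified, a common next step (use with care) is to clear its finalizers so deletion can proceed; the resource kind and name below are placeholders:

kubectl patch <resource>/<name> -n <NAMESPACE> --type=merge -p '{"metadata":{"finalizers":[]}}'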

Reference

https://www.ibm.com/docs/en/cloud-private/3.2.0?topic=console-namespace-is-stuck-in-terminating-state
