@gterdem
Last active April 8, 2022
Kubernetes Reminders

Install cert-manager

With kubectl: kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v1.7.0/cert-manager.yaml

With Helm (Helm 3 syntax; the --name flag was Helm 2 only): helm install cert-manager jetstack/cert-manager --namespace cert-manager --version v1.4.0 --set installCRDs=true

Create Issuer

Use either an Issuer (namespaced) or a ClusterIssuer (cluster-wide).

  • issuer.yaml:

    apiVersion: cert-manager.io/v1
    kind: Issuer
    metadata:
      name: selfsigned-issuer
      # namespace: eshop
    spec:
      selfSigned: {}

    Apply the issuer: kubectl apply -f issuer.yaml

  • Because a ClusterIssuer is a cluster-wide (non-namespaced) resource, you only need to create one for the whole cluster. cluster-issuer.yaml:

    apiVersion: cert-manager.io/v1
    kind: ClusterIssuer
    metadata:
      name: selfsigned-issuer-cluster
    spec:
      selfSigned: {}

    Apply the cluster issuer: kubectl apply -f cluster-issuer.yaml

Create Certificate

Set issuerRef to the issuer you created. certificate.yaml:

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: selfsigned-cert
  # namespace: eshop
spec:
  dnsNames:
    - "*.eshop-st-web"
    - "*.eshop-st-public-web"
    - "*.eshop-st-authserver"
    - "*.eshop-st-identity"
    - "*.eshop-st-administration"
    - "*.eshop-st-basket"
    - "*.eshop-st-catalog"
    - "*.eshop-st-ordering"
    - "*.eshop-st-payment"
    - "*.eshop-st-gateway-web"
    - "*.eshop-st-gateway-web-public"
  secretName: eshop-staging-tls
  issuerRef:
    name: selfsigned-issuer
    # name: selfsigned-issuer-cluster

Apply the certificate: kubectl apply -f certificate.yaml
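If you use the cluster-wide issuer instead, point issuerRef at it and set kind explicitly (when kind is omitted, cert-manager assumes kind: Issuer). A minimal fragment for the spec above:

```yaml
  issuerRef:
    name: selfsigned-issuer-cluster
    kind: ClusterIssuer
```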

Check secrets: kubectl get secrets -n [namespace]

Check certificates:

  • kubectl get certificates -n [namespace]
  • kubectl describe certificate selfsigned-cert -n [namespace]

Ingress configuration

Use the annotation that matches your issuer:

apiVersion: networking.k8s.io/v1 # extensions/v1beta1 was removed in Kubernetes 1.22
kind: Ingress
metadata:
  annotations:
    # cert-manager.io/cluster-issuer: selfsigned-issuer-cluster
    cert-manager.io/issuer: selfsigned-issuer
  name: local-ingress
  namespace: my-app-namespace
spec:
  rules:
  - host: test-app.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app-service
            port:
              number: 80
  tls:
  - hosts:
    - test-app.com
    secretName: eshop-staging-tls

Troubleshoot

kubectl get Issuers,ClusterIssuers,Certificates,CertificateRequests,Orders,Challenges --all-namespaces

Creating an Alias

  • Powershell: Set-Alias -Name k -Value kubectl
  • Mac/Linux: alias k="kubectl"

Running and Deleting a Pod

  • kubectl run my-nginx --image=nginx:alpine
  • kubectl get pods
  • kubectl get pods -A or kubectl get pods --all-namespaces (list Pods across all namespaces)
  • kubectl port-forward my-nginx 8080:80
  • kubectl delete pod my-nginx

Defining a Pod with YAML

  • kubectl create -f nginx.pod.yml --dry-run=client --validate=true (validation is on by default; --dry-run=client prints the object that would be created without creating it)
  • kubectl apply -f nginx.pod.yml
  • kubectl create -f nginx.pod.yml --save-config (or apply)
  • kubectl edit -f nginx.pod.yml (open the running Pod's configuration in an editor)
  • kubectl delete -f nginx.pod.yml

General

  • Check pod status: kubectl describe pod my-nginx
  • Get pod info as yaml: kubectl get [pod] -o yaml
  • Check deployment status: kubectl describe deployment my-nginx
  • Execute into pod: kubectl exec -it my-nginx -- sh
  • Get all namespaces: kubectl get ns
  • Get pods of specific namespace in wide display: kubectl get pods -n [namespace] -o wide
  • List Ingresses in all namespaces: kubectl get ingress --all-namespaces

Deployments with resources and health checks

  • kubectl get deployment --show-labels # List all the Deployments and their labels
  • kubectl get deployment -l app=nginx # Get all Deployments with a specific label
  • kubectl delete deployment my-nginx # Delete Deployment
  • kubectl delete -f nginx.deployment.yml # Delete Deployment with file
  • kubectl scale deployment my-nginx --replicas=5 # Scale the Deployment Pods to 5
  • kubectl scale -f nginx.deployment.yml --replicas=5 # Scale by referencing file
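For reference, a minimal nginx.deployment.yml that the commands above could target — a stripped-down sketch of the full Deployment shown under Notes, assuming the same my-nginx labels:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-nginx
  template:
    metadata:
      labels:
        app: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx:alpine
        ports:
        - containerPort: 80
```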

Services

  • kubectl create -f nginx.service.yml --save-config # Create a Service
  • kubectl apply -f nginx.service.yml # Create or Update a Service
  • kubectl delete service [service-name] # Delete Service
  • kubectl delete -f nginx.service.yml # Delete Service with file
  • kubectl port-forward pod/[pod-name] 8080:80 # Listen on port 8080 locally and forward to port 80 in Pod
  • kubectl port-forward deployment/[deployment-name] 8080 # Listen on port 8080 locally and forward to Deployment's port
  • kubectl port-forward service/[service-name] 8080 # Listen on port 8080 locally and forward to Service Pod

Testing a Service and Pod with curl

kubectl exec [pod-name] -- curl -s http://podIP # Run curl inside a Pod to test a URL. Add -c [container-name] when multiple containers run in the Pod

Install and use curl if not available:

  1. kubectl exec -it [pod-name] -- sh
  2. apk add curl
  3. curl -s http://podIP

Service Types

  • ClusterIP (internal to cluster - default)
  • NodePort (exposes the Service on each Node's IP)
  • LoadBalancer (exposes a Service externally)
  • ExternalName (proxies to an external service)
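An ExternalName Service has no selector; it simply returns a DNS CNAME for the given host. A minimal sketch (external-db and db.example.com are hypothetical names):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: external-db # hypothetical name
spec:
  type: ExternalName
  externalName: db.example.com # hypothetical external host
```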

Notes

  • Services in different namespaces cannot resolve each other by short name (e.g. separate rabbitmq and rabbitmq-admin Services must share a namespace, or be addressed by the full <service>.<namespace> DNS name)
LoadBalancer Service:

apiVersion: v1
kind: Service
metadata:
  name: nginx-loadbalancer
spec:
  type: LoadBalancer # Exposes the Service externally
  selector:
    app: my-nginx
  ports:
  - name: "80"
    port: 80
    targetPort: 80

Deployment with resources and health checks:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
  labels:
    app: nginx
spec:
  replicas: 1 # Replicas
  selector:
    matchLabels:
      app: my-nginx
  template:
    metadata:
      labels:
        app: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx:alpine
        ports:
        - containerPort: 80
        resources:
          limits:
            memory: "128Mi" # 128 MB
            cpu: "200m" # 200 millicpu (0.2 CPU, or 20% of a CPU)
        livenessProbe:
          httpGet:
            path: /index.html # Check index.html on port 80
            port: 80
          initialDelaySeconds: 15 # Wait 15 seconds
          timeoutSeconds: 2 # Timeout after 2 seconds
          periodSeconds: 5 # Check every 5 seconds
          failureThreshold: 1 # Allow 1 failure before failing the Pod
        readinessProbe:
          httpGet:
            path: /index.html
            port: 80
          initialDelaySeconds: 3
          periodSeconds: 5 # Default is 10
          failureThreshold: 1 # Default is 3

Pod:

apiVersion: v1
kind: Pod
metadata:
  name: my-nginx
  labels:
    app: nginx
    rel: stable
spec:
  containers:
  - name: my-nginx
    image: nginx:alpine
    ports:
    - containerPort: 80

ClusterIP Service:

apiVersion: v1
kind: Service
metadata:
  name: nginx-clusterip
spec:
  type: ClusterIP # Internal to cluster - default
  selector:
    app: my-nginx
  ports:
  - port: 8080
    targetPort: 80

NodePort Service:

apiVersion: v1
kind: Service
metadata:
  name: nginx-nodeport
spec:
  type: NodePort # Exposes the Service on each Node's IP
  selector:
    app: my-nginx
  ports:
  - port: 80
    targetPort: 80
    nodePort: 31000 # localhost:31000

Install web-ui

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.3.1/aio/deploy/recommended.yaml

Authorization

Find the access token: run kubectl describe secret -n kube-system and look for a Secret of type kubernetes.io/service-account-token (generally the first one). Copy the token.

Run kubectl proxy and navigate to http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/

Paste the access token

Official Docs: https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/
