Wednesday 26/07/2017
tl;dr - An Ingress is a collection of rules that allow inbound connections to reach the cluster services.
tl;dr - The ingress-controller is just a reverse proxy that forwards incoming requests based on the URL and host header (if used).
Credit to Lucas Käldström for this material. Link: https://github.com/luxas/kubeadm-workshop
kubectl create clusterrolebinding add-on-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default
kubectl create -f https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml
Execute this command in another terminal to watch the Pods and Services in the kube-system namespace, plus the cluster Nodes.
watch -t -n1 'echo kube-system Pods && kubectl get pods -o wide -n kube-system && echo && echo kube-system Services && kubectl get svc -n kube-system && echo && echo Nodes && kubectl get nodes -o wide'
An Ingress Controller is a daemon, deployed as a Kubernetes Pod, that watches the apiserver's /ingresses endpoint for updates to the Ingress resource.
Its job is to satisfy requests for ingress.
One solution might be to make your Services of type NodePort, but that's not a good long-term solution.
Instead, there is the Ingress object in Kubernetes that lets you create rules for how Services in your cluster should be exposed to the world.
Before you can create Ingress rules, you need an Ingress Controller that watches for rules, applies them and forwards requests as specified.
One Ingress Controller provider is Traefik; NGINX also provides an Ingress Controller.
Normally, exposing an app you have locally to the internet requires that one of your machines has a public Internet address.
We can work around this very smoothly in a Kubernetes cluster by letting Ngrok forward requests from a public subdomain of ngrok.io to the Traefik Ingress Controller that's running in our cluster.
Using ngrok here is perfect for hybrid clusters where you have no control over the network you're connected to... you just have internet access.
Also, this method can be used in nearly any environment and will behave the same.
But for production deployments (which we aren't dealing with here), you should of course expose a real loadbalancer node with a public IP.
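As a taste of what such a rule looks like, here is a minimal hypothetical Ingress sketch (the name my-app, the path /myapp and the Service my-service are placeholders, not part of this workshop):

```yaml
# Hypothetical sketch: route requests whose path starts with /myapp
# to a Service named my-service on port 80.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app
spec:
  rules:
  - http:
      paths:
      - path: /myapp
        backend:
          serviceName: my-service
          servicePort: 80
```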
vi traefik-common.yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
rules:
  - apiGroups:
      - ""
    resources:
      - services
      - endpoints
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik-ingress-controller
subjects:
  - kind: ServiceAccount
    name: traefik-ingress-controller
    namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: traefik-cfg
  namespace: kube-system
  labels:
    app: traefik
data:
  traefik.toml: |
    defaultEntryPoints = ["http"]
    [entryPoints]
      [entryPoints.http]
      address = ":80"
        [entryPoints.http.auth.basic]
        # Login using simply "kubernetes:rocks!"
        users = ["kubernetes:$apr1$G8MQMx/M$5SsH5VwBiGRH4bcbauEk61"]
    # Enable the kubernetes integration
    [kubernetes]
    [web]
    address = ":8080"
    [web.statistics]
    [web.metrics.prometheus]
    buckets = [0.1,0.3,1.2,5.0]
  traefik-acme.toml: |
    defaultEntryPoints = ["http", "https"]
    [entryPoints]
      [entryPoints.http]
      address = ":80"
        [entryPoints.http.redirect]
        entryPoint = "https"
      [entryPoints.https]
      address = ":443"
        [entryPoints.https.auth.basic]
        # Login using simply "kubernetes:rocks!"
        users = ["kubernetes:$apr1$G8MQMx/M$5SsH5VwBiGRH4bcbauEk61"]
        [entryPoints.https.tls]
    [acme]
    email = "[email protected]"
    storageFile = "acme.json"
    onDemand = true
    onHostRule = true
    caServer = "https://acme-v01.api.letsencrypt.org/directory"
    entryPoint = "https"
    # Enable the kubernetes integration
    [kubernetes]
    [web]
    address = ":8080"
    [web.statistics]
    [web.metrics.prometheus]
    buckets = [0.1,0.3,1.2,5.0]
kubectl apply -f traefik-common.yaml
vi traefik-ngrok.yaml
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
  labels:
    app: traefik-ingress-controller
spec:
  replicas: 1
  selector:
    matchLabels:
      app: traefik-ingress-controller
  template:
    metadata:
      labels:
        app: traefik-ingress-controller
    spec:
      tolerations:
        - key: beta.kubernetes.io/arch
          value: arm
          effect: NoSchedule
        - key: beta.kubernetes.io/arch
          value: arm64
          effect: NoSchedule
      serviceAccountName: traefik-ingress-controller
      containers:
        - image: luxas/traefik:v1.2.0
          name: traefik-ingress-controller
          resources:
            limits:
              cpu: 200m
              memory: 30Mi
            requests:
              cpu: 100m
              memory: 20Mi
          ports:
            - name: http
              containerPort: 80
            - name: web
              containerPort: 8080
          args:
            - --configfile=/etc/traefik/traefik.toml
          volumeMounts:
            - name: traefik-cfg
              mountPath: /etc/traefik/
      volumes:
        - name: traefik-cfg
          configMap:
            name: traefik-cfg
---
apiVersion: v1
kind: Service
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
  labels:
    app: traefik-ingress-controller
spec:
  ports:
    - port: 80
      targetPort: 80
  selector:
    app: traefik-ingress-controller
---
apiVersion: v1
kind: Service
metadata:
  name: traefik-web
  namespace: kube-system
  labels:
    app: traefik-ingress-controller
spec:
  ports:
    - port: 80
      targetPort: 8080
  selector:
    app: traefik-ingress-controller
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: ngrok-cfg
  namespace: kube-system
  labels:
    app: ngrok
data:
  ngrok.yaml: |
    web_addr: 0.0.0.0:4040
    log: stdout
    log_level: debug
    log_format: logfmt
    tunnels:
      traefik:
        proto: http
        addr: traefik-ingress-controller.kube-system:80
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: ngrok
  namespace: kube-system
  labels:
    app: ngrok
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ngrok
  template:
    metadata:
      labels:
        app: ngrok
    spec:
      tolerations:
        - key: beta.kubernetes.io/arch
          value: arm
          effect: NoSchedule
        - key: beta.kubernetes.io/arch
          value: arm64
          effect: NoSchedule
      containers:
        - image: luxas/ngrok:v2.1.18
          name: ngrok
          ports:
            - name: web
              containerPort: 4040
          args:
            - start
            - -config=/etc/ngrok/ngrok.yaml
            - traefik
          volumeMounts:
            - name: ngrok-cfg
              mountPath: /etc/ngrok/
      volumes:
        - name: ngrok-cfg
          configMap:
            name: ngrok-cfg
---
apiVersion: v1
kind: Service
metadata:
  name: ngrok
  namespace: kube-system
spec:
  ports:
    - port: 80
      # Run this command in order to get the public URL for this ingress controller
      # curl -sSL $(kubectl -n kube-system get svc ngrok -o template --template "{{.spec.clusterIP}}")/api/tunnels | jq ".tunnels[].public_url" | sed 's/"//g;/http:/d'
      targetPort: 4040
  selector:
    app: ngrok
kubectl apply -f traefik-ngrok.yaml
Install the JSON processor
apt install jq -y
Obtain the public URL for this ingress controller :
curl -sSL $(kubectl -n kube-system get svc ngrok -o template --template "{{.spec.clusterIP}}")/api/tunnels | jq ".tunnels[].public_url" | sed 's/"//g;/http:/d'
Example Output (yours will be different)
root@ubuntu-2gb-sgp1-01:~/code# curl -sSL $(kubectl -n kube-system get svc ngrok -o template --template "{{.spec.clusterIP}}")/api/tunnels | jq ".tunnels[].public_url" | sed 's/"//g;/http:/d'
https://cbbada16.ngrok.io
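To see what that pipeline is doing, here it is run against a canned copy of the JSON that ngrok's /api/tunnels endpoint returns (the tunnel URLs are made up for illustration): jq pulls out every tunnel's public_url, then sed strips the quotes and deletes the plain-http line, leaving only the https URL.

```shell
# Simulate the ngrok API reply and extract the https public URL from it.
echo '{"tunnels":[{"public_url":"http://cbbada16.ngrok.io"},{"public_url":"https://cbbada16.ngrok.io"}]}' \
  | jq ".tunnels[].public_url" \
  | sed 's/"//g;/http:/d'
# prints: https://cbbada16.ngrok.io
```

Note that the sed address /http:/ does not match "https:" (the colon follows the "s"), which is why only the http tunnel line is deleted.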
We want to expose the dashboard to our newly-created public URL, under the /dashboard path.
vi ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  annotations:
    traefik.frontend.rule.type: pathprefixstrip
spec:
  rules:
    - http:
        paths:
          - path: /dashboard
            backend:
              serviceName: kubernetes-dashboard
              servicePort: 80
kubectl apply -f ingress.yaml
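Path-based routing is only one option: as the tl;dr at the top notes, the controller can also match on the Host header. A hypothetical host-based variant of the same rule would look like this (dashboard.example.com is a placeholder domain, not part of this setup):

```yaml
# Hypothetical sketch: route by Host header instead of URL path.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: dashboard-by-host
  namespace: kube-system
spec:
  rules:
  - host: dashboard.example.com
    http:
      paths:
      - backend:
          serviceName: kubernetes-dashboard
          servicePort: 80
```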
The Traefik Ingress Controller is set up to require basic auth before you can access the services.
The username is kubernetes and the password is rocks!
Change these if you want by editing traefik-common.yaml before deploying the Ingress Controller.
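The users entry in traefik-common.yaml is an htpasswd-style MD5 (apr1) hash. Assuming openssl is available, you can generate a replacement entry like this (the salt is random, so your hash will differ from the one in the file):

```shell
# Generate a new basic-auth entry for user "kubernetes" with password "rocks!".
# Paste the resulting "user:hash" line into the users = [...] list in traefik.toml.
echo "kubernetes:$(openssl passwd -apr1 'rocks!')"
```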
When you've signed in to https://{ngrok url}/dashboard/ (note the / in the end, it's required), you'll see a dashboard like this:
Example : https://cbbada16.ngrok.io/dashboard/
End of Section