k8s-cassandra-nodeLabel2pod
Gist gmaslowski/117f3535173d733e007d0c6c83564888 (last active July 18, 2024 14:23)
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: cassandra-rackdc
data:
  cassandra-rackdc.properties: |
    dc=datacenter
    rack=RACK
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: node2pod
spec:
  replicas: 1
  selector:
    matchLabels:
      app: node2pod
  template:
    metadata:
      labels:
        name: node2pod
        app: node2pod
    spec:
      initContainers:
      - name: node2pod
        imagePullPolicy: IfNotPresent
        image: <image-with-k8s-access>
        command:
        - "sh"
        - "-c"
        - "cp /config/cassandra-rackdc.properties /shared/cassandra-rackdc.properties && \
           sed -i.bak s/RACK/$(kubectl get no -Llabel | grep ${NODE_NAME} | awk '{print $6}')/g /shared/cassandra-rackdc.properties"
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        volumeMounts:
        - name: cassandra-rackdc
          mountPath: /config/
        - name: shared
          mountPath: /shared/
      containers:
      - name: cassandra
        image: cassandra
        volumeMounts:
        - name: shared
          mountPath: /etc/cassandra-rackdc.properties
          subPath: cassandra-rackdc.properties
      volumes:
      - name: cassandra-rackdc
        configMap:
          name: cassandra-rackdc
      - name: shared
        emptyDir: {}
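The init container's job can be sketched locally like this: copy the ConfigMap-provided template into the shared volume, then substitute the RACK placeholder with the node's rack label. Here `RACK_LABEL` is a hypothetical stand-in for the value that `kubectl get no -Llabel | grep ${NODE_NAME} | awk '{print $6}'` would return in-cluster.

```shell
# Local sketch of the init container's copy-and-substitute step.
# RACK_LABEL stands in for the node label looked up via kubectl.
mkdir -p /tmp/config /tmp/shared
printf 'dc=datacenter\nrack=RACK\n' > /tmp/config/cassandra-rackdc.properties
RACK_LABEL="rack1"

cp /tmp/config/cassandra-rackdc.properties /tmp/shared/cassandra-rackdc.properties
sed -i.bak "s/RACK/${RACK_LABEL}/g" /tmp/shared/cassandra-rackdc.properties

# prints: dc=datacenter / rack=rack1
cat /tmp/shared/cassandra-rackdc.properties
```

The `-i.bak` flag keeps a backup of the unmodified template next to the substituted file, which works on both GNU and BSD sed.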
Hi @koslib. Yes, Kubernetes API access does need to be set up in some way, ideally one that is both convenient and secure; that is why the image: <image-with-k8s-access> placeholder is there. You can either bake credentials into the image (not recommended) or inject them from securely provided Secrets. A ClusterRole, ClusterRoleBinding, and ServiceAccount combo is probably the way to go.
Pasting for posterity what worked for me while hacking something quick:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: node-listing-sa
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: node-listing-clusterrole  # ClusterRoles are cluster-scoped, so no namespace field
rules:
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: node-listing-rolebinding  # ClusterRoleBindings are cluster-scoped as well
subjects:
- kind: ServiceAccount
  name: node-listing-sa
  namespace: default
roleRef:
  kind: ClusterRole
  name: node-listing-clusterrole
  apiGroup: rbac.authorization.k8s.io
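For the RBAC combo to take effect, the pod running the kubectl init container also has to use that ServiceAccount. A sketch of the relevant fragment of the node2pod Deployment's pod template (the `serviceAccountName` matches the manifest above):

```yaml
spec:
  template:
    spec:
      serviceAccountName: node-listing-sa  # lets the init container's kubectl authenticate
      initContainers:
      - name: node2pod
        # ... rest of the init container spec as in the gist
```

With this in place, kubectl inside the container picks up the mounted ServiceAccount token automatically and is allowed to get/list nodes.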
Thanks again for the article and the gist!
Hey @gmaslowski, thanks for sharing that! Wondering: don't you need to set up keys etc. for accessing the kube API in order to execute a kubectl command through the initContainer? Node labels aren't part of the Downward API, but you execute a full kubectl command, so it goes through the Kubernetes API.