@Ladicle
Last active March 12, 2021 06:07
Manual Testing Log for kubernetes/autoscaler#3902
$ minikube start --memory=8Gi --cpus=4
😄  minikube v1.17.1 on Darwin 10.15.7
✨  Automatically selected the docker driver. Other choices: hyperkit, virtualbox, ssh
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
💾  Downloading Kubernetes v1.20.2 preload ...
    > preloaded-images-k8s-v8-v1....: 491.22 MiB / 491.22 MiB  100.00% 5.91 MiB
🔥  Creating docker container (CPUs=4, Memory=8192MB) ...
🐳  Preparing Kubernetes v1.20.2 on Docker 20.10.2 ...
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔎  Verifying Kubernetes components...
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
$ minikube addons enable metrics-server
🌟  The 'metrics-server' addon is enabled
$ cd autoscaler/vertical-pod-autoscaler
$ ./hack/vpa-up.sh
customresourcedefinition.apiextensions.k8s.io/verticalpodautoscalercheckpoints.autoscaling.k8s.io created
customresourcedefinition.apiextensions.k8s.io/verticalpodautoscalers.autoscaling.k8s.io created
clusterrole.rbac.authorization.k8s.io/system:metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:vpa-actor created
clusterrole.rbac.authorization.k8s.io/system:vpa-checkpoint-actor created
clusterrole.rbac.authorization.k8s.io/system:evictioner created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-reader created
clusterrolebinding.rbac.authorization.k8s.io/system:vpa-actor created
clusterrolebinding.rbac.authorization.k8s.io/system:vpa-checkpoint-actor created
clusterrole.rbac.authorization.k8s.io/system:vpa-target-reader created
clusterrolebinding.rbac.authorization.k8s.io/system:vpa-target-reader-binding created
clusterrolebinding.rbac.authorization.k8s.io/system:vpa-evictionter-binding created
serviceaccount/vpa-admission-controller created
clusterrole.rbac.authorization.k8s.io/system:vpa-admission-controller created
clusterrolebinding.rbac.authorization.k8s.io/system:vpa-admission-controller created
clusterrole.rbac.authorization.k8s.io/system:vpa-status-reader created
clusterrolebinding.rbac.authorization.k8s.io/system:vpa-status-reader-binding created
serviceaccount/vpa-updater created
deployment.apps/vpa-updater created
serviceaccount/vpa-recommender created
deployment.apps/vpa-recommender created
Generating certs for the VPA Admission Controller in /tmp/vpa-certs.
Generating RSA private key, 2048 bit long modulus (2 primes)
...................................+++++
...............+++++
e is 65537 (0x010001)
Generating RSA private key, 2048 bit long modulus (2 primes)
...............................................+++++
..........................................................................+++++
e is 65537 (0x010001)
Signature ok
subject=CN = vpa-webhook.kube-system.svc
Getting CA Private Key
Uploading certs to the cluster.
secret/vpa-tls-certs created
Deleting /tmp/vpa-certs.
deployment.apps/vpa-admission-controller created
service/vpa-webhook created
# Swap the admission-controller image for the PR build (see the IMAGES column below)
$ kubectl edit deploy -n kube-system vpa-admission-controller
Waiting for Emacs...
deployment.apps/vpa-admission-controller edited
$ kubectl get deploy -n kube-system -o wide|grep vpa
vpa-admission-controller 1/1 1 1 3m15s admission-controller ladicle/vpa-admission-controller:pr3903 app=vpa-admission-controller
vpa-recommender 1/1 1 1 3m16s recommender k8s.gcr.io/autoscaling/vpa-recommender:0.9.2 app=vpa-recommender
vpa-updater 1/1 1 1 3m16s updater k8s.gcr.io/autoscaling/vpa-updater:0.9.2 app=vpa-updater
# test.yaml
apiVersion: "autoscaling.k8s.io/v1"
kind: VerticalPodAutoscaler
metadata:
  name: hamster-vpa
spec:
  targetRef:
    apiVersion: "apps/v1"
    kind: Deployment
    name: hamster
  resourcePolicy:
    containerPolicies:
      - containerName: '*'
        controlledResources: ["cpu"] # only cpu
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hamster
spec:
  selector:
    matchLabels:
      app: hamster
  replicas: 2
  template:
    metadata:
      labels:
        app: hamster
    spec:
      securityContext:
        runAsNonRoot: true
        runAsUser: 65534 # nobody
      containers:
        - name: hamster
          image: k8s.gcr.io/ubuntu-slim:0.1
          resources:
            requests:
              cpu: 100m
              memory: 256Mi
            limits:
              cpu: 1
              memory: 1Gi
          command: ["/bin/sh"]
          args:
            - "-c"
            - "while true; do timeout 0.5s yes >/dev/null; sleep 0.5s; done"
---
apiVersion: v1
kind: LimitRange
metadata:
  name: limit-mem-cpu-per-container
  namespace: default
spec:
  limits:
    - default:
        cpu: 240m
        memory: 256Mi
      defaultRequest:
        cpu: 110m
        memory: 111Mi
      max:
        cpu: "1"
        memory: 1Gi
      type: Container
$ kubectl apply -f test.yaml
verticalpodautoscaler.autoscaling.k8s.io/hamster-vpa created
deployment.apps/hamster created
$ kubectl get vpa -o custom-columns="NAME:.metadata.name,PROVIDED:.status.conditions[?(@.type=='RecommendationProvided')].status"
NAME PROVIDED
hamster-vpa True
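The JSONPath filter in that custom-columns expression can be hard to read. A rough Python equivalent of `.status.conditions[?(@.type=='RecommendationProvided')].status`, using a hand-written sample object rather than real cluster output:

```python
# Sample VPA object, made up for illustration (not real cluster output).
vpa = {
    "metadata": {"name": "hamster-vpa"},
    "status": {
        "conditions": [
            {"type": "RecommendationProvided", "status": "True"},
        ],
    },
}

def provided(obj):
    # Keep only conditions whose type matches, then project their status,
    # mirroring the [?(@.type=='...')].status filter.
    return [c["status"] for c in obj["status"]["conditions"]
            if c["type"] == "RecommendationProvided"]

print(provided(vpa))  # ['True']
```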
$ kubectl get deploy
NAME READY UP-TO-DATE AVAILABLE AGE
hamster 2/2 2 2 2m40s
$ kubectl scale --replicas=3 deploy hamster
deployment.apps/hamster scaled
# Scaled out successfully
$ kubectl get deploy
NAME READY UP-TO-DATE AVAILABLE AGE
hamster 3/3 3 3 3m10s
# As expected, the patch was applied only to the cpu resource
$ kubectl logs -n kube-system vpa-admission-controller-5c664b7654-r5cvw | grep 'Sending patches' | tail -n1
I0312 05:59:30.632797 1 server.go:110] Sending patches: [{add /metadata/annotations map[]} {add /spec/containers/0/resources/requests/cpu 100m} {add /spec/containers/0/resources/limits/cpu 1} {add /metadata/annotations/vpaUpdates Pod resources updated by hamster-vpa: container 0: cpu capped to fit Max in container LimitRange, cpu request, cpu limit} {add /metadata/annotations/vpaObservedContainers hamster}]
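The `cpu capped to fit Max in container LimitRange` annotation above reflects the admission controller clamping its recommendation to the LimitRange `max` and patching only the resources listed in `controlledResources`. A minimal Python sketch of that idea (not the actual VPA code; `build_patches` is a hypothetical helper, and quantities are simplified to plain millicore integers):

```python
def build_patches(recommendation_m, limit_range_max_m, controlled):
    """Emit JSON-Patch 'add' ops only for controlled resources,
    capping each value at the LimitRange max (values in milli-units)."""
    patches = []
    for resource, value in sorted(recommendation_m.items()):
        if resource not in controlled:
            continue  # memory is left untouched when controlledResources is ["cpu"]
        capped = min(value, limit_range_max_m.get(resource, value))
        patches.append({
            "op": "add",
            "path": f"/spec/containers/0/resources/requests/{resource}",
            "value": f"{capped}m",
        })
    return patches

# LimitRange max cpu is "1" (1000m), so a larger recommendation gets capped,
# and the memory recommendation produces no patch at all.
print(build_patches({"cpu": 1500, "memory": 262144}, {"cpu": 1000}, ["cpu"]))
```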