docker logs k3d-testcluster-server-0
➜ docker logs k3d-testcluster-server-0 | |
time="2020-10-28T08:29:41.067359892Z" level=info msg="Starting k3s v1.18.9+k3s1 (630bebf9)" | |
time="2020-10-28T08:29:41.206026208Z" level=info msg="Active TLS secret (ver=) (count 8): map[listener.cattle.io/cn-0.0.0.0:0.0.0.0 listener.cattle.io/cn-10.43.0.1:10.43.0.1 listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-172.18.0.2:172.18.0.2 listener.cattle.io/cn-kubernetes:kubernetes listener.cattle.io/cn-kubernetes.default:kubernetes.default listener.cattle.io/cn-kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local listener.cattle.io/cn-localhost:localhost listener.cattle.io/hash:487d94c3733c645ce6dc9677524068e59fa12dab6573a055718cc933c7fefd97]" | |
time="2020-10-28T08:29:41.206421047Z" level=info msg="Testing connection to peers [172.18.0.2:6443]" | |
time="2020-10-28T08:29:41.207136736Z" level=info msg="Connection OK to peers [172.18.0.2:6443]" | |
time="2020-10-28T08:29:41.216198797Z" level=info msg="Kine listening on unix://kine.sock" | |
time="2020-10-28T08:29:41.217604276Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=unknown --authorization-mode=Node,RBAC --basic-auth-file=/var/lib/rancher/k3s/server/cred/passwd --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction --etcd-servers=unix://kine.sock --insecure-port=0 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=k3s --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" | |
Flag --basic-auth-file has been deprecated, Basic authentication mode is deprecated and will be removed in a future release. It is not recommended for production environments. | |
I1028 08:29:41.218944 7 server.go:645] external host was not specified, using 172.18.0.2 | |
I1028 08:29:41.219413 7 server.go:162] Version: v1.18.9+k3s1 | |
I1028 08:29:42.163258 7 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook. | |
I1028 08:29:42.163391 7 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota. | |
I1028 08:29:42.164350 7 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook. | |
I1028 08:29:42.164504 7 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota. | |
I1028 08:29:42.186308 7 master.go:270] Using reconciler: lease | |
I1028 08:29:42.212102 7 rest.go:113] the default service ipfamily for this cluster is: IPv4 | |
W1028 08:29:42.522201 7 genericapiserver.go:409] Skipping API batch/v2alpha1 because it has no resources. | |
W1028 08:29:42.532309 7 genericapiserver.go:409] Skipping API discovery.k8s.io/v1alpha1 because it has no resources. | |
W1028 08:29:42.544144 7 genericapiserver.go:409] Skipping API node.k8s.io/v1alpha1 because it has no resources. | |
W1028 08:29:42.593155 7 genericapiserver.go:409] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources. | |
W1028 08:29:42.596405 7 genericapiserver.go:409] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources. | |
W1028 08:29:42.608038 7 genericapiserver.go:409] Skipping API storage.k8s.io/v1alpha1 because it has no resources. | |
W1028 08:29:42.623959 7 genericapiserver.go:409] Skipping API apps/v1beta2 because it has no resources. | |
W1028 08:29:42.624002 7 genericapiserver.go:409] Skipping API apps/v1beta1 because it has no resources. | |
I1028 08:29:42.633155 7 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook. | |
I1028 08:29:42.633174 7 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota. | |
I1028 08:29:44.626392 7 dynamic_cafile_content.go:167] Starting request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt | |
I1028 08:29:44.626445 7 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt | |
I1028 08:29:44.626831 7 dynamic_serving_content.go:130] Starting serving-cert::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key | |
I1028 08:29:44.627086 7 secure_serving.go:178] Serving securely on 127.0.0.1:6444 | |
I1028 08:29:44.627293 7 tlsconfig.go:240] Starting DynamicServingCertificateController | |
I1028 08:29:44.627770 7 apiservice_controller.go:94] Starting APIServiceRegistrationController | |
I1028 08:29:44.627801 7 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller | |
I1028 08:29:44.627828 7 autoregister_controller.go:141] Starting autoregister controller | |
I1028 08:29:44.627832 7 cache.go:32] Waiting for caches to sync for autoregister controller | |
I1028 08:29:44.627844 7 controller.go:81] Starting OpenAPI AggregationController | |
I1028 08:29:44.627898 7 crd_finalizer.go:266] Starting CRDFinalizer | |
I1028 08:29:44.628351 7 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller | |
I1028 08:29:44.628381 7 shared_informer.go:223] Waiting for caches to sync for cluster_authentication_trust_controller | |
I1028 08:29:44.628748 7 crdregistration_controller.go:111] Starting crd-autoregister controller | |
I1028 08:29:44.628777 7 shared_informer.go:223] Waiting for caches to sync for crd-autoregister | |
I1028 08:29:44.628794 7 controller.go:86] Starting OpenAPI controller | |
I1028 08:29:44.628807 7 customresource_discovery_controller.go:209] Starting DiscoveryController | |
I1028 08:29:44.628844 7 naming_controller.go:291] Starting NamingConditionController | |
I1028 08:29:44.628856 7 establishing_controller.go:76] Starting EstablishingController | |
I1028 08:29:44.628866 7 nonstructuralschema_controller.go:186] Starting NonStructuralSchemaConditionController | |
I1028 08:29:44.628911 7 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController | |
I1028 08:29:44.628941 7 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt | |
I1028 08:29:44.628961 7 dynamic_cafile_content.go:167] Starting request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt | |
I1028 08:29:44.628754 7 available_controller.go:387] Starting AvailableConditionController | |
I1028 08:29:44.629427 7 cache.go:32] Waiting for caches to sync for AvailableConditionController controller | |
E1028 08:29:44.682331 7 controller.go:151] Unable to perform initial Kubernetes service initialization: Service "kubernetes" is invalid: spec.clusterIP: Invalid value: "10.43.0.1": cannot allocate resources of type serviceipallocations at this time | |
E1028 08:29:44.683537 7 controller.go:156] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/172.18.0.2, ResourceVersion: 0, AdditionalErrorMsg: | |
I1028 08:29:44.729863 7 cache.go:39] Caches are synced for autoregister controller | |
I1028 08:29:44.731184 7 cache.go:39] Caches are synced for APIServiceRegistrationController controller | |
I1028 08:29:44.729866 7 cache.go:39] Caches are synced for AvailableConditionController controller | |
I1028 08:29:44.729924 7 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller | |
I1028 08:29:44.729939 7 shared_informer.go:230] Caches are synced for crd-autoregister | |
I1028 08:29:45.626718 7 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue). | |
I1028 08:29:45.626947 7 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue). | |
I1028 08:29:45.636132 7 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000 | |
I1028 08:29:45.647574 7 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000 | |
I1028 08:29:45.647781 7 storage_scheduling.go:143] all system priority classes are created successfully or already exist. | |
I1028 08:29:46.114089 7 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io | |
I1028 08:29:46.159476 7 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io | |
W1028 08:29:46.315401 7 lease.go:224] Resetting endpoints for master service "kubernetes" to [172.18.0.2] | |
I1028 08:29:46.316472 7 controller.go:606] quota admission added evaluator for: endpoints | |
I1028 08:29:46.322355 7 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io | |
I1028 08:29:46.678135 7 registry.go:150] Registering EvenPodsSpread predicate and priority function | |
I1028 08:29:46.678208 7 registry.go:150] Registering EvenPodsSpread predicate and priority function | |
time="2020-10-28T08:29:46.678956416Z" level=info msg="Running kube-scheduler --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --port=10251 --secure-port=0" | |
time="2020-10-28T08:29:46.682465848Z" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --port=10252 --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=0 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --use-service-account-credentials=true" | |
I1028 08:29:46.686929 7 controllermanager.go:161] Version: v1.18.9+k3s1 | |
I1028 08:29:46.687621 7 deprecated_insecure_serving.go:53] Serving insecurely on [::]:10252 | |
I1028 08:29:46.687672 7 leaderelection.go:242] attempting to acquire leader lease kube-system/kube-controller-manager... | |
time="2020-10-28T08:29:46.690864129Z" level=info msg="Waiting for cloudcontroller rbac role to be created" | |
I1028 08:29:46.693143 7 registry.go:150] Registering EvenPodsSpread predicate and priority function | |
I1028 08:29:46.693254 7 registry.go:150] Registering EvenPodsSpread predicate and priority function | |
W1028 08:29:46.694974 7 authorization.go:47] Authorization is disabled | |
W1028 08:29:46.695107 7 authentication.go:40] Authentication is disabled | |
I1028 08:29:46.695178 7 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251 | |
time="2020-10-28T08:29:46.708209879Z" level=info msg="Creating CRD addons.k3s.cattle.io" | |
I1028 08:29:46.714133 7 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io | |
time="2020-10-28T08:29:46.714881022Z" level=info msg="Creating CRD helmcharts.helm.cattle.io" | |
I1028 08:29:46.716730 7 leaderelection.go:252] successfully acquired lease kube-system/kube-controller-manager | |
I1028 08:29:46.716870 7 event.go:278] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"kube-controller-manager", UID:"3bb3007b-55ed-4f0f-b641-77882d3c1d32", APIVersion:"v1", ResourceVersion:"155", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' k3d-testcluster-server-0_077d4a19-4a1b-4c87-a4e9-953ae4c2a3c0 became leader | |
I1028 08:29:46.716892 7 event.go:278] Event(v1.ObjectReference{Kind:"Lease", Namespace:"kube-system", Name:"kube-controller-manager", UID:"25f5a4df-35f4-46d6-80c3-cd336d611bc7", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"157", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' k3d-testcluster-server-0_077d4a19-4a1b-4c87-a4e9-953ae4c2a3c0 became leader | |
time="2020-10-28T08:29:46.724418287Z" level=info msg="Waiting for CRD addons.k3s.cattle.io to become available" | |
I1028 08:29:46.796303 7 leaderelection.go:242] attempting to acquire leader lease kube-system/kube-scheduler... | |
I1028 08:29:46.854167 7 leaderelection.go:252] successfully acquired lease kube-system/kube-scheduler | |
I1028 08:29:47.090770 7 plugins.go:100] No cloud provider specified. | |
I1028 08:29:47.095142 7 shared_informer.go:223] Waiting for caches to sync for tokens | |
I1028 08:29:47.101252 7 controller.go:606] quota admission added evaluator for: serviceaccounts | |
I1028 08:29:47.195260 7 shared_informer.go:230] Caches are synced for tokens | |
time="2020-10-28T08:29:47.231810448Z" level=info msg="Done waiting for CRD addons.k3s.cattle.io to become available" | |
time="2020-10-28T08:29:47.231932513Z" level=info msg="Waiting for CRD helmcharts.helm.cattle.io to become available" | |
I1028 08:29:47.457936 7 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for jobs.batch | |
I1028 08:29:47.458294 7 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for helmcharts.helm.cattle.io | |
I1028 08:29:47.458585 7 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for endpoints | |
I1028 08:29:47.458839 7 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for serviceaccounts | |
I1028 08:29:47.459062 7 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for ingresses.extensions | |
I1028 08:29:47.459287 7 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for poddisruptionbudgets.policy | |
I1028 08:29:47.459510 7 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for ingresses.networking.k8s.io | |
I1028 08:29:47.459759 7 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for replicasets.apps | |
I1028 08:29:47.460000 7 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for events.events.k8s.io | |
I1028 08:29:47.460202 7 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for horizontalpodautoscalers.autoscaling | |
I1028 08:29:47.460405 7 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for endpointslices.discovery.k8s.io | |
I1028 08:29:47.460589 7 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for podtemplates | |
I1028 08:29:47.460793 7 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for rolebindings.rbac.authorization.k8s.io | |
I1028 08:29:47.460969 7 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for deployments.apps | |
I1028 08:29:47.461163 7 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for networkpolicies.networking.k8s.io | |
I1028 08:29:47.461341 7 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for roles.rbac.authorization.k8s.io | |
I1028 08:29:47.461440 7 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for limitranges | |
I1028 08:29:47.461656 7 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for controllerrevisions.apps | |
I1028 08:29:47.461730 7 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for addons.k3s.cattle.io | |
I1028 08:29:47.461773 7 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for leases.coordination.k8s.io | |
I1028 08:29:47.461791 7 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for daemonsets.apps | |
I1028 08:29:47.461809 7 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for cronjobs.batch | |
I1028 08:29:47.461868 7 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for statefulsets.apps | |
I1028 08:29:47.461879 7 controllermanager.go:533] Started "resourcequota" | |
I1028 08:29:47.461898 7 resource_quota_controller.go:272] Starting resource quota controller | |
I1028 08:29:47.461912 7 shared_informer.go:223] Waiting for caches to sync for resource quota | |
I1028 08:29:47.461945 7 resource_quota_monitor.go:303] QuotaMonitor running | |
I1028 08:29:47.470510 7 controllermanager.go:533] Started "csrsigning" | |
W1028 08:29:47.470966 7 controllermanager.go:512] "bootstrapsigner" is disabled | |
I1028 08:29:47.471265 7 certificate_controller.go:119] Starting certificate controller "csrsigning" | |
I1028 08:29:47.471392 7 shared_informer.go:223] Waiting for caches to sync for certificate-csrsigning | |
I1028 08:29:47.471633 7 dynamic_serving_content.go:130] Starting csr-controller::/var/lib/rancher/k3s/server/tls/server-ca.crt::/var/lib/rancher/k3s/server/tls/server-ca.key | |
I1028 08:29:47.482906 7 controllermanager.go:533] Started "pvc-protection" | |
I1028 08:29:47.483018 7 pvc_protection_controller.go:101] Starting PVC protection controller | |
I1028 08:29:47.483047 7 shared_informer.go:223] Waiting for caches to sync for PVC protection | |
I1028 08:29:47.502151 7 controllermanager.go:533] Started "podgc" | |
I1028 08:29:47.502272 7 gc_controller.go:89] Starting GC controller | |
I1028 08:29:47.502301 7 shared_informer.go:223] Waiting for caches to sync for GC | |
I1028 08:29:47.523770 7 controllermanager.go:533] Started "replicaset" | |
I1028 08:29:47.524138 7 replica_set.go:182] Starting replicaset controller | |
I1028 08:29:47.524239 7 shared_informer.go:223] Waiting for caches to sync for ReplicaSet | |
I1028 08:29:47.536126 7 node_lifecycle_controller.go:384] Sending events to api server. | |
I1028 08:29:47.536268 7 taint_manager.go:163] Sending events to api server. | |
I1028 08:29:47.536349 7 node_lifecycle_controller.go:512] Controller will reconcile labels. | |
I1028 08:29:47.536389 7 controllermanager.go:533] Started "nodelifecycle" | |
I1028 08:29:47.537385 7 node_lifecycle_controller.go:546] Starting node controller | |
I1028 08:29:47.537449 7 shared_informer.go:223] Waiting for caches to sync for taint | |
E1028 08:29:47.550703 7 core.go:89] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail | |
W1028 08:29:47.550787 7 controllermanager.go:525] Skipping "service" | |
W1028 08:29:47.572736 7 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. | |
I1028 08:29:47.573188 7 controllermanager.go:533] Started "attachdetach" | |
I1028 08:29:47.573337 7 attach_detach_controller.go:348] Starting attach detach controller | |
I1028 08:29:47.573367 7 shared_informer.go:223] Waiting for caches to sync for attach detach | |
I1028 08:29:47.598484 7 controllermanager.go:533] Started "job" | |
I1028 08:29:47.599072 7 job_controller.go:145] Starting job controller | |
I1028 08:29:47.599190 7 shared_informer.go:223] Waiting for caches to sync for job | |
I1028 08:29:47.617740 7 controllermanager.go:533] Started "disruption" | |
I1028 08:29:47.618106 7 disruption.go:331] Starting disruption controller | |
I1028 08:29:47.618295 7 shared_informer.go:223] Waiting for caches to sync for disruption | |
I1028 08:29:47.620908 7 controllermanager.go:533] Started "csrapproving" | |
I1028 08:29:47.621232 7 certificate_controller.go:119] Starting certificate controller "csrapproving" | |
I1028 08:29:47.621267 7 shared_informer.go:223] Waiting for caches to sync for certificate-csrapproving | |
I1028 08:29:47.630474 7 controllermanager.go:533] Started "persistentvolume-binder" | |
I1028 08:29:47.630602 7 pv_controller_base.go:295] Starting persistent volume controller | |
I1028 08:29:47.630724 7 shared_informer.go:223] Waiting for caches to sync for persistent volume | |
time="2020-10-28T08:29:47.695606103Z" level=info msg="Waiting for cloudcontroller rbac role to be created" | |
I1028 08:29:47.717078 7 controllermanager.go:533] Started "pv-protection" | |
I1028 08:29:47.717142 7 pv_protection_controller.go:83] Starting PV protection controller | |
I1028 08:29:47.717148 7 shared_informer.go:223] Waiting for caches to sync for PV protection | |
time="2020-10-28T08:29:47.734004254Z" level=info msg="Done waiting for CRD helmcharts.helm.cattle.io to become available" | |
time="2020-10-28T08:29:47.741465439Z" level=info msg="Writing static file: /var/lib/rancher/k3s/server/static/charts/traefik-1.81.0.tgz" | |
time="2020-10-28T08:29:47.741890909Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/ccm.yaml" | |
time="2020-10-28T08:29:47.742144376Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/local-storage.yaml" | |
time="2020-10-28T08:29:47.742412359Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/auth-reader.yaml" | |
time="2020-10-28T08:29:47.742742599Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-deployment.yaml" | |
time="2020-10-28T08:29:47.744106317Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-service.yaml" | |
time="2020-10-28T08:29:47.744447859Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/traefik.yaml" | |
time="2020-10-28T08:29:47.744763369Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/coredns.yaml" | |
time="2020-10-28T08:29:47.744984807Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/aggregated-metrics-reader.yaml" | |
time="2020-10-28T08:29:47.745175448Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/auth-delegator.yaml" | |
time="2020-10-28T08:29:47.745465965Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-apiservice.yaml" | |
time="2020-10-28T08:29:47.745693625Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/resource-reader.yaml" | |
time="2020-10-28T08:29:47.745919567Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/rolebindings.yaml" | |
time="2020-10-28T08:29:47.853439455Z" level=info msg="Node token is available at /var/lib/rancher/k3s/server/token" | |
time="2020-10-28T08:29:47.853938593Z" level=info msg="To join node to cluster: k3s agent -s https://172.18.0.2:6443 -t ${NODE_TOKEN}" | |
time="2020-10-28T08:29:47.854369129Z" level=info msg="Starting k3s.cattle.io/v1, Kind=Addon controller" | |
time="2020-10-28T08:29:47.855282606Z" level=info msg="Waiting for master node startup: resource name may not be empty" | |
I1028 08:29:47.855486 7 leaderelection.go:242] attempting to acquire leader lease kube-system/k3s... | |
2020-10-28 08:29:47.873869 I | http: TLS handshake error from 127.0.0.1:45430: remote error: tls: bad certificate | |
time="2020-10-28T08:29:47.884169938Z" level=info msg="Wrote kubeconfig /output/kubeconfig.yaml" | |
time="2020-10-28T08:29:47.884262851Z" level=info msg="Run: k3s kubectl" | |
time="2020-10-28T08:29:47.884324791Z" level=info msg="k3s is up and running" | |
time="2020-10-28T08:29:47.884738112Z" level=info msg="module overlay was already loaded" | |
time="2020-10-28T08:29:47.884803540Z" level=info msg="module nf_conntrack was already loaded" | |
time="2020-10-28T08:29:47.885384377Z" level=warning msg="failed to start br_netfilter module" | |
2020-10-28 08:29:47.890933 I | http: TLS handshake error from 127.0.0.1:45438: remote error: tls: bad certificate | |
I1028 08:29:47.893204 7 controllermanager.go:533] Started "endpoint" | |
I1028 08:29:47.893376 7 endpoints_controller.go:181] Starting endpoint controller | |
I1028 08:29:47.893603 7 shared_informer.go:223] Waiting for caches to sync for endpoint | |
I1028 08:29:47.900029 7 leaderelection.go:252] successfully acquired lease kube-system/k3s | |
2020-10-28 08:29:48.049288 I | http: TLS handshake error from 127.0.0.1:45444: remote error: tls: bad certificate | |
time="2020-10-28T08:29:48.104811885Z" level=info msg="Starting helm.cattle.io/v1, Kind=HelmChart controller" | |
time="2020-10-28T08:29:48.104878800Z" level=info msg="Starting batch/v1, Kind=Job controller" | |
time="2020-10-28T08:29:48.104902022Z" level=info msg="Starting /v1, Kind=Node controller" | |
time="2020-10-28T08:29:48.105175659Z" level=info msg="Starting /v1, Kind=Service controller" | |
time="2020-10-28T08:29:48.105403179Z" level=info msg="Starting /v1, Kind=Pod controller" | |
time="2020-10-28T08:29:48.105458758Z" level=info msg="Starting /v1, Kind=Endpoints controller" | |
I1028 08:29:48.125477 7 controller.go:606] quota admission added evaluator for: addons.k3s.cattle.io | |
I1028 08:29:48.131451 7 controllermanager.go:533] Started "daemonset" | |
I1028 08:29:48.131653 7 daemon_controller.go:286] Starting daemon sets controller | |
I1028 08:29:48.131707 7 shared_informer.go:223] Waiting for caches to sync for daemon sets | |
time="2020-10-28T08:29:48.177173212Z" level=info msg="Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log" | |
time="2020-10-28T08:29:48.178272117Z" level=info msg="Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd" | |
time="2020-10-28T08:29:48.195879844Z" level=info msg="Waiting for containerd startup: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /run/k3s/containerd/containerd.sock: connect: no such file or directory\"" | |
I1028 08:29:48.202378 7 controllermanager.go:533] Started "ttl" | |
W1028 08:29:48.202449 7 core.go:243] configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes. | |
W1028 08:29:48.202468 7 controllermanager.go:525] Skipping "route" | |
W1028 08:29:48.202535 7 controllermanager.go:525] Skipping "ttl-after-finished" | |
W1028 08:29:48.202594 7 controllermanager.go:525] Skipping "root-ca-cert-publisher" | |
I1028 08:29:48.202697 7 ttl_controller.go:118] Starting TTL controller | |
I1028 08:29:48.202756 7 shared_informer.go:223] Waiting for caches to sync for TTL | |
I1028 08:29:48.298517 7 controller.go:606] quota admission added evaluator for: deployments.apps | |
time="2020-10-28T08:29:48.341851554Z" level=info msg="Starting /v1, Kind=Secret controller" | |
time="2020-10-28T08:29:48.347131783Z" level=info msg="Active TLS secret k3s-serving (ver=223) (count 8): map[listener.cattle.io/cn-0.0.0.0:0.0.0.0 listener.cattle.io/cn-10.43.0.1:10.43.0.1 listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-172.18.0.2:172.18.0.2 listener.cattle.io/cn-kubernetes:kubernetes listener.cattle.io/cn-kubernetes.default:kubernetes.default listener.cattle.io/cn-kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local listener.cattle.io/cn-localhost:localhost listener.cattle.io/hash:487d94c3733c645ce6dc9677524068e59fa12dab6573a055718cc933c7fefd97]" | |
I1028 08:29:48.604709 7 controller.go:606] quota admission added evaluator for: helmcharts.helm.cattle.io | |
I1028 08:29:48.611765 7 request.go:621] Throttling request took 1.053597439s, request: GET:https://127.0.0.1:6444/apis/rbac.authorization.k8s.io/v1?timeout=32s | |
I1028 08:29:48.629050 7 controller.go:606] quota admission added evaluator for: jobs.batch | |
time="2020-10-28T08:29:48.713448394Z" level=info msg="Running cloud-controller-manager --allocate-node-cidrs=true --allow-untagged-cloud=true --bind-address=127.0.0.1 --cloud-provider=k3s --cluster-cidr=10.42.0.0/16 --kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --node-status-update-frequency=1m --secure-port=0" | |
Flag --allow-untagged-cloud has been deprecated, This flag is deprecated and will be removed in a future release. A cluster-id will be required on cloud instances. | |
I1028 08:29:48.719304 7 controllermanager.go:120] Version: v1.18.9+k3s1 | |
W1028 08:29:48.719370 7 controllermanager.go:132] detected a cluster without a ClusterID. A ClusterID will be required in the future. Please tag your cluster to avoid any future issues | |
I1028 08:29:48.719404 7 leaderelection.go:242] attempting to acquire leader lease kube-system/cloud-controller-manager... | |
I1028 08:29:48.740288 7 leaderelection.go:252] successfully acquired lease kube-system/cloud-controller-manager | |
I1028 08:29:48.743328 7 node_controller.go:110] Sending events to api server. | |
I1028 08:29:48.743447 7 controllermanager.go:247] Started "cloud-node" | |
I1028 08:29:48.744986 7 node_lifecycle_controller.go:78] Sending events to api server | |
I1028 08:29:48.745064 7 controllermanager.go:247] Started "cloud-node-lifecycle" | |
E1028 08:29:48.746896 7 core.go:90] Failed to start service controller: the cloud provider does not support external load balancers | |
W1028 08:29:48.747034 7 controllermanager.go:244] Skipping "service" | |
W1028 08:29:48.747058 7 core.go:108] configure-cloud-routes is set, but cloud provider does not support routes. Will not configure cloud provider routes. | |
W1028 08:29:48.747072 7 controllermanager.go:244] Skipping "route" | |
I1028 08:29:48.747667 7 event.go:278] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"cloud-controller-manager", UID:"3c2999d9-3626-45fb-a6db-c911af07e4d6", APIVersion:"v1", ResourceVersion:"278", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' k3d-testcluster-server-0_4112780c-588d-4aa2-a474-81de769ed4a9 became leader | |
I1028 08:29:48.747733 7 event.go:278] Event(v1.ObjectReference{Kind:"Lease", Namespace:"kube-system", Name:"cloud-controller-manager", UID:"08bd16cc-72d5-48d3-89cd-63c473c35d24", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"279", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' k3d-testcluster-server-0_4112780c-588d-4aa2-a474-81de769ed4a9 became leader | |
E1028 08:29:48.869079 7 memcache.go:206] couldn't get resource list for metrics.k8s.io/v1beta1: the server could not find the requested resource | |
time="2020-10-28T08:29:48.909963859Z" level=info msg="Waiting for master node k3d-testcluster-server-0 startup: nodes \"k3d-testcluster-server-0\" not found" | |
I1028 08:29:49.205209 7 controllermanager.go:533] Started "garbagecollector" | |
I1028 08:29:49.205670 7 garbagecollector.go:133] Starting garbage collector controller | |
I1028 08:29:49.205727 7 shared_informer.go:223] Waiting for caches to sync for garbage collector | |
I1028 08:29:49.205744 7 graph_builder.go:282] GraphBuilder running | |
time="2020-10-28T08:29:49.213949284Z" level=info msg="Connecting to proxy" url="wss://172.18.0.2:6443/v1-k3s/connect" | |
time="2020-10-28T08:29:49.216807254Z" level=info msg="Handling backend connection request [k3d-testcluster-server-0]" | |
time="2020-10-28T08:29:49.217257299Z" level=warning msg="Disabling CPU quotas due to missing cpu.cfs_period_us" | |
time="2020-10-28T08:29:49.222022168Z" level=info msg="Running kubelet --address=0.0.0.0 --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=cgroupfs --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --cni-bin-dir=/bin --cni-conf-dir=/var/lib/rancher/k3s/agent/etc/cni/net.d --container-runtime-endpoint=/run/k3s/containerd/containerd.sock --container-runtime=remote --containerd=/run/k3s/containerd/containerd.sock --cpu-cfs-quota=false --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=k3d-testcluster-server-0 --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --kubelet-cgroups=/systemd --node-labels= --read-only-port=0 --resolv-conf=/tmp/k3s-resolv.conf --runtime-cgroups=/systemd --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key" | |
Flag --containerd has been deprecated, This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed. | |
I1028 08:29:49.222511 7 server.go:413] Version: v1.18.9+k3s1 | |
W1028 08:29:49.237215 7 info.go:51] Couldn't collect info from any of the files in "/etc/machine-id,/var/lib/dbus/machine-id" | |
I1028 08:29:49.237468 7 server.go:645] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to / | |
I1028 08:29:49.237879 7 container_manager_linux.go:277] container manager verified user specified cgroup-root exists: [] | |
I1028 08:29:49.237891 7 container_manager_linux.go:282] Creating Container Manager object based on Node Config: {RuntimeCgroupsName:/systemd SystemCgroupsName: KubeletCgroupsName:/systemd ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:false CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none} | |
I1028 08:29:49.237958 7 topology_manager.go:126] [topologymanager] Creating topology manager with none policy | |
I1028 08:29:49.237965 7 container_manager_linux.go:312] [topologymanager] Initializing Topology Manager with none policy | |
I1028 08:29:49.237969 7 container_manager_linux.go:317] Creating device plugin manager: true | |
W1028 08:29:49.238087 7 util_unix.go:103] Using "/run/k3s/containerd/containerd.sock" as endpoint is deprecated, please consider using full url format "unix:///run/k3s/containerd/containerd.sock". | |
W1028 08:29:49.238132 7 util_unix.go:103] Using "/run/k3s/containerd/containerd.sock" as endpoint is deprecated, please consider using full url format "unix:///run/k3s/containerd/containerd.sock". | |
I1028 08:29:49.238166 7 kubelet.go:317] Watching apiserver | |
time="2020-10-28T08:29:49.243084118Z" level=info msg="Running kube-proxy --cluster-cidr=10.42.0.0/16 --healthz-bind-address=127.0.0.1 --hostname-override=k3d-testcluster-server-0 --kubeconfig=/var/lib/rancher/k3s/agent/kubeproxy.kubeconfig --proxy-mode=iptables" | |
W1028 08:29:49.243200 7 server.go:225] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP. | |
W1028 08:29:49.245899 7 proxier.go:625] Failed to read file /lib/modules/5.4.39-linuxkit/modules.builtin with error open /lib/modules/5.4.39-linuxkit/modules.builtin: no such file or directory. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules | |
W1028 08:29:49.247737 7 proxier.go:635] Failed to load kernel module ip_vs with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules | |
time="2020-10-28T08:29:49.255586028Z" level=info msg="waiting for node k3d-testcluster-server-0: nodes \"k3d-testcluster-server-0\" not found" | |
I1028 08:29:49.257513 7 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/agent/client-ca.crt | |
W1028 08:29:49.262236 7 proxier.go:635] Failed to load kernel module ip_vs_rr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules | |
W1028 08:29:49.263855 7 proxier.go:635] Failed to load kernel module ip_vs_wrr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules | |
W1028 08:29:49.264175 7 proxier.go:635] Failed to load kernel module ip_vs_sh with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules | |
W1028 08:29:49.264479 7 proxier.go:635] Failed to load kernel module nf_conntrack with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules | |
I1028 08:29:49.267692 7 kuberuntime_manager.go:217] Container runtime containerd initialized, version: v1.3.3-k3s2, apiVersion: v1alpha2 | |
I1028 08:29:49.268177 7 server.go:1124] Started kubelet | |
I1028 08:29:49.272277 7 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer | |
I1028 08:29:49.274155 7 server.go:145] Starting to listen on 0.0.0.0:10250 | |
I1028 08:29:49.275249 7 server.go:393] Adding debug handlers to kubelet server. | |
I1028 08:29:49.278703 7 volume_manager.go:265] Starting Kubelet Volume Manager | |
I1028 08:29:49.279897 7 desired_state_of_world_populator.go:139] Desired state populator starts to run | |
I1028 08:29:49.284096 7 controllermanager.go:533] Started "statefulset" | |
W1028 08:29:49.284111 7 controllermanager.go:512] "tokencleaner" is disabled | |
I1028 08:29:49.284199 7 stateful_set.go:146] Starting stateful set controller | |
I1028 08:29:49.284206 7 shared_informer.go:223] Waiting for caches to sync for stateful set | |
E1028 08:29:49.284670 7 cri_stats_provider.go:375] Failed to get the info of the filesystem with mountpoint "/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs": unable to find data in memory cache. | |
E1028 08:29:49.284682 7 kubelet.go:1308] Image garbage collection failed once. Stats initialization may not have completed yet: invalid capacity 0 on image filesystem | |
I1028 08:29:49.316744 7 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach | |
I1028 08:29:49.322244 7 node_ipam_controller.go:94] Sending events to api server. | |
E1028 08:29:49.330204 7 node.go:125] Failed to retrieve node info: nodes "k3d-testcluster-server-0" not found | |
I1028 08:29:49.330332 7 cpu_manager.go:184] [cpumanager] starting with none policy | |
I1028 08:29:49.330352 7 cpu_manager.go:185] [cpumanager] reconciling every 10s | |
I1028 08:29:49.330368 7 state_mem.go:36] [cpumanager] initializing new in-memory state store | |
I1028 08:29:49.331617 7 policy_none.go:43] [cpumanager] none policy: Start | |
I1028 08:29:49.384260 7 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach | |
E1028 08:29:49.385025 7 kubelet.go:2270] node "k3d-testcluster-server-0" not found | |
I1028 08:29:49.395022 7 kubelet_node_status.go:70] Attempting to register node k3d-testcluster-server-0 | |
I1028 08:29:49.492481 7 kubelet_node_status.go:73] Successfully registered node k3d-testcluster-server-0 | |
W1028 08:29:49.531360 7 manager.go:597] Failed to retrieve checkpoint for "kubelet_internal_checkpoint": checkpoint is not found | |
I1028 08:29:49.533198 7 plugin_manager.go:114] Starting Kubelet Plugin Manager | |
I1028 08:29:49.546868 7 node_controller.go:325] Initializing node k3d-testcluster-server-0 with cloud provider | |
time="2020-10-28T08:29:49.664921636Z" level=info msg="couldn't find node internal ip label on node k3d-testcluster-server-0" | |
time="2020-10-28T08:29:49.665090166Z" level=info msg="couldn't find node hostname label on node k3d-testcluster-server-0" | |
time="2020-10-28T08:29:49.674473111Z" level=info msg="Updated coredns node hosts entry [172.18.0.2 k3d-testcluster-server-0]" | |
I1028 08:29:49.759708 7 status_manager.go:158] Starting to sync pod status with apiserver | |
I1028 08:29:49.759891 7 kubelet.go:1824] Starting kubelet main sync loop. | |
E1028 08:29:49.760378 7 kubelet.go:1848] skipping pod synchronization - PLEG is not healthy: pleg has yet to be successful | |
I1028 08:29:49.952946 7 reconciler.go:157] Reconciler: start to sync state | |
time="2020-10-28T08:29:49.989649167Z" level=info msg="couldn't find node internal ip label on node k3d-testcluster-server-0" | |
time="2020-10-28T08:29:49.989855206Z" level=info msg="couldn't find node hostname label on node k3d-testcluster-server-0" | |
I1028 08:29:49.989919 7 node_controller.go:397] Successfully initialized node k3d-testcluster-server-0 with cloud provider | |
I1028 08:29:49.989975 7 node_controller.go:325] Initializing node k3d-testcluster-server-0 with cloud provider | |
I1028 08:29:50.052002 7 node_controller.go:325] Initializing node k3d-testcluster-server-0 with cloud provider | |
W1028 08:29:50.249355 7 handler_proxy.go:102] no RequestInfo found in the context | |
E1028 08:29:50.249480 7 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable | |
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]] | |
I1028 08:29:50.249663 7 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue. | |
time="2020-10-28T08:29:50.491378894Z" level=info msg="Updating TLS secret for k3s-serving (count: 9): map[listener.cattle.io/cn-0.0.0.0:0.0.0.0 listener.cattle.io/cn-10.43.0.1:10.43.0.1 listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-172.18.0.2:172.18.0.2 listener.cattle.io/cn-k3d-testcluster-server-0:k3d-testcluster-server-0 listener.cattle.io/cn-kubernetes:kubernetes listener.cattle.io/cn-kubernetes.default:kubernetes.default listener.cattle.io/cn-kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local listener.cattle.io/cn-localhost:localhost listener.cattle.io/hash:b23426ae52f2593a05dbad6b6f821f80af88b5994fecf4a02f72bec551b808b6]" | |
time="2020-10-28T08:29:50.513314110Z" level=info msg="Active TLS secret k3s-serving (ver=311) (count 9): map[listener.cattle.io/cn-0.0.0.0:0.0.0.0 listener.cattle.io/cn-10.43.0.1:10.43.0.1 listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-172.18.0.2:172.18.0.2 listener.cattle.io/cn-k3d-testcluster-server-0:k3d-testcluster-server-0 listener.cattle.io/cn-kubernetes:kubernetes listener.cattle.io/cn-kubernetes.default:kubernetes.default listener.cattle.io/cn-kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local listener.cattle.io/cn-localhost:localhost listener.cattle.io/hash:b23426ae52f2593a05dbad6b6f821f80af88b5994fecf4a02f72bec551b808b6]" | |
time="2020-10-28T08:29:50.537501284Z" level=info msg="Active TLS secret k3s-serving (ver=311) (count 9): map[listener.cattle.io/cn-0.0.0.0:0.0.0.0 listener.cattle.io/cn-10.43.0.1:10.43.0.1 listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-172.18.0.2:172.18.0.2 listener.cattle.io/cn-k3d-testcluster-server-0:k3d-testcluster-server-0 listener.cattle.io/cn-kubernetes:kubernetes listener.cattle.io/cn-kubernetes.default:kubernetes.default listener.cattle.io/cn-kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local listener.cattle.io/cn-localhost:localhost listener.cattle.io/hash:b23426ae52f2593a05dbad6b6f821f80af88b5994fecf4a02f72bec551b808b6]" | |
I1028 08:29:50.549096 7 node.go:136] Successfully retrieved node IP: 172.18.0.2 | |
I1028 08:29:50.550621 7 server_others.go:187] Using iptables Proxier. | |
I1028 08:29:50.551199 7 server.go:583] Version: v1.18.9+k3s1 | |
I1028 08:29:50.551982 7 conntrack.go:103] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072 | |
I1028 08:29:50.552184 7 conntrack.go:52] Setting nf_conntrack_max to 131072 | |
I1028 08:29:50.552892 7 conntrack.go:83] Setting conntrack hashsize to 32768 | |
E1028 08:29:50.552925 7 conntrack.go:85] failed to set conntrack hashsize to 32768: write /sys/module/nf_conntrack/parameters/hashsize: operation not supported | |
I1028 08:29:50.553041 7 conntrack.go:103] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400 | |
I1028 08:29:50.553081 7 conntrack.go:103] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600 | |
I1028 08:29:50.554474 7 config.go:315] Starting service config controller | |
I1028 08:29:50.554485 7 shared_informer.go:223] Waiting for caches to sync for service config | |
I1028 08:29:50.554496 7 config.go:133] Starting endpoints config controller | |
I1028 08:29:50.554504 7 shared_informer.go:223] Waiting for caches to sync for endpoints config | |
I1028 08:29:50.595146 7 log.go:172] http: TLS handshake error from 172.18.0.3:42510: remote error: tls: bad certificate | |
I1028 08:29:50.654713 7 shared_informer.go:230] Caches are synced for service config | |
I1028 08:29:50.655819 7 shared_informer.go:230] Caches are synced for endpoints config | |
I1028 08:29:50.665966 7 log.go:172] http: TLS handshake error from 172.18.0.3:42522: remote error: tls: bad certificate | |
time="2020-10-28T08:29:51.104850686Z" level=info msg="master role label has been set succesfully on node: k3d-testcluster-server-0" | |
time="2020-10-28T08:29:51.275391465Z" level=info msg="waiting for node k3d-testcluster-server-0 CIDR not assigned yet" | |
W1028 08:29:52.612354 7 garbagecollector.go:644] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request] | |
time="2020-10-28T08:29:53.280334104Z" level=info msg="waiting for node k3d-testcluster-server-0 CIDR not assigned yet" | |
time="2020-10-28T08:29:55.306319175Z" level=info msg="waiting for node k3d-testcluster-server-0 CIDR not assigned yet" | |
time="2020-10-28T08:29:57.310138942Z" level=info msg="waiting for node k3d-testcluster-server-0 CIDR not assigned yet" | |
time="2020-10-28T08:29:59.295970924Z" level=info msg="waiting for node k3d-testcluster-server-0 CIDR not assigned yet" | |
I1028 08:29:59.337291 7 range_allocator.go:82] Sending events to api server. | |
I1028 08:29:59.337378 7 range_allocator.go:110] No Service CIDR provided. Skipping filtering out service addresses. | |
I1028 08:29:59.337387 7 range_allocator.go:116] No Secondary Service CIDR provided. Skipping filtering out secondary service addresses. | |
I1028 08:29:59.337405 7 controllermanager.go:533] Started "nodeipam" | |
I1028 08:29:59.337545 7 node_ipam_controller.go:162] Starting ipam controller | |
I1028 08:29:59.337556 7 shared_informer.go:223] Waiting for caches to sync for node | |
I1028 08:29:59.385778 7 controllermanager.go:533] Started "clusterrole-aggregation" | |
I1028 08:29:59.386019 7 clusterroleaggregation_controller.go:149] Starting ClusterRoleAggregator | |
I1028 08:29:59.386030 7 shared_informer.go:223] Waiting for caches to sync for ClusterRoleAggregator | |
I1028 08:29:59.507404 7 controllermanager.go:533] Started "endpointslice" | |
I1028 08:29:59.510585 7 endpointslice_controller.go:213] Starting endpoint slice controller | |
I1028 08:29:59.510996 7 shared_informer.go:223] Waiting for caches to sync for endpoint_slice | |
I1028 08:29:59.626546 7 controllermanager.go:533] Started "deployment" | |
I1028 08:29:59.626993 7 deployment_controller.go:153] Starting deployment controller | |
I1028 08:29:59.627214 7 shared_informer.go:223] Waiting for caches to sync for deployment | |
I1028 08:29:59.755919 7 node_lifecycle_controller.go:78] Sending events to api server | |
E1028 08:29:59.756261 7 core.go:229] failed to start cloud node lifecycle controller: no cloud provider provided | |
W1028 08:29:59.756364 7 controllermanager.go:525] Skipping "cloud-node-lifecycle" | |
I1028 08:30:00.007112 7 controllermanager.go:533] Started "replicationcontroller" | |
I1028 08:30:00.007617 7 replica_set.go:182] Starting replicationcontroller controller | |
I1028 08:30:00.007686 7 shared_informer.go:223] Waiting for caches to sync for ReplicationController | |
I1028 08:30:00.257477 7 controllermanager.go:533] Started "horizontalpodautoscaling" | |
I1028 08:30:00.257624 7 horizontal.go:169] Starting HPA controller | |
I1028 08:30:00.257631 7 shared_informer.go:223] Waiting for caches to sync for HPA | |
I1028 08:30:00.377876 7 controllermanager.go:533] Started "cronjob" | |
I1028 08:30:00.378468 7 cronjob_controller.go:97] Starting CronJob Manager | |
I1028 08:30:00.414594 7 controllermanager.go:533] Started "csrcleaner" | |
I1028 08:30:00.414775 7 cleaner.go:82] Starting CSR cleaner controller | |
I1028 08:30:00.538027 7 controllermanager.go:533] Started "persistentvolume-expander" | |
I1028 08:30:00.538394 7 expand_controller.go:319] Starting expand controller | |
I1028 08:30:00.538611 7 shared_informer.go:223] Waiting for caches to sync for expand | |
E1028 08:30:00.849392 7 namespaced_resources_deleter.go:161] unable to get all supported resources from server: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request | |
I1028 08:30:00.851887 7 controllermanager.go:533] Started "namespace" | |
I1028 08:30:00.852936 7 namespace_controller.go:200] Starting namespace controller | |
I1028 08:30:00.853034 7 shared_informer.go:223] Waiting for caches to sync for namespace | |
I1028 08:30:00.883235 7 controllermanager.go:533] Started "serviceaccount" | |
I1028 08:30:00.883766 7 shared_informer.go:223] Waiting for caches to sync for resource quota | |
I1028 08:30:00.884043 7 serviceaccounts_controller.go:117] Starting service account controller | |
I1028 08:30:00.884202 7 shared_informer.go:223] Waiting for caches to sync for service account | |
I1028 08:30:00.937700 7 shared_informer.go:223] Waiting for caches to sync for garbage collector | |
I1028 08:30:01.054457 7 shared_informer.go:230] Caches are synced for certificate-csrsigning | |
I1028 08:30:01.099863 7 shared_informer.go:230] Caches are synced for certificate-csrapproving | |
I1028 08:30:01.100562 7 shared_informer.go:230] Caches are synced for service account | |
I1028 08:30:01.180081 7 shared_informer.go:230] Caches are synced for namespace | |
I1028 08:30:01.180202 7 shared_informer.go:230] Caches are synced for HPA | |
I1028 08:30:01.180245 7 shared_informer.go:230] Caches are synced for PVC protection | |
I1028 08:30:01.180265 7 shared_informer.go:230] Caches are synced for stateful set | |
I1028 08:30:01.180895 7 shared_informer.go:230] Caches are synced for endpoint | |
I1028 08:30:01.196316 7 shared_informer.go:230] Caches are synced for PV protection | |
I1028 08:30:01.209181 7 shared_informer.go:230] Caches are synced for ReplicationController | |
I1028 08:30:01.238978 7 shared_informer.go:230] Caches are synced for expand | |
I1028 08:30:01.307334 7 shared_informer.go:230] Caches are synced for ClusterRoleAggregator | |
time="2020-10-28T08:30:01.367818172Z" level=info msg="waiting for node k3d-testcluster-server-0 CIDR not assigned yet" | |
W1028 08:30:01.385365 7 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="k3d-testcluster-server-0" does not exist | |
I1028 08:30:01.387887 7 shared_informer.go:230] Caches are synced for node | |
I1028 08:30:01.387927 7 range_allocator.go:172] Starting range CIDR allocator | |
I1028 08:30:01.387932 7 shared_informer.go:223] Waiting for caches to sync for cidrallocator | |
I1028 08:30:01.387936 7 shared_informer.go:230] Caches are synced for cidrallocator | |
I1028 08:30:01.413731 7 shared_informer.go:230] Caches are synced for endpoint_slice | |
I1028 08:30:01.419287 7 shared_informer.go:230] Caches are synced for taint | |
I1028 08:30:01.419393 7 node_lifecycle_controller.go:1433] Initializing eviction metric for zone: | |
W1028 08:30:01.419464 7 node_lifecycle_controller.go:1048] Missing timestamp for Node k3d-testcluster-server-0. Assuming now as a timestamp. | |
I1028 08:30:01.419567 7 node_lifecycle_controller.go:1249] Controller detected that zone is now in state Normal. | |
I1028 08:30:01.419910 7 shared_informer.go:230] Caches are synced for persistent volume | |
I1028 08:30:01.420165 7 taint_manager.go:187] Starting NoExecuteTaintManager | |
I1028 08:30:01.420712 7 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"k3d-testcluster-server-0", UID:"de8925a4-a05a-4645-b31b-1c960f8a42b8", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node k3d-testcluster-server-0 event: Registered Node k3d-testcluster-server-0 in Controller | |
I1028 08:30:01.426020 7 shared_informer.go:230] Caches are synced for daemon sets | |
I1028 08:30:01.478072 7 shared_informer.go:230] Caches are synced for job | |
I1028 08:30:01.482541 7 shared_informer.go:230] Caches are synced for attach detach | |
I1028 08:30:01.485008 7 shared_informer.go:230] Caches are synced for GC | |
I1028 08:30:01.485530 7 shared_informer.go:230] Caches are synced for TTL | |
I1028 08:30:01.545170 7 range_allocator.go:373] Set node k3d-testcluster-server-0 PodCIDR to [10.42.0.0/24] | |
I1028 08:30:01.562502 7 shared_informer.go:230] Caches are synced for deployment | |
I1028 08:30:01.571625 7 shared_informer.go:230] Caches are synced for garbage collector | |
I1028 08:30:01.591301 7 shared_informer.go:230] Caches are synced for resource quota | |
I1028 08:30:01.603772 7 shared_informer.go:230] Caches are synced for garbage collector | |
I1028 08:30:01.603956 7 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage | |
I1028 08:30:01.616491 7 shared_informer.go:230] Caches are synced for disruption | |
I1028 08:30:01.617900 7 disruption.go:339] Sending events to api server. | |
I1028 08:30:01.616863 7 shared_informer.go:230] Caches are synced for ReplicaSet | |
I1028 08:30:01.659308 7 shared_informer.go:230] Caches are synced for resource quota | |
I1028 08:30:01.671932 7 kuberuntime_manager.go:984] updating runtime config through cri with podcidr 10.42.0.0/24 | |
I1028 08:30:01.675910 7 kubelet_network.go:77] Setting Pod CIDR: -> 10.42.0.0/24 | |
E1028 08:30:01.736411 7 memcache.go:206] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request | |
I1028 08:30:01.966751 7 controller.go:606] quota admission added evaluator for: replicasets.apps | |
I1028 08:30:01.994840 7 trace.go:116] Trace[1030164086]: "GuaranteedUpdate etcd3" type:*rbac.ClusterRole (started: 2020-10-28 08:30:01.446107703 +0000 UTC m=+20.599826997) (total time: 548.673063ms): | |
Trace[1030164086]: [548.628576ms] [546.378543ms] Transaction committed | |
I1028 08:30:01.995022 7 trace.go:116] Trace[782581617]: "Update" url:/apis/rbac.authorization.k8s.io/v1/clusterroles/edit,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf/system:serviceaccount:kube-system:clusterrole-aggregation-controller,client:127.0.0.1 (started: 2020-10-28 08:30:01.445918965 +0000 UTC m=+20.599638242) (total time: 549.025808ms): | |
Trace[782581617]: [548.97244ms] [548.856225ms] Object stored in database | |
I1028 08:30:02.033226 7 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"local-path-provisioner", UID:"66d1281e-3433-405b-926f-6e961d69e36f", APIVersion:"apps/v1", ResourceVersion:"239", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set local-path-provisioner-6d59f47c7 to 1 | |
E1028 08:30:02.071537 7 memcache.go:111] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request | |
E1028 08:30:02.076334 7 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again | |
I1028 08:30:02.315005 7 trace.go:116] Trace[1431517741]: "Create" url:/api/v1/namespaces/default/serviceaccounts,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf/system:serviceaccount:kube-system:service-account-controller,client:127.0.0.1 (started: 2020-10-28 08:30:01.421382524 +0000 UTC m=+20.575101797) (total time: 893.177147ms): | |
Trace[1431517741]: [893.079502ms] [892.941375ms] Object stored in database | |
W1028 08:30:02.364185 7 handler_proxy.go:102] no RequestInfo found in the context | |
E1028 08:30:02.366913 7 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable | |
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]] | |
I1028 08:30:02.367445 7 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue. | |
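
The 503 / "unable to handle the request" entries for metrics.k8s.io are expected this early: the aggregated API is registered before the metrics-server pod exists, so the apiserver keeps re-queuing its OpenAPI fetch. A hedged way to watch it recover (kubectl assumed configured for this cluster):

  kubectl get apiservice v1beta1.metrics.k8s.io
  # AVAILABLE typically reads False (e.g. MissingEndpoints) until metrics-server is Running,
  # after which these errors stop appearing in the log
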
I1028 08:30:02.374216 7 trace.go:116] Trace[270619580]: "Create" url:/api/v1/namespaces/kube-node-lease/secrets,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf/tokens-controller,client:127.0.0.1 (started: 2020-10-28 08:30:01.514861146 +0000 UTC m=+20.668580424) (total time: 859.284368ms): | |
Trace[270619580]: [859.233794ms] [855.813808ms] Object stored in database | |
I1028 08:30:02.489589 7 trace.go:116] Trace[1223793037]: "GuaranteedUpdate etcd3" type:*rbac.ClusterRole (started: 2020-10-28 08:30:01.488243619 +0000 UTC m=+20.641962896) (total time: 1.001324905s): | |
Trace[1223793037]: [1.001287864s] [992.927439ms] Transaction committed | |
I1028 08:30:02.492372 7 trace.go:116] Trace[1437892947]: "Update" url:/apis/rbac.authorization.k8s.io/v1/clusterroles/admin,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf/system:serviceaccount:kube-system:clusterrole-aggregation-controller,client:127.0.0.1 (started: 2020-10-28 08:30:01.450803792 +0000 UTC m=+20.604523064) (total time: 1.041531903s): | |
Trace[1437892947]: [1.038828109s] [1.006257103s] Object stored in database | |
E1028 08:30:02.709440 7 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again | |
I1028 08:30:02.721225 7 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"local-path-provisioner-6d59f47c7", UID:"7654a955-3468-4249-b64d-cd26977f4efa", APIVersion:"apps/v1", ResourceVersion:"387", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: local-path-provisioner-6d59f47c7-zw52t | |
I1028 08:30:03.031006 7 trace.go:116] Trace[1134333654]: "GuaranteedUpdate etcd3" type:*core.Node (started: 2020-10-28 08:30:01.430457545 +0000 UTC m=+20.584176827) (total time: 1.596814379s): | |
Trace[1134333654]: [107.108737ms] [102.112105ms] Transaction committed | |
Trace[1134333654]: [1.022905113s] [910.288232ms] Transaction committed | |
Trace[1134333654]: [1.596484595s] [569.17358ms] Transaction committed | |
I1028 08:30:03.032586 7 trace.go:116] Trace[18268064]: "Patch" url:/api/v1/nodes/k3d-testcluster-server-0,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf/system:serviceaccount:kube-system:node-controller,client:127.0.0.1 (started: 2020-10-28 08:30:01.430358761 +0000 UTC m=+20.584078036) (total time: 1.600874041s): | |
Trace[18268064]: [107.271086ms] [104.515638ms] About to apply patch | |
Trace[18268064]: [1.023124368s] [913.71252ms] About to apply patch | |
Trace[18268064]: [1.60073594s] [575.731527ms] Object stored in database | |
I1028 08:30:03.122298 7 trace.go:116] Trace[390393187]: "Create" url:/apis/discovery.k8s.io/v1beta1/namespaces/kube-system/endpointslices,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf/system:serviceaccount:kube-system:endpointslice-controller,client:127.0.0.1 (started: 2020-10-28 08:30:01.638269875 +0000 UTC m=+20.791989158) (total time: 1.483998118s): | |
Trace[390393187]: [1.483930026s] [1.483883418s] Object stored in database | |
I1028 08:30:03.171309 7 controller.go:606] quota admission added evaluator for: events.events.k8s.io | |
I1028 08:30:03.188767 7 trace.go:116] Trace[978369389]: "GuaranteedUpdate etcd3" type:*core.Endpoints (started: 2020-10-28 08:30:01.808986277 +0000 UTC m=+20.962705557) (total time: 1.379633771s): | |
Trace[978369389]: [1.379522344s] [1.372863146s] Transaction committed | |
I1028 08:30:03.192124 7 trace.go:116] Trace[2073221828]: "Update" url:/api/v1/namespaces/kube-system/endpoints/cloud-controller-manager,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf/leader-election,client:127.0.0.1 (started: 2020-10-28 08:30:01.808229241 +0000 UTC m=+20.961948528) (total time: 1.383756786s): | |
Trace[2073221828]: [1.383171972s] [1.382455149s] Object stored in database | |
I1028 08:30:03.235747 7 trace.go:116] Trace[513271802]: "Create" url:/api/v1/namespaces/default/events,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf/system:serviceaccount:kube-system:node-controller,client:127.0.0.1 (started: 2020-10-28 08:30:01.422016684 +0000 UTC m=+20.575735965) (total time: 1.813682742s): | |
Trace[513271802]: [1.813632043s] [1.813134854s] Object stored in database | |
I1028 08:30:03.312415 7 trace.go:116] Trace[36288983]: "Create" url:/apis/apps/v1/namespaces/kube-system/replicasets,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf/system:serviceaccount:kube-system:deployment-controller,client:127.0.0.1 (started: 2020-10-28 08:30:01.965846382 +0000 UTC m=+21.119565656) (total time: 1.346436659s): | |
Trace[36288983]: [1.346065746s] [1.329896185s] Object stored in database | |
I1028 08:30:03.326168 7 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"metrics-server", UID:"3186c114-631d-4b7a-b750-e4536b868b7a", APIVersion:"apps/v1", ResourceVersion:"254", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set metrics-server-7566d596c8 to 1 | |
I1028 08:30:03.423319 7 trace.go:116] Trace[707933958]: "Create" url:/api/v1/namespaces/kube-system/events,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf/system:serviceaccount:kube-system:deployment-controller,client:127.0.0.1 (started: 2020-10-28 08:30:02.055563392 +0000 UTC m=+21.209282666) (total time: 1.36769419s): | |
Trace[707933958]: [1.3676208s] [1.366954221s] Object stored in database | |
I1028 08:30:03.448562 7 flannel.go:92] Determining IP address of default interface | |
I1028 08:30:03.449772 7 flannel.go:105] Using interface with name eth0 and address 172.18.0.2 | |
I1028 08:30:03.465926 7 trace.go:116] Trace[1799085029]: "GuaranteedUpdate etcd3" type:*rbac.ClusterRole (started: 2020-10-28 08:30:02.093463314 +0000 UTC m=+21.247182622) (total time: 1.372440725s): | |
Trace[1799085029]: [1.372225743s] [1.370245005s] Transaction committed | |
I1028 08:30:03.466472 7 trace.go:116] Trace[1091925750]: "Update" url:/apis/rbac.authorization.k8s.io/v1/clusterroles/edit,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf/system:serviceaccount:kube-system:clusterrole-aggregation-controller,client:127.0.0.1 (started: 2020-10-28 08:30:02.093263178 +0000 UTC m=+21.246982459) (total time: 1.372942389s): | |
Trace[1091925750]: [1.372836561s] [1.372707654s] Object stored in database | |
I1028 08:30:03.521483 7 kube.go:117] Waiting 10m0s for node controller to sync | |
I1028 08:30:03.521579 7 kube.go:300] Starting kube subnet manager | |
I1028 08:30:03.545143 7 trace.go:116] Trace[351059179]: "GuaranteedUpdate etcd3" type:*apps.Deployment (started: 2020-10-28 08:30:02.036539806 +0000 UTC m=+21.190259082) (total time: 1.508574214s): | |
Trace[351059179]: [1.508463726s] [1.494946251s] Transaction committed | |
I1028 08:30:03.545488 7 trace.go:116] Trace[785941409]: "Update" url:/apis/apps/v1/namespaces/kube-system/deployments/local-path-provisioner/status,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf/system:serviceaccount:kube-system:deployment-controller,client:127.0.0.1 (started: 2020-10-28 08:30:02.035875285 +0000 UTC m=+21.189594558) (total time: 1.509529919s): | |
Trace[785941409]: [1.50936659s] [1.508836676s] Object stored in database | |
E1028 08:30:03.618721 7 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again | |
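
The repeated "edit failed"/"admin failed ... object has been modified" errors are the clusterrole aggregation controller losing optimistic-concurrency races on the admin/edit ClusterRoles during startup; it retries against the latest version, so they are harmless. To confirm the aggregated rules did land, a hedged check (kubectl assumed available):

  kubectl get clusterrole admin -o yaml | grep -A4 aggregationRule
  # shows the aggregate-to-admin label selector the controller was reconciling
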
I1028 08:30:03.636468 7 trace.go:116] Trace[266221752]: "Create" url:/api/v1/namespaces/kube-system/pods,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf/system:serviceaccount:kube-system:job-controller,client:127.0.0.1 (started: 2020-10-28 08:30:01.960780965 +0000 UTC m=+21.114500237) (total time: 1.675642021s): | |
Trace[266221752]: [1.675138358s] [1.673917496s] Object stored in database | |
I1028 08:30:03.691423 7 trace.go:116] Trace[671347644]: "Create" url:/apis/apps/v1/namespaces/kube-system/replicasets,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf/system:serviceaccount:kube-system:deployment-controller,client:127.0.0.1 (started: 2020-10-28 08:30:01.96155303 +0000 UTC m=+21.115272302) (total time: 1.729830959s): | |
Trace[671347644]: [1.659023516s] [1.649645018s] Object stored in database | |
I1028 08:30:03.697810 7 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"e169e9f0-e360-4e40-981f-3abea8719491", APIVersion:"apps/v1", ResourceVersion:"229", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-7944c66d8d to 1 | |
I1028 08:30:03.698869 7 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"kube-system", Name:"helm-install-traefik", UID:"396c60f8-3525-49b8-b44d-1cc70575cde6", APIVersion:"batch/v1", ResourceVersion:"272", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: helm-install-traefik-qd5h6 | |
I1028 08:30:03.729773 7 trace.go:116] Trace[978683924]: "GuaranteedUpdate etcd3" type:*core.Endpoints (started: 2020-10-28 08:30:02.262987011 +0000 UTC m=+21.416706314) (total time: 1.466681618s): | |
Trace[978683924]: [1.466657067s] [1.464579658s] Transaction committed | |
I1028 08:30:03.730277 7 trace.go:116] Trace[573035723]: "Update" url:/api/v1/namespaces/kube-system/endpoints/kube-controller-manager,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf/leader-election,client:127.0.0.1 (started: 2020-10-28 08:30:02.262019169 +0000 UTC m=+21.415738442) (total time: 1.468015512s): | |
Trace[573035723]: [1.467970654s] [1.467284808s] Object stored in database | |
I1028 08:30:03.797051 7 trace.go:116] Trace[780911571]: "Create" url:/api/v1/namespaces/kube-system/serviceaccounts,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf/system:serviceaccount:kube-system:service-account-controller,client:127.0.0.1 (started: 2020-10-28 08:30:02.327066026 +0000 UTC m=+21.480785301) (total time: 1.469709892s): | |
Trace[780911571]: [1.469493116s] [1.469446526s] Object stored in database | |
I1028 08:30:03.858985 7 trace.go:116] Trace[1977108903]: "GuaranteedUpdate etcd3" type:*core.Endpoints (started: 2020-10-28 08:30:02.284325973 +0000 UTC m=+21.438045271) (total time: 1.574448765s): | |
Trace[1977108903]: [1.574090567s] [1.572946003s] Transaction committed | |
I1028 08:30:03.860301 7 trace.go:116] Trace[284240854]: "Update" url:/api/v1/namespaces/kube-system/endpoints/kube-scheduler,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf/leader-election,client:127.0.0.1 (started: 2020-10-28 08:30:02.28110081 +0000 UTC m=+21.434820115) (total time: 1.579011589s): | |
Trace[284240854]: [1.578372366s] [1.575416705s] Object stored in database | |
time="2020-10-28T08:30:03.992097974Z" level=info msg="Tunnel endpoint watch event: [172.18.0.2:6443 172.18.0.3:6443]" | |
time="2020-10-28T08:30:03.992372276Z" level=info msg="Connecting to proxy" url="wss://172.18.0.3:6443/v1-k3s/connect" | |
time="2020-10-28T08:30:04.081865901Z" level=error msg="Failed to connect to proxy" error="websocket: bad handshake" | |
time="2020-10-28T08:30:04.081931998Z" level=error msg="Remotedialer proxy error" error="websocket: bad handshake" | |
I1028 08:30:04.087678 7 trace.go:116] Trace[215440365]: "Create" url:/apis/discovery.k8s.io/v1beta1/namespaces/kube-system/endpointslices,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf/system:serviceaccount:kube-system:endpointslice-controller,client:127.0.0.1 (started: 2020-10-28 08:30:01.654003647 +0000 UTC m=+20.807722925) (total time: 2.43363986s): | |
Trace[215440365]: [2.433580294s] [2.432345962s] Object stored in database | |
I1028 08:30:04.130781 7 trace.go:116] Trace[245156298]: "GuaranteedUpdate etcd3" type:*core.ServiceAccount (started: 2020-10-28 08:30:02.403629741 +0000 UTC m=+21.557349016) (total time: 1.727012474s): | |
Trace[245156298]: [1.726979601s] [1.726028763s] Transaction committed | |
I1028 08:30:04.132366 7 trace.go:116] Trace[375822790]: "Update" url:/api/v1/namespaces/kube-node-lease/serviceaccounts/default,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf/tokens-controller,client:127.0.0.1 (started: 2020-10-28 08:30:02.402945281 +0000 UTC m=+21.556664557) (total time: 1.729370949s): | |
Trace[375822790]: [1.728239623s] [1.727610447s] Object stored in database | |
I1028 08:30:04.226135 7 trace.go:116] Trace[696686534]: "Create" url:/api/v1/namespaces/default/secrets,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf/tokens-controller,client:127.0.0.1 (started: 2020-10-28 08:30:02.502503645 +0000 UTC m=+21.656222924) (total time: 1.721420092s): | |
Trace[696686534]: [1.720645605s] [1.719114548s] Object stored in database | |
I1028 08:30:04.369839 7 trace.go:116] Trace[1526841400]: "Create" url:/api/v1/namespaces/kube-system/events,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf/system:serviceaccount:kube-system:replicaset-controller,client:127.0.0.1 (started: 2020-10-28 08:30:02.787853922 +0000 UTC m=+21.941573199) (total time: 1.58187139s): | |
Trace[1526841400]: [1.581422206s] [1.58131581s] Object stored in database | |
I1028 08:30:04.423357 7 trace.go:116] Trace[1918760578]: "GuaranteedUpdate etcd3" type:*apps.ReplicaSet (started: 2020-10-28 08:30:02.752696027 +0000 UTC m=+21.906415313) (total time: 1.670638839s): | |
Trace[1918760578]: [168.498533ms] [167.623386ms] Transaction prepared | |
Trace[1918760578]: [1.67060223s] [1.502103697s] Transaction committed | |
I1028 08:30:04.423470 7 trace.go:116] Trace[988648983]: "Update" url:/apis/apps/v1/namespaces/kube-system/replicasets/local-path-provisioner-6d59f47c7/status,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf/system:serviceaccount:kube-system:replicaset-controller,client:127.0.0.1 (started: 2020-10-28 08:30:02.750406323 +0000 UTC m=+21.904125625) (total time: 1.673046513s): | |
Trace[988648983]: [1.672999364s] [1.672714092s] Object stored in database | |
I1028 08:30:04.489239 7 trace.go:116] Trace[387019]: "GuaranteedUpdate etcd3" type:*rbac.ClusterRole (started: 2020-10-28 08:30:02.845017081 +0000 UTC m=+21.998736359) (total time: 1.644196252s): | |
Trace[387019]: [1.644143752s] [1.582759181s] Transaction committed | |
I1028 08:30:04.489358 7 trace.go:116] Trace[650321828]: "Update" url:/apis/rbac.authorization.k8s.io/v1/clusterroles/admin,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf/system:serviceaccount:kube-system:clusterrole-aggregation-controller,client:127.0.0.1 (started: 2020-10-28 08:30:02.759887206 +0000 UTC m=+21.913606485) (total time: 1.729451141s): | |
Trace[650321828]: [1.729388875s] [1.649799183s] Object stored in database | |
I1028 08:30:04.534109 7 kube.go:124] Node controller sync successful | |
I1028 08:30:04.534329 7 vxlan.go:121] VXLAN config: VNI=1 Port=0 GBP=false Learning=false DirectRouting=false | |
I1028 08:30:04.545092 7 trace.go:116] Trace[1659532064]: "GuaranteedUpdate etcd3" type:*core.ConfigMap (started: 2020-10-28 08:30:03.060274199 +0000 UTC m=+22.213993486) (total time: 1.484780296s): | |
Trace[1659532064]: [1.484685208s] [1.467893616s] Transaction committed | |
I1028 08:30:04.546059 7 trace.go:116] Trace[2036810955]: "Update" url:/api/v1/namespaces/kube-system/configmaps/k3s,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf,client:127.0.0.1 (started: 2020-10-28 08:30:03.058428116 +0000 UTC m=+22.212147394) (total time: 1.487299951s): | |
Trace[2036810955]: [1.487241381s] [1.485449413s] Object stored in database | |
I1028 08:30:04.569278 7 trace.go:116] Trace[1614742933]: "GuaranteedUpdate etcd3" type:*discovery.EndpointSlice (started: 2020-10-28 08:30:03.13216754 +0000 UTC m=+22.285886832) (total time: 1.437092547s): | |
Trace[1614742933]: [1.437068898s] [1.436099496s] Transaction committed | |
I1028 08:30:04.574707 7 trace.go:116] Trace[229864281]: "Update" url:/apis/discovery.k8s.io/v1beta1/namespaces/kube-system/endpointslices/kube-dns-fptfw,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf/system:serviceaccount:kube-system:endpointslice-controller,client:127.0.0.1 (started: 2020-10-28 08:30:03.13176733 +0000 UTC m=+22.285486611) (total time: 1.442897957s): | |
Trace[229864281]: [1.442815066s] [1.44265452s] Object stored in database | |
I1028 08:30:04.598381 7 trace.go:116] Trace[220048045]: "GuaranteedUpdate etcd3" type:*core.Pod (started: 2020-10-28 08:30:03.149602762 +0000 UTC m=+22.303322052) (total time: 1.448748854s): | |
Trace[220048045]: [1.445824174s] [1.44170945s] Transaction committed | |
I1028 08:30:04.608279 7 trace.go:116] Trace[647053836]: "Update" url:/api/v1/namespaces/kube-system/pods/local-path-provisioner-6d59f47c7-zw52t/status,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf/scheduler,client:127.0.0.1 (started: 2020-10-28 08:30:03.128595301 +0000 UTC m=+22.282314576) (total time: 1.478694603s): | |
Trace[647053836]: [1.471004582s] [1.470626222s] Object stored in database | |
I1028 08:30:04.655749 7 trace.go:116] Trace[2060453351]: "Create" url:/apis/events.k8s.io/v1beta1/namespaces/kube-system/events,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf/scheduler,client:127.0.0.1 (started: 2020-10-28 08:30:03.139133696 +0000 UTC m=+22.292852965) (total time: 1.516590236s): | |
Trace[2060453351]: [1.516548288s] [1.492804593s] Object stored in database | |
I1028 08:30:04.712031 7 trace.go:116] Trace[704723000]: "GuaranteedUpdate etcd3" type:*apps.Deployment (started: 2020-10-28 08:30:03.330064335 +0000 UTC m=+22.483783622) (total time: 1.381946399s): | |
Trace[704723000]: [1.381680137s] [1.378136396s] Transaction committed | |
I1028 08:30:04.712241 7 trace.go:116] Trace[1196774467]: "Update" url:/apis/apps/v1/namespaces/kube-system/deployments/metrics-server/status,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf/system:serviceaccount:kube-system:deployment-controller,client:127.0.0.1 (started: 2020-10-28 08:30:03.328530908 +0000 UTC m=+22.482250186) (total time: 1.38369163s): | |
Trace[1196774467]: [1.38358367s] [1.382221851s] Object stored in database | |
I1028 08:30:04.760443 7 trace.go:116] Trace[678362482]: "Create" url:/api/v1/namespaces/kube-system/pods,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf/system:serviceaccount:kube-system:replicaset-controller,client:127.0.0.1 (started: 2020-10-28 08:30:03.341296917 +0000 UTC m=+22.495016199) (total time: 1.41911558s): | |
Trace[678362482]: [1.419025376s] [1.417295915s] Object stored in database | |
I1028 08:30:04.762941 7 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"metrics-server-7566d596c8", UID:"514c8353-f372-4ab9-9454-ff0253d4a0ee", APIVersion:"apps/v1", ResourceVersion:"399", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: metrics-server-7566d596c8-cnmh2 | |
time="2020-10-28T08:30:04.906017801Z" level=info msg="Tunnel endpoint watch event: [172.18.0.2:6443]" | |
time="2020-10-28T08:30:04.906063272Z" level=info msg="Stopped tunnel to 172.18.0.3:6443" | |
I1028 08:30:04.934617 7 trace.go:116] Trace[1899160609]: "Create" url:/api/v1/namespaces/kube-system/events,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf/system:serviceaccount:kube-system:deployment-controller,client:127.0.0.1 (started: 2020-10-28 08:30:03.432873945 +0000 UTC m=+22.586593233) (total time: 1.501709715s): | |
Trace[1899160609]: [1.501427194s] [1.499399741s] Object stored in database | |
I1028 08:30:04.979887 7 trace.go:116] Trace[441008340]: "GuaranteedUpdate etcd3" type:*core.Node (started: 2020-10-28 08:30:03.596095603 +0000 UTC m=+22.749814895) (total time: 1.383690454s): | |
Trace[441008340]: [1.383528967s] [1.364700168s] Transaction committed | |
I1028 08:30:04.981049 7 trace.go:116] Trace[492372493]: "Update" url:/api/v1/nodes/k3d-testcluster-server-0,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf,client:127.0.0.1 (started: 2020-10-28 08:30:03.592955216 +0000 UTC m=+22.746674493) (total time: 1.388070864s): | |
Trace[492372493]: [1.387311896s] [1.384479371s] Object stored in database | |
time="2020-10-28T08:30:04.987565747Z" level=info msg="labels have been set successfully on node: k3d-testcluster-server-0" | |
I1028 08:30:05.067585 7 trace.go:116] Trace[1156925937]: "Create" url:/api/v1/namespaces/kube-system/pods,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf/system:serviceaccount:kube-system:replicaset-controller,client:127.0.0.1 (started: 2020-10-28 08:30:03.648257847 +0000 UTC m=+22.801977122) (total time: 1.419072848s): | |
Trace[1156925937]: [1.418791358s] [1.417644056s] Object stored in database | |
I1028 08:30:05.097471 7 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-7944c66d8d", UID:"8b455fbe-6ef9-402f-b1af-365912e71ac2", APIVersion:"apps/v1", ResourceVersion:"403", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-7944c66d8d-x49fn | |
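
With this event, all of the stock k3s add-ons have had pods created on this node: local-path-provisioner-6d59f47c7-zw52t, metrics-server-7566d596c8-cnmh2, coredns-7944c66d8d-x49fn, plus the helm-install-traefik-qd5h6 job pod. A quick way to confirm they come up (kubectl assumed configured against this cluster):

  kubectl get pods -n kube-system -o wide
  # the four pods above should progress to Running (the helm-install-traefik pod to Completed)
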
I1028 08:30:05.221453 7 trace.go:116] Trace[1698364778]: "Create" url:/api/v1/namespaces/kube-system/events,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf/system:serviceaccount:kube-system:job-controller,client:127.0.0.1 (started: 2020-10-28 08:30:03.72950494 +0000 UTC m=+22.883224206) (total time: 1.488300766s): | |
Trace[1698364778]: [1.488232084s] [1.463262649s] Object stored in database | |
I1028 08:30:05.264037 7 trace.go:116] Trace[447352248]: "GuaranteedUpdate etcd3" type:*apps.Deployment (started: 2020-10-28 08:30:03.727682167 +0000 UTC m=+22.881401448) (total time: 1.536330149s): | |
Trace[447352248]: [1.536248658s] [1.521348633s] Transaction committed | |
I1028 08:30:05.264612 7 trace.go:116] Trace[420580037]: "Update" url:/apis/apps/v1/namespaces/kube-system/deployments/local-path-provisioner/status,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf/system:serviceaccount:kube-system:deployment-controller,client:127.0.0.1 (started: 2020-10-28 08:30:03.718574008 +0000 UTC m=+22.872293285) (total time: 1.546017419s): | |
Trace[420580037]: [1.545642604s] [1.536757989s] Object stored in database | |
I1028 08:30:05.286025 7 trace.go:116] Trace[331652800]: "Create" url:/api/v1/namespaces/kube-public/serviceaccounts,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf/system:serviceaccount:kube-system:service-account-controller,client:127.0.0.1 (started: 2020-10-28 08:30:03.806389388 +0000 UTC m=+22.960108705) (total time: 1.479601393s): | |
Trace[331652800]: [1.479038636s] [1.478512001s] Object stored in database | |
I1028 08:30:05.422541 7 network_policy_controller.go:149] Starting network policy controller | |
I1028 08:30:05.432081 7 trace.go:116] Trace[1009812089]: "GuaranteedUpdate etcd3" type:*coordination.Lease (started: 2020-10-28 08:30:03.84698273 +0000 UTC m=+23.000702020) (total time: 1.585068614s): | |
Trace[1009812089]: [1.585036371s] [1.579357954s] Transaction committed | |
I1028 08:30:05.434735 7 trace.go:116] Trace[628879035]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf/leader-election,client:127.0.0.1 (started: 2020-10-28 08:30:03.845598041 +0000 UTC m=+22.999317307) (total time: 1.588506296s): | |
Trace[628879035]: [1.587533281s] [1.587441373s] Object stored in database | |
I1028 08:30:05.525315 7 trace.go:116] Trace[1988446598]: "GuaranteedUpdate etcd3" type:*apps.Deployment (started: 2020-10-28 08:30:03.73263753 +0000 UTC m=+22.886356845) (total time: 1.792561107s): | |
Trace[1988446598]: [1.792434907s] [1.763290638s] Transaction committed | |
I1028 08:30:05.529828 7 trace.go:116] Trace[315899693]: "Update" url:/apis/apps/v1/namespaces/kube-system/deployments/coredns/status,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf/system:serviceaccount:kube-system:deployment-controller,client:127.0.0.1 (started: 2020-10-28 08:30:03.701271232 +0000 UTC m=+22.854990506) (total time: 1.828516791s): | |
Trace[315899693]: [1.824337471s] [1.793809585s] Object stored in database | |
I1028 08:30:05.554754 7 trace.go:116] Trace[541548318]: "GuaranteedUpdate etcd3" type:*batch.Job (started: 2020-10-28 08:30:03.8772312 +0000 UTC m=+23.030950480) (total time: 1.677501367s): | |
Trace[541548318]: [1.677368857s] [1.668870027s] Transaction committed | |
I1028 08:30:05.555019 7 trace.go:116] Trace[187178486]: "Update" url:/apis/batch/v1/namespaces/kube-system/jobs/helm-install-traefik/status,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf/system:serviceaccount:kube-system:job-controller,client:127.0.0.1 (started: 2020-10-28 08:30:03.87625647 +0000 UTC m=+23.029975748) (total time: 1.678739808s): | |
Trace[187178486]: [1.678614578s] [1.677728224s] Object stored in database | |
I1028 08:30:05.648891 7 trace.go:116] Trace[2092058882]: "GuaranteedUpdate etcd3" type:*coordination.Lease (started: 2020-10-28 08:30:03.984431475 +0000 UTC m=+23.138150763) (total time: 1.664374522s): | |
Trace[2092058882]: [1.664350928s] [1.663538394s] Transaction committed | |
I1028 08:30:05.649057 7 trace.go:116] Trace[1016751060]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-scheduler,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf/leader-election,client:127.0.0.1 (started: 2020-10-28 08:30:03.984162899 +0000 UTC m=+23.137882170) (total time: 1.664875975s): | |
Trace[1016751060]: [1.664777765s] [1.664570206s] Object stored in database | |
I1028 08:30:05.711013 7 trace.go:116] Trace[1056177694]: "Create" url:/api/v1/namespaces/kube-system/secrets,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf/tokens-controller,client:127.0.0.1 (started: 2020-10-28 08:30:04.068147909 +0000 UTC m=+23.221867180) (total time: 1.642798237s): | |
Trace[1056177694]: [1.642545635s] [1.627900106s] Object stored in database | |
I1028 08:30:05.794649 7 trace.go:116] Trace[1880439280]: "GuaranteedUpdate etcd3" type:*core.ServiceAccount (started: 2020-10-28 08:30:04.309606487 +0000 UTC m=+23.463325787) (total time: 1.483674693s): | |
Trace[1880439280]: [1.483648445s] [1.479456793s] Transaction committed | |
I1028 08:30:05.797122 7 trace.go:116] Trace[1915060781]: "Update" url:/api/v1/namespaces/default/serviceaccounts/default,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf/tokens-controller,client:127.0.0.1 (started: 2020-10-28 08:30:04.308802067 +0000 UTC m=+23.462521359) (total time: 1.488294502s): | |
Trace[1915060781]: [1.488100897s] [1.487399759s] Object stored in database | |
I1028 08:30:05.851008 7 trace.go:116] Trace[1708969984]: "GuaranteedUpdate etcd3" type:*apps.ReplicaSet (started: 2020-10-28 08:30:04.486664161 +0000 UTC m=+23.640383431) (total time: 1.364267919s): | |
Trace[1708969984]: [1.364127924s] [1.351591502s] Transaction committed | |
I1028 08:30:05.851320 7 trace.go:116] Trace[1504544726]: "Update" url:/apis/apps/v1/namespaces/kube-system/replicasets/local-path-provisioner-6d59f47c7/status,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf/system:serviceaccount:kube-system:replicaset-controller,client:127.0.0.1 (started: 2020-10-28 08:30:04.484873587 +0000 UTC m=+23.638592867) (total time: 1.366424928s): | |
Trace[1504544726]: [1.366197089s] [1.365439065s] Object stored in database | |
I1028 08:30:05.905232 7 trace.go:116] Trace[552135566]: "GuaranteedUpdate etcd3" type:*rbac.ClusterRole (started: 2020-10-28 08:30:04.505027577 +0000 UTC m=+23.658746894) (total time: 1.400181234s): | |
Trace[552135566]: [1.399930294s] [1.395840662s] Transaction committed | |
I1028 08:30:05.906685 7 trace.go:116] Trace[2048833875]: "Update" url:/apis/rbac.authorization.k8s.io/v1/clusterroles/admin,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf/system:serviceaccount:kube-system:clusterrole-aggregation-controller,client:127.0.0.1 (started: 2020-10-28 08:30:04.504296697 +0000 UTC m=+23.658015979) (total time: 1.402365173s): | |
Trace[2048833875]: [1.401850855s] [1.40124269s] Object stored in database | |
I1028 08:30:05.999396 7 trace.go:116] Trace[1537433638]: "GuaranteedUpdate etcd3" type:*core.Pod (started: 2020-10-28 08:30:04.662679235 +0000 UTC m=+23.816398521) (total time: 1.336641901s): | |
Trace[1537433638]: [1.336581116s] [1.336508609s] Transaction committed | |
I1028 08:30:05.999722 7 trace.go:116] Trace[1791368946]: "Create" url:/api/v1/namespaces/kube-system/pods/local-path-provisioner-6d59f47c7-zw52t/binding,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf/scheduler,client:127.0.0.1 (started: 2020-10-28 08:30:04.662304736 +0000 UTC m=+23.816024012) (total time: 1.337397029s): | |
Trace[1791368946]: [1.337148636s] [1.337083069s] Object stored in database | |
I1028 08:30:06.082485 7 topology_manager.go:233] [topologymanager] Topology Admit Handler | |
I1028 08:30:06.146904 7 trace.go:116] Trace[827303740]: "GuaranteedUpdate etcd3" type:*core.Pod (started: 2020-10-28 08:30:04.65691952 +0000 UTC m=+23.810638801) (total time: 1.489773483s): | |
Trace[827303740]: [1.48954643s] [1.489506627s] Transaction committed | |
I1028 08:30:06.147271 7 trace.go:116] Trace[747502895]: "Create" url:/api/v1/namespaces/kube-system/pods/helm-install-traefik-qd5h6/binding,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf/scheduler,client:127.0.0.1 (started: 2020-10-28 08:30:04.640115434 +0000 UTC m=+23.793834723) (total time: 1.507123318s): | |
Trace[747502895]: [1.506894063s] [1.490925593s] Object stored in database | |
I1028 08:30:06.172895 7 topology_manager.go:233] [topologymanager] Topology Admit Handler | |
I1028 08:30:06.214052 7 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/4233c0b7-c77d-47b8-b77c-68d3942534d3-config-volume") pod "local-path-provisioner-6d59f47c7-zw52t" (UID: "4233c0b7-c77d-47b8-b77c-68d3942534d3") | |
I1028 08:30:06.214165 7 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "local-path-provisioner-service-account-token-5chgh" (UniqueName: "kubernetes.io/secret/4233c0b7-c77d-47b8-b77c-68d3942534d3-local-path-provisioner-service-account-token-5chgh") pod "local-path-provisioner-6d59f47c7-zw52t" (UID: "4233c0b7-c77d-47b8-b77c-68d3942534d3") | |
I1028 08:30:06.214190 7 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "values" (UniqueName: "kubernetes.io/configmap/77b94ca5-5abe-4b3c-86bb-0a4d595f4b73-values") pod "helm-install-traefik-qd5h6" (UID: "77b94ca5-5abe-4b3c-86bb-0a4d595f4b73") | |
I1028 08:30:06.214314 7 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "helm-traefik-token-hbwc2" (UniqueName: "kubernetes.io/secret/77b94ca5-5abe-4b3c-86bb-0a4d595f4b73-helm-traefik-token-hbwc2") pod "helm-install-traefik-qd5h6" (UID: "77b94ca5-5abe-4b3c-86bb-0a4d595f4b73") | |
I1028 08:30:06.279555 7 trace.go:116] Trace[1236697218]: "Create" url:/api/v1/namespaces/kube-system/events,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf/system:serviceaccount:kube-system:replicaset-controller,client:127.0.0.1 (started: 2020-10-28 08:30:04.783071195 +0000 UTC m=+23.936790475) (total time: 1.496248237s): | |
Trace[1236697218]: [1.495581447s] [1.495459887s] Object stored in database | |
I1028 08:30:06.378622 7 trace.go:116] Trace[1697830489]: "GuaranteedUpdate etcd3" type:*core.Pod (started: 2020-10-28 08:30:04.840387237 +0000 UTC m=+23.994106508) (total time: 1.538210645s): | |
Trace[1697830489]: [1.538101957s] [1.537471453s] Transaction committed | |
I1028 08:30:06.379379 7 trace.go:116] Trace[1615020054]: "Create" url:/api/v1/namespaces/kube-system/pods/metrics-server-7566d596c8-cnmh2/binding,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf/scheduler,client:127.0.0.1 (started: 2020-10-28 08:30:04.83190748 +0000 UTC m=+23.985626758) (total time: 1.547259484s): | |
Trace[1615020054]: [1.547221295s] [1.546796187s] Object stored in database | |
I1028 08:30:06.422800 7 topology_manager.go:233] [topologymanager] Topology Admit Handler | |
I1028 08:30:06.469720 7 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "metrics-server-token-qrdz4" (UniqueName: "kubernetes.io/secret/99dd9853-5c91-4e70-a497-aa20720455a7-metrics-server-token-qrdz4") pod "metrics-server-7566d596c8-cnmh2" (UID: "99dd9853-5c91-4e70-a497-aa20720455a7") | |
I1028 08:30:06.469748 7 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp-dir" (UniqueName: "kubernetes.io/empty-dir/99dd9853-5c91-4e70-a497-aa20720455a7-tmp-dir") pod "metrics-server-7566d596c8-cnmh2" (UID: "99dd9853-5c91-4e70-a497-aa20720455a7") | |
I1028 08:30:06.536453 7 trace.go:116] Trace[584709287]: "GuaranteedUpdate etcd3" type:*apps.ReplicaSet (started: 2020-10-28 08:30:04.784449134 +0000 UTC m=+23.938168415) (total time: 1.751976761s): | |
Trace[584709287]: [1.751910241s] [1.749423629s] Transaction committed | |
I1028 08:30:06.545784 7 trace.go:116] Trace[404106236]: "Update" url:/apis/apps/v1/namespaces/kube-system/replicasets/metrics-server-7566d596c8/status,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf/system:serviceaccount:kube-system:replicaset-controller,client:127.0.0.1 (started: 2020-10-28 08:30:04.768629345 +0000 UTC m=+23.922348662) (total time: 1.777119857s): | |
Trace[404106236]: [1.771773989s] [1.769924265s] Object stored in database | |
I1028 08:30:06.994567 7 trace.go:116] Trace[1117143956]: "GuaranteedUpdate etcd3" type:*apps.Deployment (started: 2020-10-28 08:30:04.870339689 +0000 UTC m=+24.024058989) (total time: 2.124200463s): | |
Trace[1117143956]: [2.124047754s] [2.116024089s] Transaction committed | |
I1028 08:30:06.995059 7 trace.go:116] Trace[1076202203]: "Update" url:/apis/apps/v1/namespaces/kube-system/deployments/metrics-server/status,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf/system:serviceaccount:kube-system:deployment-controller,client:127.0.0.1 (started: 2020-10-28 08:30:04.867007753 +0000 UTC m=+24.020727027) (total time: 2.127955956s): | |
Trace[1076202203]: [2.127831462s] [2.124626288s] Object stored in database | |
I1028 08:30:07.181812 7 trace.go:116] Trace[1984383660]: "GuaranteedUpdate etcd3" type:*discovery.EndpointSlice (started: 2020-10-28 08:30:04.821939541 +0000 UTC m=+23.975658839) (total time: 2.359843297s): | |
Trace[1984383660]: [2.359616857s] [2.330777463s] Transaction committed | |
I1028 08:30:07.182172 7 trace.go:116] Trace[1711523054]: "Update" url:/apis/discovery.k8s.io/v1beta1/namespaces/kube-system/endpointslices/metrics-server-x9n99,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf/system:serviceaccount:kube-system:endpointslice-controller,client:127.0.0.1 (started: 2020-10-28 08:30:04.819098527 +0000 UTC m=+23.972817805) (total time: 2.363048671s): | |
Trace[1711523054]: [2.362944291s] [2.360359125s] Object stored in database | |
I1028 08:30:07.259332 7 trace.go:116] Trace[988527397]: "Create" url:/api/v1/namespaces/kube-system/events,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf/system:serviceaccount:kube-system:deployment-controller,client:127.0.0.1 (started: 2020-10-28 08:30:04.941171326 +0000 UTC m=+24.094890600) (total time: 2.317571611s): | |
Trace[988527397]: [2.317455116s] [2.314704929s] Object stored in database | |
I1028 08:30:07.285558 7 trace.go:116] Trace[1695091289]: "GuaranteedUpdate etcd3" type:*core.Pod (started: 2020-10-28 08:30:05.198586884 +0000 UTC m=+24.352306164) (total time: 2.086951696s): | |
Trace[1695091289]: [2.086858139s] [2.086736322s] Transaction committed | |
I1028 08:30:07.285733 7 trace.go:116] Trace[1824463876]: "Create" url:/api/v1/namespaces/kube-system/pods/coredns-7944c66d8d-x49fn/binding,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf/scheduler,client:127.0.0.1 (started: 2020-10-28 08:30:05.198286389 +0000 UTC m=+24.352005666) (total time: 2.087425264s): | |
Trace[1824463876]: [2.087396684s] [2.087348693s] Object stored in database | |
I1028 08:30:07.324602 7 topology_manager.go:233] [topologymanager] Topology Admit Handler | |
I1028 08:30:07.414322 7 trace.go:116] Trace[1156600946]: "GuaranteedUpdate etcd3" type:*apps.ReplicaSet (started: 2020-10-28 08:30:05.109649948 +0000 UTC m=+24.263369236) (total time: 2.30463594s): | |
Trace[1156600946]: [2.304510838s] [2.2487184s] Transaction committed | |
I1028 08:30:07.427099 7 trace.go:116] Trace[401807356]: "Update" url:/apis/apps/v1/namespaces/kube-system/replicasets/coredns-7944c66d8d/status,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf/system:serviceaccount:kube-system:replicaset-controller,client:127.0.0.1 (started: 2020-10-28 08:30:05.104802511 +0000 UTC m=+24.258521800) (total time: 2.320683838s): | |
Trace[401807356]: [2.309598251s] [2.304888183s] Object stored in database | |
I1028 08:30:07.466178 7 trace.go:116] Trace[1792754664]: "Create" url:/api/v1/namespaces/kube-public/secrets,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf/tokens-controller,client:127.0.0.1 (started: 2020-10-28 08:30:05.509401517 +0000 UTC m=+24.663120785) (total time: 1.956725399s): | |
Trace[1792754664]: [1.956654099s] [1.956565447s] Object stored in database | |
I1028 08:30:07.553002 7 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a9709d92-591d-46e4-b71b-3df408b6ee59-config-volume") pod "coredns-7944c66d8d-x49fn" (UID: "a9709d92-591d-46e4-b71b-3df408b6ee59") | |
I1028 08:30:07.553131 7 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-mzscd" (UniqueName: "kubernetes.io/secret/a9709d92-591d-46e4-b71b-3df408b6ee59-coredns-token-mzscd") pod "coredns-7944c66d8d-x49fn" (UID: "a9709d92-591d-46e4-b71b-3df408b6ee59") | |
I1028 08:30:07.581333 7 trace.go:116] Trace[467055525]: "GuaranteedUpdate etcd3" type:*apps.Deployment (started: 2020-10-28 08:30:05.582340854 +0000 UTC m=+24.736060148) (total time: 1.998971012s): | |
Trace[467055525]: [1.998828066s] [1.976389513s] Transaction committed | |
I1028 08:30:07.586032 7 trace.go:116] Trace[1536424887]: "Update" url:/apis/apps/v1/namespaces/kube-system/deployments/coredns/status,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf/system:serviceaccount:kube-system:deployment-controller,client:127.0.0.1 (started: 2020-10-28 08:30:05.578279459 +0000 UTC m=+24.731998749) (total time: 2.007721407s): | |
Trace[1536424887]: [2.007397763s] [2.003464836s] Object stored in database | |
I1028 08:30:07.788946 7 trace.go:116] Trace[1692631164]: "GuaranteedUpdate etcd3" type:*core.Endpoints (started: 2020-10-28 08:30:05.566109596 +0000 UTC m=+24.719828875) (total time: 2.222805836s): | |
Trace[1692631164]: [2.222768194s] [2.222402276s] Transaction committed | |
I1028 08:30:07.790201 7 trace.go:116] Trace[1542802265]: "Update" url:/api/v1/namespaces/kube-system/endpoints/cloud-controller-manager,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf/leader-election,client:127.0.0.1 (started: 2020-10-28 08:30:05.565931583 +0000 UTC m=+24.719650858) (total time: 2.22379562s): | |
Trace[1542802265]: [2.223089598s] [2.222945939s] Object stored in database | |
I1028 08:30:07.838210 7 trace.go:116] Trace[1965578693]: "GuaranteedUpdate etcd3" type:*batch.Job (started: 2020-10-28 08:30:05.624891307 +0000 UTC m=+24.778610585) (total time: 2.213043766s): | |
Trace[1965578693]: [2.211116292s] [2.18941015s] Transaction committed | |
I1028 08:30:07.865206 7 trace.go:116] Trace[1606781330]: "Update" url:/apis/batch/v1/namespaces/kube-system/jobs/helm-install-traefik/status,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf/system:serviceaccount:kube-system:job-controller,client:127.0.0.1 (started: 2020-10-28 08:30:05.62472275 +0000 UTC m=+24.778442024) (total time: 2.214734913s): | |
Trace[1606781330]: [2.213545271s] [2.213458538s] Object stored in database | |
I1028 08:30:07.944609 7 trace.go:116] Trace[1922809253]: "GuaranteedUpdate etcd3" type:*core.ServiceAccount (started: 2020-10-28 08:30:05.715718051 +0000 UTC m=+24.869437330) (total time: 2.228859609s): | |
Trace[1922809253]: [2.228826772s] [2.227060748s] Transaction committed | |
I1028 08:30:07.945142 7 trace.go:116] Trace[2074472039]: "Update" url:/api/v1/namespaces/kube-system/serviceaccounts/default,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf/tokens-controller,client:127.0.0.1 (started: 2020-10-28 08:30:05.714206044 +0000 UTC m=+24.867925328) (total time: 2.230901898s): | |
Trace[2074472039]: [2.230481062s] [2.229000931s] Object stored in database | |
I1028 08:30:08.038587 7 trace.go:116] Trace[1358120128]: "GuaranteedUpdate etcd3" type:*apps.Deployment (started: 2020-10-28 08:30:05.88328476 +0000 UTC m=+25.037004075) (total time: 2.155279602s): | |
Trace[1358120128]: [2.155194819s] [2.151530003s] Transaction committed | |
I1028 08:30:08.039295 7 trace.go:116] Trace[1604757712]: "Update" url:/apis/apps/v1/namespaces/kube-system/deployments/local-path-provisioner/status,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf/system:serviceaccount:kube-system:deployment-controller,client:127.0.0.1 (started: 2020-10-28 08:30:05.882347298 +0000 UTC m=+25.036066568) (total time: 2.156924306s): | |
Trace[1604757712]: [2.156543693s] [2.155868714s] Object stored in database | |
I1028 08:30:08.132939 7 trace.go:116] Trace[1759047094]: "GuaranteedUpdate etcd3" type:*core.Node (started: 2020-10-28 08:30:04.586384535 +0000 UTC m=+23.740103819) (total time: 3.546530013s): | |
Trace[1759047094]: [1.386174315s] [1.343120917s] Transaction committed | |
Trace[1759047094]: [3.546003284s] [2.133472357s] Transaction committed | |
I1028 08:30:08.137523 7 trace.go:116] Trace[706865362]: "Patch" url:/api/v1/nodes/k3d-testcluster-server-0/status,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf,client:127.0.0.1 (started: 2020-10-28 08:30:04.586303609 +0000 UTC m=+23.740022886) (total time: 3.550998137s): | |
Trace[706865362]: [1.386322853s] [1.365746531s] About to apply patch | |
Trace[706865362]: [3.5503005s] [2.147194758s] Object stored in database | |
I1028 08:30:08.187539 7 trace.go:116] Trace[257828923]: "Create" url:/apis/events.k8s.io/v1beta1/namespaces/kube-system/events,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf/scheduler,client:127.0.0.1 (started: 2020-10-28 08:30:06.030750899 +0000 UTC m=+25.184470189) (total time: 2.156600679s): | |
Trace[257828923]: [2.156503179s] [2.155697006s] Object stored in database | |
I1028 08:30:08.199199 7 trace.go:116] Trace[1950699311]: "Create" url:/apis/events.k8s.io/v1beta1/namespaces/kube-system/events,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf/scheduler,client:127.0.0.1 (started: 2020-10-28 08:30:06.21619921 +0000 UTC m=+25.369918487) (total time: 1.982975532s): | |
Trace[1950699311]: [1.982806972s] [1.982746161s] Object stored in database | |
I1028 08:30:08.246107 7 trace.go:116] Trace[1897792340]: "Create" url:/api/v1/namespaces/kube-system/events,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf/system:serviceaccount:kube-system:replicaset-controller,client:127.0.0.1 (started: 2020-10-28 08:30:06.366566336 +0000 UTC m=+25.520285611) (total time: 1.879505413s): | |
Trace[1897792340]: [1.879389657s] [1.879270701s] Object stored in database | |
I1028 08:30:08.270343 7 trace.go:116] Trace[690282541]: "Create" url:/apis/events.k8s.io/v1beta1/namespaces/kube-system/events,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf/scheduler,client:127.0.0.1 (started: 2020-10-28 08:30:06.489094999 +0000 UTC m=+25.642814277) (total time: 1.780991371s): | |
Trace[690282541]: [1.778611033s] [1.778555046s] Object stored in database | |
I1028 08:30:08.321484 7 trace.go:116] Trace[1141721088]: "GuaranteedUpdate etcd3" type:*core.Pod (started: 2020-10-28 08:30:06.326025841 +0000 UTC m=+25.479745122) (total time: 1.995394464s): | |
Trace[1141721088]: [205.82456ms] [205.734133ms] Transaction prepared | |
Trace[1141721088]: [1.99415007s] [1.78832551s] Transaction committed | |
I1028 08:30:08.322343 7 trace.go:116] Trace[489796715]: "Patch" url:/api/v1/namespaces/kube-system/pods/local-path-provisioner-6d59f47c7-zw52t/status,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf,client:127.0.0.1 (started: 2020-10-28 08:30:06.325909943 +0000 UTC m=+25.479629221) (total time: 1.99637765s): | |
Trace[489796715]: [203.094471ms] [202.886721ms] About to check admission control | |
Trace[489796715]: [1.996128555s] [1.793034084s] Object stored in database | |
I1028 08:30:08.364419 7 trace.go:116] Trace[2011676420]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (started: 2020-10-28 08:30:06.614215081 +0000 UTC m=+25.767934357) (total time: 1.750181819s): | |
Trace[2011676420]: [1.75012345s] [1.668711075s] Transaction committed | |
I1028 08:30:08.398943 7 trace.go:116] Trace[1011057185]: "GuaranteedUpdate etcd3" type:*core.ConfigMap (started: 2020-10-28 08:30:07.091953228 +0000 UTC m=+26.245672532) (total time: 1.306970114s): | |
Trace[1011057185]: [1.306892495s] [1.295917815s] Transaction committed | |
I1028 08:30:08.399662 7 trace.go:116] Trace[1278945607]: "Update" url:/api/v1/namespaces/kube-system/configmaps/k3s,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf,client:127.0.0.1 (started: 2020-10-28 08:30:07.088678019 +0000 UTC m=+26.242397294) (total time: 1.310961638s): | |
Trace[1278945607]: [1.310318986s] [1.310145941s] Object stored in database | |
I1028 08:30:08.463750 7 trace.go:116] Trace[50337085]: "GuaranteedUpdate etcd3" type:*apps.ReplicaSet (started: 2020-10-28 08:30:06.990461325 +0000 UTC m=+26.144180609) (total time: 1.473265812s): | |
Trace[50337085]: [1.473209618s] [1.470845673s] Transaction committed | |
I1028 08:30:08.463887 7 trace.go:116] Trace[177718618]: "Update" url:/apis/apps/v1/namespaces/kube-system/replicasets/metrics-server-7566d596c8/status,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf/system:serviceaccount:kube-system:replicaset-controller,client:127.0.0.1 (started: 2020-10-28 08:30:06.954536509 +0000 UTC m=+26.108255787) (total time: 1.509329834s): | |
Trace[177718618]: [1.509254381s] [1.473780925s] Object stored in database | |
I1028 08:30:08.512633 7 trace.go:116] Trace[1545170888]: "Create" url:/apis/events.k8s.io/v1beta1/namespaces/kube-system/events,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf/scheduler,client:127.0.0.1 (started: 2020-10-28 08:30:07.347879984 +0000 UTC m=+26.501599279) (total time: 1.164679548s): | |
Trace[1545170888]: [1.164491883s] [1.164422554s] Object stored in database | |
I1028 08:30:08.543703 7 trace.go:116] Trace[1436269224]: "GuaranteedUpdate etcd3" type:*core.ServiceAccount (started: 2020-10-28 08:30:07.480770564 +0000 UTC m=+26.634489855) (total time: 1.062910429s): | |
Trace[1436269224]: [1.062883815s] [1.062289655s] Transaction committed | |
I1028 08:30:08.543917 7 trace.go:116] Trace[817399075]: "Update" url:/api/v1/namespaces/kube-public/serviceaccounts/default,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf/tokens-controller,client:127.0.0.1 (started: 2020-10-28 08:30:07.480622966 +0000 UTC m=+26.634342246) (total time: 1.063234577s): | |
Trace[817399075]: [1.063187289s] [1.063077075s] Object stored in database | |
I1028 08:30:08.570590 7 trace.go:116] Trace[945547354]: "GuaranteedUpdate etcd3" type:*core.Endpoints (started: 2020-10-28 08:30:07.738208733 +0000 UTC m=+26.891928020) (total time: 832.3612ms): | |
Trace[945547354]: [827.287315ms] [826.017153ms] Transaction committed | |
I1028 08:30:08.570707 7 trace.go:116] Trace[363433564]: "Update" url:/api/v1/namespaces/kube-system/endpoints/kube-controller-manager,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf/leader-election,client:127.0.0.1 (started: 2020-10-28 08:30:07.738080496 +0000 UTC m=+26.891799783) (total time: 832.609546ms): | |
Trace[363433564]: [832.54541ms] [832.44856ms] Object stored in database | |
I1028 08:30:08.617876 7 trace.go:116] Trace[1969997997]: "GuaranteedUpdate etcd3" type:*core.Endpoints (started: 2020-10-28 08:30:07.89865103 +0000 UTC m=+27.052370324) (total time: 719.2073ms): | |
Trace[1969997997]: [719.152814ms] [715.649973ms] Transaction committed | |
I1028 08:30:08.618020 7 trace.go:116] Trace[1758000199]: "Update" url:/api/v1/namespaces/kube-system/endpoints/kube-scheduler,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf/leader-election,client:127.0.0.1 (started: 2020-10-28 08:30:07.870551629 +0000 UTC m=+27.024270905) (total time: 747.451403ms): | |
Trace[1758000199]: [747.374851ms] [719.81937ms] Object stored in database | |
I1028 08:30:08.656470 7 trace.go:116] Trace[16535317]: "GuaranteedUpdate etcd3" type:*coordination.Lease (started: 2020-10-28 08:30:07.935639478 +0000 UTC m=+27.089358777) (total time: 720.810435ms): | |
Trace[16535317]: [720.731102ms] [719.659482ms] Transaction committed | |
I1028 08:30:08.656729 7 trace.go:116] Trace[228174825]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/cloud-controller-manager,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf/leader-election,client:127.0.0.1 (started: 2020-10-28 08:30:07.935088117 +0000 UTC m=+27.088807392) (total time: 721.61346ms): | |
Trace[228174825]: [721.496378ms] [721.444099ms] Object stored in database | |
I1028 08:30:08.707474 7 trace.go:116] Trace[247803966]: "GuaranteedUpdate etcd3" type:*apps.ReplicaSet (started: 2020-10-28 08:30:07.748071909 +0000 UTC m=+26.901791193) (total time: 959.373514ms): | |
Trace[247803966]: [956.69773ms] [923.18488ms] Transaction committed | |
I1028 08:30:08.709262 7 trace.go:116] Trace[1002351667]: "Update" url:/apis/apps/v1/namespaces/kube-system/replicasets/coredns-7944c66d8d/status,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf/system:serviceaccount:kube-system:replicaset-controller,client:127.0.0.1 (started: 2020-10-28 08:30:07.741887543 +0000 UTC m=+26.895606831) (total time: 967.227589ms): | |
Trace[1002351667]: [965.896667ms] [961.848252ms] Object stored in database | |
I1028 08:30:08.736430 7 trace.go:116] Trace[2059467800]: "GuaranteedUpdate etcd3" type:*core.Node (started: 2020-10-28 08:30:08.162353392 +0000 UTC m=+27.316072680) (total time: 573.935039ms): | |
Trace[2059467800]: [573.786998ms] [544.658098ms] Transaction committed | |
I1028 08:30:08.737453 7 trace.go:116] Trace[1975927804]: "Patch" url:/api/v1/nodes/k3d-testcluster-server-0/status,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf,client:127.0.0.1 (started: 2020-10-28 08:30:08.162231609 +0000 UTC m=+27.315950885) (total time: 575.050163ms): | |
Trace[1975927804]: [574.342009ms] [545.393358ms] Object stored in database | |
I1028 08:30:08.812493 7 flannel.go:78] Wrote subnet file to /run/flannel/subnet.env | |
I1028 08:30:08.812529 7 flannel.go:82] Running backend. | |
I1028 08:30:08.812537 7 vxlan_network.go:60] watching for new subnet leases | |
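
Flannel has finished its setup on this node: the subnet file is written and the VXLAN backend (VNI 1, per the vxlan.go line above) is watching for leases from other nodes. The per-node configuration can be read back from inside the container; a hedged check, assuming the busybox shell shipped in the rancher/k3s image:

  docker exec k3d-testcluster-server-0 cat /run/flannel/subnet.env
  # expect something like FLANNEL_NETWORK=10.42.0.0/16 and FLANNEL_SUBNET=10.42.0.1/24
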
I1028 08:30:08.902296 7 iptables.go:145] Some iptables rules are missing; deleting and recreating rules | |
I1028 08:30:08.902914 7 iptables.go:167] Deleting iptables rule: -s 10.42.0.0/16 -j ACCEPT | |
I1028 08:30:08.906267 7 iptables.go:145] Some iptables rules are missing; deleting and recreating rules | |
I1028 08:30:08.906338 7 iptables.go:167] Deleting iptables rule: -s 10.42.0.0/16 -d 10.42.0.0/16 -j RETURN | |
I1028 08:30:08.908739 7 iptables.go:167] Deleting iptables rule: -d 10.42.0.0/16 -j ACCEPT | |
I1028 08:30:08.910178 7 iptables.go:155] Adding iptables rule: -s 10.42.0.0/16 -j ACCEPT | |
I1028 08:30:08.916750 7 iptables.go:167] Deleting iptables rule: -s 10.42.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE --random-fully | |
I1028 08:30:08.934950 7 iptables.go:167] Deleting iptables rule: ! -s 10.42.0.0/16 -d 10.42.0.0/24 -j RETURN | |
I1028 08:30:08.942371 7 iptables.go:167] Deleting iptables rule: ! -s 10.42.0.0/16 -d 10.42.0.0/16 -j MASQUERADE --random-fully | |
I1028 08:30:08.984102 7 iptables.go:155] Adding iptables rule: -d 10.42.0.0/16 -j ACCEPT | |
I1028 08:30:09.032670 7 iptables.go:155] Adding iptables rule: -s 10.42.0.0/16 -d 10.42.0.0/16 -j RETURN | |
I1028 08:30:09.107007 7 iptables.go:155] Adding iptables rule: -s 10.42.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE --random-fully | |
I1028 08:30:09.119630 7 iptables.go:155] Adding iptables rule: ! -s 10.42.0.0/16 -d 10.42.0.0/24 -j RETURN | |
I1028 08:30:09.127044 7 iptables.go:155] Adding iptables rule: ! -s 10.42.0.0/16 -d 10.42.0.0/16 -j MASQUERADE --random-fully | |
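
The block above is flannel reconciling its rules ("Some iptables rules are missing; deleting and recreating rules"): pod-to-pod traffic inside 10.42.0.0/16 is accepted and left un-NATed (the RETURN rules), while pod traffic leaving the cluster network is masqueraded. The resulting rule set can be listed from inside the container; a hedged sketch, assuming the iptables binaries bundled with k3s are on the container's PATH:

  docker exec k3d-testcluster-server-0 sh -c \
    'iptables -t nat -S POSTROUTING | grep 10.42; iptables -S FORWARD | grep 10.42'
  # should echo back the MASQUERADE/RETURN and ACCEPT rules added above
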
time="2020-10-28T08:30:10.208173760Z" level=info msg="Tunnel endpoint watch event: [172.18.0.2:6443 172.18.0.3:6443]" | |
time="2020-10-28T08:30:10.208215446Z" level=info msg="Connecting to proxy" url="wss://172.18.0.3:6443/v1-k3s/connect" | |
time="2020-10-28T08:30:11.203104564Z" level=info msg="Active TLS secret k3s-serving (ver=489) (count 10): map[listener.cattle.io/cn-0.0.0.0:0.0.0.0 listener.cattle.io/cn-10.43.0.1:10.43.0.1 listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-172.18.0.2:172.18.0.2 listener.cattle.io/cn-172.18.0.3:172.18.0.3 listener.cattle.io/cn-k3d-testcluster-server-0:k3d-testcluster-server-0 listener.cattle.io/cn-kubernetes:kubernetes listener.cattle.io/cn-kubernetes.default:kubernetes.default listener.cattle.io/cn-kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local listener.cattle.io/cn-localhost:localhost listener.cattle.io/hash:88ce676c700a3b326f54de39eea53ac96ef93265b80e440a66bed398e65a1403]" | |
time="2020-10-28T08:30:11.906761407Z" level=info msg="Handling backend connection request [k3d-testcluster-server-1]" | |
W1028 08:30:12.205094 7 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="k3d-testcluster-server-1" does not exist | |
I1028 08:30:12.205715 7 node_controller.go:325] Initializing node k3d-testcluster-server-1 with cloud provider | |
time="2020-10-28T08:30:12.211166141Z" level=info msg="couldn't find node internal ip label on node k3d-testcluster-server-1" | |
time="2020-10-28T08:30:12.211205038Z" level=info msg="couldn't find node hostname label on node k3d-testcluster-server-1" | |
I1028 08:30:12.213574 7 range_allocator.go:373] Set node k3d-testcluster-server-1 PodCIDR to [10.42.1.0/24] | |
time="2020-10-28T08:30:12.221907390Z" level=info msg="Updated coredns node hosts entry [172.18.0.3 k3d-testcluster-server-1]" | |
time="2020-10-28T08:30:12.236693701Z" level=info msg="couldn't find node internal ip label on node k3d-testcluster-server-1" | |
time="2020-10-28T08:30:12.236733993Z" level=info msg="couldn't find node hostname label on node k3d-testcluster-server-1" | |
I1028 08:30:12.236747 7 node_controller.go:397] Successfully initialized node k3d-testcluster-server-1 with cloud provider | |
I1028 08:30:12.236757 7 node_controller.go:325] Initializing node k3d-testcluster-server-1 with cloud provider | |
I1028 08:30:12.246843 7 node_controller.go:325] Initializing node k3d-testcluster-server-1 with cloud provider | |
I1028 08:30:12.261305 7 node_controller.go:325] Initializing node k3d-testcluster-server-1 with cloud provider | |
I1028 08:30:13.897616 7 log.go:172] http: TLS handshake error from 172.18.0.4:55518: remote error: tls: bad certificate | |
I1028 08:30:13.958589 7 log.go:172] http: TLS handshake error from 172.18.0.4:55526: remote error: tls: bad certificate | |
I1028 08:30:16.430486 7 trace.go:116] Trace[727760328]: "Get" url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf/leader-election,client:127.0.0.1 (started: 2020-10-28 08:30:15.922873994 +0000 UTC m=+35.076593303) (total time: 507.58031ms): | |
Trace[727760328]: [507.453748ms] [507.413763ms] About to write a response | |
W1028 08:30:16.485357 7 node_lifecycle_controller.go:1048] Missing timestamp for Node k3d-testcluster-server-1. Assuming now as a timestamp. | |
I1028 08:30:16.487377 7 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"k3d-testcluster-server-1", UID:"b5263b51-8b1f-453f-871d-94ca1e42e8b1", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node k3d-testcluster-server-1 event: Registered Node k3d-testcluster-server-1 in Controller | |
I1028 08:30:20.724168 7 trace.go:116] Trace[1323235882]: "GuaranteedUpdate etcd3" type:*core.Endpoints (started: 2020-10-28 08:30:19.494079404 +0000 UTC m=+38.647798708) (total time: 1.230046231s): | |
Trace[1323235882]: [1.229937975s] [1.192678647s] Transaction committed | |
I1028 08:30:20.724589 7 trace.go:116] Trace[1213718989]: "Update" url:/api/v1/namespaces/kube-system/endpoints/kube-controller-manager,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf/leader-election,client:127.0.0.1 (started: 2020-10-28 08:30:19.48959101 +0000 UTC m=+38.643310301) (total time: 1.234968907s): | |
Trace[1213718989]: [1.234718469s] [1.232202901s] Object stored in database | |
I1028 08:30:20.893523 7 trace.go:116] Trace[2075040988]: "GuaranteedUpdate etcd3" type:*core.Endpoints (started: 2020-10-28 08:30:20.164327936 +0000 UTC m=+39.318047236) (total time: 728.984503ms): | |
Trace[2075040988]: [728.932064ms] [722.124779ms] Transaction committed | |
I1028 08:30:20.895129 7 trace.go:116] Trace[1785785145]: "Update" url:/api/v1/namespaces/kube-system/endpoints/cloud-controller-manager,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf/leader-election,client:127.0.0.1 (started: 2020-10-28 08:30:20.164104933 +0000 UTC m=+39.317824215) (total time: 730.791748ms): | |
Trace[1785785145]: [729.456434ms] [729.274179ms] Object stored in database | |
I1028 08:30:21.346270 7 trace.go:116] Trace[1156538995]: "GuaranteedUpdate etcd3" type:*core.ConfigMap (started: 2020-10-28 08:30:19.521082433 +0000 UTC m=+38.674801733) (total time: 1.82515908s): | |
Trace[1156538995]: [1.824422495s] [1.812790161s] Transaction committed | |
I1028 08:30:21.355145 7 trace.go:116] Trace[619035684]: "Update" url:/api/v1/namespaces/kube-system/configmaps/k3s,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf,client:127.0.0.1 (started: 2020-10-28 08:30:19.519625499 +0000 UTC m=+38.673344773) (total time: 1.835457267s): | |
Trace[619035684]: [1.835120443s] [1.83421186s] Object stored in database | |
I1028 08:30:21.834694 7 trace.go:116] Trace[486732290]: "GuaranteedUpdate etcd3" type:*coordination.Lease (started: 2020-10-28 08:30:20.089685432 +0000 UTC m=+39.243404725) (total time: 1.744982484s): | |
Trace[486732290]: [1.744125495s] [1.73016544s] Transaction committed | |
I1028 08:30:21.835942 7 trace.go:116] Trace[197925385]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/k3d-testcluster-server-0,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf,client:127.0.0.1 (started: 2020-10-28 08:30:20.089038851 +0000 UTC m=+39.242758147) (total time: 1.746313536s): | |
Trace[197925385]: [1.746239767s] [1.746082623s] Object stored in database | |
I1028 08:30:21.839053 7 trace.go:116] Trace[229263802]: "List etcd3" key:/jobs,resourceVersion:,limit:500,continue: (started: 2020-10-28 08:30:20.744081839 +0000 UTC m=+39.897801120) (total time: 1.094944948s): | |
Trace[229263802]: [1.094944948s] [1.094944948s] END | |
I1028 08:30:21.889182 7 trace.go:116] Trace[23808623]: "List" url:/apis/batch/v1/jobs,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf/system:serviceaccount:kube-system:cronjob-controller,client:127.0.0.1 (started: 2020-10-28 08:30:20.744054475 +0000 UTC m=+39.897773753) (total time: 1.145084177s): | |
Trace[23808623]: [1.095040867s] [1.095019997s] Listing from storage done | |
I1028 08:30:21.920721 7 trace.go:116] Trace[1388052275]: "Get" url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf/leader-election,client:127.0.0.1 (started: 2020-10-28 08:30:20.895028732 +0000 UTC m=+40.048748020) (total time: 1.02566754s): | |
Trace[1388052275]: [1.025616386s] [1.025595238s] About to write a response | |
I1028 08:30:22.507486 7 trace.go:116] Trace[1538706623]: "GuaranteedUpdate etcd3" type:*core.Node (started: 2020-10-28 08:30:20.330603596 +0000 UTC m=+39.484322883) (total time: 2.176797386s): | |
Trace[1538706623]: [2.176642103s] [2.129452487s] Transaction committed | |
I1028 08:30:22.507962 7 trace.go:116] Trace[2082594921]: "Patch" url:/api/v1/nodes/k3d-testcluster-server-0/status,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf,client:127.0.0.1 (started: 2020-10-28 08:30:20.330228165 +0000 UTC m=+39.483947467) (total time: 2.177708046s): | |
Trace[2082594921]: [2.177378359s] [2.134620731s] Object stored in database | |
I1028 08:30:22.928776 7 trace.go:116] Trace[1264854921]: "GuaranteedUpdate etcd3" type:*coordination.Lease (started: 2020-10-28 08:30:21.928473946 +0000 UTC m=+41.082193236) (total time: 1.000279491s): | |
Trace[1264854921]: [1.000252137s] [989.529698ms] Transaction committed | |
I1028 08:30:22.929566 7 trace.go:116] Trace[107071565]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf/leader-election,client:127.0.0.1 (started: 2020-10-28 08:30:21.927183992 +0000 UTC m=+41.080903271) (total time: 1.00234994s): | |
Trace[107071565]: [1.002231221s] [1.001878826s] Object stored in database | |
I1028 08:30:23.288432 7 trace.go:116] Trace[2069094555]: "GuaranteedUpdate etcd3" type:*coordination.Lease (started: 2020-10-28 08:30:21.374966695 +0000 UTC m=+40.528685983) (total time: 1.913436195s): | |
Trace[2069094555]: [1.913261655s] [1.902870715s] Transaction committed | |
I1028 08:30:23.290651 7 trace.go:116] Trace[949171234]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/cloud-controller-manager,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf/leader-election,client:127.0.0.1 (started: 2020-10-28 08:30:21.374021999 +0000 UTC m=+40.527741293) (total time: 1.916011962s): | |
Trace[949171234]: [1.915861683s] [1.915120203s] Object stored in database | |
I1028 08:30:23.356603 7 trace.go:116] Trace[1435742927]: "List etcd3" key:/cronjobs,resourceVersion:,limit:500,continue: (started: 2020-10-28 08:30:21.907113307 +0000 UTC m=+41.060832581) (total time: 1.449261396s): | |
Trace[1435742927]: [1.449261396s] [1.449261396s] END | |
I1028 08:30:23.360905 7 trace.go:116] Trace[490337767]: "List" url:/apis/batch/v1beta1/cronjobs,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf/system:serviceaccount:kube-system:cronjob-controller,client:127.0.0.1 (started: 2020-10-28 08:30:21.907024923 +0000 UTC m=+41.060744199) (total time: 1.451041569s): | |
Trace[490337767]: [1.450455241s] [1.450374153s] Listing from storage done | |
time="2020-10-28T08:30:23.992538608Z" level=warning msg="no known leader address=172.18.0.2:6443 attempt=0" | |
time="2020-10-28T08:30:24.010567678Z" level=warning msg="reported leader server is not the leader address=172.18.0.3:6443 attempt=0" | |
time="2020-10-28T08:30:24.044406191Z" level=warning msg="no known leader address=172.18.0.2:6443 attempt=0" | |
time="2020-10-28T08:30:24.044559955Z" level=warning msg="no known leader address=172.18.0.2:6443 attempt=0" | |
time="2020-10-28T08:30:24.045032950Z" level=warning msg="no known leader address=172.18.0.2:6443 attempt=0" | |
time="2020-10-28T08:30:24.045306919Z" level=warning msg="no known leader address=172.18.0.2:6443 attempt=0" | |
time="2020-10-28T08:30:24.213584860Z" level=warning msg="no known leader address=172.18.0.3:6443 attempt=0" | |
time="2020-10-28T08:30:24.215638567Z" level=warning msg="no known leader address=172.18.0.3:6443 attempt=0" | |
time="2020-10-28T08:30:24.217384707Z" level=warning msg="no known leader address=172.18.0.2:6443 attempt=0" | |
time="2020-10-28T08:30:24.235429800Z" level=warning msg="no known leader address=172.18.0.3:6443 attempt=0" | |
time="2020-10-28T08:30:24.235719894Z" level=warning msg="no known leader address=172.18.0.3:6443 attempt=0" | |
time="2020-10-28T08:30:24.237693084Z" level=warning msg="no known leader address=172.18.0.3:6443 attempt=0" | |
time="2020-10-28T08:30:24.285761966Z" level=warning msg="no known leader address=172.18.0.2:6443 attempt=1" | |
time="2020-10-28T08:30:24.343110845Z" level=warning msg="no known leader address=172.18.0.3:6443 attempt=1" | |
time="2020-10-28T08:30:24.417265150Z" level=warning msg="no known leader address=172.18.0.2:6443 attempt=1" | |
time="2020-10-28T08:30:24.422788561Z" level=warning msg="no known leader address=172.18.0.2:6443 attempt=1" | |
time="2020-10-28T08:30:24.463720780Z" level=warning msg="no known leader address=172.18.0.2:6443 attempt=1" | |
time="2020-10-28T08:30:24.464791607Z" level=warning msg="no known leader address=172.18.0.2:6443 attempt=1" | |
time="2020-10-28T08:30:24.465498130Z" level=warning msg="no known leader address=172.18.0.2:6443 attempt=1" | |
time="2020-10-28T08:30:24.512607545Z" level=warning msg="no known leader address=172.18.0.3:6443 attempt=1" | |
I1028 08:30:24.579253 7 trace.go:116] Trace[350986550]: "GuaranteedUpdate etcd3" type:*core.Endpoints (started: 2020-10-28 08:30:22.923312603 +0000 UTC m=+42.077031890) (total time: 1.65591601s): | |
Trace[350986550]: [1.655880076s] [1.655150916s] Transaction committed | |
I1028 08:30:24.596309 7 trace.go:116] Trace[2086555311]: "Update" url:/api/v1/namespaces/kube-system/endpoints/kube-scheduler,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf/leader-election,client:127.0.0.1 (started: 2020-10-28 08:30:22.782676461 +0000 UTC m=+41.936395740) (total time: 1.811387709s): | |
Trace[2086555311]: [140.513651ms] [140.513651ms] About to convert to expected version | |
Trace[2086555311]: [1.811224029s] [1.670625296s] Object stored in database | |
I1028 08:30:24.625514 7 trace.go:116] Trace[1324802123]: "GuaranteedUpdate etcd3" type:*core.Node (started: 2020-10-28 08:30:23.528772219 +0000 UTC m=+42.682491498) (total time: 1.096706172s): | |
Trace[1324802123]: [1.096425723s] [1.025576386s] Transaction committed | |
I1028 08:30:24.626864 7 trace.go:116] Trace[1989001296]: "Patch" url:/api/v1/nodes/k3d-testcluster-server-1,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf/system:serviceaccount:kube-system:node-controller,client:127.0.0.1 (started: 2020-10-28 08:30:23.528670496 +0000 UTC m=+42.682389770) (total time: 1.098168926s): | |
Trace[1989001296]: [1.096982237s] [1.0306537s] Object stored in database | |
I1028 08:30:25.019615 7 trace.go:116] Trace[214171551]: "Get" url:/api/v1/namespaces/kube-system/configmaps/k3s,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf,client:127.0.0.1 (started: 2020-10-28 08:30:23.672167862 +0000 UTC m=+42.825887149) (total time: 1.347416971s): | |
Trace[214171551]: [1.331129381s] [1.326215313s] About to write a response | |
I1028 08:30:31.111470 7 controller.go:606] quota admission added evaluator for: daemonsets.apps | |
I1028 08:30:31.245839 7 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"traefik", UID:"d34dc607-cf4c-4f64-8f28-e6a601b32b4a", APIVersion:"apps/v1", ResourceVersion:"624", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set traefik-758cd5fc85 to 1 | |
I1028 08:30:31.305174 7 controller.go:606] quota admission added evaluator for: controllerrevisions.apps | |
I1028 08:30:31.315077 7 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"traefik-758cd5fc85", UID:"65db9103-f5ec-450e-b9e9-ba1c7eb828f5", APIVersion:"apps/v1", ResourceVersion:"629", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: traefik-758cd5fc85-lmppv | |
I1028 08:30:31.364840 7 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"svclb-traefik", UID:"139fd0cd-dee6-4df8-9642-e8f43956ee37", APIVersion:"apps/v1", ResourceVersion:"625", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: svclb-traefik-plxmh | |
I1028 08:30:31.402586 7 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"svclb-traefik", UID:"139fd0cd-dee6-4df8-9642-e8f43956ee37", APIVersion:"apps/v1", ResourceVersion:"625", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: svclb-traefik-9w2xw | |
I1028 08:30:31.460526 7 topology_manager.go:233] [topologymanager] Topology Admit Handler | |
E1028 08:30:31.496753 7 daemon_controller.go:321] kube-system/svclb-traefik failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"svclb-traefik", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/svclb-traefik", UID:"139fd0cd-dee6-4df8-9642-e8f43956ee37", ResourceVersion:"625", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63739470631, loc:(*time.Location)(0x6f91ee0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"objectset.rio.cattle.io/hash":"f31475152fbf70655d3c016d368e90118938f6ea", "svccontroller.k3s.cattle.io/nodeselector":"false"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "objectset.rio.cattle.io/applied":"H4sIAAAAAAAA/8xUwW7jNhD9lWLOlCPFltYR0MMiySFo1zFsby+BEYyoUcyaIgVypMYw9O8F5WTtbJJN0fawBx88nHl6fG/49rBVpoQcrpBqa5bEIAAb9Qc5r6yBHLBp/FmXgICaGEtkhHwPBmuCHHwndRGxQ6rUFsRQ9g3KcLZtC4r8zjPVIEA6QlbWrFRNnrFuIDet1gI0FqR9wLTFnyTZE4+csiOJzJpGyp5t0G8gh2qcTD6lSXpeFdWnOEvTcizjJCvH2ZQu4iSZXoynVUYIItCS1rCzWpMbbcf+BM3YkjxpkmxdQEXtCXoBaIzlgeIPyajycO0jPoh3m+1fhlz00G0hh7MuEb/8pkz565JcpyR9OPek8VHdj9vf0r4XMDQsqCJHRpKH/G7/0uTB36dNONJ7RaAdbl+eS5wkaRZl6XkWTdIqiS7SJIuStBhXMkuyVMrg+FGhnF1L/boX4BuSQd6jA3uokeXm929rgE3zarP6XgBT3WhkGkZOVvEfbNZbkD/eEt/J727fn7APY6gMuYOUT526iBrrOJrGIEDV+BCKDo3ckDvbatU05CJd5F08SkbnICB0v4+wsZ7n1jHk01gcP3ksNc6ylVZDDqvLOfRrAWS6U7zl4vJ+frtYgYAOdRtK0xh68a3h6nq5up8vble3Jy0D2Pc9H6LczE/Ok3g0GY/O4/CbDswcedu6Yfn2/ZM881brudVK7iCHm2pmee7IkwkZ5Em2TvHu0hqmRx5UxwYLpRUrOrhalpDfwex6df/56svNDNZ9f8LqWczJZPxf/ThAHA2ZTMavHBlq/8qSgP4/ePIWzE9iyloAW03uOV7v9rClgB/C2FlNoxBYzhCTD6+vRs+HYG3C0BDU14/KswcBVFUkGXKY2aXcUNlqGu59QLx0ipVE/bksrfG3Ru/ehOnX4T23TYlMS3bI9LAL9HnXBKkWVmtlHr4O5yDAvfg/ZNbjV4MdKo2FJsiTfsgHRm4HGWTrHBmetXVB7plmCXkswAy1L8r7F+WSvHJUvj+xICx3kMd9/3cAAAD//0Gy1y61BwAA", "objectset.rio.cattle.io/id":"svccontroller", "objectset.rio.cattle.io/owner-gvk":"/v1, Kind=Service", "objectset.rio.cattle.io/owner-name":"traefik", "objectset.rio.cattle.io/owner-namespace":"kube-system"}, OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Service", Name:"traefik", UID:"d2ca4156-6526-45f1-9516-15b3fc6165cc", Controller:(*bool)(0xc00dc5fe90), BlockOwnerDeletion:(*bool)(nil)}}, Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"k3s", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc0078f9f60), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0078f9f80)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc0078f9fa0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"svclb-traefik", "svccontroller.k3s.cattle.io/svcname":"traefik"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"lb-port-80", Image:"rancher/klipper-lb:v0.1.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort{v1.ContainerPort{Name:"lb-port-80", 
HostPort:80, ContainerPort:80, Protocol:"TCP", HostIP:""}}, EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"SRC_PORT", Value:"80", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"DEST_PROTO", Value:"TCP", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"DEST_PORT", Value:"80", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"DEST_IP", Value:"10.43.203.208", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc007a2ec80), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"lb-port-443", Image:"rancher/klipper-lb:v0.1.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort{v1.ContainerPort{Name:"lb-port-443", HostPort:443, ContainerPort:443, Protocol:"TCP", HostIP:""}}, EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"SRC_PORT", Value:"443", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"DEST_PROTO", Value:"TCP", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"DEST_PORT", Value:"443", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"DEST_IP", Value:"10.43.203.208", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc007a2ed70), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00dc86010), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc015ed1ab0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"noderole.kubernetes.io/master", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc00fa6f998)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc00dc86050)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, 
NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "svclb-traefik": the object has been modified; please apply your changes to the latest version and try again | |
I1028 08:30:31.541829 7 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-tvq6m" (UniqueName: "kubernetes.io/secret/77e07d91-7171-4591-8eb4-c7b9721bd345-default-token-tvq6m") pod "svclb-traefik-9w2xw" (UID: "77e07d91-7171-4591-8eb4-c7b9721bd345") | |
E1028 08:30:31.631743 7 daemon_controller.go:321] kube-system/svclb-traefik failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"svclb-traefik", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/svclb-traefik", UID:"139fd0cd-dee6-4df8-9642-e8f43956ee37", ResourceVersion:"647", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63739470631, loc:(*time.Location)(0x6f91ee0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"objectset.rio.cattle.io/hash":"f31475152fbf70655d3c016d368e90118938f6ea", "svccontroller.k3s.cattle.io/nodeselector":"false"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "objectset.rio.cattle.io/applied":"H4sIAAAAAAAA/8xUwW7jNhD9lWLOlCPFltYR0MMiySFo1zFsby+BEYyoUcyaIgVypMYw9O8F5WTtbJJN0fawBx88nHl6fG/49rBVpoQcrpBqa5bEIAAb9Qc5r6yBHLBp/FmXgICaGEtkhHwPBmuCHHwndRGxQ6rUFsRQ9g3KcLZtC4r8zjPVIEA6QlbWrFRNnrFuIDet1gI0FqR9wLTFnyTZE4+csiOJzJpGyp5t0G8gh2qcTD6lSXpeFdWnOEvTcizjJCvH2ZQu4iSZXoynVUYIItCS1rCzWpMbbcf+BM3YkjxpkmxdQEXtCXoBaIzlgeIPyajycO0jPoh3m+1fhlz00G0hh7MuEb/8pkz565JcpyR9OPek8VHdj9vf0r4XMDQsqCJHRpKH/G7/0uTB36dNONJ7RaAdbl+eS5wkaRZl6XkWTdIqiS7SJIuStBhXMkuyVMrg+FGhnF1L/boX4BuSQd6jA3uokeXm929rgE3zarP6XgBT3WhkGkZOVvEfbNZbkD/eEt/J727fn7APY6gMuYOUT526iBrrOJrGIEDV+BCKDo3ckDvbatU05CJd5F08SkbnICB0v4+wsZ7n1jHk01gcP3ksNc6ylVZDDqvLOfRrAWS6U7zl4vJ+frtYgYAOdRtK0xh68a3h6nq5up8vble3Jy0D2Pc9H6LczE/Ok3g0GY/O4/CbDswcedu6Yfn2/ZM881brudVK7iCHm2pmee7IkwkZ5Em2TvHu0hqmRx5UxwYLpRUrOrhalpDfwex6df/56svNDNZ9f8LqWczJZPxf/ThAHA2ZTMavHBlq/8qSgP4/ePIWzE9iyloAW03uOV7v9rClgB/C2FlNoxBYzhCTD6+vRs+HYG3C0BDU14/KswcBVFUkGXKY2aXcUNlqGu59QLx0ipVE/bksrfG3Ru/ehOnX4T23TYlMS3bI9LAL9HnXBKkWVmtlHr4O5yDAvfg/ZNbjV4MdKo2FJsiTfsgHRm4HGWTrHBmetXVB7plmCXkswAy1L8r7F+WSvHJUvj+xICx3kMd9/3cAAAD//0Gy1y61BwAA", "objectset.rio.cattle.io/id":"svccontroller", "objectset.rio.cattle.io/owner-gvk":"/v1, Kind=Service", "objectset.rio.cattle.io/owner-name":"traefik", "objectset.rio.cattle.io/owner-namespace":"kube-system"}, OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Service", Name:"traefik", UID:"d2ca4156-6526-45f1-9516-15b3fc6165cc", Controller:(*bool)(0xc00e333ed0), BlockOwnerDeletion:(*bool)(nil)}}, Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"k3s", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc00d616c60), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00d616c80)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc00d616ca0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"svclb-traefik", "svccontroller.k3s.cattle.io/svcname":"traefik"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"lb-port-80", Image:"rancher/klipper-lb:v0.1.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort{v1.ContainerPort{Name:"lb-port-80", 
HostPort:80, ContainerPort:80, Protocol:"TCP", HostIP:""}}, EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"SRC_PORT", Value:"80", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"DEST_PROTO", Value:"TCP", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"DEST_PORT", Value:"80", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"DEST_IP", Value:"10.43.203.208", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc007742a50), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"lb-port-443", Image:"rancher/klipper-lb:v0.1.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort{v1.ContainerPort{Name:"lb-port-443", HostPort:443, ContainerPort:443, Protocol:"TCP", HostIP:""}}, EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"SRC_PORT", Value:"443", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"DEST_PROTO", Value:"TCP", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"DEST_PORT", Value:"443", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"DEST_IP", Value:"10.43.203.208", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc007742b90), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00e3740d0), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc013186000), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"noderole.kubernetes.io/master", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc00bddc2e0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc00e374110)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:1, 
NumberMisscheduled:0, DesiredNumberScheduled:2, NumberReady:0, ObservedGeneration:1, UpdatedNumberScheduled:1, NumberAvailable:0, NumberUnavailable:2, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "svclb-traefik": the object has been modified; please apply your changes to the latest version and try again | |
I1028 08:30:32.289965 7 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: ce195a57b12b887b65411e5501509c0b9570e90f4e4e00534e2451daafe2d51d | |
I1028 08:30:32.318069 7 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"kube-system", Name:"helm-install-traefik", UID:"396c60f8-3525-49b8-b44d-1cc70575cde6", APIVersion:"batch/v1", ResourceVersion:"452", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed | |
I1028 08:30:32.448997 7 reconciler.go:196] operationExecutor.UnmountVolume started for volume "helm-traefik-token-hbwc2" (UniqueName: "kubernetes.io/secret/77b94ca5-5abe-4b3c-86bb-0a4d595f4b73-helm-traefik-token-hbwc2") pod "77b94ca5-5abe-4b3c-86bb-0a4d595f4b73" (UID: "77b94ca5-5abe-4b3c-86bb-0a4d595f4b73") | |
I1028 08:30:32.449107 7 reconciler.go:196] operationExecutor.UnmountVolume started for volume "values" (UniqueName: "kubernetes.io/configmap/77b94ca5-5abe-4b3c-86bb-0a4d595f4b73-values") pod "77b94ca5-5abe-4b3c-86bb-0a4d595f4b73" (UID: "77b94ca5-5abe-4b3c-86bb-0a4d595f4b73") | |
W1028 08:30:32.449377 7 empty_dir.go:453] Warning: Failed to clear quota on /var/lib/kubelet/pods/77b94ca5-5abe-4b3c-86bb-0a4d595f4b73/volumes/kubernetes.io~configmap/values: ClearQuota called, but quotas disabled | |
I1028 08:30:32.450032 7 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77b94ca5-5abe-4b3c-86bb-0a4d595f4b73-values" (OuterVolumeSpecName: "values") pod "77b94ca5-5abe-4b3c-86bb-0a4d595f4b73" (UID: "77b94ca5-5abe-4b3c-86bb-0a4d595f4b73"). InnerVolumeSpecName "values". PluginName "kubernetes.io/configmap", VolumeGidValue "" | |
I1028 08:30:32.451941 7 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77b94ca5-5abe-4b3c-86bb-0a4d595f4b73-helm-traefik-token-hbwc2" (OuterVolumeSpecName: "helm-traefik-token-hbwc2") pod "77b94ca5-5abe-4b3c-86bb-0a4d595f4b73" (UID: "77b94ca5-5abe-4b3c-86bb-0a4d595f4b73"). InnerVolumeSpecName "helm-traefik-token-hbwc2". PluginName "kubernetes.io/secret", VolumeGidValue "" | |
I1028 08:30:32.550440 7 reconciler.go:319] Volume detached for volume "values" (UniqueName: "kubernetes.io/configmap/77b94ca5-5abe-4b3c-86bb-0a4d595f4b73-values") on node "k3d-testcluster-server-0" DevicePath "" | |
I1028 08:30:32.550948 7 reconciler.go:319] Volume detached for volume "helm-traefik-token-hbwc2" (UniqueName: "kubernetes.io/secret/77b94ca5-5abe-4b3c-86bb-0a4d595f4b73-helm-traefik-token-hbwc2") on node "k3d-testcluster-server-0" DevicePath "" | |
W1028 08:30:33.293496 7 pod_container_deletor.go:77] Container "98365d251dce59f81da21243758b1615ccc6e466103ce3d2d6839a05f0dea056" not found in pod's containers |
➜ docker logs k3d-testcluster-server-1 | |
time="2020-10-28T08:29:50.482231821Z" level=info msg="Starting k3s v1.18.9+k3s1 (630bebf9)" | |
time="2020-10-28T08:29:50.797012905Z" level=info msg="Active TLS secret (ver=) (count 8): map[listener.cattle.io/cn-0.0.0.0:0.0.0.0 listener.cattle.io/cn-10.43.0.1:10.43.0.1 listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-172.18.0.3:172.18.0.3 listener.cattle.io/cn-kubernetes:kubernetes listener.cattle.io/cn-kubernetes.default:kubernetes.default listener.cattle.io/cn-kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local listener.cattle.io/cn-localhost:localhost listener.cattle.io/hash:b316215eecea9347a4b67980f66c76d3b91908b38387656d39cc324786775759]" | |
time="2020-10-28T08:29:50.852874113Z" level=info msg="Joining dqlite cluster as address=172.18.0.3:6443, id=1003908" | |
time="2020-10-28T08:29:52.445542440Z" level=info msg="Testing connection to peers [172.18.0.2:6443]" | |
time="2020-10-28T08:29:52.450245589Z" level=info msg="Connection OK to peers [172.18.0.2:6443]" | |
time="2020-10-28T08:29:52.497929329Z" level=info msg="Kine listening on unix://kine.sock" | |
time="2020-10-28T08:29:52.502487009Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=unknown --authorization-mode=Node,RBAC --basic-auth-file=/var/lib/rancher/k3s/server/cred/passwd --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction --etcd-servers=unix://kine.sock --insecure-port=0 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=k3s --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" | |
Flag --basic-auth-file has been deprecated, Basic authentication mode is deprecated and will be removed in a future release. It is not recommended for production environments. | |
I1028 08:29:52.503896 8 server.go:645] external host was not specified, using 172.18.0.3 | |
I1028 08:29:52.504367 8 server.go:162] Version: v1.18.9+k3s1 | |
I1028 08:29:53.404971 8 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook. | |
I1028 08:29:53.405074 8 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota. | |
I1028 08:29:53.406952 8 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook. | |
I1028 08:29:53.407053 8 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota. | |
I1028 08:29:53.475490 8 master.go:270] Using reconciler: lease | |
I1028 08:29:53.714898 8 rest.go:113] the default service ipfamily for this cluster is: IPv4 | |
I1028 08:29:54.716768 8 trace.go:116] Trace[1681636332]: "List etcd3" key:/certificatesigningrequests,resourceVersion:,limit:10000,continue: (started: 2020-10-28 08:29:54.127663302 +0000 UTC m=+4.752869990) (total time: 589.020877ms): | |
Trace[1681636332]: [589.020877ms] [589.020877ms] END | |
I1028 08:29:54.717751 8 trace.go:116] Trace[2118875007]: "List etcd3" key:/ingress,resourceVersion:,limit:10000,continue: (started: 2020-10-28 08:29:54.20335812 +0000 UTC m=+4.828564799) (total time: 514.370616ms): | |
Trace[2118875007]: [514.370616ms] [514.370616ms] END | |
I1028 08:29:54.941740 8 trace.go:116] Trace[754902267]: "List etcd3" key:/runtimeclasses,resourceVersion:,limit:10000,continue: (started: 2020-10-28 08:29:54.257541417 +0000 UTC m=+4.882748106) (total time: 684.15807ms): | |
Trace[754902267]: [684.15807ms] [684.15807ms] END | |
I1028 08:29:54.945524 8 trace.go:116] Trace[666707148]: "List etcd3" key:/poddisruptionbudgets,resourceVersion:,limit:10000,continue: (started: 2020-10-28 08:29:54.282653311 +0000 UTC m=+4.907860002) (total time: 662.830976ms): | |
Trace[666707148]: [662.830976ms] [662.830976ms] END | |
I1028 08:29:55.001021 8 trace.go:116] Trace[817157907]: "List etcd3" key:/podsecuritypolicy,resourceVersion:,limit:10000,continue: (started: 2020-10-28 08:29:54.292089417 +0000 UTC m=+4.917296114) (total time: 708.89851ms): | |
Trace[817157907]: [708.89851ms] [708.89851ms] END | |
I1028 08:29:55.187158 8 trace.go:116] Trace[1575795077]: "List etcd3" key:/volumeattachments,resourceVersion:,limit:10000,continue: (started: 2020-10-28 08:29:54.675778068 +0000 UTC m=+5.300984747) (total time: 511.358507ms): | |
Trace[1575795077]: [511.358507ms] [511.358507ms] END | |
I1028 08:29:55.222638 8 trace.go:116] Trace[991373937]: "List etcd3" key:/clusterroles,resourceVersion:,limit:10000,continue: (started: 2020-10-28 08:29:54.334800707 +0000 UTC m=+4.960007391) (total time: 887.804505ms): | |
Trace[991373937]: [887.804505ms] [887.804505ms] END | |
I1028 08:29:55.235066 8 trace.go:116] Trace[1299958759]: "List etcd3" key:/csidrivers,resourceVersion:,limit:10000,continue: (started: 2020-10-28 08:29:54.698381844 +0000 UTC m=+5.323588528) (total time: 536.662905ms): | |
Trace[1299958759]: [536.662905ms] [536.662905ms] END | |
I1028 08:29:55.242538 8 trace.go:116] Trace[883551713]: "List etcd3" key:/clusterrolebindings,resourceVersion:,limit:10000,continue: (started: 2020-10-28 08:29:54.337980611 +0000 UTC m=+4.963187293) (total time: 904.538361ms): | |
Trace[883551713]: [904.538361ms] [904.538361ms] END | |
I1028 08:29:55.244176 8 trace.go:116] Trace[1038409895]: "List etcd3" key:/clusterroles,resourceVersion:,limit:10000,continue: (started: 2020-10-28 08:29:54.494111714 +0000 UTC m=+5.119318395) (total time: 750.049062ms): | |
Trace[1038409895]: [750.049062ms] [750.049062ms] END | |
I1028 08:29:55.258784 8 trace.go:116] Trace[1590455976]: "List etcd3" key:/csidrivers,resourceVersion:,limit:10000,continue: (started: 2020-10-28 08:29:54.664312135 +0000 UTC m=+5.289518819) (total time: 594.439792ms): | |
Trace[1590455976]: [594.439792ms] [594.439792ms] END | |
I1028 08:29:55.313423 8 trace.go:116] Trace[1474026164]: "List etcd3" key:/mutatingwebhookconfigurations,resourceVersion:,limit:10000,continue: (started: 2020-10-28 08:29:54.811163812 +0000 UTC m=+5.436370492) (total time: 501.629255ms): | |
Trace[1474026164]: [501.629255ms] [501.629255ms] END | |
I1028 08:29:55.318517 8 trace.go:116] Trace[292910502]: "List etcd3" key:/replicasets,resourceVersion:,limit:10000,continue: (started: 2020-10-28 08:29:54.756559322 +0000 UTC m=+5.381766001) (total time: 561.922406ms): | |
Trace[292910502]: [561.922406ms] [561.922406ms] END | |
I1028 08:29:55.361343 8 trace.go:116] Trace[1157009535]: "List etcd3" key:/volumeattachments,resourceVersion:,limit:10000,continue: (started: 2020-10-28 08:29:54.660662273 +0000 UTC m=+5.285868957) (total time: 700.61114ms): | |
Trace[1157009535]: [700.61114ms] [700.61114ms] END | |
I1028 08:29:55.407211 8 trace.go:116] Trace[876368019]: "List etcd3" key:/validatingwebhookconfigurations,resourceVersion:,limit:10000,continue: (started: 2020-10-28 08:29:54.789408077 +0000 UTC m=+5.414614765) (total time: 617.740947ms): | |
Trace[876368019]: [617.740947ms] [617.740947ms] END | |
I1028 08:29:55.419603 8 trace.go:116] Trace[2055068812]: "List etcd3" key:/validatingwebhookconfigurations,resourceVersion:,limit:10000,continue: (started: 2020-10-28 08:29:54.814012021 +0000 UTC m=+5.439218709) (total time: 605.505626ms): | |
Trace[2055068812]: [605.505626ms] [605.505626ms] END | |
W1028 08:29:55.646601 8 genericapiserver.go:409] Skipping API batch/v2alpha1 because it has no resources. | |
W1028 08:29:55.687688 8 genericapiserver.go:409] Skipping API discovery.k8s.io/v1alpha1 because it has no resources. | |
W1028 08:29:55.711848 8 genericapiserver.go:409] Skipping API node.k8s.io/v1alpha1 because it has no resources. | |
W1028 08:29:55.749119 8 genericapiserver.go:409] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources. | |
W1028 08:29:55.758147 8 genericapiserver.go:409] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources. | |
W1028 08:29:55.793670 8 genericapiserver.go:409] Skipping API storage.k8s.io/v1alpha1 because it has no resources. | |
W1028 08:29:55.843845 8 genericapiserver.go:409] Skipping API apps/v1beta2 because it has no resources. | |
W1028 08:29:55.843885 8 genericapiserver.go:409] Skipping API apps/v1beta1 because it has no resources. | |
I1028 08:29:55.871217 8 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook. | |
I1028 08:29:55.871255 8 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota. | |
I1028 08:30:01.613781 8 dynamic_cafile_content.go:167] Starting request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt | |
I1028 08:30:01.615095 8 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt | |
I1028 08:30:01.619548 8 dynamic_serving_content.go:130] Starting serving-cert::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key | |
I1028 08:30:01.626977 8 secure_serving.go:178] Serving securely on 127.0.0.1:6444 | |
I1028 08:30:01.627398 8 autoregister_controller.go:141] Starting autoregister controller | |
I1028 08:30:01.627320 8 tlsconfig.go:240] Starting DynamicServingCertificateController | |
I1028 08:30:01.632593 8 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller | |
I1028 08:30:01.633095 8 crd_finalizer.go:266] Starting CRDFinalizer | |
I1028 08:30:01.633274 8 apiservice_controller.go:94] Starting APIServiceRegistrationController | |
I1028 08:30:01.633285 8 controller.go:81] Starting OpenAPI AggregationController | |
I1028 08:30:01.656349 8 available_controller.go:387] Starting AvailableConditionController | |
I1028 08:30:01.656371 8 crdregistration_controller.go:111] Starting crd-autoregister controller | |
I1028 08:30:01.661480 8 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt | |
I1028 08:30:01.661489 8 dynamic_cafile_content.go:167] Starting request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt | |
I1028 08:30:01.661499 8 controller.go:86] Starting OpenAPI controller | |
I1028 08:30:01.661507 8 customresource_discovery_controller.go:209] Starting DiscoveryController | |
I1028 08:30:01.661513 8 naming_controller.go:291] Starting NamingConditionController | |
I1028 08:30:01.661518 8 establishing_controller.go:76] Starting EstablishingController | |
I1028 08:30:01.661524 8 nonstructuralschema_controller.go:186] Starting NonStructuralSchemaConditionController | |
I1028 08:30:01.661530 8 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController | |
I1028 08:30:01.662489 8 cache.go:32] Waiting for caches to sync for autoregister controller | |
I1028 08:30:01.696268 8 shared_informer.go:223] Waiting for caches to sync for cluster_authentication_trust_controller | |
I1028 08:30:01.696701 8 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller | |
I1028 08:30:01.696727 8 cache.go:32] Waiting for caches to sync for AvailableConditionController controller | |
I1028 08:30:01.696733 8 shared_informer.go:223] Waiting for caches to sync for crd-autoregister | |
I1028 08:30:02.102986 8 shared_informer.go:230] Caches are synced for crd-autoregister | |
I1028 08:30:02.105790 8 cache.go:39] Caches are synced for autoregister controller | |
I1028 08:30:02.106774 8 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller | |
I1028 08:30:02.107696 8 cache.go:39] Caches are synced for APIServiceRegistrationController controller | |
I1028 08:30:02.118794 8 cache.go:39] Caches are synced for AvailableConditionController controller | |
I1028 08:30:03.073012 8 trace.go:116] Trace[1389875261]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (started: 2020-10-28 08:30:02.221683096 +0000 UTC m=+12.869623204) (total time: 851.25004ms): | |
Trace[1389875261]: [241.344824ms] [241.344824ms] initial value restored | |
Trace[1389875261]: [851.22759ms] [609.664656ms] Transaction committed | |
I1028 08:30:03.395319 8 storage_scheduling.go:143] all system priority classes are created successfully or already exist. | |
I1028 08:30:03.625827 8 trace.go:116] Trace[437769045]: "Get" url:/api/v1/namespaces/default/endpoints/kubernetes,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf,client:127.0.0.1 (started: 2020-10-28 08:30:03.097769729 +0000 UTC m=+13.745709845) (total time: 528.022119ms): | |
Trace[437769045]: [527.972944ms] [527.961044ms] About to write a response | |
W1028 08:30:03.686249 8 lease.go:224] Resetting endpoints for master service "kubernetes" to [172.18.0.2 172.18.0.3] | |
I1028 08:30:03.704477 8 controller.go:606] quota admission added evaluator for: endpoints | |
I1028 08:30:04.112377 8 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io | |
W1028 08:30:04.460051 8 lease.go:224] Resetting endpoints for master service "kubernetes" to [172.18.0.2] | |
I1028 08:30:06.382002 8 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue). | |
W1028 08:30:09.928672 8 lease.go:224] Resetting endpoints for master service "kubernetes" to [172.18.0.2 172.18.0.3] | |
I1028 08:30:10.034737 8 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue). | |
W1028 08:30:10.034906 8 handler_proxy.go:102] no RequestInfo found in the context | |
E1028 08:30:10.035009 8 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable | |
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]] | |
I1028 08:30:10.035018 8 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue. | |
I1028 08:30:10.076843 8 registry.go:150] Registering EvenPodsSpread predicate and priority function | |
I1028 08:30:10.076862 8 registry.go:150] Registering EvenPodsSpread predicate and priority function | |
time="2020-10-28T08:30:10.077786088Z" level=info msg="Running kube-scheduler --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --port=10251 --secure-port=0" | |
time="2020-10-28T08:30:10.086189354Z" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --port=10252 --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=0 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --use-service-account-credentials=true" | |
I1028 08:30:10.115394 8 controllermanager.go:161] Version: v1.18.9+k3s1 | |
I1028 08:30:10.131197 8 deprecated_insecure_serving.go:53] Serving insecurely on [::]:10252 | |
I1028 08:30:10.131819 8 leaderelection.go:242] attempting to acquire leader lease kube-system/kube-controller-manager... | |
time="2020-10-28T08:30:10.223977464Z" level=info msg="Running cloud-controller-manager --allocate-node-cidrs=true --allow-untagged-cloud=true --bind-address=127.0.0.1 --cloud-provider=k3s --cluster-cidr=10.42.0.0/16 --kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --node-status-update-frequency=1m --secure-port=0" | |
Flag --allow-untagged-cloud has been deprecated, This flag is deprecated and will be removed in a future release. A cluster-id will be required on cloud instances. | |
I1028 08:30:10.257179 8 controllermanager.go:120] Version: v1.18.9+k3s1 | |
W1028 08:30:10.257221 8 controllermanager.go:132] detected a cluster without a ClusterID. A ClusterID will be required in the future. Please tag your cluster to avoid any future issues | |
I1028 08:30:10.257249 8 leaderelection.go:242] attempting to acquire leader lease kube-system/cloud-controller-manager... | |
time="2020-10-28T08:30:10.260514353Z" level=info msg="Handling backend connection request [k3d-testcluster-server-0]" | |
I1028 08:30:10.279365 8 registry.go:150] Registering EvenPodsSpread predicate and priority function | |
I1028 08:30:10.279405 8 registry.go:150] Registering EvenPodsSpread predicate and priority function | |
W1028 08:30:10.282254 8 authorization.go:47] Authorization is disabled | |
W1028 08:30:10.282301 8 authentication.go:40] Authentication is disabled | |
I1028 08:30:10.282315 8 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251 | |
time="2020-10-28T08:30:10.573029615Z" level=info msg="Writing static file: /var/lib/rancher/k3s/server/static/charts/traefik-1.81.0.tgz" | |
time="2020-10-28T08:30:10.573683186Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/traefik.yaml" | |
time="2020-10-28T08:30:10.574086989Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/aggregated-metrics-reader.yaml" | |
time="2020-10-28T08:30:10.574324590Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/auth-delegator.yaml" | |
time="2020-10-28T08:30:10.574930175Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/rolebindings.yaml" | |
time="2020-10-28T08:30:10.575231505Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/auth-reader.yaml" | |
time="2020-10-28T08:30:10.576208433Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-apiservice.yaml" | |
time="2020-10-28T08:30:10.576772915Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-deployment.yaml" | |
time="2020-10-28T08:30:10.577539764Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-service.yaml" | |
time="2020-10-28T08:30:10.577914031Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/resource-reader.yaml" | |
time="2020-10-28T08:30:10.578470598Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/ccm.yaml" | |
time="2020-10-28T08:30:10.578990949Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/coredns.yaml" | |
time="2020-10-28T08:30:10.579327321Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/local-storage.yaml" | |
I1028 08:30:10.586001 8 leaderelection.go:242] attempting to acquire leader lease kube-system/kube-scheduler... | |
time="2020-10-28T08:30:10.780421515Z" level=info msg="Starting k3s.cattle.io/v1, Kind=Addon controller" | |
time="2020-10-28T08:30:10.781055039Z" level=info msg="Node token is available at /var/lib/rancher/k3s/server/token" | |
time="2020-10-28T08:30:10.781164513Z" level=info msg="To join node to cluster: k3s agent -s https://172.18.0.3:6443 -t ${NODE_TOKEN}" | |
time="2020-10-28T08:30:10.781938613Z" level=info msg="Waiting for master node startup: resource name may not be empty" | |
I1028 08:30:10.782023 8 leaderelection.go:242] attempting to acquire leader lease kube-system/k3s... | |
2020-10-28 08:30:10.784445 I | http: TLS handshake error from 127.0.0.1:46040: remote error: tls: bad certificate | |
time="2020-10-28T08:30:10.825298925Z" level=info msg="Wrote kubeconfig /output/kubeconfig.yaml" | |
time="2020-10-28T08:30:10.825396638Z" level=info msg="Run: k3s kubectl" | |
time="2020-10-28T08:30:10.825410379Z" level=info msg="k3s is up and running" | |
time="2020-10-28T08:30:10.829902897Z" level=info msg="module overlay was already loaded" | |
time="2020-10-28T08:30:10.831912745Z" level=info msg="module nf_conntrack was already loaded" | |
time="2020-10-28T08:30:10.838362775Z" level=warning msg="failed to start br_netfilter module" | |
2020-10-28 08:30:10.840355 I | http: TLS handshake error from 127.0.0.1:46048: remote error: tls: bad certificate | |
2020-10-28 08:30:10.849724 I | http: TLS handshake error from 127.0.0.1:46054: remote error: tls: bad certificate | |
I1028 08:30:10.850598 8 controller.go:606] quota admission added evaluator for: addons.k3s.cattle.io | |
time="2020-10-28T08:30:10.879517683Z" level=info msg="Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log" | |
time="2020-10-28T08:30:10.880003627Z" level=info msg="Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd" | |
time="2020-10-28T08:30:10.887735605Z" level=info msg="Waiting for containerd startup: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /run/k3s/containerd/containerd.sock: connect: no such file or directory\"" | |
time="2020-10-28T08:30:11.103541617Z" level=info msg="Starting /v1, Kind=Secret controller" | |
time="2020-10-28T08:30:11.104041599Z" level=info msg="Starting /v1, Kind=Node controller" | |
time="2020-10-28T08:30:11.112034486Z" level=info msg="Updating TLS secret for k3s-serving (count: 10): map[listener.cattle.io/cn-0.0.0.0:0.0.0.0 listener.cattle.io/cn-10.43.0.1:10.43.0.1 listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-172.18.0.2:172.18.0.2 listener.cattle.io/cn-172.18.0.3:172.18.0.3 listener.cattle.io/cn-k3d-testcluster-server-0:k3d-testcluster-server-0 listener.cattle.io/cn-kubernetes:kubernetes listener.cattle.io/cn-kubernetes.default:kubernetes.default listener.cattle.io/cn-kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local listener.cattle.io/cn-localhost:localhost listener.cattle.io/hash:88ce676c700a3b326f54de39eea53ac96ef93265b80e440a66bed398e65a1403]" | |
time="2020-10-28T08:30:11.122428534Z" level=info msg="Active TLS secret k3s-serving (ver=489) (count 10): map[listener.cattle.io/cn-0.0.0.0:0.0.0.0 listener.cattle.io/cn-10.43.0.1:10.43.0.1 listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-172.18.0.2:172.18.0.2 listener.cattle.io/cn-172.18.0.3:172.18.0.3 listener.cattle.io/cn-k3d-testcluster-server-0:k3d-testcluster-server-0 listener.cattle.io/cn-kubernetes:kubernetes listener.cattle.io/cn-kubernetes.default:kubernetes.default listener.cattle.io/cn-kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local listener.cattle.io/cn-localhost:localhost listener.cattle.io/hash:88ce676c700a3b326f54de39eea53ac96ef93265b80e440a66bed398e65a1403]" | |
time="2020-10-28T08:30:11.136047404Z" level=info msg="Active TLS secret k3s-serving (ver=489) (count 10): map[listener.cattle.io/cn-0.0.0.0:0.0.0.0 listener.cattle.io/cn-10.43.0.1:10.43.0.1 listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-172.18.0.2:172.18.0.2 listener.cattle.io/cn-172.18.0.3:172.18.0.3 listener.cattle.io/cn-k3d-testcluster-server-0:k3d-testcluster-server-0 listener.cattle.io/cn-kubernetes:kubernetes listener.cattle.io/cn-kubernetes.default:kubernetes.default listener.cattle.io/cn-kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local listener.cattle.io/cn-localhost:localhost listener.cattle.io/hash:88ce676c700a3b326f54de39eea53ac96ef93265b80e440a66bed398e65a1403]" | |
time="2020-10-28T08:30:11.786338253Z" level=info msg="Waiting for master node k3d-testcluster-server-1 startup: nodes \"k3d-testcluster-server-1\" not found" | |
time="2020-10-28T08:30:11.900176419Z" level=info msg="Connecting to proxy" url="wss://172.18.0.2:6443/v1-k3s/connect" | |
time="2020-10-28T08:30:11.900468812Z" level=info msg="Connecting to proxy" url="wss://172.18.0.3:6443/v1-k3s/connect" | |
time="2020-10-28T08:30:11.904443038Z" level=info msg="Handling backend connection request [k3d-testcluster-server-1]" | |
time="2020-10-28T08:30:11.907447980Z" level=warning msg="Disabling CPU quotas due to missing cpu.cfs_period_us" | |
time="2020-10-28T08:30:11.908196693Z" level=info msg="Running kubelet --address=0.0.0.0 --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=cgroupfs --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --cni-bin-dir=/bin --cni-conf-dir=/var/lib/rancher/k3s/agent/etc/cni/net.d --container-runtime-endpoint=/run/k3s/containerd/containerd.sock --container-runtime=remote --containerd=/run/k3s/containerd/containerd.sock --cpu-cfs-quota=false --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=k3d-testcluster-server-1 --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --kubelet-cgroups=/systemd --node-labels= --read-only-port=0 --resolv-conf=/tmp/k3s-resolv.conf --runtime-cgroups=/systemd --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key" | |
Flag --containerd has been deprecated, This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed. | |
I1028 08:30:11.908715 8 server.go:413] Version: v1.18.9+k3s1 | |
time="2020-10-28T08:30:11.909344709Z" level=info msg="Running kube-proxy --cluster-cidr=10.42.0.0/16 --healthz-bind-address=127.0.0.1 --hostname-override=k3d-testcluster-server-1 --kubeconfig=/var/lib/rancher/k3s/agent/kubeproxy.kubeconfig --proxy-mode=iptables" | |
W1028 08:30:11.909507 8 server.go:225] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP. | |
W1028 08:30:11.916962 8 proxier.go:625] Failed to read file /lib/modules/5.4.39-linuxkit/modules.builtin with error open /lib/modules/5.4.39-linuxkit/modules.builtin: no such file or directory. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules | |
W1028 08:30:11.917720 8 proxier.go:635] Failed to load kernel module ip_vs with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules | |
W1028 08:30:11.918552 8 proxier.go:635] Failed to load kernel module ip_vs_rr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules | |
W1028 08:30:11.918799 8 info.go:51] Couldn't collect info from any of the files in "/etc/machine-id,/var/lib/dbus/machine-id" | |
W1028 08:30:11.919177 8 proxier.go:635] Failed to load kernel module ip_vs_wrr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules | |
I1028 08:30:11.919295 8 server.go:645] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to / | |
I1028 08:30:11.919727 8 container_manager_linux.go:277] container manager verified user specified cgroup-root exists: [] | |
I1028 08:30:11.919760 8 container_manager_linux.go:282] Creating Container Manager object based on Node Config: {RuntimeCgroupsName:/systemd SystemCgroupsName: KubeletCgroupsName:/systemd ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:false CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none} | |
I1028 08:30:11.920024 8 topology_manager.go:126] [topologymanager] Creating topology manager with none policy | |
I1028 08:30:11.920035 8 container_manager_linux.go:312] [topologymanager] Initializing Topology Manager with none policy | |
I1028 08:30:11.920039 8 container_manager_linux.go:317] Creating device plugin manager: true | |
W1028 08:30:11.920243 8 util_unix.go:103] Using "/run/k3s/containerd/containerd.sock" as endpoint is deprecated, please consider using full url format "unix:///run/k3s/containerd/containerd.sock". | |
W1028 08:30:11.920382 8 util_unix.go:103] Using "/run/k3s/containerd/containerd.sock" as endpoint is deprecated, please consider using full url format "unix:///run/k3s/containerd/containerd.sock". | |
I1028 08:30:11.920612 8 kubelet.go:317] Watching apiserver | |
I1028 08:30:11.922603 8 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/agent/client-ca.crt | |
W1028 08:30:11.923715 8 proxier.go:635] Failed to load kernel module ip_vs_sh with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules | |
W1028 08:30:11.925855 8 proxier.go:635] Failed to load kernel module nf_conntrack with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules | |
I1028 08:30:11.935511 8 kuberuntime_manager.go:217] Container runtime containerd initialized, version: v1.3.3-k3s2, apiVersion: v1alpha2 | |
W1028 08:30:11.935861 8 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. | |
I1028 08:30:11.936415 8 server.go:1124] Started kubelet | |
time="2020-10-28T08:30:11.941965735Z" level=info msg="waiting for node k3d-testcluster-server-1: nodes \"k3d-testcluster-server-1\" not found" | |
E1028 08:30:11.942242 8 node.go:125] Failed to retrieve node info: nodes "k3d-testcluster-server-1" not found | |
E1028 08:30:11.945587 8 cri_stats_provider.go:375] Failed to get the info of the filesystem with mountpoint "/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs": unable to find data in memory cache. | |
E1028 08:30:11.945732 8 kubelet.go:1308] Image garbage collection failed once. Stats initialization may not have completed yet: invalid capacity 0 on image filesystem | |
I1028 08:30:11.948614 8 server.go:145] Starting to listen on 0.0.0.0:10250 | |
I1028 08:30:11.949730 8 server.go:393] Adding debug handlers to kubelet server. | |
I1028 08:30:11.951240 8 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer | |
I1028 08:30:11.952911 8 volume_manager.go:265] Starting Kubelet Volume Manager | |
I1028 08:30:11.957540 8 desired_state_of_world_populator.go:139] Desired state populator starts to run | |
I1028 08:30:11.983381 8 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach | |
I1028 08:30:11.988989 8 cpu_manager.go:184] [cpumanager] starting with none policy | |
I1028 08:30:11.991703 8 cpu_manager.go:185] [cpumanager] reconciling every 10s | |
I1028 08:30:11.992381 8 state_mem.go:36] [cpumanager] initializing new in-memory state store | |
E1028 08:30:11.991931 8 controller.go:228] failed to get node "k3d-testcluster-server-1" when trying to set owner ref to the node lease: nodes "k3d-testcluster-server-1" not found | |
I1028 08:30:11.994098 8 policy_none.go:43] [cpumanager] none policy: Start | |
W1028 08:30:12.046925 8 manager.go:597] Failed to retrieve checkpoint for "kubelet_internal_checkpoint": checkpoint is not found | |
I1028 08:30:12.047311 8 plugin_manager.go:114] Starting Kubelet Plugin Manager | |
E1028 08:30:12.048864 8 eviction_manager.go:260] eviction manager: failed to get summary stats: failed to get node info: node "k3d-testcluster-server-1" not found | |
E1028 08:30:12.056625 8 kubelet.go:2270] node "k3d-testcluster-server-1" not found | |
I1028 08:30:12.056700 8 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach | |
I1028 08:30:12.058752 8 kubelet_node_status.go:70] Attempting to register node k3d-testcluster-server-1 | |
I1028 08:30:12.072570 8 kubelet_node_status.go:73] Successfully registered node k3d-testcluster-server-1 | |
I1028 08:30:12.073488 8 status_manager.go:158] Starting to sync pod status with apiserver | |
I1028 08:30:12.075090 8 kubelet.go:1824] Starting kubelet main sync loop. | |
E1028 08:30:12.075224 8 kubelet.go:1848] skipping pod synchronization - PLEG is not healthy: pleg has yet to be successful | |
I1028 08:30:12.260412 8 reconciler.go:157] Reconciler: start to sync state | |
I1028 08:30:12.557999 8 kuberuntime_manager.go:984] updating runtime config through cri with podcidr 10.42.1.0/24 | |
I1028 08:30:12.558716 8 kubelet_network.go:77] Setting Pod CIDR: -> 10.42.1.0/24 | |
time="2020-10-28T08:30:12.868218891Z" level=info msg="master role label has been set succesfully on node: k3d-testcluster-server-1" | |
I1028 08:30:13.124700 8 node.go:136] Successfully retrieved node IP: 172.18.0.3 | |
I1028 08:30:13.124950 8 server_others.go:187] Using iptables Proxier. | |
I1028 08:30:13.129063 8 server.go:583] Version: v1.18.9+k3s1 | |
I1028 08:30:13.135022 8 conntrack.go:52] Setting nf_conntrack_max to 131072 | |
I1028 08:30:13.138075 8 conntrack.go:83] Setting conntrack hashsize to 32768 | |
E1028 08:30:13.140502 8 conntrack.go:85] failed to set conntrack hashsize to 32768: write /sys/module/nf_conntrack/parameters/hashsize: operation not supported | |
I1028 08:30:13.143600 8 conntrack.go:103] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400 | |
I1028 08:30:13.144172 8 conntrack.go:103] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600 | |
I1028 08:30:13.148710 8 config.go:315] Starting service config controller | |
I1028 08:30:13.160228 8 shared_informer.go:223] Waiting for caches to sync for service config | |
I1028 08:30:13.160042 8 config.go:133] Starting endpoints config controller | |
I1028 08:30:13.162085 8 shared_informer.go:223] Waiting for caches to sync for endpoints config | |
I1028 08:30:13.285853 8 shared_informer.go:230] Caches are synced for service config | |
I1028 08:30:13.289682 8 shared_informer.go:230] Caches are synced for endpoints config | |
I1028 08:30:13.959589 8 flannel.go:92] Determining IP address of default interface | |
I1028 08:30:13.960217 8 flannel.go:105] Using interface with name eth0 and address 172.18.0.3 | |
I1028 08:30:13.966355 8 kube.go:117] Waiting 10m0s for node controller to sync | |
I1028 08:30:13.966528 8 kube.go:300] Starting kube subnet manager | |
time="2020-10-28T08:30:14.116602260Z" level=info msg="labels have been set successfully on node: k3d-testcluster-server-1" | |
I1028 08:30:14.308038 8 network_policy_controller.go:149] Starting network policy controller | |
I1028 08:30:14.966787 8 kube.go:124] Node controller sync successful | |
I1028 08:30:14.966902 8 vxlan.go:121] VXLAN config: VNI=1 Port=0 GBP=false Learning=false DirectRouting=false | |
I1028 08:30:15.441421 8 flannel.go:78] Wrote subnet file to /run/flannel/subnet.env | |
I1028 08:30:15.444355 8 flannel.go:82] Running backend. | |
I1028 08:30:15.444369 8 vxlan_network.go:60] watching for new subnet leases | |
I1028 08:30:15.452489 8 iptables.go:145] Some iptables rules are missing; deleting and recreating rules | |
I1028 08:30:15.452646 8 iptables.go:167] Deleting iptables rule: -s 10.42.0.0/16 -d 10.42.0.0/16 -j RETURN | |
I1028 08:30:15.462749 8 iptables.go:145] Some iptables rules are missing; deleting and recreating rules | |
I1028 08:30:15.462925 8 iptables.go:167] Deleting iptables rule: -s 10.42.0.0/16 -j ACCEPT | |
I1028 08:30:15.466448 8 iptables.go:167] Deleting iptables rule: -s 10.42.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE --random-fully | |
I1028 08:30:15.473357 8 iptables.go:167] Deleting iptables rule: -d 10.42.0.0/16 -j ACCEPT | |
I1028 08:30:15.475809 8 iptables.go:167] Deleting iptables rule: ! -s 10.42.0.0/16 -d 10.42.1.0/24 -j RETURN | |
I1028 08:30:15.483605 8 iptables.go:167] Deleting iptables rule: ! -s 10.42.0.0/16 -d 10.42.0.0/16 -j MASQUERADE --random-fully | |
I1028 08:30:15.486334 8 iptables.go:155] Adding iptables rule: -s 10.42.0.0/16 -j ACCEPT | |
I1028 08:30:15.508264 8 iptables.go:155] Adding iptables rule: -d 10.42.0.0/16 -j ACCEPT | |
I1028 08:30:15.509897 8 iptables.go:155] Adding iptables rule: -s 10.42.0.0/16 -d 10.42.0.0/16 -j RETURN | |
I1028 08:30:15.525908 8 iptables.go:155] Adding iptables rule: -s 10.42.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE --random-fully | |
I1028 08:30:15.542278 8 iptables.go:155] Adding iptables rule: ! -s 10.42.0.0/16 -d 10.42.1.0/24 -j RETURN | |
I1028 08:30:15.549986 8 iptables.go:155] Adding iptables rule: ! -s 10.42.0.0/16 -d 10.42.0.0/16 -j MASQUERADE --random-fully | |
I1028 08:30:21.243295 8 trace.go:116] Trace[248141867]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (started: 2020-10-28 08:30:20.33142833 +0000 UTC m=+30.979368447) (total time: 911.65105ms): | |
Trace[248141867]: [231.385831ms] [231.385831ms] initial value restored | |
Trace[248141867]: [911.592045ms] [678.085078ms] Transaction committed | |
I1028 08:30:21.613688 8 trace.go:116] Trace[1277260954]: "Get" url:/api/v1/namespaces/kube-system/endpoints/kube-controller-manager,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf/leader-election,client:127.0.0.1 (started: 2020-10-28 08:30:20.826282928 +0000 UTC m=+31.474223044) (total time: 787.377965ms): | |
Trace[1277260954]: [787.346447ms] [787.324539ms] About to write a response | |
I1028 08:30:22.585664 8 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io | |
I1028 08:30:23.144616 8 trace.go:116] Trace[1419147251]: "GuaranteedUpdate etcd3" type:*core.Node (started: 2020-10-28 08:30:22.127935441 +0000 UTC m=+32.775875557) (total time: 1.016655855s): | |
Trace[1419147251]: [1.016567729s] [1.006304069s] Transaction committed | |
I1028 08:30:23.146153 8 trace.go:116] Trace[1831306385]: "Patch" url:/api/v1/nodes/k3d-testcluster-server-1/status,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf,client:127.0.0.1 (started: 2020-10-28 08:30:22.127732531 +0000 UTC m=+32.775672645) (total time: 1.018336221s): | |
Trace[1831306385]: [1.016956509s] [1.007098533s] Object stored in database | |
I1028 08:30:23.384782 8 trace.go:116] Trace[250403710]: "List etcd3" key:/resourcequotas/kube-node-lease,resourceVersion:,limit:0,continue: (started: 2020-10-28 08:30:22.590739184 +0000 UTC m=+33.238679309) (total time: 794.016028ms): | |
Trace[250403710]: [794.016028ms] [794.016028ms] END | |
I1028 08:30:23.384909 8 trace.go:116] Trace[768713539]: "List" url:/api/v1/namespaces/kube-node-lease/resourcequotas,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf,client:127.0.0.1 (started: 2020-10-28 08:30:22.590717353 +0000 UTC m=+33.238657461) (total time: 794.171549ms): | |
Trace[768713539]: [794.117717ms] [794.106333ms] Listing from storage done | |
I1028 08:30:23.865513 8 trace.go:116] Trace[964744838]: "Create" url:/api/v1/namespaces/default/events,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf,client:127.0.0.1 (started: 2020-10-28 08:30:22.159406936 +0000 UTC m=+32.807347049) (total time: 1.706075237s): | |
Trace[964744838]: [1.706021145s] [1.7058882s] Object stored in database | |
time="2020-10-28T08:30:24.008128662Z" level=warning msg="no known leader address=172.18.0.2:6443 attempt=0" | |
time="2020-10-28T08:30:24.011305210Z" level=warning msg="no known leader address=172.18.0.2:6443 attempt=0" | |
time="2020-10-28T08:30:24.059703236Z" level=warning msg="no known leader address=172.18.0.2:6443 attempt=0" | |
time="2020-10-28T08:30:24.060631607Z" level=warning msg="no known leader address=172.18.0.3:6443 attempt=0" | |
time="2020-10-28T08:30:24.200168674Z" level=warning msg="no known leader address=172.18.0.2:6443 attempt=0" | |
time="2020-10-28T08:30:24.212030679Z" level=warning msg="no known leader address=172.18.0.3:6443 attempt=0" | |
time="2020-10-28T08:30:24.246418858Z" level=warning msg="no known leader address=172.18.0.2:6443 attempt=0" | |
time="2020-10-28T08:30:24.246456499Z" level=warning msg="no known leader address=172.18.0.2:6443 attempt=0" | |
time="2020-10-28T08:30:24.248813114Z" level=warning msg="no known leader address=172.18.0.3:6443 attempt=0" | |
time="2020-10-28T08:30:24.249267907Z" level=warning msg="no known leader address=172.18.0.3:6443 attempt=0" | |
time="2020-10-28T08:30:24.335031262Z" level=warning msg="no known leader address=172.18.0.2:6443 attempt=1" | |
time="2020-10-28T08:30:24.335352438Z" level=warning msg="no known leader address=172.18.0.3:6443 attempt=1" | |
time="2020-10-28T08:30:24.418332168Z" level=warning msg="no known leader address=172.18.0.2:6443 attempt=1" | |
time="2020-10-28T08:30:24.418906563Z" level=warning msg="no known leader address=172.18.0.3:6443 attempt=1" | |
time="2020-10-28T08:30:24.475362852Z" level=warning msg="no known leader address=172.18.0.2:6443 attempt=1" | |
time="2020-10-28T08:30:24.476105021Z" level=warning msg="no known leader address=172.18.0.3:6443 attempt=1" | |
I1028 08:30:24.783963 8 trace.go:116] Trace[850872394]: "Get" url:/api/v1/namespaces/kube-system/endpoints/kube-controller-manager,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf/leader-election,client:127.0.0.1 (started: 2020-10-28 08:30:23.85027864 +0000 UTC m=+34.498218764) (total time: 933.644171ms): | |
Trace[850872394]: [933.534717ms] [933.50989ms] About to write a response | |
I1028 08:30:24.921554 8 trace.go:116] Trace[1500404538]: "Create" url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases,user-agent:k3s/v1.18.9+k3s1 (linux/amd64) kubernetes/630bebf,client:127.0.0.1 (started: 2020-10-28 08:30:22.583419835 +0000 UTC m=+33.231359947) (total time: 2.338101927s): | |
Trace[1500404538]: [2.338038729s] [2.337427996s] Object stored in database | |
E1028 08:30:28.717151 8 available_controller.go:420] v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1beta1.metrics.k8s.io": the object has been modified; please apply your changes to the latest version and try again | |
I1028 08:30:31.506599 8 topology_manager.go:233] [topologymanager] Topology Admit Handler | |
I1028 08:30:31.513412 8 topology_manager.go:233] [topologymanager] Topology Admit Handler | |
I1028 08:30:31.668978 8 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config" (UniqueName: "kubernetes.io/configmap/fa01cd01-355e-4921-9882-458d41aaee83-config") pod "traefik-758cd5fc85-lmppv" (UID: "fa01cd01-355e-4921-9882-458d41aaee83") | |
I1028 08:30:31.669483 8 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ssl" (UniqueName: "kubernetes.io/secret/fa01cd01-355e-4921-9882-458d41aaee83-ssl") pod "traefik-758cd5fc85-lmppv" (UID: "fa01cd01-355e-4921-9882-458d41aaee83") | |
I1028 08:30:31.669585 8 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "traefik-token-f74xq" (UniqueName: "kubernetes.io/secret/fa01cd01-355e-4921-9882-458d41aaee83-traefik-token-f74xq") pod "traefik-758cd5fc85-lmppv" (UID: "fa01cd01-355e-4921-9882-458d41aaee83") | |
I1028 08:30:31.669707 8 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-tvq6m" (UniqueName: "kubernetes.io/secret/37b594d6-cc55-44ae-a1c9-82404c201c74-default-token-tvq6m") pod "svclb-traefik-plxmh" (UID: "37b594d6-cc55-44ae-a1c9-82404c201c74") |