[root@test ~]# ./kk create cluster --with-kubesphere v3.2.1 -f master-HA.yaml
+--------------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+
| name         | sudo | curl | openssl | ebtables | socat | ipset | conntrack | docker | nfs client | ceph client | glusterfs client | time         |
+--------------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+
| k8s-worker02 | y    | y    | y       | y        | y     | y     | y         |        |            |             |                  | CST 15:06:30 |
| k8s-worker03 | y    | y    | y       | y        | y     | y     | y         |        |            |             |                  | CST 15:06:30 |
| k8s-master03 | y    | y    | y       | y        | y     | y     | y         |        |            |             |                  | CST 15:06:30 |
| k8s-master01 | y    | y    | y       | y        | y     | y     | y         |        |            |             |                  | CST 15:06:30 |
| k8s-master02 | y    | y    | y       | y        | y     | y     | y         |        |            |             |                  | CST 15:06:30 |
| k8s-worker01 | y    | y    | y       | y        | y     | y     | y         |        |            |             |                  | CST 15:06:30 |
+--------------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+
This is a simple check of your environment.
Before installation, you should ensure that your machines meet all requirements specified at
https://github.com/kubesphere/kubekey#requirements-and-recommendations
Continue this installation? [yes/no]: yes
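For context, the topology being installed here (three masters, three workers, and a control-plane endpoint lb.kubesphere.local backed by 10.3.7.20) comes from master-HA.yaml. That file is not shown above, so the following is only a minimal sketch of a KubeKey config for this layout, with values inferred from the log; the real file may differ, and credentials are omitted:

cat > master-HA.yaml <<'EOF'
apiVersion: kubekey.kubesphere.io/v1alpha1
kind: Cluster
metadata:
  name: master-ha
spec:
  hosts:
  # user/password are placeholders; the real file carries the actual SSH credentials or key paths
  - {name: k8s-master01, address: 10.3.7.13, internalAddress: 10.3.7.13, user: root, password: "REDACTED"}
  - {name: k8s-master02, address: 10.3.7.14, internalAddress: 10.3.7.14, user: root, password: "REDACTED"}
  - {name: k8s-master03, address: 10.3.7.15, internalAddress: 10.3.7.15, user: root, password: "REDACTED"}
  - {name: k8s-worker01, address: 10.3.7.16, internalAddress: 10.3.7.16, user: root, password: "REDACTED"}
  - {name: k8s-worker02, address: 10.3.7.17, internalAddress: 10.3.7.17, user: root, password: "REDACTED"}
  - {name: k8s-worker03, address: 10.3.7.18, internalAddress: 10.3.7.18, user: root, password: "REDACTED"}
  roleGroups:
    etcd:
    - k8s-master01
    - k8s-master02
    - k8s-master03
    master:
    - k8s-master01
    - k8s-master02
    - k8s-master03
    worker:
    - k8s-worker01
    - k8s-worker02
    - k8s-worker03
  controlPlaneEndpoint:
    # external load balancer or VIP in front of the three kube-apiservers
    domain: lb.kubesphere.local
    address: "10.3.7.20"
    port: 6443
  kubernetes:
    version: v1.21.5
    clusterName: cluster.local
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
EOF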
INFO[15:06:33 CST] Downloading Installation Files
INFO[15:06:33 CST] Downloading kubeadm …
INFO[15:06:34 CST] Downloading kubelet …
INFO[15:06:34 CST] Downloading kubectl …
INFO[15:06:35 CST] Downloading helm …
INFO[15:06:35 CST] Downloading kubecni …
INFO[15:06:35 CST] Downloading etcd …
INFO[15:06:35 CST] Downloading docker …
INFO[15:06:35 CST] Downloading crictl …
INFO[15:06:35 CST] Configuring operating system …
[k8s-worker03 10.3.7.18] MSG:
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
vm.max_map_count = 262144
vm.swappiness = 1
fs.inotify.max_user_instances = 524288
[k8s-master02 10.3.7.14] MSG:
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
vm.max_map_count = 262144
vm.swappiness = 1
fs.inotify.max_user_instances = 524288
[k8s-master01 10.3.7.13] MSG:
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
vm.max_map_count = 262144
vm.swappiness = 1
fs.inotify.max_user_instances = 524288
[k8s-worker01 10.3.7.16] MSG:
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
vm.max_map_count = 262144
vm.swappiness = 1
fs.inotify.max_user_instances = 524288
[k8s-master03 10.3.7.15] MSG:
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
vm.max_map_count = 262144
vm.swappiness = 1
fs.inotify.max_user_instances = 524288
[k8s-worker02 10.3.7.17] MSG:
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
vm.max_map_count = 262144
vm.swappiness = 1
fs.inotify.max_user_instances = 524288
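These kernel parameters (IP forwarding, the bridge netfilter hooks, the reserved NodePort range, swappiness, and inotify limits) are applied by KubeKey on every node. If a node misbehaves later, they can be re-checked by hand, for example:

# spot-check a few of the values KubeKey just applied (run on any node)
sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables vm.swappiness
# the bridge-nf-call-* settings only exist while the br_netfilter module is loaded
lsmod | grep br_netfilter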
INFO[15:06:36 CST] Get cluster status
INFO[15:06:37 CST] Installing Container Runtime …
Push /root/kubekey/v1.21.5/amd64/docker-20.10.8.tgz to 10.3.7.14:/tmp/kubekey/docker-20.10.8.tgz Done
Push /root/kubekey/v1.21.5/amd64/docker-20.10.8.tgz to 10.3.7.16:/tmp/kubekey/docker-20.10.8.tgz Done
Push /root/kubekey/v1.21.5/amd64/docker-20.10.8.tgz to 10.3.7.18:/tmp/kubekey/docker-20.10.8.tgz Done
Push /root/kubekey/v1.21.5/amd64/docker-20.10.8.tgz to 10.3.7.15:/tmp/kubekey/docker-20.10.8.tgz Done
Push /root/kubekey/v1.21.5/amd64/docker-20.10.8.tgz to 10.3.7.17:/tmp/kubekey/docker-20.10.8.tgz Done
Push /root/kubekey/v1.21.5/amd64/docker-20.10.8.tgz to 10.3.7.13:/tmp/kubekey/docker-20.10.8.tgz Done
INFO[15:06:42 CST] Start to download images on all nodes
[k8s-master01] Downloading image: kubesphere/pause:3.4.1
[k8s-worker03] Downloading image: kubesphere/pause:3.4.1
[k8s-master02] Downloading image: kubesphere/pause:3.4.1
[k8s-master03] Downloading image: kubesphere/pause:3.4.1
[k8s-worker01] Downloading image: kubesphere/pause:3.4.1
[k8s-worker02] Downloading image: kubesphere/pause:3.4.1
[k8s-master01] Downloading image: kubesphere/kube-apiserver:v1.21.5
[k8s-worker02] Downloading image: kubesphere/kube-proxy:v1.21.5
[k8s-master02] Downloading image: kubesphere/kube-apiserver:v1.21.5
[k8s-master03] Downloading image: kubesphere/kube-apiserver:v1.21.5
[k8s-master01] Downloading image: kubesphere/kube-controller-manager:v1.21.5
[k8s-worker03] Downloading image: kubesphere/kube-proxy:v1.21.5
[k8s-worker01] Downloading image: kubesphere/kube-proxy:v1.21.5
[k8s-worker01] Downloading image: coredns/coredns:1.8.0
[k8s-worker02] Downloading image: coredns/coredns:1.8.0
[k8s-master02] Downloading image: kubesphere/kube-controller-manager:v1.21.5
[k8s-master03] Downloading image: kubesphere/kube-controller-manager:v1.21.5
[k8s-master01] Downloading image: kubesphere/kube-scheduler:v1.21.5
[k8s-worker03] Downloading image: coredns/coredns:1.8.0
[k8s-worker01] Downloading image: kubesphere/k8s-dns-node-cache:1.15.12
[k8s-worker02] Downloading image: kubesphere/k8s-dns-node-cache:1.15.12
[k8s-worker01] Downloading image: calico/kube-controllers:v3.20.0
[k8s-master03] Downloading image: kubesphere/kube-scheduler:v1.21.5
[k8s-master02] Downloading image: kubesphere/kube-scheduler:v1.21.5
[k8s-master01] Downloading image: kubesphere/kube-proxy:v1.21.5
[k8s-worker03] Downloading image: kubesphere/k8s-dns-node-cache:1.15.12
[k8s-worker03] Downloading image: calico/kube-controllers:v3.20.0
[k8s-worker02] Downloading image: calico/kube-controllers:v3.20.0
[k8s-worker01] Downloading image: calico/cni:v3.20.0
[k8s-master03] Downloading image: kubesphere/kube-proxy:v1.21.5
[k8s-master02] Downloading image: kubesphere/kube-proxy:v1.21.5
[k8s-master01] Downloading image: coredns/coredns:1.8.0
[k8s-worker01] Downloading image: calico/node:v3.20.0
[k8s-worker03] Downloading image: calico/cni:v3.20.0
[k8s-worker02] Downloading image: calico/cni:v3.20.0
[k8s-worker03] Downloading image: calico/node:v3.20.0
[k8s-master01] Downloading image: kubesphere/k8s-dns-node-cache:1.15.12
[k8s-master03] Downloading image: coredns/coredns:1.8.0
[k8s-master02] Downloading image: coredns/coredns:1.8.0
[k8s-worker02] Downloading image: calico/node:v3.20.0
[k8s-master01] Downloading image: calico/kube-controllers:v3.20.0
[k8s-worker01] Downloading image: calico/pod2daemon-flexvol:v3.20.0
[k8s-master03] Downloading image: kubesphere/k8s-dns-node-cache:1.15.12
[k8s-master02] Downloading image: kubesphere/k8s-dns-node-cache:1.15.12
[k8s-worker03] Downloading image: calico/pod2daemon-flexvol:v3.20.0
[k8s-worker02] Downloading image: calico/pod2daemon-flexvol:v3.20.0
[k8s-master01] Downloading image: calico/cni:v3.20.0
[k8s-master02] Downloading image: calico/kube-controllers:v3.20.0
[k8s-master03] Downloading image: calico/kube-controllers:v3.20.0
[k8s-master01] Downloading image: calico/node:v3.20.0
[k8s-master02] Downloading image: calico/cni:v3.20.0
[k8s-master03] Downloading image: calico/cni:v3.20.0
[k8s-master02] Downloading image: calico/node:v3.20.0
[k8s-master03] Downloading image: calico/node:v3.20.0
[k8s-master02] Downloading image: calico/pod2daemon-flexvol:v3.20.0
[k8s-master01] Downloading image: calico/pod2daemon-flexvol:v3.20.0
[k8s-master03] Downloading image: calico/pod2daemon-flexvol:v3.20.0
INFO[15:10:46 CST] Getting etcd status
[k8s-master01 10.3.7.13] MSG:
Configuration file will be created
[k8s-master02 10.3.7.14] MSG:
Configuration file will be created
[k8s-master03 10.3.7.15] MSG:
Configuration file will be created
INFO[15:10:46 CST] Generating etcd certs
INFO[15:10:49 CST] Synchronizing etcd certs
INFO[15:10:49 CST] Creating etcd service
Push /root/kubekey/v1.21.5/amd64/etcd-v3.4.13-linux-amd64.tar.gz to 10.3.7.15:/tmp/kubekey/etcd-v3.4.13-linux-amd64.tar.gz Done
Push /root/kubekey/v1.21.5/amd64/etcd-v3.4.13-linux-amd64.tar.gz to 10.3.7.13:/tmp/kubekey/etcd-v3.4.13-linux-amd64.tar.gz Done
Push /root/kubekey/v1.21.5/amd64/etcd-v3.4.13-linux-amd64.tar.gz to 10.3.7.14:/tmp/kubekey/etcd-v3.4.13-linux-amd64.tar.gz Done
INFO[15:10:49 CST] Starting etcd cluster
INFO[15:10:50 CST] Refreshing etcd configuration
[k8s-master03 10.3.7.15] MSG:
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /etc/systemd/system/etcd.service.
[k8s-master01 10.3.7.13] MSG:
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /etc/systemd/system/etcd.service.
[k8s-master02 10.3.7.14] MSG:
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /etc/systemd/system/etcd.service.
INFO[15:10:51 CST] Backup etcd data regularly
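At this point the external etcd cluster is running under systemd on the three masters and can be checked independently of kubeadm. A rough health check would look like the following, assuming the certificate layout KubeKey normally generates under /etc/ssl/etcd/ssl (adjust the paths to whatever is actually on your nodes):

# run on k8s-master01; the cert/key file names below are assumptions, check /etc/ssl/etcd/ssl
export ETCDCTL_API=3
etcdctl --endpoints=https://10.3.7.13:2379,https://10.3.7.14:2379,https://10.3.7.15:2379 \
  --cacert=/etc/ssl/etcd/ssl/ca.pem \
  --cert=/etc/ssl/etcd/ssl/admin-k8s-master01.pem \
  --key=/etc/ssl/etcd/ssl/admin-k8s-master01-key.pem \
  endpoint health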
INFO[15:10:57 CST] Installing kube binaries
Push /root/kubekey/v1.21.5/amd64/kubeadm to 10.3.7.18:/tmp/kubekey/kubeadm Done
Push /root/kubekey/v1.21.5/amd64/kubeadm to 10.3.7.13:/tmp/kubekey/kubeadm Done
Push /root/kubekey/v1.21.5/amd64/kubeadm to 10.3.7.14:/tmp/kubekey/kubeadm Done
Push /root/kubekey/v1.21.5/amd64/kubeadm to 10.3.7.16:/tmp/kubekey/kubeadm Done
Push /root/kubekey/v1.21.5/amd64/kubeadm to 10.3.7.17:/tmp/kubekey/kubeadm Done
Push /root/kubekey/v1.21.5/amd64/kubeadm to 10.3.7.15:/tmp/kubekey/kubeadm Done
Push /root/kubekey/v1.21.5/amd64/kubelet to 10.3.7.18:/tmp/kubekey/kubelet Done
Push /root/kubekey/v1.21.5/amd64/kubelet to 10.3.7.13:/tmp/kubekey/kubelet Done
Push /root/kubekey/v1.21.5/amd64/kubelet to 10.3.7.16:/tmp/kubekey/kubelet Done
Push /root/kubekey/v1.21.5/amd64/kubectl to 10.3.7.18:/tmp/kubekey/kubectl Done
Push /root/kubekey/v1.21.5/amd64/kubectl to 10.3.7.13:/tmp/kubekey/kubectl Done
Push /root/kubekey/v1.21.5/amd64/kubectl to 10.3.7.16:/tmp/kubekey/kubectl Done
Push /root/kubekey/v1.21.5/amd64/kubelet to 10.3.7.14:/tmp/kubekey/kubelet Done
Push /root/kubekey/v1.21.5/amd64/helm to 10.3.7.18:/tmp/kubekey/helm Done
Push /root/kubekey/v1.21.5/amd64/helm to 10.3.7.13:/tmp/kubekey/helm Done
Push /root/kubekey/v1.21.5/amd64/kubelet to 10.3.7.17:/tmp/kubekey/kubelet Done
Push /root/kubekey/v1.21.5/amd64/kubelet to 10.3.7.15:/tmp/kubekey/kubelet Done
Push /root/kubekey/v1.21.5/amd64/helm to 10.3.7.16:/tmp/kubekey/helm Done
Push /root/kubekey/v1.21.5/amd64/kubectl to 10.3.7.14:/tmp/kubekey/kubectl Done
Push /root/kubekey/v1.21.5/amd64/cni-plugins-linux-amd64-v0.9.1.tgz to 10.3.7.18:/tmp/kubekey/cni-plugins-linux-amd64-v0.9.1.tgz Done
Push /root/kubekey/v1.21.5/amd64/kubectl to 10.3.7.17:/tmp/kubekey/kubectl Done
Push /root/kubekey/v1.21.5/amd64/cni-plugins-linux-amd64-v0.9.1.tgz to 10.3.7.13:/tmp/kubekey/cni-plugins-linux-amd64-v0.9.1.tgz Done
Push /root/kubekey/v1.21.5/amd64/cni-plugins-linux-amd64-v0.9.1.tgz to 10.3.7.16:/tmp/kubekey/cni-plugins-linux-amd64-v0.9.1.tgz Done
Push /root/kubekey/v1.21.5/amd64/helm to 10.3.7.14:/tmp/kubekey/helm Done
Push /root/kubekey/v1.21.5/amd64/cni-plugins-linux-amd64-v0.9.1.tgz to 10.3.7.14:/tmp/kubekey/cni-plugins-linux-amd64-v0.9.1.tgz Done
Push /root/kubekey/v1.21.5/amd64/kubectl to 10.3.7.15:/tmp/kubekey/kubectl Done
Push /root/kubekey/v1.21.5/amd64/helm to 10.3.7.17:/tmp/kubekey/helm Done
Push /root/kubekey/v1.21.5/amd64/helm to 10.3.7.15:/tmp/kubekey/helm Done
Push /root/kubekey/v1.21.5/amd64/cni-plugins-linux-amd64-v0.9.1.tgz to 10.3.7.17:/tmp/kubekey/cni-plugins-linux-amd64-v0.9.1.tgz Done
Push /root/kubekey/v1.21.5/amd64/cni-plugins-linux-amd64-v0.9.1.tgz to 10.3.7.15:/tmp/kubekey/cni-plugins-linux-amd64-v0.9.1.tgz Done
INFO[15:11:04 CST] Initializing kubernetes cluster
[k8s-master01 10.3.7.13] MSG:
[reset] Reading configuration from the cluster…
[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W0127 15:16:07.121753 1261 reset.go:99] [reset] Unable to fetch the kubeadm-config ConfigMap from cluster: failed to get config map: Get "https://lb.kubesphere.local:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config?timeout=10s": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
[preflight] Running pre-flight checks
W0127 15:16:07.121872 1261 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]
The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system’s IPVS tables.
The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
[k8s-master01 10.3.7.13] MSG:
[reset] Reading configuration from the cluster…
[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W0127 15:21:12.165590 3558 reset.go:99] [reset] Unable to fetch the kubeadm-config ConfigMap from cluster: failed to get config map: Get "https://lb.kubesphere.local:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config?timeout=10s": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
[preflight] Running pre-flight checks
W0127 15:21:12.165721 3558 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]
The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system’s IPVS tables.
The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
ERRO[15:25:46 CST] Failed to init kubernetes cluster: Failed to exec command: sudo env PATH=$PATH:/sbin:/usr/sbin /bin/sh -c "/usr/local/bin/kubeadm init --config=/etc/kubernetes/kubeadm-config.yaml --ignore-preflight-errors=FileExisting-crictl"
W0127 15:21:13.994919 3973 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
[init] Using Kubernetes version: v1.21.5
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master01 k8s-master01.cluster.local k8s-master02 k8s-master02.cluster.local k8s-master03 k8s-master03.cluster.local k8s-worker01 k8s-worker01.cluster.local k8s-worker02 k8s-worker02.cluster.local k8s-worker03 k8s-worker03.cluster.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.kubesphere.local localhost] and IPs [10.233.0.1 10.3.7.13 127.0.0.1 10.3.7.20 10.3.7.14 10.3.7.15 10.3.7.16 10.3.7.17 10.3.7.18]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] External etcd mode: Skipping etcd/ca certificate authority generation
[certs] External etcd mode: Skipping etcd/server certificate generation
[certs] External etcd mode: Skipping etcd/peer certificate generation
[certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation
[certs] External etcd mode: Skipping apiserver-etcd-client certificate generation
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
        timed out waiting for the condition

This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
        - 'systemctl status kubelet'
        - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
        - 'docker ps -a | grep kube | grep -v pause'
        Once you have found the failing container, you can inspect its logs with:
        - 'docker logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher: Process exited with status 1 node=10.3.7.13
WARN[15:25:46 CST] Task failed …
WARN[15:25:46 CST] error: interrupted by error
Error: Failed to init kubernetes cluster: interrupted by error
Usage:
  kk create cluster [flags]

Flags:
      --container-manager string   Container runtime: docker, crio, containerd and isula. (default "docker")
      --download-cmd string        The user defined command to download the necessary binary files. The first param '%s' is output path, the second param '%s', is the URL (default "curl -L -o %s %s")
  -f, --filename string            Path to a configuration file
  -h, --help                       help for cluster
      --skip-pull-images           Skip pre pull images
      --with-kubernetes string     Specify a supported version of kubernetes (default "v1.21.5")
      --with-kubesphere            Deploy a specific version of kubesphere (default v3.2.0)
      --with-local-storage         Deploy a local PV provisioner
  -y, --yes                        Skip pre-check of the installation

Global Flags:
      --debug        Print detailed information (default true)
      --in-cluster   Running inside the cluster
Failed to init kubernetes cluster: interrupted by error
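The run gets as far as wait-control-plane: certificates, kubeconfigs, and static Pod manifests are all generated, but no healthy apiserver ever answers on lb.kubesphere.local:6443, which is also the endpoint the earlier reset attempts timed out against. Before rerunning ./kk, it is worth checking on k8s-master01 whether the kubelet and the control-plane containers actually started, whether docker and the kubelet agree on the cgroup driver, and whether the load-balancer address 10.3.7.20 really forwards TCP 6443 to the masters. These are generic checks for this class of failure, not a guaranteed diagnosis:

# on k8s-master01: is the kubelet running, and what is it logging?
systemctl status kubelet
journalctl -xeu kubelet --no-pager | tail -n 50

# did the control-plane containers start (and stay up)?
docker ps -a | grep kube | grep -v pause
docker logs CONTAINERID        # use an ID from the line above

# cgroup driver mismatch (systemd vs cgroupfs) is a common cause of this timeout
docker info 2>/dev/null | grep -i 'cgroup driver'

# can this node resolve and reach the control-plane endpoint from the config?
getent hosts lb.kubesphere.local
curl -k https://lb.kubesphere.local:6443/healthz   # expect "ok" once an apiserver is reachable through the LB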