The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.
The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
[master2 192.168.3.162] MSG:
sudo -E /bin/sh -c "iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat && ip link del kube-ipvs0 && ip link del nodelocaldns"
[node1 192.168.3.164] MSG:
[preflight] Running pre-flight checks
W1222 02:47:41.110345 7329 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
W1222 02:47:41.120564 7329 cleanupnode.go:99] [reset] Failed to evaluate the "/var/lib/kubelet" directory. Skipping its unmount and cleanup: lstat /var/lib/kubelet: no such file or directory
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/dockershim /var/run/kubernetes /var/lib/cni]
The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.
The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
[node1 192.168.3.164] MSG:
sudo -E /bin/sh -c "iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat && ip link del kube-ipvs0 && ip link del nodelocaldns"
[master3 192.168.3.163] MSG:
[preflight] Running pre-flight checks
W1222 02:47:41.247416 9506 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
W1222 02:47:41.258937 9506 cleanupnode.go:99] [reset] Failed to evaluate the "/var/lib/kubelet" directory. Skipping its unmount and cleanup: lstat /var/lib/kubelet: no such file or directory
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/dockershim /var/run/kubernetes /var/lib/cni]
The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.
The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
[master3 192.168.3.163] MSG:
sudo -E /bin/sh -c "iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat && ip link del kube-ipvs0 && ip link del nodelocaldns"
[node2 192.168.3.165] MSG:
[preflight] Running pre-flight checks
W1222 02:47:41.393705 7275 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
W1222 02:47:41.405057 7275 cleanupnode.go:99] [reset] Failed to evaluate the "/var/lib/kubelet" directory. Skipping its unmount and cleanup: lstat /var/lib/kubelet: no such file or directory
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/dockershim /var/run/kubernetes /var/lib/cni]
The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.
The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
[node2 192.168.3.165] MSG:
sudo -E /bin/sh -c "iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat && ip link del kube-ipvs0 && ip link del nodelocaldns"
[master1 192.168.3.161] MSG:
[reset] Reading configuration from the cluster…
[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
W1222 02:47:41.856717 25340 reset.go:99] [reset] Unable to fetch the kubeadm-config ConfigMap from cluster: failed to get config map: Get https://lb.kubesphere.local:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config?timeout=10s: EOF
[preflight] Running pre-flight checks
W1222 02:47:41.857094 25340 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]
The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.
The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
[master1 192.168.3.161] MSG:
sudo -E /bin/sh -c "iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat && ip link del kube-ipvs0 && ip link del nodelocaldns"
INFO[02:47:55 CST] Successful.
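As the reset output above repeats for every node, kubeadm does not touch the CNI configuration, iptables/IPVS state, or kubeconfig files. A minimal manual cleanup sketch, using only the paths, commands, and link names already named in that output (verify each on your own nodes before deleting anything):

# run on each node once the reset reports Successful.
sudo rm -rf /etc/cni/net.d                          # leftover CNI configuration
sudo iptables -F && sudo iptables -X                # flush/delete filter-table rules and chains
sudo iptables -F -t nat && sudo iptables -X -t nat  # same for the nat table
sudo ipvsadm --clear                                # reset IPVS tables (if kube-proxy ran in IPVS mode)
sudo ip link del kube-ipvs0 2>/dev/null || true     # dummy interface left by IPVS mode
sudo ip link del nodelocaldns 2>/dev/null || true   # interface left by NodeLocal DNSCache
rm -f $HOME/.kube/config                            # stale kubeconfig mentioned in the output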
[root@localhost kubesphere-all-v3.0.0-offline-linux-amd64]# vi /etc/sysctl.d/k8s.conf
[root@localhost kubesphere-all-v3.0.0-offline-linux-amd64]#
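The vi session above edits /etc/sysctl.d/k8s.conf, but its contents are not visible in this capture. The kernel parameters KubeKey applies are echoed per node later in this log, so a matching fragment would look roughly like the following (a sketch assuming those echoed values are exactly what is being set here):

# /etc/sysctl.d/k8s.conf -- values as echoed in the [nodeX 192.168.3.x] MSG: blocks below
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
# reload all sysctl configuration files (the bridge-nf-call settings need br_netfilter loaded)
sysctl --system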
[root@localhost kubesphere-all-v3.0.0-offline-linux-amd64]# yum -y install ipset ipvsadm
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
Package ipset-7.1-1.el7.x86_64 already installed and latest version
Package ipvsadm-1.27-8.el7.x86_64 already installed and latest version
Nothing to do
[root@localhost kubesphere-all-v3.0.0-offline-linux-amd64]#
[root@localhost kubesphere-all-v3.0.0-offline-linux-amd64]#
[root@localhost kubesphere-all-v3.0.0-offline-linux-amd64]#
[root@localhost kubesphere-all-v3.0.0-offline-linux-amd64]#
[root@localhost kubesphere-all-v3.0.0-offline-linux-amd64]# docker ps -a | grep kube | grep -v pause
d23861660c02 dockerhub.kubekey.local/kubesphere/etcd:v3.3.12 "/usr/local/bin/etcd" 19 minutes ago Exited (0) 3 minutes ago etcd1
[root@localhost kubesphere-all-v3.0.0-offline-linux-amd64]#
[root@localhost kubesphere-all-v3.0.0-offline-linux-amd64]#
[root@localhost kubesphere-all-v3.0.0-offline-linux-amd64]# netstat -ntlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 886/sshd
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 1587/master
tcp6 0 0 :::22 :::* LISTEN 886/sshd
tcp6 0 0 ::1:25 :::* LISTEN 1587/master
[root@localhost kubesphere-all-v3.0.0-offline-linux-amd64]# docker rm d23861660c02
d23861660c02
[root@localhost kubesphere-all-v3.0.0-offline-linux-amd64]# docker ps -a | grep kube | grep -v pause
[root@localhost kubesphere-all-v3.0.0-offline-linux-amd64]#
[root@localhost kubesphere-all-v3.0.0-offline-linux-amd64]#
[root@localhost kubesphere-all-v3.0.0-offline-linux-amd64]# ./kk create cluster -f config-sample.yaml
+---------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+
| name    | sudo | curl | openssl | ebtables | socat | ipset | conntrack | docker | nfs client | ceph client | glusterfs client | time         |
+---------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+
| node3   | y    | y    | y       | y        | y     | y     | y         | y      | y          | y           | y                | CST 02:53:47 |
| master1 | y    | y    | y       | y        | y     | y     | y         | y      | y          | y           | y                | CST 02:53:47 |
| master2 | y    | y    | y       | y        | y     | y     | y         | y      | y          | y           | y                | CST 02:53:48 |
| master3 | y    | y    | y       | y        | y     | y     | y         | y      | y          | y           | y                | CST 02:53:48 |
| node1   | y    | y    | y       | y        | y     | y     | y         | y      | y          | y           | y                | CST 02:53:48 |
| node2   | y    | y    | y       | y        | y     | y     | y         | y      | y          | y           | y                | CST 02:53:48 |
+---------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+
This is a simple check of your environment.
Before installation, you should ensure that your machines meet all requirements specified at
https://github.com/kubesphere/kubekey#requirements-and-recommendations
Continue this installation? [yes/no]: yes
INFO[02:53:51 CST] Downloading Installation Files
INFO[02:53:51 CST] Downloading kubeadm …
INFO[02:53:51 CST] Downloading kubelet …
INFO[02:53:52 CST] Downloading kubectl …
INFO[02:53:52 CST] Downloading kubecni …
INFO[02:53:52 CST] Downloading helm …
INFO[02:53:52 CST] Configurating operating system …
[node3 192.168.3.166] MSG:
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
[master2 192.168.3.162] MSG:
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
[node1 192.168.3.164] MSG:
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
[master1 192.168.3.161] MSG:
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
[node2 192.168.3.165] MSG:
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
[master3 192.168.3.163] MSG:
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
INFO[02:53:55 CST] Installing docker …
INFO[02:53:56 CST] Start to download images on all nodes
[master2] Downloading image: dockerhub.kubekey.local/kubesphere/etcd:v3.3.12
[node3] Downloading image: dockerhub.kubekey.local/kubesphere/pause:3.1
[master3] Downloading image: dockerhub.kubekey.local/kubesphere/etcd:v3.3.12
[node1] Downloading image: dockerhub.kubekey.local/kubesphere/pause:3.1
[master2] Downloading image: dockerhub.kubekey.local/kubesphere/pause:3.1
[node2] Downloading image: dockerhub.kubekey.local/kubesphere/pause:3.1
[node3] Downloading image: dockerhub.kubekey.local/coredns/coredns:1.6.9
[master1] Downloading image: dockerhub.kubekey.local/kubesphere/etcd:v3.3.12
[master3] Downloading image: dockerhub.kubekey.local/kubesphere/pause:3.1
[master2] Downloading image: dockerhub.kubekey.local/kubesphere/kube-apiserver:v1.17.9
[node1] Downloading image: dockerhub.kubekey.local/coredns/coredns:1.6.9
[node3] Downloading image: dockerhub.kubekey.local/kubesphere/k8s-dns-node-cache:1.15.12
[node2] Downloading image: dockerhub.kubekey.local/coredns/coredns:1.6.9
[master1] Downloading image: dockerhub.kubekey.local/kubesphere/pause:3.1
[master3] Downloading image: dockerhub.kubekey.local/kubesphere/kube-apiserver:v1.17.9
[master2] Downloading image: dockerhub.kubekey.local/kubesphere/kube-controller-manager:v1.17.9
[node1] Downloading image: dockerhub.kubekey.local/kubesphere/k8s-dns-node-cache:1.15.12
[node3] Downloading image: dockerhub.kubekey.local/calico/kube-controllers:v3.15.1
[node2] Downloading image: dockerhub.kubekey.local/kubesphere/k8s-dns-node-cache:1.15.12
[master3] Downloading image: dockerhub.kubekey.local/kubesphere/kube-controller-manager:v1.17.9
[master1] Downloading image: dockerhub.kubekey.local/kubesphere/kube-apiserver:v1.17.9
[master2] Downloading image: dockerhub.kubekey.local/kubesphere/kube-scheduler:v1.17.9
[node1] Downloading image: dockerhub.kubekey.local/calico/kube-controllers:v3.15.1
[node3] Downloading image: dockerhub.kubekey.local/calico/cni:v3.15.1
[node2] Downloading image: dockerhub.kubekey.local/calico/kube-controllers:v3.15.1
[master3] Downloading image: dockerhub.kubekey.local/kubesphere/kube-scheduler:v1.17.9
[master1] Downloading image: dockerhub.kubekey.local/kubesphere/kube-controller-manager:v1.17.9
[master2] Downloading image: dockerhub.kubekey.local/kubesphere/kube-proxy:v1.17.9
[node1] Downloading image: dockerhub.kubekey.local/calico/cni:v3.15.1
[node3] Downloading image: dockerhub.kubekey.local/calico/node:v3.15.1
[node2] Downloading image: dockerhub.kubekey.local/calico/cni:v3.15.1
[master3] Downloading image: dockerhub.kubekey.local/kubesphere/kube-proxy:v1.17.9
[master1] Downloading image: dockerhub.kubekey.local/kubesphere/kube-scheduler:v1.17.9
[master2] Downloading image: dockerhub.kubekey.local/coredns/coredns:1.6.9
[node1] Downloading image: dockerhub.kubekey.local/calico/node:v3.15.1
[node3] Downloading image: dockerhub.kubekey.local/calico/pod2daemon-flexvol:v3.15.1
[node2] Downloading image: dockerhub.kubekey.local/calico/node:v3.15.1
[master3] Downloading image: dockerhub.kubekey.local/coredns/coredns:1.6.9
[master2] Downloading image: dockerhub.kubekey.local/kubesphere/k8s-dns-node-cache:1.15.12
[master1] Downloading image: dockerhub.kubekey.local/kubesphere/kube-proxy:v1.17.9
[node1] Downloading image: dockerhub.kubekey.local/calico/pod2daemon-flexvol:v3.15.1
[node2] Downloading image: dockerhub.kubekey.local/calico/pod2daemon-flexvol:v3.15.1
[master3] Downloading image: dockerhub.kubekey.local/kubesphere/k8s-dns-node-cache:1.15.12
[master2] Downloading image: dockerhub.kubekey.local/calico/kube-controllers:v3.15.1
[master1] Downloading image: dockerhub.kubekey.local/coredns/coredns:1.6.9
[master3] Downloading image: dockerhub.kubekey.local/calico/kube-controllers:v3.15.1
[master2] Downloading image: dockerhub.kubekey.local/calico/cni:v3.15.1
[master1] Downloading image: dockerhub.kubekey.local/kubesphere/k8s-dns-node-cache:1.15.12
[master3] Downloading image: dockerhub.kubekey.local/calico/cni:v3.15.1
[master2] Downloading image: dockerhub.kubekey.local/calico/node:v3.15.1
[master1] Downloading image: dockerhub.kubekey.local/calico/kube-controllers:v3.15.1
[master3] Downloading image: dockerhub.kubekey.local/calico/node:v3.15.1
[master2] Downloading image: dockerhub.kubekey.local/calico/pod2daemon-flexvol:v3.15.1
[master1] Downloading image: dockerhub.kubekey.local/calico/cni:v3.15.1
[master3] Downloading image: dockerhub.kubekey.local/calico/pod2daemon-flexvol:v3.15.1
[master1] Downloading image: dockerhub.kubekey.local/calico/node:v3.15.1
[master1] Downloading image: dockerhub.kubekey.local/calico/pod2daemon-flexvol:v3.15.1
INFO[02:54:02 CST] Generating etcd certs
INFO[02:54:05 CST] Synchronizing etcd certs
INFO[02:54:05 CST] Creating etcd service
INFO[02:54:09 CST] Starting etcd cluster
[master1 192.168.3.161] MSG:
Configuration file will be created
[master2 192.168.3.162] MSG:
Configuration file will be created
[master3 192.168.3.163] MSG:
Configuration file will be created
INFO[02:54:10 CST] Refreshing etcd configuration
Waiting for etcd to start
Waiting for etcd to start
Waiting for etcd to start
INFO[02:54:16 CST] Backup etcd data regularly
INFO[02:54:17 CST] Get cluster status
[master1 192.168.3.161] MSG:
Cluster will be created.
[master2 192.168.3.162] MSG:
Cluster will be created.
[master3 192.168.3.163] MSG:
Cluster will be created.
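At this point etcd is running as an external cluster on the three masters, outside kubeadm's control. A quick health check before the Kubernetes init proceeds; this is only a sketch, since etcdctl may exist only inside the etcd container on these offline hosts, and the certificate paths (marked <...>) are placeholders for wherever the installer placed the etcd certs:

# endpoints are the three masters listed above; <...> paths are placeholders, not actual locations
export ETCDCTL_API=3
etcdctl --endpoints=https://192.168.3.161:2379,https://192.168.3.162:2379,https://192.168.3.163:2379 \
  --cacert=<etcd-ca.pem> --cert=<etcd-client.pem> --key=<etcd-client-key.pem> \
  endpoint health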
INFO[02:54:17 CST] Installing kube binaries
Push /tmp/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/kubeadm to 192.168.3.166:/tmp/kubekey/kubeadm Done
Push /tmp/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/kubeadm to 192.168.3.162:/tmp/kubekey/kubeadm Done
Push /tmp/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/kubeadm to 192.168.3.161:/tmp/kubekey/kubeadm Done
Push /tmp/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/kubeadm to 192.168.3.165:/tmp/kubekey/kubeadm Done
Push /tmp/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/kubeadm to 192.168.3.163:/tmp/kubekey/kubeadm Done
Push /tmp/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/kubeadm to 192.168.3.164:/tmp/kubekey/kubeadm Done
Push /tmp/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/kubelet to 192.168.3.166:/tmp/kubekey/kubelet Done
Push /tmp/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/kubelet to 192.168.3.165:/tmp/kubekey/kubelet Done
Push /tmp/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/kubelet to 192.168.3.162:/tmp/kubekey/kubelet Done
Push /tmp/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/kubectl to 192.168.3.166:/tmp/kubekey/kubectl Done
Push /tmp/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/kubectl to 192.168.3.165:/tmp/kubekey/kubectl Done
Push /tmp/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/kubectl to 192.168.3.162:/tmp/kubekey/kubectl Done
Push /tmp/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/kubelet to 192.168.3.164:/tmp/kubekey/kubelet Done
Push /tmp/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/kubelet to 192.168.3.163:/tmp/kubekey/kubelet Done
Push /tmp/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/kubelet to 192.168.3.161:/tmp/kubekey/kubelet Done
Push /tmp/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/helm to 192.168.3.166:/tmp/kubekey/helm Done
Push /tmp/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/helm to 192.168.3.165:/tmp/kubekey/helm Done
Push /tmp/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/helm to 192.168.3.162:/tmp/kubekey/helm Done
Push /tmp/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/kubectl to 192.168.3.164:/tmp/kubekey/kubectl Done
Push /tmp/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/kubectl to 192.168.3.163:/tmp/kubekey/kubectl Done
Push /tmp/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/kubectl to 192.168.3.161:/tmp/kubekey/kubectl Done
Push /tmp/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 192.168.3.162:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz Done
Push /tmp/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 192.168.3.166:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz Done
Push /tmp/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 192.168.3.165:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz Done
Push /tmp/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/helm to 192.168.3.164:/tmp/kubekey/helm Done
Push /tmp/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/helm to 192.168.3.161:/tmp/kubekey/helm Done
Push /tmp/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/helm to 192.168.3.163:/tmp/kubekey/helm Done
Push /tmp/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 192.168.3.164:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz Done
Push /tmp/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 192.168.3.161:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz Done
Push /tmp/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 192.168.3.163:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz Done
INFO[02:54:26 CST] Initializing kubernetes cluster
[master1 192.168.3.161] MSG:
[reset] Reading configuration from the cluster…
[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
W1222 02:56:28.750425 28088 reset.go:99] [reset] Unable to fetch the kubeadm-config ConfigMap from cluster: failed to get config map: Get https://lb.kubesphere.local:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config?timeout=10s: EOF
[preflight] Running pre-flight checks
W1222 02:56:28.750752 28088 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]
The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.
The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
[master1 192.168.3.161] MSG:
[reset] Reading configuration from the cluster…
[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
W1222 02:58:39.569055 29173 reset.go:99] [reset] Unable to fetch the kubeadm-config ConfigMap from cluster: failed to get config map: Get https://lb.kubesphere.local:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config?timeout=10s: EOF
[preflight] Running pre-flight checks
W1222 02:58:39.569557 29173 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]
The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.
The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
ERRO[03:00:46 CST] Failed to init kubernetes cluster: Failed to exec command: sudo -E /bin/sh -c "/usr/local/bin/kubeadm init --config=/etc/kubernetes/kubeadm-config.yaml"
W1222 02:58:45.808875 29212 defaults.go:186] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
W1222 02:58:45.809382 29212 validation.go:28] Cannot validate kube-proxy config - no validator is available
W1222 02:58:45.809421 29212 validation.go:28] Cannot validate kubelet config - no validator is available
[init] Using Kubernetes version: v1.17.9
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.kubesphere.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost lb.kubesphere.local master1 master1.cluster.local master2 master2.cluster.local master3 master3.cluster.local node1 node1.cluster.local node2 node2.cluster.local node3 node3.cluster.local] and IPs [10.233.0.1 192.168.3.161 127.0.0.1 192.168.3.190 192.168.3.161 192.168.3.162 192.168.3.163 192.168.3.164 192.168.3.165 192.168.3.166 10.233.0.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] External etcd mode: Skipping etcd/ca certificate authority generation
[certs] External etcd mode: Skipping etcd/server certificate generation
[certs] External etcd mode: Skipping etcd/peer certificate generation
[certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation
[certs] External etcd mode: Skipping apiserver-etcd-client certificate generation
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[controlplane] Adding extra host path mount "host-time" to "kube-controller-manager"
W1222 02:58:51.386193 29212 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[controlplane] Adding extra host path mount "host-time" to "kube-controller-manager"
W1222 02:58:51.406787 29212 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[controlplane] Adding extra host path mount "host-time" to "kube-controller-manager"
W1222 02:58:51.409930 29212 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher: Process exited with status 1 node=192.168.3.161
WARN[03:00:46 CST] Task failed …
WARN[03:00:46 CST] error: interrupted by error
Error: Failed to init kubernetes cluster: interrupted by error
Usage:
kk create cluster [flags]
Flags:
-f, --filename string Path to a configuration file
-h, --help help for cluster
--skip-pull-images Skip pre pull images
--with-kubernetes string Specify a supported version of kubernetes
--with-kubesphere Deploy a specific version of kubesphere (default v3.0.0)
-y, --yes Skip pre-check of the installation
Global Flags:
--debug Print detailed information (default true)
Failed to init kubernetes cluster: interrupted by error
[root@localhost kubesphere-all-v3.0.0-offline-linux-amd64]#
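The failure above comes down to the kubelet on master1 never answering its health check on port 10248, so kubeadm gave up waiting for the control plane. A minimal triage pass on master1, using only the commands the error output itself suggests (CONTAINERID is a placeholder, as in the kubeadm message):

systemctl status kubelet                         # is the kubelet service running at all?
journalctl -xeu kubelet --no-pager | tail -n 50  # most recent kubelet errors (e.g. the cgroup misconfiguration the message mentions)
docker ps -a | grep kube | grep -v pause         # did any control-plane container start and then exit?
docker logs CONTAINERID                          # inspect a failing container, as suggested above
curl -sSL http://localhost:10248/healthz         # the same health endpoint kubeadm was polling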