This is a simple check of your environment.
Before installation, you should ensure that your machines meet all requirements specified at
https://github.com/kubesphere/kubekey#requirements-and-recommendations
Continue this installation? [yes/no]: yes
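(For context: a log like this comes from running the kk binary out of the offline bundle. The exact invocation is not shown in the log; a typical one, with the configuration file name assumed, looks like this:)

    cd /root/kubesphere-all-v3.0.0-offline-linux-amd64
    ./kk create cluster -f config-sample.yaml --with-kubesphere v3.0.0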
INFO[10:46:16 CST] Downloading Installation Files
INFO[10:46:16 CST] Downloading kubeadm ...
INFO[10:46:16 CST] Downloading kubelet ...
INFO[10:46:16 CST] Downloading kubectl ...
INFO[10:46:17 CST] Downloading kubecni ...
INFO[10:46:17 CST] Downloading helm ...
INFO[10:46:17 CST] Configurating operating system ...
[node1 192.168.23.130] MSG:
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
[master2 192.168.23.129] MSG:
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
[master1 192.168.23.132] MSG:
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
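(These are the kernel parameters KubeKey applies on every node. A minimal sketch to spot-check them by hand, assuming the br_netfilter module is loaded so the bridge keys exist:)

    modprobe br_netfilter
    sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables net.ipv4.ip_local_reserved_ports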
INFO[10:46:18 CST] Installing docker ...
INFO[10:46:19 CST] Start to download images on all nodes
[node1] Downloading image: 192.168.23.132:80/kubesphere/pause:3.1
[master1] Downloading image: 192.168.23.132:80/kubesphere/etcd:v3.3.12
[master2] Downloading image: 192.168.23.132:80/kubesphere/etcd:v3.3.12
[node1] Downloading image: 192.168.23.132:80/coredns/coredns:1.6.9
[master2] Downloading image: 192.168.23.132:80/kubesphere/pause:3.1
[master1] Downloading image: 192.168.23.132:80/kubesphere/pause:3.1
[node1] Downloading image: 192.168.23.132:80/kubesphere/k8s-dns-node-cache:1.15.12
[master2] Downloading image: 192.168.23.132:80/kubesphere/kube-apiserver:v1.17.9
[master1] Downloading image: 192.168.23.132:80/kubesphere/kube-apiserver:v1.17.9
[node1] Downloading image: 192.168.23.132:80/calico/kube-controllers:v3.15.1
[master2] Downloading image: 192.168.23.132:80/kubesphere/kube-controller-manager:v1.17.9
[master1] Downloading image: 192.168.23.132:80/kubesphere/kube-controller-manager:v1.17.9
[node1] Downloading image: 192.168.23.132:80/calico/cni:v3.15.1
[master2] Downloading image: 192.168.23.132:80/kubesphere/kube-scheduler:v1.17.9
[master1] Downloading image: 192.168.23.132:80/kubesphere/kube-scheduler:v1.17.9
[node1] Downloading image: 192.168.23.132:80/calico/node:v3.15.1
[master2] Downloading image: 192.168.23.132:80/kubesphere/kube-proxy:v1.17.9
[master1] Downloading image: 192.168.23.132:80/kubesphere/kube-proxy:v1.17.9
[node1] Downloading image: 192.168.23.132:80/calico/pod2daemon-flexvol:v3.15.1
[master2] Downloading image: 192.168.23.132:80/coredns/coredns:1.6.9
[master1] Downloading image: 192.168.23.132:80/coredns/coredns:1.6.9
[master2] Downloading image: 192.168.23.132:80/kubesphere/k8s-dns-node-cache:1.15.12
[master1] Downloading image: 192.168.23.132:80/kubesphere/k8s-dns-node-cache:1.15.12
[master2] Downloading image: 192.168.23.132:80/calico/kube-controllers:v3.15.1
[master1] Downloading image: 192.168.23.132:80/calico/kube-controllers:v3.15.1
[master2] Downloading image: 192.168.23.132:80/calico/cni:v3.15.1
[master1] Downloading image: 192.168.23.132:80/calico/cni:v3.15.1
[master2] Downloading image: 192.168.23.132:80/calico/node:v3.15.1
[master1] Downloading image: 192.168.23.132:80/calico/node:v3.15.1
[master2] Downloading image: 192.168.23.132:80/calico/pod2daemon-flexvol:v3.15.1
[master1] Downloading image: 192.168.23.132:80/calico/pod2daemon-flexvol:v3.15.1
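(All images are pulled from a private registry at 192.168.23.132:80 over plain HTTP. If pulls fail with TLS errors, the registry usually has to be whitelisted in Docker's daemon.json. The snippet below is a sketch, not the poster's actual config; it also sets the systemd cgroup driver that the preflight warning further down recommends. It overwrites any existing daemon.json, so merge by hand if one already exists:)

    cat > /etc/docker/daemon.json <<'EOF'
    {
      "insecure-registries": ["192.168.23.132:80"],
      "exec-opts": ["native.cgroupdriver=systemd"]
    }
    EOF
    systemctl restart docker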
INFO[10:46:21 CST] Generating etcd certs
INFO[10:46:24 CST] Synchronizing etcd certs
INFO[10:46:24 CST] Creating etcd service
INFO[10:46:25 CST] Starting etcd cluster
[master1 192.168.23.132] MSG:
Configuration file will be created
[master2 192.168.23.129] MSG:
Configuration file already exists
Waiting for etcd to start
Waiting for etcd to start
Waiting for etcd to start
INFO[10:46:51 CST] Refreshing etcd configuration
INFO[10:46:54 CST] Backup etcd data regularly
INFO[10:46:54 CST] Get cluster status
[master1 192.168.23.132] MSG:
Cluster will be created.
[master2 192.168.23.129] MSG:
Cluster will be created.
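(etcd reports started here, but the init failure later in the log suggests it was never reachable on 192.168.23.129:2379. A quick probe from master1; the certificate paths are assumptions, adjust them to wherever KubeKey generated the etcd certs:)

    curl --cacert /etc/ssl/etcd/ssl/ca.pem \
         --cert /etc/ssl/etcd/ssl/admin-master1.pem \
         --key /etc/ssl/etcd/ssl/admin-master1-key.pem \
         https://192.168.23.129:2379/health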
INFO[10:46:55 CST] Installing kube binaries
Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/kubeadm to 192.168.23.132:/tmp/kubekey/kubeadm Done
Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/kubeadm to 192.168.23.130:/tmp/kubekey/kubeadm Done
Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/kubeadm to 192.168.23.129:/tmp/kubekey/kubeadm Done
Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/kubelet to 192.168.23.132:/tmp/kubekey/kubelet Done
Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/kubelet to 192.168.23.130:/tmp/kubekey/kubelet Done
Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/kubectl to 192.168.23.132:/tmp/kubekey/kubectl Done
Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/kubelet to 192.168.23.129:/tmp/kubekey/kubelet Done
Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/kubectl to 192.168.23.130:/tmp/kubekey/kubectl Done
Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/helm to 192.168.23.132:/tmp/kubekey/helm Done
Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/kubectl to 192.168.23.129:/tmp/kubekey/kubectl Done
Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 192.168.23.132:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz Done
Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/helm to 192.168.23.130:/tmp/kubekey/helm Done
Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/helm to 192.168.23.129:/tmp/kubekey/helm Done
Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 192.168.23.130:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz Done
Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 192.168.23.129:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz Done
INFO[10:46:59 CST] Initializing kubernetes cluster
[master1 192.168.23.132] MSG:
[preflight] Running pre-flight checks
W1222 10:47:55.659154 42733 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
W1222 10:47:55.667464 42733 cleanupnode.go:99] [reset] Failed to evaluate the "/var/lib/kubelet" directory. Skipping its unmount and cleanup: lstat /var/lib/kubelet: no such file or directory
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/dockershim /var/run/kubernetes /var/lib/cni]
The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.
The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
[master1 192.168.23.132] MSG:
[preflight] Running pre-flight checks
W1222 10:48:51.317223 42903 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
W1222 10:48:51.324108 42903 cleanupnode.go:99] [reset] Failed to evaluate the "/var/lib/kubelet" directory. Skipping its unmount and cleanup: lstat /var/lib/kubelet: no such file or directory
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/dockershim /var/run/kubernetes /var/lib/cni]
The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.
The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
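(If you need to perform the manual cleanup the reset output describes before retrying, the corresponding commands, run on each node being reset, are roughly:)

    rm -rf /etc/cni/net.d        # CNI configuration
    iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
    ipvsadm --clear              # only if the cluster used IPVS
    rm -f $HOME/.kube/config     # stale kubeconfig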
ERRO[10:49:46 CST] Failed to init kubernetes cluster: Failed to exec command: sudo -E /bin/sh -c "/usr/local/bin/kubeadm init --config=/etc/kubernetes/kubeadm-config.yaml"
W1222 10:48:51.463237 42934 defaults.go:186] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
W1222 10:48:51.463408 42934 validation.go:28] Cannot validate kubelet config - no validator is available
W1222 10:48:51.463413 42934 validation.go:28] Cannot validate kube-proxy config - no validator is available
[init] Using Kubernetes version: v1.17.9
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR ExternalEtcdVersion]: Get https://192.168.23.129:2379/version: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
[preflight] If you know what you are doing, you can make a check non-fatal with --ignore-preflight-errors=...
To see the stack trace of this error execute with --v=5 or higher: Process exited with status 1 node=192.168.23.132
WARN[10:49:46 CST] Task failed ...
WARN[10:49:46 CST] error: interrupted by error
Error: Failed to init kubernetes cluster: interrupted by error
Usage:
kk create cluster [flags]
Flags:
-f, --filename string Path to a configuration file
-h, --help help for cluster
--skip-pull-images Skip pre pull images
--with-kubernetes string Specify a supported version of kubernetes
--with-kubesphere Deploy a specific version of kubesphere (default v3.0.0)
-y, --yes Skip pre-check of the installation
Global Flags:
--debug Print detailed information (default true)
Failed to init kubernetes cluster: interrupted by error

This is the full log.
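(The fatal error is the ExternalEtcdVersion timeout: kubeadm on master1 cannot reach etcd at https://192.168.23.129:2379. A common cause is etcd not actually running on master2, or a firewall blocking port 2379. A sketch for checking on master2, assuming firewalld is the firewall in use:)

    systemctl status etcd
    ss -tlnp | grep 2379
    firewall-cmd --list-ports    # open 2379-2380/tcp, or stop firewalld, if it is blocking

Once etcd answers on 2379 from master1 (see the curl probe above), re-run kk create cluster.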