Could someone please take a look at this? Thanks.
Running the installation fails with an error:
PS: the firewall has already been disabled manually.
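(For reference, the firewall was stopped on each node with something along these lines; this assumes CentOS with firewalld:)

systemctl stop firewalld && systemctl disable firewalld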
[root@localhost kubesphere-all-v3.0.0-offline-linux-amd64]# ./kk create cluster -f config-sample.yaml
+---------------------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+
| name                | sudo | curl | openssl | ebtables | socat | ipset | conntrack | docker | nfs client | ceph client | glusterfs client | time         |
+---------------------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+
| worker.node2        | y    | y    | y       | y        |       | y     |           | y      |            |             |                  | CST 22:52:46 |
| master.worker.node1 | y    | y    | y       | y        |       | y     |           | y      |            |             |                  | CST 22:52:46 |
+---------------------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+
This is a simple check of your environment.
Before installation, you should ensure that your machines meet all requirements specified at
https://github.com/kubesphere/kubekey#requirements-and-recommendations
Continue this installation? [yes/no]: yes
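(Note: in the pre-check table above, the socat and conntrack columns are empty on both nodes. The requirements page linked above expects these binaries to be present. On CentOS/RHEL they could be installed up front with something like the following; the package names are assumed, and since this is an offline install the nodes would need a local yum repository or the RPMs copied over:)

# install the two binaries the pre-check reports as missing (CentOS/RHEL package names)
yum install -y socat conntrack-tools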
INFO[22:52:50 CST] Downloading Installation Files
INFO[22:52:50 CST] Downloading kubeadm ...
INFO[22:52:50 CST] Downloading kubelet ...
INFO[22:52:51 CST] Downloading kubectl ...
INFO[22:52:51 CST] Downloading kubecni ...
INFO[22:52:51 CST] Downloading helm ...
INFO[22:52:51 CST] Configurating operating system ...
[worker.node2 172.16.129.139] MSG:
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
[master.worker.node1 172.16.129.140] MSG:
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
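(These are the kernel parameters KubeKey applies during its "Configurating operating system" step. As a quick sanity check, they can be verified by hand on a node, e.g.:)

# spot-check the forwarding/bridge settings on a node
sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables
# re-apply everything persisted in the sysctl configuration files
sysctl --system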
INFO[22:52:54 CST] Installing docker ...
INFO[22:52:55 CST] Start to download images on all nodes
[worker.node2] Downloading image: dockerhub.kubekey.local/kubesphere/pause:3.2
[master.worker.node1] Downloading image: dockerhub.kubekey.local/kubesphere/etcd:v3.3.12
[master.worker.node1] Downloading image: dockerhub.kubekey.local/kubesphere/pause:3.2
[master.worker.node1] Downloading image: dockerhub.kubekey.local/kubesphere/kube-apiserver:v1.18.6
[worker.node2] Downloading image: dockerhub.kubekey.local/coredns/coredns:1.6.9
[master.worker.node1] Downloading image: dockerhub.kubekey.local/kubesphere/kube-controller-manager:v1.18.6
[worker.node2] Downloading image: dockerhub.kubekey.local/kubesphere/k8s-dns-node-cache:1.15.12
[master.worker.node1] Downloading image: dockerhub.kubekey.local/kubesphere/kube-scheduler:v1.18.6
[worker.node2] Downloading image: dockerhub.kubekey.local/calico/kube-controllers:v3.15.1
[worker.node2] Downloading image: dockerhub.kubekey.local/calico/cni:v3.15.1
[master.worker.node1] Downloading image: dockerhub.kubekey.local/kubesphere/kube-proxy:v1.18.6
[worker.node2] Downloading image: dockerhub.kubekey.local/calico/node:v3.15.1
[master.worker.node1] Downloading image: dockerhub.kubekey.local/coredns/coredns:1.6.9
[worker.node2] Downloading image: dockerhub.kubekey.local/calico/pod2daemon-flexvol:v3.15.1
[master.worker.node1] Downloading image: dockerhub.kubekey.local/kubesphere/k8s-dns-node-cache:1.15.12
[master.worker.node1] Downloading image: dockerhub.kubekey.local/calico/kube-controllers:v3.15.1
[master.worker.node1] Downloading image: dockerhub.kubekey.local/calico/cni:v3.15.1
[master.worker.node1] Downloading image: dockerhub.kubekey.local/calico/node:v3.15.1
[master.worker.node1] Downloading image: dockerhub.kubekey.local/calico/pod2daemon-flexvol:v3.15.1
INFO[22:52:58 CST] Generating etcd certs
INFO[22:52:59 CST] Synchronizing etcd certs
INFO[22:52:59 CST] Creating etcd service
INFO[22:53:03 CST] Starting etcd cluster
[master.worker.node1 172.16.129.140] MSG:
Configuration file already exists
Waiting for etcd to start
Waiting for etcd to start
INFO[22:53:18 CST] Refreshing etcd configuration
INFO[22:53:18 CST] Backup etcd data regularly
INFO[22:53:19 CST] Get cluster status
[master.worker.node1 172.16.129.140] MSG:
Cluster will be created.
INFO[22:53:19 CST] Installing kube binaries
Push /home/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.18.6/amd64/kubeadm to 172.16.129.140:/tmp/kubekey/kubeadm Done
Push /home/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.18.6/amd64/kubeadm to 172.16.129.139:/tmp/kubekey/kubeadm Done
Push /home/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.18.6/amd64/kubelet to 172.16.129.140:/tmp/kubekey/kubelet Done
Push /home/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.18.6/amd64/kubelet to 172.16.129.139:/tmp/kubekey/kubelet Done
Push /home/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.18.6/amd64/kubectl to 172.16.129.140:/tmp/kubekey/kubectl Done
Push /home/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.18.6/amd64/kubectl to 172.16.129.139:/tmp/kubekey/kubectl Done
Push /home/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.18.6/amd64/helm to 172.16.129.140:/tmp/kubekey/helm Done
Push /home/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.18.6/amd64/helm to 172.16.129.139:/tmp/kubekey/helm Done
Push /home/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.18.6/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 172.16.129.140:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz Done
Push /home/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.18.6/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 172.16.129.139:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz Done
INFO[22:53:29 CST] Initializing kubernetes cluster
[master.worker.node1 172.16.129.140] MSG:
[preflight] Running pre-flight checks
W0202 22:53:31.237995 77903 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
W0202 22:53:31.245150 77903 cleanupnode.go:99] [reset] Failed to evaluate the "/var/lib/kubelet" directory. Skipping its unmount and cleanup: lstat /var/lib/kubelet: no such file or directory
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/dockershim /var/run/kubernetes /var/lib/cni]
The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.
The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
[master.worker.node1 172.16.129.140] MSG:
[preflight] Running pre-flight checks
W0202 22:53:31.987330 78126 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
W0202 22:53:31.992550 78126 cleanupnode.go:99] [reset] Failed to evaluate the "/var/lib/kubelet" directory. Skipping its unmount and cleanup: lstat /var/lib/kubelet: no such file or directory
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/dockershim /var/run/kubernetes /var/lib/cni]
The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.
The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
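(The reset output above lists what kubeadm deliberately leaves behind. For completeness, a full manual teardown of those leftovers would look roughly like this; it is destructive and only makes sense on nodes that are being wiped:)

rm -rf /etc/cni/net.d                 # CNI configuration
iptables -F && iptables -t nat -F     # flush iptables rules (adapt to local policy)
ipvsadm --clear                       # IPVS tables, if IPVS was in use
rm -f $HOME/.kube/config              # stale kubeconfig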
ERRO[22:53:32 CST] Failed to init kubernetes cluster: Failed to exec command: sudo -E /bin/sh -c "/usr/local/bin/kubeadm init --config=/etc/kubernetes/kubeadm-config.yaml"
W0202 22:53:32.151548 78166 utils.go:26] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
W0202 22:53:32.151761 78166 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.6
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING FileExisting-socat]: socat not found in system path
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR FileExisting-conntrack]: conntrack not found in system path
[preflight] If you know what you are doing, you can make a check non-fatal with --ignore-preflight-errors=...
To see the stack trace of this error execute with --v=5 or higher: Process exited with status 1 node=172.16.129.140
WARN[22:53:32 CST] Task failed ...
WARN[22:53:32 CST] error: interrupted by error
Error: Failed to init kubernetes cluster: interrupted by error
Usage:
  kk create cluster [flags]

Flags:
  -f, --filename string          Path to a configuration file
  -h, --help                     help for cluster
      --skip-pull-images         Skip pre pull images
      --with-kubernetes string   Specify a supported version of kubernetes
      --with-kubesphere          Deploy a specific version of kubesphere (default v3.0.0)
  -y, --yes                      Skip pre-check of the installation

Global Flags:
      --debug   Print detailed information (default true)
Failed to init kubernetes cluster: interrupted by error
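(The fatal [ERROR FileExisting-conntrack] above matches the empty conntrack column in the initial pre-check, and the socat warning matches its empty column as well, so the init failure looks like a missing-dependency problem rather than a firewall one. A plausible fix, assuming CentOS/RHEL nodes with package access: install the missing binaries on every node, optionally switch Docker to the systemd cgroup driver as the pre-flight warning recommends, then rerun kk:)

# on every node: install what the pre-flight check reported missing
yum install -y socat conntrack-tools

# optional: address the cgroup-driver warning; if /etc/docker/daemon.json already
# exists (the offline installer may have written registry settings into it), merge
# this key into the existing file instead of overwriting it
cat > /etc/docker/daemon.json <<'EOF'
{ "exec-opts": ["native.cgroupdriver=systemd"] }
EOF
systemctl restart docker

# retry the installation
./kk create cluster -f config-sample.yaml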