When creating a deployment issue, please follow the template below:
OS information: virtual machines, CentOS 7.6, 2 CPU / 2 GB RAM
Kubernetes version: v1.18.6, multi-node.
KubeSphere version: v3.0.0, online installation, full installation.
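For reference, the cluster definition lives in config1.yaml, which is passed to kk in the log below but not reproduced here. As a rough sketch only: the file was generated in the usual KubeKey way and edited for the host layout implied by the log (IP addresses and hostnames are taken from the output below; everything else is an assumption, not the actual file contents):

# Hypothetical: generate a KubeKey config skeleton, then edit the hosts/roleGroups sections
./kk create config --with-kubernetes v1.18.6 --with-kubesphere v3.0.0 -f config1.yaml
# Assumed layout, inferred from the node list in the log:
#   hosts:
#   - {name: master, address: 10.211.55.97, internalAddress: 10.211.55.97, user: root, password: ...}
#   - {name: node1,  address: 10.211.55.98, internalAddress: 10.211.55.98, user: root, password: ...}
#   - {name: node2,  address: 10.211.55.99, internalAddress: 10.211.55.99, user: root, password: ...}
#   roleGroups:
#     etcd: [master]
#     master: [master]
#     worker: [node1, node2]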
Error log:
[root@centos-kube1 opt]# ./kk create cluster -f config1.yaml
+--------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+
| name   | sudo | curl | openssl | ebtables | socat | ipset | conntrack | docker | nfs client | ceph client | glusterfs client | time         |
+--------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+
| node1  | y    | y    | y       | y        | y     | y     | y         |        | y          |             | y                | CST 17:42:03 |
| node2  | y    | y    | y       | y        | y     | y     | y         |        | y          |             | y                | CST 17:42:02 |
| master | y    | y    | y       | y        | y     | y     | y         |        | y          |             | y                | CST 17:42:03 |
+--------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+
This is a simple check of your environment.
Before installation, you should ensure that your machines meet all requirements specified at
https://github.com/kubesphere/kubekey#requirements-and-recommendations
Continue this installation? [yes/no]: yes
INFO[17:42:16 CST] Downloading Installation Files
INFO[17:42:16 CST] Downloading kubeadm …
INFO[17:43:00 CST] Downloading kubelet …
INFO[17:46:05 CST] Downloading kubectl …
INFO[17:47:08 CST] Downloading helm …
INFO[17:47:47 CST] Downloading kubecni …
INFO[17:48:37 CST] Configurating operating system …
[node2 10.211.55.99] MSG:
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
no crontab for root
[node1 10.211.55.98] MSG:
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
no crontab for root
[master 10.211.55.97] MSG:
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
no crontab for root
INFO[17:48:40 CST] Installing docker …
INFO[17:50:15 CST] Start to download images on all nodes
[node2] Downloading image: kubesphere/pause:3.2
[master] Downloading image: kubesphere/etcd:v3.3.12
[node1] Downloading image: kubesphere/pause:3.2
[node1] Downloading image: kubesphere/kube-proxy:v1.18.6
[node2] Downloading image: kubesphere/kube-proxy:v1.18.6
[master] Downloading image: kubesphere/pause:3.2
[master] Downloading image: kubesphere/kube-apiserver:v1.18.6
[node2] Downloading image: coredns/coredns:1.6.9
[node1] Downloading image: coredns/coredns:1.6.9
[master] Downloading image: kubesphere/kube-controller-manager:v1.18.6
[node2] Downloading image: kubesphere/k8s-dns-node-cache:1.15.12
[node1] Downloading image: kubesphere/k8s-dns-node-cache:1.15.12
[node2] Downloading image: calico/kube-controllers:v3.15.1
[master] Downloading image: kubesphere/kube-scheduler:v1.18.6
[master] Downloading image: kubesphere/kube-proxy:v1.18.6
[node2] Downloading image: calico/cni:v3.15.1
[node1] Downloading image: calico/kube-controllers:v3.15.1
[master] Downloading image: coredns/coredns:1.6.9
[node2] Downloading image: calico/node:v3.15.1
[master] Downloading image: kubesphere/k8s-dns-node-cache:1.15.12
[node1] Downloading image: calico/cni:v3.15.1
[master] Downloading image: calico/kube-controllers:v3.15.1
[node2] Downloading image: calico/pod2daemon-flexvol:v3.15.1
[node1] Downloading image: calico/node:v3.15.1
[master] Downloading image: calico/cni:v3.15.1
[master] Downloading image: calico/node:v3.15.1
[node1] Downloading image: calico/pod2daemon-flexvol:v3.15.1
[master] Downloading image: calico/pod2daemon-flexvol:v3.15.1
INFO[17:55:05 CST] Generating etcd certs
INFO[17:55:07 CST] Synchronizing etcd certs
INFO[17:55:07 CST] Creating etcd service
[master 10.211.55.97] MSG:
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /etc/systemd/system/etcd.service.
INFO[17:55:08 CST] Starting etcd cluster
[master 10.211.55.97] MSG:
Configuration file will be created
INFO[17:55:09 CST] Refreshing etcd configuration
Waiting for etcd to start
INFO[17:55:15 CST] Backup etcd data regularly
INFO[17:55:15 CST] Get cluster status
[master 10.211.55.97] MSG:
Cluster will be created.
INFO[17:55:16 CST] Installing kube binaries
Push /opt/kubekey/v1.18.6/amd64/kubeadm to 10.211.55.99:/tmp/kubekey/kubeadm Done
Push /opt/kubekey/v1.18.6/amd64/kubeadm to 10.211.55.97:/tmp/kubekey/kubeadm Done
Push /opt/kubekey/v1.18.6/amd64/kubeadm to 10.211.55.98:/tmp/kubekey/kubeadm Done
Push /opt/kubekey/v1.18.6/amd64/kubelet to 10.211.55.99:/tmp/kubekey/kubelet Done
Push /opt/kubekey/v1.18.6/amd64/kubelet to 10.211.55.97:/tmp/kubekey/kubelet Done
Push /opt/kubekey/v1.18.6/amd64/kubectl to 10.211.55.99:/tmp/kubekey/kubectl Done
Push /opt/kubekey/v1.18.6/amd64/kubelet to 10.211.55.98:/tmp/kubekey/kubelet Done
Push /opt/kubekey/v1.18.6/amd64/kubectl to 10.211.55.97:/tmp/kubekey/kubectl Done
Push /opt/kubekey/v1.18.6/amd64/kubectl to 10.211.55.98:/tmp/kubekey/kubectl Done
Push /opt/kubekey/v1.18.6/amd64/helm to 10.211.55.99:/tmp/kubekey/helm Done
Push /opt/kubekey/v1.18.6/amd64/helm to 10.211.55.97:/tmp/kubekey/helm Done
Push /opt/kubekey/v1.18.6/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 10.211.55.97:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz Done
Push /opt/kubekey/v1.18.6/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 10.211.55.99:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz Done
Push /opt/kubekey/v1.18.6/amd64/helm to 10.211.55.98:/tmp/kubekey/helm Done
Push /opt/kubekey/v1.18.6/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 10.211.55.98:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz Done
INFO[17:55:23 CST] Initializing kubernetes cluster
[master 10.211.55.97] MSG:
W0331 17:55:23.858836 19973 utils.go:26] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
W0331 17:55:23.859391 19973 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.6
[preflight] Running pre-flight checks
[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.kubesphere.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost lb.kubesphere.local master master.cluster.local node1 node1.cluster.local node2 node2.cluster.local] and IPs [10.233.0.1 10.211.55.97 127.0.0.1 10.211.55.97 10.211.55.98 10.211.55.99 10.233.0.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] External etcd mode: Skipping etcd/ca certificate authority generation
[certs] External etcd mode: Skipping etcd/server certificate generation
[certs] External etcd mode: Skipping etcd/peer certificate generation
[certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation
[certs] External etcd mode: Skipping apiserver-etcd-client certificate generation
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
W0331 17:55:28.829753 19973 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0331 17:55:28.837943 19973 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0331 17:55:28.839615 19973 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 20.002959 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: djllcv.cb0b5qd9y3fve9dx
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:
kubeadm join lb.kubesphere.local:6443 --token djllcv.cb0b5qd9y3fve9dx \
--discovery-token-ca-cert-hash sha256:0c10bf33e3ff9021bdbc045e95b64e1b35d3b834b9986bcd497b69b19bde5aa9 \
--control-plane
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join lb.kubesphere.local:6443 --token djllcv.cb0b5qd9y3fve9dx \
--discovery-token-ca-cert-hash sha256:0c10bf33e3ff9021bdbc045e95b64e1b35d3b834b9986bcd497b69b19bde5aa9
[master 10.211.55.97] MSG:
service “kube-dns” deleted
[master 10.211.55.97] MSG:
service/coredns created
[master 10.211.55.97] MSG:
serviceaccount/nodelocaldns created
daemonset.apps/nodelocaldns created
[master 10.211.55.97] MSG:
configmap/nodelocaldns created
[master 10.211.55.97] MSG:
I0331 17:56:13.482306 21872 version.go:252] remote version is much newer: v1.20.5; falling back to: stable-1.18
W0331 17:56:14.446264 21872 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
2e198be852c48df9cb29de70a7fc8b0e83c4e4823f83b18856330a9ec806d8bb
[master 10.211.55.97] MSG:
secret/kubeadm-certs patched
[master 10.211.55.97] MSG:
secret/kubeadm-certs patched
[master 10.211.55.97] MSG:
secret/kubeadm-certs patched
[master 10.211.55.97] MSG:
W0331 17:56:14.891846 21966 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
kubeadm join lb.kubesphere.local:6443 --token c52is2.zhjcp28xl1rl1iah --discovery-token-ca-cert-hash sha256:0c10bf33e3ff9021bdbc045e95b64e1b35d3b834b9986bcd497b69b19bde5aa9
[master 10.211.55.97] MSG:
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
master NotReady master 28s v1.18.6 10.211.55.97 <none> CentOS Linux 7 (Core) 3.10.0-957.el7.x86_64 docker://19.3.8
INFO[17:56:15 CST] Deploying network plugin …
[master 10.211.55.97] MSG:
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
INFO[17:56:17 CST] Joining nodes to cluster
[node2 10.211.55.99] MSG:
[preflight] Running pre-flight checks
W0331 18:01:18.456821 20138 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
W0331 18:01:18.465160 20138 cleanupnode.go:99] [reset] Failed to evaluate the "/var/lib/kubelet" directory. Skipping its unmount and cleanup: lstat /var/lib/kubelet: no such file or directory
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/dockershim /var/run/kubernetes /var/lib/cni]
The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.
The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
[node1 10.211.55.98] MSG:
[preflight] Running pre-flight checks
W0331 18:01:18.699023 20127 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
W0331 18:01:18.709557 20127 cleanupnode.go:99] [reset] Failed to evaluate the "/var/lib/kubelet" directory. Skipping its unmount and cleanup: lstat /var/lib/kubelet: no such file or directory
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/dockershim /var/run/kubernetes /var/lib/cni]
The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.
The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
[node2 10.211.55.99] MSG:
[preflight] Running pre-flight checks
W0331 18:06:19.160284 21228 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
W0331 18:06:19.167333 21228 cleanupnode.go:99] [reset] Failed to evaluate the "/var/lib/kubelet" directory. Skipping its unmount and cleanup: lstat /var/lib/kubelet: no such file or directory
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/dockershim /var/run/kubernetes /var/lib/cni]
The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.
The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
[node1 10.211.55.98] MSG:
[preflight] Running pre-flight checks
W0331 18:06:19.405496 21221 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
W0331 18:06:19.413475 21221 cleanupnode.go:99] [reset] Failed to evaluate the "/var/lib/kubelet" directory. Skipping its unmount and cleanup: lstat /var/lib/kubelet: no such file or directory
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/dockershim /var/run/kubernetes /var/lib/cni]
The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.
The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
ERRO[18:11:19 CST] Failed to add worker to cluster: Failed to exec command: sudo -E /bin/sh -c "/usr/local/bin/kubeadm join lb.kubesphere.local:6443 --token c52is2.zhjcp28xl1rl1iah --discovery-token-ca-cert-hash sha256:0c10bf33e3ff9021bdbc045e95b64e1b35d3b834b9986bcd497b69b19bde5aa9"
W0331 18:06:19.301940 21257 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
error execution phase preflight: couldn't validate the identity of the API Server: Get https://lb.kubesphere.local:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s: dial tcp 10.211.55.97:6443: connect: no route to host
To see the stack trace of this error execute with --v=5 or higher: Process exited with status 1 node=10.211.55.99
ERRO[18:11:19 CST] Failed to add worker to cluster: Failed to exec command: sudo -E /bin/sh -c "/usr/local/bin/kubeadm join lb.kubesphere.local:6443 --token c52is2.zhjcp28xl1rl1iah --discovery-token-ca-cert-hash sha256:0c10bf33e3ff9021bdbc045e95b64e1b35d3b834b9986bcd497b69b19bde5aa9"
W0331 18:06:19.562109 21252 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
error execution phase preflight: couldn't validate the identity of the API Server: Get https://lb.kubesphere.local:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s: dial tcp 10.211.55.97:6443: connect: no route to host
To see the stack trace of this error execute with --v=5 or higher: Process exited with status 1 node=10.211.55.98
WARN[18:11:19 CST] Task failed …
WARN[18:11:19 CST] error: interrupted by error
Error: Failed to join node: interrupted by error
Usage:
kk create cluster [flags]
Flags:
  -f, --filename string         Path to a configuration file
  -h, --help                    help for cluster
      --skip-pull-images        Skip pre pull images
      --with-kubernetes string  Specify a supported version of kubernetes
      --with-kubesphere         Deploy a specific version of kubesphere (default v3.0.0)
  -y, --yes                     Skip pre-check of the installation
Global Flags:
      --debug   Print detailed information (default true)
Failed to join node: interrupted by error
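Note: the preflight check above warns that firewalld is active and that ports 6443 and 10250 must be open, and the join failure is "no route to host" when the workers try to reach 10.211.55.97:6443. A blocked API server port on the master therefore looks like the most likely cause; the commands below are a possible check and workaround under that assumption, not a confirmed fix.

# On a worker node: check whether the master's API server port is reachable at all
curl -k https://10.211.55.97:6443/healthz

# On the master: either open the ports kubeadm warned about in firewalld ...
firewall-cmd --permanent --add-port=6443/tcp
firewall-cmd --permanent --add-port=10250/tcp
firewall-cmd --reload

# ... or, for a throwaway test cluster, disable firewalld on all nodes and re-run the installer
systemctl stop firewalld && systemctl disable firewalld
./kk create cluster -f config1.yaml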