[root@master165 ~]# ./kk add nodes -f my-cluster.yaml
+-----------+------+------+---------+----------+-------+-------+-----------+---------+------------+-------------+------------------+--------------+
| name      | sudo | curl | openssl | ebtables | socat | ipset | conntrack | docker  | nfs client | ceph client | glusterfs client | time         |
+-----------+------+------+---------+----------+-------+-------+-----------+---------+------------+-------------+------------------+--------------+
| node166   | y    | y    | y       | y        | y     | y     | y         | 20.10.8 |            |             |                  | CST 11:39:58 |
| node163   | y    | y    | y       | y        | y     | y     | y         | 20.10.8 |            |             |                  | CST 11:39:58 |
| node159   | y    | y    | y       | y        | y     | y     | y         | 20.10.8 |            |             |                  | CST 11:39:59 |
| master165 | y    | y    | y       | y        | y     | y     | y         | 20.10.8 |            |             |                  | CST 11:39:59 |
+-----------+------+------+---------+----------+-------+-------+-----------+---------+------------+-------------+------------------+--------------+
This is a simple check of your environment.
Before installation, you should ensure that your machines meet all requirements specified at
https://github.com/kubesphere/kubekey#requirements-and-recommendations
Continue this installation? [yes/no]: yes
INFO[11:40:04 CST] Downloading Installation Files
INFO[11:40:04 CST] Downloading kubeadm ...
INFO[11:40:06 CST] Downloading kubelet ...
INFO[11:40:09 CST] Downloading kubectl ...
INFO[11:40:09 CST] Downloading helm ...
INFO[11:40:10 CST] Downloading kubecni ...
INFO[11:40:10 CST] Configuring operating system ...
[node166 172.22.151.166] MSG:
vm.swappiness = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
vm.max_map_count = 262144
fs.inotify.max_user_instances = 524288
[master165 172.22.151.165] MSG:
vm.swappiness = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
vm.max_map_count = 262144
fs.inotify.max_user_instances = 524288
[node163 172.22.151.163] MSG:
vm.swappiness = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
vm.max_map_count = 262144
fs.inotify.max_user_instances = 524288
[node159 172.22.151.159] MSG:
vm.swappiness = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
vm.max_map_count = 262144
fs.inotify.max_user_instances = 524288
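For reference, the kernel parameters echoed above for each node amount to a sysctl fragment that KubeKey applies during the "Configuring operating system" step. An equivalent drop-in, if it ever needs to be reapplied by hand, would look like the following (the file name is arbitrary, not necessarily the one KubeKey itself writes; apply with `sysctl --system`):

```
# /etc/sysctl.d/99-kubernetes.conf  (hypothetical file name)
vm.swappiness = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
vm.max_map_count = 262144
fs.inotify.max_user_instances = 524288
```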
INFO[11:40:15 CST] Installing docker ...
INFO[11:40:17 CST] Start to download images on all nodes
[master165] Downloading image: kubesphere/etcd:v3.4.13
[node166] Downloading image: kubesphere/pause:3.2
[node159] Downloading image: kubesphere/etcd:v3.4.13
[node163] Downloading image: kubesphere/etcd:v3.4.13
[node163] Downloading image: kubesphere/pause:3.2
[node166] Downloading image: kubesphere/kube-proxy:v1.19.8
[node166] Downloading image: coredns/coredns:1.6.9
[master165] Downloading image: kubesphere/pause:3.2
[node159] Downloading image: kubesphere/pause:3.2
[node163] Downloading image: kubesphere/kube-proxy:v1.19.8
[node166] Downloading image: kubesphere/k8s-dns-node-cache:1.15.12
[node166] Downloading image: calico/kube-controllers:v3.16.3
[master165] Downloading image: kubesphere/kube-apiserver:v1.19.8
[node159] Downloading image: kubesphere/kube-proxy:v1.19.8
[node163] Downloading image: coredns/coredns:1.6.9
[node166] Downloading image: calico/cni:v3.16.3
[master165] Downloading image: kubesphere/kube-controller-manager:v1.19.8
[node159] Downloading image: coredns/coredns:1.6.9
[node163] Downloading image: kubesphere/k8s-dns-node-cache:1.15.12
[node166] Downloading image: calico/node:v3.16.3
[master165] Downloading image: kubesphere/kube-scheduler:v1.19.8
[node163] Downloading image: calico/kube-controllers:v3.16.3
[master165] Downloading image: kubesphere/kube-proxy:v1.19.8
[node159] Downloading image: kubesphere/k8s-dns-node-cache:1.15.12
[node166] Downloading image: calico/pod2daemon-flexvol:v3.16.3
[node163] Downloading image: calico/cni:v3.16.3
[master165] Downloading image: coredns/coredns:1.6.9
[node159] Downloading image: calico/kube-controllers:v3.16.3
[master165] Downloading image: kubesphere/k8s-dns-node-cache:1.15.12
[node163] Downloading image: calico/node:v3.16.3
[node159] Downloading image: calico/cni:v3.16.3
[master165] Downloading image: calico/kube-controllers:v3.16.3
[node163] Downloading image: calico/pod2daemon-flexvol:v3.16.3
[node159] Downloading image: calico/node:v3.16.3
[master165] Downloading image: calico/cni:v3.16.3
[master165] Downloading image: calico/node:v3.16.3
[node159] Downloading image: calico/pod2daemon-flexvol:v3.16.3
[master165] Downloading image: calico/pod2daemon-flexvol:v3.16.3
INFO[11:42:43 CST] Generating etcd certs
INFO[11:42:48 CST] Synchronizing etcd certs
INFO[11:42:48 CST] Creating etcd service
[master165 172.22.151.165] MSG:
etcd already exists
[node163 172.22.151.163] MSG:
etcd already exists
[node159 172.22.151.159] MSG:
etcd already exists
INFO[11:42:51 CST] Starting etcd cluster
[master165 172.22.151.165] MSG:
Configuration file already exists
[master165 172.22.151.165] MSG:
v3.4.13
[node159 172.22.151.159] MSG:
Configuration file already exists
[node159 172.22.151.159] MSG:
v3.4.13
[node163 172.22.151.163] MSG:
Configuration file already exists
[node163 172.22.151.163] MSG:
v3.4.13
INFO[11:42:53 CST] Refreshing etcd configuration
INFO[11:42:54 CST] Backup etcd data regularly
INFO[11:43:02 CST] Get cluster status
[master165 172.22.151.165] MSG:
Cluster already exists.
[master165 172.22.151.165] MSG:
v1.19.8
[master165 172.22.151.165] MSG:
I0916 11:43:05.206773 26483 version.go:255] remote version is much newer: v1.22.2; falling back to: stable-1.19
W0916 11:43:05.875676 26483 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
27fa47fbb4bd1622fc849e30a4ec4d84d36e74476950c46172f6c7e87f04a870
[master165 172.22.151.165] MSG:
secret/kubeadm-certs patched
[master165 172.22.151.165] MSG:
secret/kubeadm-certs patched
[master165 172.22.151.165] MSG:
secret/kubeadm-certs patched
[master165 172.22.151.165] MSG:
W0916 11:43:07.386517 26820 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
kubeadm join lb.kubesphere.local:6443 --token 7lu8u8.r96hfglt8vg5kmkq --discovery-token-ca-cert-hash sha256:61c13fb8cb93c9b23cb45cde44fe3b77a0d50f9ae229bb9e4a964aa9afe33d60
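As a sanity check, the `--discovery-token-ca-cert-hash` value in the join command above is simply the SHA-256 digest of the cluster CA's public key, and can be recomputed on the control plane with the standard openssl pipeline (paths per kubeadm defaults). The sketch below demonstrates the pipeline against a throwaway self-signed certificate so it can be run on any machine:

```shell
# On a real control plane you would point the pipeline at the cluster CA
# (kubeadm default path: /etc/kubernetes/pki/ca.crt):
#
#   openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
#     | openssl rsa -pubin -outform der 2>/dev/null \
#     | openssl dgst -sha256 -hex | sed 's/^.* //'
#
# Demonstration with a throwaway self-signed CA:
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo-ca.key \
  -out /tmp/demo-ca.crt -subj "/CN=demo-ca" -days 1 2>/dev/null

# Extract the public key, DER-encode it, and hash it -- this yields the
# same kind of 64-hex-digit value seen in the kubeadm join command:
openssl x509 -pubkey -in /tmp/demo-ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'
```

If the recomputed hash does not match what a joining node was given, the node will refuse to trust the control plane during discovery.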
[master165 172.22.151.165] MSG:
master165 v1.19.8 [map[address:172.22.151.165 type:InternalIP] map[address:master165 type:Hostname]]
node159 v1.19.8 [map[address:172.22.151.159 type:InternalIP] map[address:node159 type:Hostname]]
node163 v1.19.8 [map[address:172.22.151.163 type:InternalIP] map[address:node163 type:Hostname]]
INFO[11:43:07 CST] Installing kube binaries
Push /root/kubekey/v1.19.8/amd64/kubeadm to 172.22.151.166:/tmp/kubekey/kubeadm Done
Push /root/kubekey/v1.19.8/amd64/kubelet to 172.22.151.166:/tmp/kubekey/kubelet Done
Push /root/kubekey/v1.19.8/amd64/kubectl to 172.22.151.166:/tmp/kubekey/kubectl Done
Push /root/kubekey/v1.19.8/amd64/helm to 172.22.151.166:/tmp/kubekey/helm Done
Push /root/kubekey/v1.19.8/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 172.22.151.166:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz Done
INFO[11:47:02 CST] Joining nodes to cluster
[node166 172.22.151.166] MSG:
[preflight] Running pre-flight checks
W0916 11:47:04.099469 21200 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]
The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.
The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
[node166 172.22.151.166] MSG:
[preflight] Running pre-flight checks
W0916 11:49:10.484543 22935 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]
The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.
The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
I followed the tutorial exactly, and there is no specific error message.
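Since the log ends without an explicit failure, the next step would be to pull more detail from the joining node and the control plane directly. A few standard commands (stock systemd/kubectl/docker tooling, nothing KubeKey-specific) that usually surface the actual problem:

```shell
# On node166: did kubelet start after the join attempt, and what is it logging?
systemctl status kubelet
journalctl -u kubelet --no-pager -n 100

# On node166: are any Kubernetes containers running at all?
docker ps | grep -i kube

# On master165: is node166 registered, and are its system pods starting?
kubectl get nodes -o wide
kubectl get pods -n kube-system -o wide
```

The `journalctl -u kubelet` output on node166 is usually the most informative: a join that prints nothing to the KubeKey log but never completes typically fails inside kubelet (certificate, cgroup driver, or connectivity to `lb.kubesphere.local:6443`).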