When adding a node in KubeSphere, I ran ./kk add nodes -f config-sample.yaml. The first run failed with the following error:
+-----------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+
| name      | sudo | curl | openssl | ebtables | socat | ipset | conntrack | docker | nfs client | ceph client | glusterfs client | time         |
+-----------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+
| ubuntu250 | y    | y    | y       | y        | y     | y     | y         | y      | y          |             | y                | CST 16:55:16 |
| ks-master | y    | y    | y       | y        | y     | y     | y         | y      | y          |             | y                | CST 16:55:16 |
| ks-node1  | y    | y    | y       | y        | y     | y     | y         | y      | y          |             | y                | CST 16:55:16 |
+-----------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+
This is a simple check of your environment.
Before installation, you should ensure that your machines meet all requirements specified at
https://github.com/kubesphere/kubekey#requirements-and-recommendations
Continue this installation? [yes/no]: yes
INFO[16:55:28 CST] Downloading Installation Files
INFO[16:55:28 CST] Downloading kubeadm …
INFO[16:55:29 CST] Downloading kubelet …
INFO[16:55:30 CST] Downloading kubectl …
INFO[16:55:30 CST] Downloading kubecni …
INFO[16:55:30 CST] Downloading helm …
INFO[16:55:31 CST] Downloading helm2 …
INFO[16:55:31 CST] Configurating operating system …
[ubuntu250 192.168.1.250] MSG:
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
[ks-master 192.168.1.104] MSG:
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
[ks-node1 192.168.1.106] MSG:
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
INFO[16:57:42 CST] Installing docker …
INFO[16:57:44 CST] Start to download images on all nodes
[ubuntu250] Downloading image: kubesphere/pause:3.1
[ks-node1] Downloading image: kubesphere/pause:3.1
[ks-master] Downloading image: kubesphere/etcd:v3.3.12
[ks-master] Downloading image: kubesphere/pause:3.1
[ks-node1] Downloading image: kubesphere/kube-proxy:v1.17.9
[ks-master] Downloading image: kubesphere/kube-apiserver:v1.17.9
[ks-node1] Downloading image: coredns/coredns:1.6.9
[ks-master] Downloading image: kubesphere/kube-controller-manager:v1.17.9
[ks-node1] Downloading image: kubesphere/k8s-dns-node-cache:1.15.12
[ks-master] Downloading image: kubesphere/kube-scheduler:v1.17.9
[ks-node1] Downloading image: calico/kube-controllers:v3.15.1
[ks-master] Downloading image: kubesphere/kube-proxy:v1.17.9
[ks-node1] Downloading image: calico/cni:v3.15.1
[ks-master] Downloading image: coredns/coredns:1.6.9
[ks-node1] Downloading image: calico/node:v3.15.1
[ks-master] Downloading image: kubesphere/k8s-dns-node-cache:1.15.12
[ks-node1] Downloading image: calico/pod2daemon-flexvol:v3.15.1
[ks-master] Downloading image: calico/kube-controllers:v3.15.1
[ks-master] Downloading image: calico/cni:v3.15.1
[ks-master] Downloading image: calico/node:v3.15.1
[ks-master] Downloading image: calico/pod2daemon-flexvol:v3.15.1
[ubuntu250] Downloading image: kubesphere/kube-proxy:v1.17.9
[ubuntu250] Downloading image: coredns/coredns:1.6.9
[ubuntu250] Downloading image: kubesphere/k8s-dns-node-cache:1.15.12
[ubuntu250] Downloading image: calico/kube-controllers:v3.15.1
[ubuntu250] Downloading image: calico/cni:v3.15.1
[ubuntu250] Downloading image: calico/node:v3.15.1
[ubuntu250] Downloading image: calico/pod2daemon-flexvol:v3.15.1
INFO[17:00:08 CST] Generating etcd certs
INFO[17:00:09 CST] Synchronizing etcd certs
INFO[17:00:09 CST] Creating etcd service
INFO[17:00:16 CST] Starting etcd cluster
[ks-master 192.168.1.104] MSG:
Configuration file already exists
Waiting for etcd to start
INFO[17:00:21 CST] Refreshing etcd configuration
INFO[17:00:21 CST] Get cluster status
[ks-master 192.168.1.104] MSG:
Cluster already exists.
[ks-master 192.168.1.104] MSG:
v1.17.9
[ks-master 192.168.1.104] MSG:
I0301 17:00:26.863384 28418 version.go:251] remote version is much newer: v1.20.4; falling back to: stable-1.17
W0301 17:00:28.944143 28418 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0301 17:00:28.944170 28418 validation.go:28] Cannot validate kubelet config - no validator is available
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
f4f1ff2c4c8d0a089a188fcc87fe13cb78722eff54affb56e8e973d313b7af6d
[ks-master 192.168.1.104] MSG:
secret/kubeadm-certs patched
[ks-master 192.168.1.104] MSG:
secret/kubeadm-certs patched
[ks-master 192.168.1.104] MSG:
secret/kubeadm-certs patched
[ks-master 192.168.1.104] MSG:
W0301 17:00:33.651050 29035 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0301 17:00:33.651100 29035 validation.go:28] Cannot validate kubelet config - no validator is available
kubeadm join lb.kubesphere.local:6443 --token 6qltvp.ow7nz54q69jkx91w --discovery-token-ca-cert-hash sha256:bed7462d58b2eb59eb9c9180f013f17ae8db7243b271779658227e761c7a7cdf
[ks-master 192.168.1.104] MSG:
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
ks-master Ready master,worker 77d v1.17.9 192.168.1.104 <none> Ubuntu 16.04.6 LTS 4.4.0-142-generic docker://18.6.1
ks-node1 Ready worker 77d v1.17.9 192.168.1.106 <none> Ubuntu 16.04.6 LTS 4.4.0-142-generic docker://18.6.1
INFO[17:00:33 CST] Installing kube binaries
Push /root/kubesphere/kubekey/v1.17.9/amd64/kubeadm to 192.168.1.250:/tmp/kubekey/kubeadm Done
Push /root/kubesphere/kubekey/v1.17.9/amd64/kubelet to 192.168.1.250:/tmp/kubekey/kubelet Done
Push /root/kubesphere/kubekey/v1.17.9/amd64/kubectl to 192.168.1.250:/tmp/kubekey/kubectl Done
Push /root/kubesphere/kubekey/v1.17.9/amd64/helm to 192.168.1.250:/tmp/kubekey/helm Done
Push /root/kubesphere/kubekey/v1.17.9/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 192.168.1.250:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz Done
INFO[17:00:38 CST] Joining nodes to cluster
[ubuntu250 192.168.1.250] MSG:
[preflight] Running pre-flight checks
W0301 17:00:39.147647 19929 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
W0301 17:00:39.150039 19929 cleanupnode.go:99] [reset] Failed to evaluate the "/var/lib/kubelet" directory. Skipping its unmount and cleanup: lstat /var/lib/kubelet: no such file or directory
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/dockershim /var/run/kubernetes /var/lib/cni]
The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system’s IPVS tables.
The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
[ubuntu250 192.168.1.250] MSG:
[preflight] Running pre-flight checks
W0301 17:00:39.635429 20136 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
W0301 17:00:39.637906 20136 cleanupnode.go:99] [reset] Failed to evaluate the "/var/lib/kubelet" directory. Skipping its unmount and cleanup: lstat /var/lib/kubelet: no such file or directory
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/dockershim /var/run/kubernetes /var/lib/cni]
The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system’s IPVS tables.
The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
ERRO[17:00:40 CST] Failed to add worker to cluster: Failed to exec command: sudo -E /bin/sh -c "/usr/local/bin/kubeadm join lb.kubesphere.local:6443 --token 6qltvp.ow7nz54q69jkx91w --discovery-token-ca-cert-hash sha256:bed7462d58b2eb59eb9c9180f013f17ae8db7243b271779658227e761c7a7cdf"
W0301 17:00:39.742310 20185 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR Swap]: running with swap on is not supported. Please disable swap
[preflight] If you know what you are doing, you can make a check non-fatal with --ignore-preflight-errors=...
To see the stack trace of this error execute with --v=5 or higher: Process exited with status 1 node=192.168.1.250
WARN[17:00:40 CST] Task failed …
WARN[17:00:40 CST] error: interrupted by error
Error: Failed to join node: interrupted by error
Usage:
  kk add nodes [flags]

Flags:
  -f, --filename string    Path to a configuration file
  -h, --help               help for nodes
      --skip-pull-images   Skip pre pull images
  -y, --yes                Skip pre-check of the installation

Global Flags:
      --debug   Print detailed information (default true)
Failed to join node: interrupted by error
Judging from the error, swap had not been disabled, but I had already disabled it; the exact commands I used are shown below.
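For reference, this is how swap was turned off (standard Ubuntu steps; the sed pattern is just one common way to comment out the fstab entry):

swapoff -a                               # turn swap off for the running system
sed -i '/\sswap\s/ s/^/#/' /etc/fstab    # comment out the swap mount so it stays off after reboot
free -m | grep -i swap                   # Swap total should now read 0

I then rebooted the machine and re-ran ./kk add nodes -f config-sample.yaml. The log from that second attempt: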
+-----------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+
| name      | sudo | curl | openssl | ebtables | socat | ipset | conntrack | docker | nfs client | ceph client | glusterfs client | time         |
+-----------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+
| ks-master | y    | y    | y       | y        | y     | y     | y         | y      | y          |             | y                | CST 17:39:01 |
| ks-node1  | y    | y    | y       | y        | y     | y     | y         | y      | y          |             | y                | CST 17:39:01 |
| ubuntu250 | y    | y    | y       | y        | y     | y     | y         | y      | y          |             | y                | CST 17:39:01 |
+-----------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+
This is a simple check of your environment.
Before installation, you should ensure that your machines meet all requirements specified at
https://github.com/kubesphere/kubekey#requirements-and-recommendations
Continue this installation? [yes/no]: yes
INFO[17:39:04 CST] Downloading Installation Files
INFO[17:39:04 CST] Downloading kubeadm …
INFO[17:39:04 CST] Downloading kubelet …
INFO[17:39:05 CST] Downloading kubectl …
INFO[17:39:05 CST] Downloading kubecni …
INFO[17:39:06 CST] Downloading helm …
INFO[17:39:06 CST] Downloading helm2 …
INFO[17:39:06 CST] Configurating operating system …
[ubuntu250 192.168.1.250] MSG:
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
[ks-master 192.168.1.104] MSG:
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
FATA[19:39:11 CST] Execute task timeout, Timeout=120s
The run started at around half past five and reported a task timeout at around half past seven, and every subsequent run of the add-nodes command ends the same way. Can anyone help me solve this? (A manual probe of the node that went silent is sketched below.)
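One observation: ks-node1 (192.168.1.106) is the only host that never echoed its sysctl values before the timeout, so the "Configurating operating system" task appears to stall on that node. A minimal probe from the taskbox, assuming root SSH access as declared in config-sample.yaml:

ssh -o ConnectTimeout=5 root@192.168.1.106 'hostname; sysctl net.ipv4.ip_forward'
# If this hangs or times out, the problem is the SSH/network path to ks-node1 rather than KubeKey itself.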
The configuration file is attached below for reference:
apiVersion: kubekey.kubesphere.io/v1alpha1
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: ks-master, address: 192.168.1.104, internalAddress: 192.168.1.104, user: root, password: aa}
  - {name: ks-node1, address: 192.168.1.106, internalAddress: 192.168.1.106, user: root, password: aa}
  - {name: ubuntu250, address: 192.168.1.250, internalAddress: 192.168.1.250, user: root, password: aa}
  roleGroups:
    etcd:
    - ks-master
    master:
    - ks-master
    worker:
    - ks-master
    - ks-node1
    - ubuntu250
  controlPlaneEndpoint:
    domain: lb.kubesphere.local
    address: ""
    port: "6443"
  kubernetes:
    version: v1.17.9
    imageRepo: kubesphere
    clusterName: cluster.local
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
  registry:
    registryMirrors: []
    insecureRegistries: []
  addons: []

---
apiVersion: v1
data:
  ks-config.yaml: |
    ---
    local_registry: ""
    persistence:
      storageClass: ""
    etcd:
      monitoring: true
      endpointIps: localhost
      port: 2379
      tlsEnable: true
    common:
      mysqlVolumeSize: 20Gi
      minioVolumeSize: 20Gi
      etcdVolumeSize: 20Gi
      openldapVolumeSize: 2Gi
      redisVolumSize: 2Gi
    metrics_server:
      enabled: false
    console:
      enableMultiLogin: False  # enable/disable multi login
      port: 30880
    monitoring:
      prometheusReplicas: 1
      prometheusMemoryRequest: 400Mi
      prometheusVolumeSize: 20Gi
      grafana:
        enabled: false
    logging:
      enabled: false
      elasticsearchMasterReplicas: 1
      elasticsearchDataReplicas: 1
      logsidecarReplicas: 2
      elasticsearchMasterVolumeSize: 4Gi
      elasticsearchDataVolumeSize: 20Gi
      logMaxAge: 7
      elkPrefix: logstash
      containersLogMountedPath: ""
      kibana:
        enabled: false
    openpitrix:
      enabled: false
    devops:
      enabled: false
      jenkinsMemoryLim: 2Gi
      jenkinsMemoryReq: 1500Mi
      jenkinsVolumeSize: 8Gi
      jenkinsJavaOpts_Xms: 512m
      jenkinsJavaOpts_Xmx: 512m
      jenkinsJavaOpts_MaxRAM: 2g
      sonarqube:
        enabled: false
        postgresqlVolumeSize: 8Gi
    servicemesh:
      enabled: false
    notification:
      enabled: false
    alerting:
      enabled: false
kind: ConfigMap
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v2.1.1
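Finally, in case anyone wants to reproduce my checks: a quick loop to confirm every host in the hosts list is reachable and swap-free before retrying (addresses copied from the config above; the 5-second timeout is an arbitrary choice):

for ip in 192.168.1.104 192.168.1.106 192.168.1.250; do
  echo "== $ip =="
  ssh -o ConnectTimeout=5 root@$ip 'free -m | grep -i swap'   # every node should report Swap: 0
done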