When creating a deployment question, please follow the template below. The more information you provide, the easier it is to get a timely answer.
A question you spent only one minute writing cannot expect someone else to spend half an hour answering.
Before posting, click the Preview (👀) button next to "Post Topic" to make sure the post is formatted correctly.
Operating system
e.g.: VM, CentOS 7.9, 8C/8G

Kubernetes version
e.g.: v1.23.10, 1 master + 2 nodes

Container runtime
e.g.: docker, version 20.10.21

KubeSphere version
e.g.: v3.3.1, installed online; cluster of 1 master + 2 nodes created with kk

What is the problem
No errors are reported, but the node is not added to the cluster.
```yaml
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: master, address: 172.16.9.114, internalAddress: 172.16.9.114, user: root, password: "root00–"}
  #- {name: master, address: 172.16.9.117, internalAddress: 172.16.9.114, user: root, password: "root00–"}
  #- {name: master, address: 172.16.9.118, internalAddress: 172.16.9.114, user: root, password: "root00–"}
  - {name: node1, address: 172.16.9.115, internalAddress: 172.16.9.115, user: root, password: "root00–"}
  - {name: node2, address: 172.16.9.116, internalAddress: 172.16.9.116, user: root, password: "root00–"}
  - {name: node3, address: 172.16.9.119, internalAddress: 172.16.9.116, user: root, password: "root00–"}
  roleGroups:
    etcd:
    - master
    control-plane:
    - master
    worker:
    - node1
    - node2
    - node3
  controlPlaneEndpoint:
    ## Internal loadbalancer for apiservers
    # internalLoadbalancer: haproxy
    domain: lb.kubesphere.local
    address: ""
    port: 6443
  kubernetes:
    version: v1.23.10
    clusterName: cluster.local
    autoRenewCerts: true
    containerManager: docker
  etcd:
```
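As a quick sanity check on the hosts list above: node3's `internalAddress` (172.16.9.116) repeats node2's instead of matching node3's own address 172.16.9.119. A minimal sketch in Python illustrates this (the host data is copied verbatim from the config; the duplicate check is my own illustration, not something kk performs):

```python
# Hosts copied from the config above (passwords omitted).
hosts = [
    {"name": "master", "address": "172.16.9.114", "internalAddress": "172.16.9.114"},
    {"name": "node1",  "address": "172.16.9.115", "internalAddress": "172.16.9.115"},
    {"name": "node2",  "address": "172.16.9.116", "internalAddress": "172.16.9.116"},
    {"name": "node3",  "address": "172.16.9.119", "internalAddress": "172.16.9.116"},
]

def find_config_issues(hosts):
    """Flag hosts whose internalAddress duplicates another host's,
    or differs from their own address (common copy-paste mistakes)."""
    issues = []
    seen = {}
    for h in hosts:
        ia = h["internalAddress"]
        if ia in seen:
            issues.append(f"{h['name']}: internalAddress {ia} already used by {seen[ia]}")
        else:
            seen[ia] = h["name"]
        if h["address"] != ia:
            issues.append(f"{h['name']}: address {h['address']} != internalAddress {ia}")
    return issues

for issue in find_config_issues(hosts):
    print(issue)
```

This flags node3 on both counts; a mismatch like this is accepted silently by the config parser but can keep a node from being joined.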
Continue this installation? [yes/no]: yes
14:33:04 CST success: [LocalHost]
14:33:04 CST [NodeBinariesModule] Download installation binaries
14:33:04 CST message: [localhost]
downloading amd64 kubeadm v1.23.10 …
14:33:04 CST message: [localhost]
kubeadm is existed
14:33:04 CST message: [localhost]
downloading amd64 kubelet v1.23.10 …
14:33:05 CST message: [localhost]
kubelet is existed
14:33:05 CST message: [localhost]
downloading amd64 kubectl v1.23.10 …
14:33:06 CST message: [localhost]
kubectl is existed
14:33:06 CST message: [localhost]
downloading amd64 helm v3.9.0 …
14:33:06 CST message: [localhost]
helm is existed
14:33:06 CST message: [localhost]
downloading amd64 kubecni v0.9.1 …
14:33:06 CST message: [localhost]
kubecni is existed
14:33:06 CST message: [localhost]
downloading amd64 crictl v1.24.0 …
14:33:06 CST message: [localhost]
crictl is existed
14:33:06 CST message: [localhost]
downloading amd64 etcd v3.4.13 …
14:33:06 CST message: [localhost]
etcd is existed
14:33:06 CST message: [localhost]
downloading amd64 docker 20.10.8 …
14:33:07 CST message: [localhost]
docker is existed
14:33:07 CST success: [LocalHost]
14:33:07 CST [ConfigureOSModule] Get OS release
14:33:07 CST success: [master]
14:33:07 CST success: [node3]
14:33:07 CST success: [node1]
14:33:07 CST success: [node2]
14:33:07 CST [ConfigureOSModule] Prepare to init OS
14:33:08 CST success: [node3]
14:33:08 CST success: [master]
14:33:08 CST success: [node1]
14:33:08 CST success: [node2]
14:33:08 CST [ConfigureOSModule] Generate init os script
14:33:09 CST success: [master]
14:33:09 CST success: [node3]
14:33:09 CST success: [node1]
14:33:09 CST success: [node2]
14:33:09 CST [ConfigureOSModule] Exec init os script
14:33:09 CST stdout: [node3]
setenforce: SELinux is disabled
Disabled
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
vm.max_map_count = 262144
vm.swappiness = 1
fs.inotify.max_user_instances = 524288
kernel.pid_max = 65535
14:33:10 CST stdout: [master]
setenforce: SELinux is disabled
Disabled
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
vm.max_map_count = 262144
vm.swappiness = 1
fs.inotify.max_user_instances = 524288
kernel.pid_max = 65535
14:33:10 CST stdout: [node2]
setenforce: SELinux is disabled
Disabled
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
vm.max_map_count = 262144
vm.swappiness = 1
fs.inotify.max_user_instances = 524288
kernel.pid_max = 65535
14:33:10 CST stdout: [node1]
setenforce: SELinux is disabled
Disabled
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
vm.max_map_count = 262144
vm.swappiness = 1
fs.inotify.max_user_instances = 524288
kernel.pid_max = 65535
14:33:10 CST success: [node3]
14:33:10 CST success: [master]
14:33:10 CST success: [node2]
14:33:10 CST success: [node1]
14:33:10 CST [ConfigureOSModule] configure the ntp server for each node
14:33:10 CST skipped: [node3]
14:33:10 CST skipped: [node2]
14:33:10 CST skipped: [master]
14:33:10 CST skipped: [node1]
14:33:10 CST [KubernetesStatusModule] Get kubernetes cluster status
14:33:11 CST stdout: [master]
v1.23.10
14:33:12 CST stdout: [master]
master v1.23.10 [map[address:172.16.9.114 type:InternalIP] map[address:master type:Hostname]]
node1 v1.23.10 [map[address:172.16.9.115 type:InternalIP] map[address:node1 type:Hostname]]
node2 v1.23.10 [map[address:172.16.9.116 type:InternalIP] map[address:node2 type:Hostname]]
14:33:14 CST stdout: [master]
I1117 14:33:13.681684 27775 version.go:255] remote version is much newer: v1.25.4; falling back to: stable-1.23
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
33fb7a01b307ba912a9bc433f11bd33fa75585eef22a6b2eef9cca641a4cd180
14:33:14 CST stdout: [master]
secret/kubeadm-certs patched
14:33:14 CST stdout: [master]
secret/kubeadm-certs patched
14:33:14 CST stdout: [master]
secret/kubeadm-certs patched
14:33:14 CST stdout: [master]
wokp2y.8ljakxs3wb0pv6ts
14:33:14 CST success: [master]
14:33:14 CST [InstallContainerModule] Sync docker binaries
14:33:14 CST skipped: [node3]
14:33:14 CST skipped: [node1]
14:33:14 CST skipped: [node2]
14:33:14 CST skipped: [master]
14:33:14 CST [InstallContainerModule] Generate docker service
14:33:14 CST skipped: [node3]
14:33:14 CST skipped: [master]
14:33:14 CST skipped: [node1]
14:33:14 CST skipped: [node2]
14:33:14 CST [InstallContainerModule] Generate docker config
14:33:14 CST skipped: [node3]
14:33:14 CST skipped: [master]
14:33:14 CST skipped: [node1]
14:33:14 CST skipped: [node2]
14:33:14 CST [InstallContainerModule] Enable docker
14:33:14 CST skipped: [node3]
14:33:14 CST skipped: [master]
14:33:14 CST skipped: [node1]
14:33:14 CST skipped: [node2]
14:33:14 CST [InstallContainerModule] Add auths to container runtime
14:33:14 CST skipped: [node3]
14:33:14 CST skipped: [master]
14:33:14 CST skipped: [node1]
14:33:14 CST skipped: [node2]
14:33:14 CST [PullModule] Start to pull images on all nodes
14:33:14 CST message: [node2]
downloading image: kubesphere/pause:3.6
14:33:14 CST message: [master]
downloading image: kubesphere/pause:3.6
14:33:14 CST message: [node1]
downloading image: kubesphere/pause:3.6
14:33:14 CST message: [node3]
downloading image: kubesphere/pause:3.6
14:33:15 CST message: [master]
downloading image: kubesphere/kube-apiserver:v1.23.10
14:33:15 CST message: [node2]
downloading image: kubesphere/kube-proxy:v1.23.10
14:33:17 CST message: [master]
downloading image: kubesphere/kube-controller-manager:v1.23.10
14:33:17 CST message: [master]
downloading image: kubesphere/kube-scheduler:v1.23.10
14:33:17 CST message: [node1]
downloading image: kubesphere/kube-proxy:v1.23.10
14:33:18 CST message: [node3]
downloading image: kubesphere/kube-proxy:v1.23.10
14:33:18 CST message: [node2]
downloading image: coredns/coredns:1.8.6
14:33:20 CST message: [node1]
downloading image: coredns/coredns:1.8.6
14:33:20 CST message: [node2]
downloading image: kubesphere/k8s-dns-node-cache:1.15.12
14:33:20 CST message: [master]
downloading image: kubesphere/kube-proxy:v1.23.10
14:33:20 CST message: [node3]
downloading image: coredns/coredns:1.8.6
14:33:22 CST message: [node1]
downloading image: kubesphere/k8s-dns-node-cache:1.15.12
14:33:22 CST message: [node2]
downloading image: calico/kube-controllers:v3.23.2
14:33:22 CST message: [node3]
downloading image: kubesphere/k8s-dns-node-cache:1.15.12
14:33:22 CST message: [master]
downloading image: coredns/coredns:1.8.6
14:33:23 CST message: [node2]
downloading image: calico/cni:v3.23.2
14:33:23 CST message: [master]
downloading image: kubesphere/k8s-dns-node-cache:1.15.12
14:33:23 CST message: [node1]
downloading image: calico/kube-controllers:v3.23.2
14:33:23 CST message: [node3]
downloading image: calico/kube-controllers:v3.23.2
14:33:24 CST message: [master]
downloading image: calico/kube-controllers:v3.23.2
14:33:24 CST message: [node3]
downloading image: calico/cni:v3.23.2
14:33:24 CST message: [node1]
downloading image: calico/cni:v3.23.2
14:33:25 CST message: [node2]
downloading image: calico/node:v3.23.2
14:33:27 CST message: [master]
downloading image: calico/cni:v3.23.2
14:33:28 CST message: [node2]
downloading image: calico/pod2daemon-flexvol:v3.23.2
14:33:28 CST message: [node1]
downloading image: calico/node:v3.23.2
14:33:29 CST message: [master]
downloading image: calico/node:v3.23.2
14:33:29 CST message: [node3]
downloading image: calico/node:v3.23.2
14:33:30 CST message: [node1]
downloading image: calico/pod2daemon-flexvol:v3.23.2
14:33:30 CST message: [node3]
downloading image: calico/pod2daemon-flexvol:v3.23.2
14:33:30 CST message: [master]
downloading image: calico/pod2daemon-flexvol:v3.23.2
14:33:31 CST success: [node2]
14:33:31 CST success: [node1]
14:33:31 CST success: [node3]
14:33:31 CST success: [master]
14:33:31 CST [ETCDPreCheckModule] Get etcd status
14:33:31 CST stdout: [master]
ETCD_NAME=etcd-master
14:33:31 CST success: [master]
14:33:31 CST [CertsModule] Fetch etcd certs
14:33:31 CST success: [master]
14:33:31 CST [CertsModule] Generate etcd Certs
[certs] Using existing ca certificate authority
[certs] Using existing admin-master certificate and key on disk
[certs] Using existing member-master certificate and key on disk
[certs] Using existing node-master certificate and key on disk
14:33:31 CST success: [LocalHost]
14:33:31 CST [CertsModule] Synchronize certs file
14:33:32 CST success: [master]
14:33:32 CST [CertsModule] Synchronize certs file to master
14:33:32 CST skipped: [master]
14:33:32 CST [InstallETCDBinaryModule] Install etcd using binary
14:33:33 CST success: [master]
14:33:33 CST [InstallETCDBinaryModule] Generate etcd service
14:33:33 CST success: [master]
14:33:33 CST [InstallETCDBinaryModule] Generate access address
14:33:33 CST success: [master]
14:33:33 CST [ETCDConfigureModule] Health check on exist etcd
14:33:33 CST success: [master]
14:33:33 CST [ETCDConfigureModule] Generate etcd.env config on new etcd
14:33:33 CST skipped: [master]
14:33:33 CST [ETCDConfigureModule] Join etcd member
14:33:33 CST skipped: [master]
14:33:33 CST [ETCDConfigureModule] Restart etcd
14:33:33 CST skipped: [master]
14:33:33 CST [ETCDConfigureModule] Health check on new etcd
14:33:33 CST skipped: [master]
14:33:33 CST [ETCDConfigureModule] Check etcd member
14:33:33 CST skipped: [master]
14:33:33 CST [ETCDConfigureModule] Refresh etcd.env config on all etcd
14:33:34 CST success: [master]
14:33:34 CST [ETCDConfigureModule] Health check on all etcd
14:33:34 CST success: [master]
14:33:34 CST [ETCDBackupModule] Backup etcd data regularly
14:33:34 CST success: [master]
14:33:34 CST [ETCDBackupModule] Generate backup ETCD service
14:33:34 CST success: [master]
14:33:34 CST [ETCDBackupModule] Generate backup ETCD timer
14:33:34 CST success: [master]
14:33:34 CST [ETCDBackupModule] Enable backup etcd service
14:33:35 CST success: [master]
14:33:35 CST [InstallKubeBinariesModule] Synchronize kubernetes binaries
14:33:35 CST skipped: [node3]
14:33:35 CST skipped: [node2]
14:33:35 CST skipped: [master]
14:33:35 CST skipped: [node1]
14:33:35 CST [InstallKubeBinariesModule] Synchronize kubelet
14:33:35 CST skipped: [node3]
14:33:35 CST skipped: [node2]
14:33:35 CST skipped: [master]
14:33:35 CST skipped: [node1]
14:33:35 CST [InstallKubeBinariesModule] Generate kubelet service
14:33:35 CST skipped: [node3]
14:33:35 CST skipped: [master]
14:33:35 CST skipped: [node1]
14:33:35 CST skipped: [node2]
14:33:35 CST [InstallKubeBinariesModule] Enable kubelet service
14:33:35 CST skipped: [node3]
14:33:35 CST skipped: [master]
14:33:35 CST skipped: [node1]
14:33:35 CST skipped: [node2]
14:33:35 CST [InstallKubeBinariesModule] Generate kubelet env
14:33:35 CST skipped: [node3]
14:33:35 CST skipped: [master]
14:33:35 CST skipped: [node1]
14:33:35 CST skipped: [node2]
14:33:35 CST [JoinNodesModule] Generate kubeadm config
14:33:35 CST skipped: [node3]
14:33:35 CST skipped: [master]
14:33:35 CST skipped: [node1]
14:33:35 CST skipped: [node2]
14:33:35 CST [JoinNodesModule] Join control-plane node
14:33:35 CST skipped: [master]
14:33:35 CST [JoinNodesModule] Join worker node
14:33:35 CST skipped: [node3]
14:33:35 CST skipped: [node1]
14:33:35 CST skipped: [node2]
14:33:35 CST [JoinNodesModule] Copy admin.conf to ~/.kube/config
14:33:35 CST skipped: [master]
14:33:35 CST [JoinNodesModule] Remove master taint
14:33:35 CST skipped: [master]
14:33:35 CST [JoinNodesModule] Add worker label to master
14:33:35 CST skipped: [master]
14:33:35 CST [JoinNodesModule] Synchronize kube config to worker
14:33:35 CST skipped: [node3]
14:33:35 CST skipped: [node1]
14:33:35 CST skipped: [node2]
14:33:35 CST [JoinNodesModule] Add worker label to worker
14:33:35 CST skipped: [node3]
14:33:35 CST skipped: [node1]
14:33:35 CST skipped: [node2]
14:33:35 CST [ConfigureKubernetesModule] Configure kubernetes
14:33:35 CST success: [node3]
14:33:35 CST success: [master]
14:33:35 CST success: [node1]
14:33:35 CST success: [node2]
14:33:35 CST [ChownModule] Chown user $HOME/.kube dir
14:33:35 CST success: [master]
14:33:35 CST success: [node3]
14:33:35 CST success: [node1]
14:33:35 CST success: [node2]
14:33:35 CST [AutoRenewCertsModule] Generate k8s certs renew script
14:33:35 CST success: [master]
14:33:35 CST [AutoRenewCertsModule] Generate k8s certs renew service
14:33:35 CST success: [master]
14:33:35 CST [AutoRenewCertsModule] Generate k8s certs renew timer
14:33:35 CST success: [master]
14:33:35 CST [AutoRenewCertsModule] Enable k8s certs renew service
14:33:35 CST success: [master]
14:33:35 CST Pipeline[AddNodesPipeline] execute successfully
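Note that the `[KubernetesStatusModule] Get kubernetes cluster status` stdout above already lists only three registered nodes. A small sketch parsing those lines (copied from the log; the parsing logic is my own) confirms node3 never appears:

```python
# Node lines copied from the [KubernetesStatusModule] stdout above.
status_stdout = """\
master v1.23.10 [map[address:172.16.9.114 type:InternalIP] map[address:master type:Hostname]]
node1 v1.23.10 [map[address:172.16.9.115 type:InternalIP] map[address:node1 type:Hostname]]
node2 v1.23.10 [map[address:172.16.9.116 type:InternalIP] map[address:node2 type:Hostname]]
"""

# The first whitespace-separated field of each line is the registered node name.
registered = {line.split()[0] for line in status_stdout.splitlines() if line.strip()}
expected = {"master", "node1", "node2", "node3"}

missing = expected - registered
print("registered:", sorted(registered))
print("missing:", sorted(missing))  # node3 is absent from the cluster
```

So the pipeline reports success while node3 is still missing, which is consistent with the suspect `internalAddress` in the hosts section of the config.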