• Installation & Deployment
  • Installation fails with "Failed to deploy kubesphere: KubeSphere startup timeout."

When creating a deployment issue, please follow the template below. The more information you provide, the easier it is to get a timely answer.
If you spend only a minute writing your question, you cannot expect others to spend half an hour answering it.
Before posting, click the Preview (👀) button to the right of the Publish Topic button to make sure the post is formatted correctly.

Operating system information
Private cloud hosts

CentOS 7.6, 4 cores, 16 GB RAM

Kubernetes version information
v1.21.5, multi-node: 3 masters, 9 workers

Container runtime
Docker information:
Client:
 Version:           20.10.8
 API version:       1.41
 Go version:        go1.16.6
 Git commit:        3967b7d
 Built:             Fri Jul 30 19:50:40 2021
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.8
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.16.6
  Git commit:       75249d8
  Built:            Fri Jul 30 19:55:09 2021
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          v1.4.9
  GitCommit:        e25210fe30a0a703442421b0f60afac609f950a3
 runc:
  Version:          1.0.1
  GitCommit:        v1.0.1-0-g4144b638
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

KubeSphere version information
Installed with KubeKey. KubeSphere version: v3.2.1
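
(For reference, a KubeKey multi-node installation like this one is typically started with a command along these lines; config-sample.yaml is only a placeholder for the actual configuration file:)

  ./kk create cluster -f config-sample.yaml --with-kubesphere v3.2.1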

What is the problem
Continue this installation? [yes/no]: yes

INFO[11:18:51 CST] Downloading Installation Files

INFO[11:18:51 CST] Downloading kubeadm …

INFO[11:18:51 CST] Downloading kubelet …

INFO[11:18:52 CST] Downloading kubectl …

INFO[11:18:52 CST] Downloading helm …

INFO[11:18:52 CST] Downloading kubecni …

INFO[11:18:53 CST] Downloading etcd …

INFO[11:18:53 CST] Downloading docker …

INFO[11:18:54 CST] Downloading crictl …

INFO[11:18:54 CST] Configuring operating system …

[k8s-prod-node2 200.1.129.73] MSG:

vm.swappiness = 1

net.ipv4.neigh.default.gc_stale_time = 120

net.ipv4.conf.all.rp_filter = 0

net.ipv4.conf.default.rp_filter = 0

net.ipv4.conf.default.arp_announce = 2

net.ipv4.conf.lo.arp_announce = 2

net.ipv4.conf.all.arp_announce = 2

net.ipv4.tcp_max_tw_buckets = 5000

net.ipv4.tcp_syncookies = 1

net.ipv4.tcp_max_syn_backlog = 1024

net.ipv4.tcp_synack_retries = 2

net.ipv6.conf.all.disable_ipv6 = 1

net.ipv6.conf.default.disable_ipv6 = 1

net.ipv6.conf.lo.disable_ipv6 = 1

kernel.sysrq = 1

net.ipv4.ip_forward = 1

net.bridge.bridge-nf-call-arptables = 1

net.bridge.bridge-nf-call-ip6tables = 1

net.bridge.bridge-nf-call-iptables = 1

net.ipv4.ip_local_reserved_ports = 30000-32767

vm.max_map_count = 262144

fs.inotify.max_user_instances = 524288

[k8s-prod-node4 200.1.129.75] MSG:

vm.swappiness = 1

net.ipv4.neigh.default.gc_stale_time = 120

net.ipv4.conf.all.rp_filter = 0

net.ipv4.conf.default.rp_filter = 0

net.ipv4.conf.default.arp_announce = 2

net.ipv4.conf.lo.arp_announce = 2

net.ipv4.conf.all.arp_announce = 2

net.ipv4.tcp_max_tw_buckets = 5000

net.ipv4.tcp_syncookies = 1

net.ipv4.tcp_max_syn_backlog = 1024

net.ipv4.tcp_synack_retries = 2

net.ipv6.conf.all.disable_ipv6 = 1

net.ipv6.conf.default.disable_ipv6 = 1

net.ipv6.conf.lo.disable_ipv6 = 1

kernel.sysrq = 1

net.ipv4.ip_forward = 1

net.bridge.bridge-nf-call-arptables = 1

net.bridge.bridge-nf-call-ip6tables = 1

net.bridge.bridge-nf-call-iptables = 1

net.ipv4.ip_local_reserved_ports = 30000-32767

vm.max_map_count = 262144

fs.inotify.max_user_instances = 524288

[k8s-prod-node5 200.1.129.76] MSG:

vm.swappiness = 1

net.ipv4.neigh.default.gc_stale_time = 120

net.ipv4.conf.all.rp_filter = 0

net.ipv4.conf.default.rp_filter = 0

net.ipv4.conf.default.arp_announce = 2

net.ipv4.conf.lo.arp_announce = 2

net.ipv4.conf.all.arp_announce = 2

net.ipv4.tcp_max_tw_buckets = 5000

net.ipv4.tcp_syncookies = 1

net.ipv4.tcp_max_syn_backlog = 1024

net.ipv4.tcp_synack_retries = 2

net.ipv6.conf.all.disable_ipv6 = 1

net.ipv6.conf.default.disable_ipv6 = 1

net.ipv6.conf.lo.disable_ipv6 = 1

kernel.sysrq = 1

net.ipv4.ip_forward = 1

net.bridge.bridge-nf-call-arptables = 1

net.bridge.bridge-nf-call-ip6tables = 1

net.bridge.bridge-nf-call-iptables = 1

net.ipv4.ip_local_reserved_ports = 30000-32767

vm.max_map_count = 262144

fs.inotify.max_user_instances = 524288

[k8s-prod-node1 200.1.129.72] MSG:

vm.swappiness = 1

net.ipv4.neigh.default.gc_stale_time = 120

net.ipv4.conf.all.rp_filter = 0

net.ipv4.conf.default.rp_filter = 0

net.ipv4.conf.default.arp_announce = 2

net.ipv4.conf.lo.arp_announce = 2

net.ipv4.conf.all.arp_announce = 2

net.ipv4.tcp_max_tw_buckets = 5000

net.ipv4.tcp_syncookies = 1

net.ipv4.tcp_max_syn_backlog = 1024

net.ipv4.tcp_synack_retries = 2

net.ipv6.conf.all.disable_ipv6 = 1

net.ipv6.conf.default.disable_ipv6 = 1

net.ipv6.conf.lo.disable_ipv6 = 1

kernel.sysrq = 1

net.ipv4.ip_forward = 1

net.bridge.bridge-nf-call-arptables = 1

net.bridge.bridge-nf-call-ip6tables = 1

net.bridge.bridge-nf-call-iptables = 1

net.ipv4.ip_local_reserved_ports = 30000-32767

vm.max_map_count = 262144

fs.inotify.max_user_instances = 524288

[k8s-prod-node6 200.1.129.77] MSG:

vm.swappiness = 1

net.ipv4.neigh.default.gc_stale_time = 120

net.ipv4.conf.all.rp_filter = 0

net.ipv4.conf.default.rp_filter = 0

net.ipv4.conf.default.arp_announce = 2

net.ipv4.conf.lo.arp_announce = 2

net.ipv4.conf.all.arp_announce = 2

net.ipv4.tcp_max_tw_buckets = 5000

net.ipv4.tcp_syncookies = 1

net.ipv4.tcp_max_syn_backlog = 1024

net.ipv4.tcp_synack_retries = 2

net.ipv6.conf.all.disable_ipv6 = 1

net.ipv6.conf.default.disable_ipv6 = 1

net.ipv6.conf.lo.disable_ipv6 = 1

kernel.sysrq = 1

net.ipv4.ip_forward = 1

net.bridge.bridge-nf-call-arptables = 1

net.bridge.bridge-nf-call-ip6tables = 1

net.bridge.bridge-nf-call-iptables = 1

net.ipv4.ip_local_reserved_ports = 30000-32767

vm.max_map_count = 262144

fs.inotify.max_user_instances = 524288

[k8s-prod-node3 200.1.129.74] MSG:

vm.swappiness = 1

net.ipv4.neigh.default.gc_stale_time = 120

net.ipv4.conf.all.rp_filter = 0

net.ipv4.conf.default.rp_filter = 0

net.ipv4.conf.default.arp_announce = 2

net.ipv4.conf.lo.arp_announce = 2

net.ipv4.conf.all.arp_announce = 2

net.ipv4.tcp_max_tw_buckets = 5000

net.ipv4.tcp_syncookies = 1

net.ipv4.tcp_max_syn_backlog = 1024

net.ipv4.tcp_synack_retries = 2

net.ipv6.conf.all.disable_ipv6 = 1

net.ipv6.conf.default.disable_ipv6 = 1

net.ipv6.conf.lo.disable_ipv6 = 1

kernel.sysrq = 1

net.ipv4.ip_forward = 1

net.bridge.bridge-nf-call-arptables = 1

net.bridge.bridge-nf-call-ip6tables = 1

net.bridge.bridge-nf-call-iptables = 1

net.ipv4.ip_local_reserved_ports = 30000-32767

vm.max_map_count = 262144

fs.inotify.max_user_instances = 524288

[k8s-prod-master3 200.1.129.71] MSG:

vm.swappiness = 1

net.ipv4.neigh.default.gc_stale_time = 120

net.ipv4.conf.all.rp_filter = 0

net.ipv4.conf.default.rp_filter = 0

net.ipv4.conf.default.arp_announce = 2

net.ipv4.conf.lo.arp_announce = 2

net.ipv4.conf.all.arp_announce = 2

net.ipv4.tcp_max_tw_buckets = 5000

net.ipv4.tcp_syncookies = 1

net.ipv4.tcp_max_syn_backlog = 1024

net.ipv4.tcp_synack_retries = 2

net.ipv6.conf.all.disable_ipv6 = 1

net.ipv6.conf.default.disable_ipv6 = 1

net.ipv6.conf.lo.disable_ipv6 = 1

kernel.sysrq = 1

net.ipv4.ip_forward = 1

net.bridge.bridge-nf-call-arptables = 1

net.bridge.bridge-nf-call-ip6tables = 1

net.bridge.bridge-nf-call-iptables = 1

net.ipv4.ip_local_reserved_ports = 30000-32767

vm.max_map_count = 262144

fs.inotify.max_user_instances = 524288

[k8s-prod-master2 200.1.129.70] MSG:

vm.swappiness = 1

net.ipv4.neigh.default.gc_stale_time = 120

net.ipv4.conf.all.rp_filter = 0

net.ipv4.conf.default.rp_filter = 0

net.ipv4.conf.default.arp_announce = 2

net.ipv4.conf.lo.arp_announce = 2

net.ipv4.conf.all.arp_announce = 2

net.ipv4.tcp_max_tw_buckets = 5000

net.ipv4.tcp_syncookies = 1

net.ipv4.tcp_max_syn_backlog = 1024

net.ipv4.tcp_synack_retries = 2

net.ipv6.conf.all.disable_ipv6 = 1

net.ipv6.conf.default.disable_ipv6 = 1

net.ipv6.conf.lo.disable_ipv6 = 1

kernel.sysrq = 1

net.ipv4.ip_forward = 1

net.bridge.bridge-nf-call-arptables = 1

net.bridge.bridge-nf-call-ip6tables = 1

net.bridge.bridge-nf-call-iptables = 1

net.ipv4.ip_local_reserved_ports = 30000-32767

vm.max_map_count = 262144

fs.inotify.max_user_instances = 524288

[k8s-prod-node7 200.1.129.78] MSG:

vm.swappiness = 1

net.ipv4.neigh.default.gc_stale_time = 120

net.ipv4.conf.all.rp_filter = 0

net.ipv4.conf.default.rp_filter = 0

net.ipv4.conf.default.arp_announce = 2

net.ipv4.conf.lo.arp_announce = 2

net.ipv4.conf.all.arp_announce = 2

net.ipv4.tcp_max_tw_buckets = 5000

net.ipv4.tcp_syncookies = 1

net.ipv4.tcp_max_syn_backlog = 1024

net.ipv4.tcp_synack_retries = 2

net.ipv6.conf.all.disable_ipv6 = 1

net.ipv6.conf.default.disable_ipv6 = 1

net.ipv6.conf.lo.disable_ipv6 = 1

kernel.sysrq = 1

net.ipv4.ip_forward = 1

net.bridge.bridge-nf-call-arptables = 1

net.bridge.bridge-nf-call-ip6tables = 1

net.bridge.bridge-nf-call-iptables = 1

net.ipv4.ip_local_reserved_ports = 30000-32767

vm.max_map_count = 262144

fs.inotify.max_user_instances = 524288

[k8s-prod-master1 200.1.129.69] MSG:

vm.swappiness = 1

net.ipv4.neigh.default.gc_stale_time = 120

net.ipv4.conf.all.rp_filter = 0

net.ipv4.conf.default.rp_filter = 0

net.ipv4.conf.default.arp_announce = 2

net.ipv4.conf.lo.arp_announce = 2

net.ipv4.conf.all.arp_announce = 2

net.ipv4.tcp_max_tw_buckets = 5000

net.ipv4.tcp_syncookies = 1

net.ipv4.tcp_max_syn_backlog = 1024

net.ipv4.tcp_synack_retries = 2

net.ipv6.conf.all.disable_ipv6 = 1

net.ipv6.conf.default.disable_ipv6 = 1

net.ipv6.conf.lo.disable_ipv6 = 1

kernel.sysrq = 1

net.ipv4.ip_forward = 1

net.bridge.bridge-nf-call-arptables = 1

net.bridge.bridge-nf-call-ip6tables = 1

net.bridge.bridge-nf-call-iptables = 1

net.ipv4.ip_local_reserved_ports = 30000-32767

vm.max_map_count = 262144

fs.inotify.max_user_instances = 524288

[k8s-prod-node9 200.1.129.80] MSG:

vm.swappiness = 1

net.ipv4.neigh.default.gc_stale_time = 120

net.ipv4.conf.all.rp_filter = 0

net.ipv4.conf.default.rp_filter = 0

net.ipv4.conf.default.arp_announce = 2

net.ipv4.conf.lo.arp_announce = 2

net.ipv4.conf.all.arp_announce = 2

net.ipv4.tcp_max_tw_buckets = 5000

net.ipv4.tcp_syncookies = 1

net.ipv4.tcp_max_syn_backlog = 1024

net.ipv4.tcp_synack_retries = 2

net.ipv6.conf.all.disable_ipv6 = 1

net.ipv6.conf.default.disable_ipv6 = 1

net.ipv6.conf.lo.disable_ipv6 = 1

kernel.sysrq = 1

net.ipv4.ip_forward = 1

net.bridge.bridge-nf-call-arptables = 1

net.bridge.bridge-nf-call-ip6tables = 1

net.bridge.bridge-nf-call-iptables = 1

net.ipv4.ip_local_reserved_ports = 30000-32767

vm.max_map_count = 262144

fs.inotify.max_user_instances = 524288

[k8s-prod-node8 200.1.129.79] MSG:

vm.swappiness = 1

net.ipv4.neigh.default.gc_stale_time = 120

net.ipv4.conf.all.rp_filter = 0

net.ipv4.conf.default.rp_filter = 0

net.ipv4.conf.default.arp_announce = 2

net.ipv4.conf.lo.arp_announce = 2

net.ipv4.conf.all.arp_announce = 2

net.ipv4.tcp_max_tw_buckets = 5000

net.ipv4.tcp_syncookies = 1

net.ipv4.tcp_max_syn_backlog = 1024

net.ipv4.tcp_synack_retries = 2

net.ipv6.conf.all.disable_ipv6 = 1

net.ipv6.conf.default.disable_ipv6 = 1

net.ipv6.conf.lo.disable_ipv6 = 1

kernel.sysrq = 1

net.ipv4.ip_forward = 1

net.bridge.bridge-nf-call-arptables = 1

net.bridge.bridge-nf-call-ip6tables = 1

net.bridge.bridge-nf-call-iptables = 1

net.ipv4.ip_local_reserved_ports = 30000-32767

vm.max_map_count = 262144

fs.inotify.max_user_instances = 524288

INFO[11:18:59 CST] Get cluster status

INFO[11:19:05 CST] Installing Container Runtime …

INFO[11:19:06 CST] Start to download images on all nodes

[k8s-prod-master2] Downloading image: kubesphere/pause:3.4.1

[k8s-prod-node7] Downloading image: kubesphere/pause:3.4.1

[k8s-prod-node2] Downloading image: kubesphere/pause:3.4.1

[k8s-prod-node3] Downloading image: kubesphere/pause:3.4.1

[k8s-prod-node4] Downloading image: kubesphere/pause:3.4.1

[k8s-prod-node1] Downloading image: kubesphere/pause:3.4.1

[k8s-prod-node5] Downloading image: kubesphere/pause:3.4.1

[k8s-prod-master3] Downloading image: kubesphere/pause:3.4.1

[k8s-prod-master1] Downloading image: kubesphere/pause:3.4.1

[k8s-prod-node6] Downloading image: kubesphere/pause:3.4.1

[k8s-prod-master1] Downloading image: kubesphere/kube-apiserver:v1.21.5

[k8s-prod-master2] Downloading image: kubesphere/kube-apiserver:v1.21.5

[k8s-prod-node7] Downloading image: kubesphere/kube-proxy:v1.21.5

[k8s-prod-node3] Downloading image: kubesphere/kube-proxy:v1.21.5

[k8s-prod-node2] Downloading image: kubesphere/kube-proxy:v1.21.5

[k8s-prod-node4] Downloading image: kubesphere/kube-proxy:v1.21.5

[k8s-prod-node5] Downloading image: kubesphere/kube-proxy:v1.21.5

[k8s-prod-node1] Downloading image: kubesphere/kube-proxy:v1.21.5

[k8s-prod-master3] Downloading image: kubesphere/kube-apiserver:v1.21.5

[k8s-prod-node6] Downloading image: kubesphere/kube-proxy:v1.21.5

[k8s-prod-master1] Downloading image: kubesphere/kube-controller-manager:v1.21.5

[k8s-prod-master2] Downloading image: kubesphere/kube-controller-manager:v1.21.5

[k8s-prod-node7] Downloading image: coredns/coredns:1.8.0

[k8s-prod-node3] Downloading image: coredns/coredns:1.8.0

[k8s-prod-node2] Downloading image: coredns/coredns:1.8.0

[k8s-prod-node5] Downloading image: coredns/coredns:1.8.0

[k8s-prod-node4] Downloading image: coredns/coredns:1.8.0

[k8s-prod-node1] Downloading image: coredns/coredns:1.8.0

[k8s-prod-master3] Downloading image: kubesphere/kube-controller-manager:v1.21.5

[k8s-prod-node6] Downloading image: coredns/coredns:1.8.0

[k8s-prod-master1] Downloading image: kubesphere/kube-scheduler:v1.21.5

[k8s-prod-master2] Downloading image: kubesphere/kube-scheduler:v1.21.5

[k8s-prod-node3] Downloading image: kubesphere/k8s-dns-node-cache:1.15.12

[k8s-prod-node4] Downloading image: kubesphere/k8s-dns-node-cache:1.15.12

[k8s-prod-node5] Downloading image: kubesphere/k8s-dns-node-cache:1.15.12

[k8s-prod-node2] Downloading image: kubesphere/k8s-dns-node-cache:1.15.12

[k8s-prod-node1] Downloading image: kubesphere/k8s-dns-node-cache:1.15.12

[k8s-prod-master3] Downloading image: kubesphere/kube-scheduler:v1.21.5

[k8s-prod-node6] Downloading image: kubesphere/k8s-dns-node-cache:1.15.12

[k8s-prod-node7] Downloading image: kubesphere/k8s-dns-node-cache:1.15.12

[k8s-prod-master1] Downloading image: kubesphere/kube-proxy:v1.21.5

[k8s-prod-master2] Downloading image: kubesphere/kube-proxy:v1.21.5

[k8s-prod-node3] Downloading image: calico/kube-controllers:v3.20.0

[k8s-prod-node4] Downloading image: calico/kube-controllers:v3.20.0

[k8s-prod-node5] Downloading image: calico/kube-controllers:v3.20.0

[k8s-prod-node2] Downloading image: calico/kube-controllers:v3.20.0

[k8s-prod-node1] Downloading image: calico/kube-controllers:v3.20.0

[k8s-prod-master3] Downloading image: kubesphere/kube-proxy:v1.21.5

[k8s-prod-node6] Downloading image: calico/kube-controllers:v3.20.0

[k8s-prod-node7] Downloading image: calico/kube-controllers:v3.20.0

[k8s-prod-master1] Downloading image: coredns/coredns:1.8.0

[k8s-prod-master2] Downloading image: coredns/coredns:1.8.0

[k8s-prod-node3] Downloading image: calico/cni:v3.20.0

[k8s-prod-node5] Downloading image: calico/cni:v3.20.0

[k8s-prod-node1] Downloading image: calico/cni:v3.20.0

[k8s-prod-node4] Downloading image: calico/cni:v3.20.0

[k8s-prod-node2] Downloading image: calico/cni:v3.20.0

[k8s-prod-master3] Downloading image: coredns/coredns:1.8.0

[k8s-prod-node6] Downloading image: calico/cni:v3.20.0

[k8s-prod-node7] Downloading image: calico/cni:v3.20.0

[k8s-prod-master1] Downloading image: kubesphere/k8s-dns-node-cache:1.15.12

[k8s-prod-master2] Downloading image: kubesphere/k8s-dns-node-cache:1.15.12

[k8s-prod-node5] Downloading image: calico/node:v3.20.0

[k8s-prod-node3] Downloading image: calico/node:v3.20.0

[k8s-prod-node1] Downloading image: calico/node:v3.20.0

[k8s-prod-node2] Downloading image: calico/node:v3.20.0

[k8s-prod-node4] Downloading image: calico/node:v3.20.0

[k8s-prod-master3] Downloading image: kubesphere/k8s-dns-node-cache:1.15.12

[k8s-prod-node6] Downloading image: calico/node:v3.20.0

[k8s-prod-node7] Downloading image: calico/node:v3.20.0

[k8s-prod-master1] Downloading image: calico/kube-controllers:v3.20.0

[k8s-prod-master2] Downloading image: calico/kube-controllers:v3.20.0

[k8s-prod-node5] Downloading image: calico/pod2daemon-flexvol:v3.20.0

[k8s-prod-node3] Downloading image: calico/pod2daemon-flexvol:v3.20.0

[k8s-prod-node2] Downloading image: calico/pod2daemon-flexvol:v3.20.0

[k8s-prod-node1] Downloading image: calico/pod2daemon-flexvol:v3.20.0

[k8s-prod-node4] Downloading image: calico/pod2daemon-flexvol:v3.20.0

[k8s-prod-master3] Downloading image: calico/kube-controllers:v3.20.0

[k8s-prod-node6] Downloading image: calico/pod2daemon-flexvol:v3.20.0

[k8s-prod-node7] Downloading image: calico/pod2daemon-flexvol:v3.20.0

[k8s-prod-master1] Downloading image: calico/cni:v3.20.0

[k8s-prod-master2] Downloading image: calico/cni:v3.20.0

[k8s-prod-node3] Downloading image: library/haproxy:2.3

[k8s-prod-node5] Downloading image: library/haproxy:2.3

[k8s-prod-node2] Downloading image: library/haproxy:2.3

[k8s-prod-node1] Downloading image: library/haproxy:2.3

[k8s-prod-node4] Downloading image: library/haproxy:2.3

[k8s-prod-master3] Downloading image: calico/cni:v3.20.0

[k8s-prod-node6] Downloading image: library/haproxy:2.3

[k8s-prod-node7] Downloading image: library/haproxy:2.3

[k8s-prod-master1] Downloading image: calico/node:v3.20.0

[k8s-prod-master2] Downloading image: calico/node:v3.20.0

[k8s-prod-node8] Downloading image: kubesphere/pause:3.4.1

[k8s-prod-node9] Downloading image: kubesphere/pause:3.4.1

[k8s-prod-master3] Downloading image: calico/node:v3.20.0

[k8s-prod-master1] Downloading image: calico/pod2daemon-flexvol:v3.20.0

[k8s-prod-master3] Downloading image: calico/pod2daemon-flexvol:v3.20.0

[k8s-prod-node8] Downloading image: kubesphere/kube-proxy:v1.21.5

[k8s-prod-master2] Downloading image: calico/pod2daemon-flexvol:v3.20.0

[k8s-prod-node9] Downloading image: kubesphere/kube-proxy:v1.21.5

[k8s-prod-node9] Downloading image: coredns/coredns:1.8.0

[k8s-prod-node8] Downloading image: coredns/coredns:1.8.0

[k8s-prod-node9] Downloading image: kubesphere/k8s-dns-node-cache:1.15.12

[k8s-prod-node8] Downloading image: kubesphere/k8s-dns-node-cache:1.15.12

[k8s-prod-node9] Downloading image: calico/kube-controllers:v3.20.0

[k8s-prod-node8] Downloading image: calico/kube-controllers:v3.20.0

[k8s-prod-node9] Downloading image: calico/cni:v3.20.0

[k8s-prod-node8] Downloading image: calico/cni:v3.20.0

[k8s-prod-node9] Downloading image: calico/node:v3.20.0

[k8s-prod-node8] Downloading image: calico/node:v3.20.0

[k8s-prod-node9] Downloading image: calico/pod2daemon-flexvol:v3.20.0

[k8s-prod-node8] Downloading image: calico/pod2daemon-flexvol:v3.20.0

[k8s-prod-node9] Downloading image: library/haproxy:2.3

[k8s-prod-node8] Downloading image: library/haproxy:2.3

INFO[11:20:14 CST] Getting etcd status

[k8s-prod-master1 200.1.129.69] MSG:

Configuration file already exists

[k8s-prod-master1 200.1.129.69] MSG:

ETCD_NAME=etcd-k8s-prod-master1

[k8s-prod-master2 200.1.129.70] MSG:

Configuration file already exists

[k8s-prod-master2 200.1.129.70] MSG:

ETCD_NAME=etcd-k8s-prod-master2

[k8s-prod-master3 200.1.129.71] MSG:

Configuration file already exists

[k8s-prod-master3 200.1.129.71] MSG:

ETCD_NAME=etcd-k8s-prod-master3

INFO[11:20:15 CST] Generating etcd certs

INFO[11:20:18 CST] Synchronizing etcd certs

INFO[11:20:18 CST] Creating etcd service

Push /root/kubekey/v1.21.5/amd64/etcd-v3.4.13-linux-amd64.tar.gz to 200.1.129.71:/tmp/kubekey/etcd-v3.4.13-linux-amd64.tar.gz Done

Push /root/kubekey/v1.21.5/amd64/etcd-v3.4.13-linux-amd64.tar.gz to 200.1.129.69:/tmp/kubekey/etcd-v3.4.13-linux-amd64.tar.gz Done

Push /root/kubekey/v1.21.5/amd64/etcd-v3.4.13-linux-amd64.tar.gz to 200.1.129.70:/tmp/kubekey/etcd-v3.4.13-linux-amd64.tar.gz Done

INFO[11:20:19 CST] Starting etcd cluster

INFO[11:20:19 CST] Refreshing etcd configuration

INFO[11:20:20 CST] Backup etcd data regularly

INFO[11:20:28 CST] Installing kube binaries

INFO[11:20:28 CST] Initializing kubernetes cluster

INFO[11:20:29 CST] Get cluster status

INFO[11:20:30 CST] Joining nodes to cluster

INFO[11:20:31 CST] Install internal load balancer to cluster

[k8s-prod-node9] generate haproxy manifest.

[k8s-prod-node4] generate haproxy manifest.

[k8s-prod-node1] generate haproxy manifest.

[k8s-prod-node2] generate haproxy manifest.

[k8s-prod-node3] generate haproxy manifest.

[k8s-prod-node6] generate haproxy manifest.

[k8s-prod-node7] generate haproxy manifest.

[k8s-prod-node8] generate haproxy manifest.

[k8s-prod-node5] generate haproxy manifest.

[k8s-prod-node7 200.1.129.78] MSG:

kubelet.conf is exists.

[k8s-prod-node2 200.1.129.73] MSG:

kubelet.conf is exists.

[k8s-prod-master1 200.1.129.69] MSG:

kubelet.conf is exists.

[k8s-prod-node4 200.1.129.75] MSG:

kubelet.conf is exists.

[k8s-prod-master2 200.1.129.70] MSG:

kubelet.conf is exists.

[k8s-prod-master3 200.1.129.71] MSG:

kubelet.conf is exists.

[k8s-prod-node3 200.1.129.74] MSG:

kubelet.conf is exists.

[k8s-prod-node1 200.1.129.72] MSG:

kubelet.conf is exists.

[k8s-prod-node5 200.1.129.76] MSG:

kubelet.conf is exists.

[k8s-prod-node6 200.1.129.77] MSG:

kubelet.conf is exists.

[k8s-prod-node8 200.1.129.79] MSG:

kubelet.conf is exists.

[k8s-prod-node9 200.1.129.80] MSG:

kubelet.conf is exists.

INFO[11:20:33 CST] Deploying network plugin …

[k8s-prod-master1 200.1.129.69] MSG:

configmap/calico-config unchanged

customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org configured

customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org configured

customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org configured

customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org configured

customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org configured

customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org configured

customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org configured

customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org configured

customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org configured

customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org configured

customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org configured

customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org configured

customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org configured

customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org configured

customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org configured

clusterrole.rbac.authorization.k8s.io/calico-kube-controllers unchanged

clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers unchanged

clusterrole.rbac.authorization.k8s.io/calico-node unchanged

clusterrolebinding.rbac.authorization.k8s.io/calico-node unchanged

daemonset.apps/calico-node configured

serviceaccount/calico-node unchanged

deployment.apps/calico-kube-controllers unchanged

serviceaccount/calico-kube-controllers unchanged

Warning: policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget

poddisruptionbudget.policy/calico-kube-controllers configured

INFO[11:20:35 CST] Deploying KubeSphere …

v3.2.1

[k8s-prod-master1 200.1.129.69] MSG:

namespace/kubesphere-system unchanged

namespace/kubesphere-monitoring-system unchanged

[k8s-prod-master1 200.1.129.69] MSG:

namespace/kubesphere-system unchanged

serviceaccount/ks-installer unchanged

customresourcedefinition.apiextensions.k8s.io/clusterconfigurations.installer.kubesphere.io unchanged

clusterrole.rbac.authorization.k8s.io/ks-installer configured

clusterrolebinding.rbac.authorization.k8s.io/ks-installer unchanged

deployment.apps/ks-installer unchanged

clusterconfiguration.installer.kubesphere.io/ks-installer unchanged

WARN[11:51:42 CST] Task failed …

WARN[11:51:42 CST] error: KubeSphere startup timeout.

Error: Failed to deploy kubesphere: KubeSphere startup timeout.

Usage:

kk create cluster [flags]

Flags:

--container-manager string Container runtime: docker, crio, containerd and isula. (default "docker")

--download-cmd string The user defined command to download the necessary binary files. The first param '%s' is output path, the second param '%s', is the URL (default "curl -L -o %s %s")

-f, --filename string Path to a configuration file

-h, --help help for cluster

--skip-pull-images Skip pre pull images

--with-kubernetes string Specify a supported version of kubernetes (default "v1.21.5")

--with-kubesphere Deploy a specific version of kubesphere (default v3.2.0)

--with-local-storage Deploy a local PV provisioner

-y, --yes Skip pre-check of the installation

Global Flags:

--debug Print detailed information (default true)

--in-cluster Running inside the cluster

Failed to deploy kubesphere: KubeSphere startup timeout.

Your environment doesn't look very clean; you could try cleaning it up first.
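
Before wiping anything, it is worth checking why ks-installer timed out; the timeout from kk usually just means the installer did not finish within the expected time, and the real cause is normally visible in the installer log or in pods stuck in Pending or ImagePullBackOff. A minimal check, assuming the default kubesphere-system namespace shown in the log above:

  kubectl logs -n kubesphere-system deploy/ks-installer -f
  kubectl get pods -A | grep -vE 'Running|Completed'

If you then decide to reinstall from a clean state, KubeKey can delete the cluster it created. The configuration file name below is only a placeholder; use the same file you passed to kk create cluster:

  ./kk delete cluster -f config-sample.yaml

After the cleanup finishes, re-run kk create cluster with the same configuration.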