When creating a deployment issue, please follow the template below. The more information you provide, the easier it is to get a timely answer.
If you spend only one minute creating the issue, you cannot expect others to spend half an hour answering it.
Before posting, click the Preview (👀) button next to the Post button to make sure the post is formatted correctly.
Operating system information
For example: virtual machine/physical machine, CentOS 7.5/Ubuntu 18.04, 4C/8G
Kubernetes version information
For example: v1.18.6. Single node or multiple nodes.
Container runtime
For example: docker/containerd, and which version.
KubeSphere version information
v1.22.10/v3.3.0. Online installation. Full installation.
What is the problem?
After waiting about two hours at "Please wait for the installation to complete: >>—>", the installation failed with the error below.
Error message:
13:02:55 CST failed: [ecs-arm-kyllin]
error: Pipeline[CreateClusterPipeline] execute failed: Module[CheckResultModule] exec failed:
failed: [ecs-arm-kyllin] execute task timeout, Timeout=7200000000000
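Note: Timeout=7200000000000 is a Go duration printed in nanoseconds, i.e. 7200 s, so KubeKey waited exactly two hours for ks-installer to report completion before giving up. To see which component the installation is actually stuck on, the ks-installer log can be followed; below is a minimal sketch, assuming the default kubesphere-system namespace and the app=ks-installer label from the default manifests:

```bash
# Follow the ks-installer log to see where the installation is stuck
kubectl logs -n kubesphere-system \
  "$(kubectl get pod -n kubesphere-system -l app=ks-installer -o jsonpath='{.items[0].metadata.name}')" -f
```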
Full installation log:
11:01:12 CST success: [LocalHost]
11:01:12 CST [ConfigureOSModule] Prepare to init OS
11:01:14 CST success: [ecs-arm-kyllin]
11:01:14 CST [ConfigureOSModule] Generate init os script
11:01:14 CST success: [ecs-arm-kyllin]
11:01:14 CST [ConfigureOSModule] Exec init os script
11:01:15 CST stdout: [ecs-arm-kyllin]
setenforce: SELinux is disabled
Disabled
kernel.sysrq = 0
net.ipv4.ip_forward = 1
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.default.accept_redirects = 0
net.ipv4.conf.all.secure_redirects = 0
net.ipv4.conf.default.secure_redirects = 0
net.ipv4.icmp_echo_ignore_broadcasts = 1
net.ipv4.icmp_ignore_bogus_error_responses = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.tcp_syncookies = 1
kernel.dmesg_restrict = 1
net.ipv6.conf.all.accept_redirects = 0
net.ipv6.conf.default.accept_redirects = 0
vm.swappiness = 1
net.core.somaxconn = 1024
net.ipv4.tcp_max_tw_buckets = 5000
net.ipv4.tcp_max_syn_backlog = 1024
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
vm.max_map_count = 262144
fs.inotify.max_user_instances = 524288
kernel.pid_max = 65535
11:01:15 CST success: [ecs-arm-kyllin]
11:01:15 CST [ConfigureOSModule] configure the ntp server for each node
11:01:15 CST skipped: [ecs-arm-kyllin]
11:01:15 CST [KubernetesStatusModule] Get kubernetes cluster status
11:01:15 CST success: [ecs-arm-kyllin]
11:01:15 CST [InstallContainerModule] Sync docker binaries
11:01:17 CST success: [ecs-arm-kyllin]
11:01:17 CST [InstallContainerModule] Generate docker service
11:01:18 CST success: [ecs-arm-kyllin]
11:01:18 CST [InstallContainerModule] Generate docker config
11:01:18 CST success: [ecs-arm-kyllin]
11:01:18 CST [InstallContainerModule] Enable docker
11:01:20 CST success: [ecs-arm-kyllin]
11:01:20 CST [InstallContainerModule] Add auths to container runtime
11:01:20 CST skipped: [ecs-arm-kyllin]
11:01:20 CST [PullModule] Start to pull images on all nodes
11:01:20 CST message: [ecs-arm-kyllin]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.4.1
11:01:21 CST message: [ecs-arm-kyllin]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.21.5
11:01:25 CST message: [ecs-arm-kyllin]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.21.5
11:01:28 CST message: [ecs-arm-kyllin]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.21.5
11:01:31 CST message: [ecs-arm-kyllin]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.21.5
11:01:34 CST message: [ecs-arm-kyllin]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.8.0
11:01:36 CST message: [ecs-arm-kyllin]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache:1.15.12
11:01:40 CST message: [ecs-arm-kyllin]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers:v3.20.0
11:01:43 CST message: [ecs-arm-kyllin]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.20.0
11:01:48 CST message: [ecs-arm-kyllin]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.20.0
11:01:52 CST message: [ecs-arm-kyllin]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.20.0
11:01:54 CST success: [ecs-arm-kyllin]
11:01:54 CST [ETCDPreCheckModule] Get etcd status
11:01:54 CST success: [ecs-arm-kyllin]
11:01:54 CST [CertsModule] Fetch etcd certs
11:01:54 CST success: [ecs-arm-kyllin]
11:01:54 CST [CertsModule] Generate etcd Certs
[certs] Generating "ca" certificate and key
[certs] admin-ecs-arm-kyllin serving cert is signed for DNS names [ecs-arm-kyllin etcd etcd.kube-system etcd.kube-system.svc etcd.kube-system.svc.cluster.local lb.kubesphere.local localhost] and IPs [127.0.0.1 ::1 192.168.0.89]
[certs] member-ecs-arm-kyllin serving cert is signed for DNS names [ecs-arm-kyllin etcd etcd.kube-system etcd.kube-system.svc etcd.kube-system.svc.cluster.local lb.kubesphere.local localhost] and IPs [127.0.0.1 ::1 192.168.0.89]
[certs] node-ecs-arm-kyllin serving cert is signed for DNS names [ecs-arm-kyllin etcd etcd.kube-system etcd.kube-system.svc etcd.kube-system.svc.cluster.local lb.kubesphere.local localhost] and IPs [127.0.0.1 ::1 192.168.0.89]
11:01:55 CST success: [LocalHost]
11:01:55 CST [CertsModule] Synchronize certs file
11:01:59 CST success: [ecs-arm-kyllin]
11:01:59 CST [CertsModule] Synchronize certs file to master
11:01:59 CST skipped: [ecs-arm-kyllin]
11:01:59 CST [InstallETCDBinaryModule] Install etcd using binary
11:02:00 CST success: [ecs-arm-kyllin]
11:02:00 CST [InstallETCDBinaryModule] Generate etcd service
11:02:00 CST success: [ecs-arm-kyllin]
11:02:00 CST [InstallETCDBinaryModule] Generate access address
11:02:00 CST success: [ecs-arm-kyllin]
11:02:00 CST [ETCDConfigureModule] Health check on exist etcd
11:02:00 CST skipped: [ecs-arm-kyllin]
11:02:00 CST [ETCDConfigureModule] Generate etcd.env config on new etcd
11:02:01 CST success: [ecs-arm-kyllin]
11:02:01 CST [ETCDConfigureModule] Refresh etcd.env config on all etcd
11:02:01 CST success: [ecs-arm-kyllin]
11:02:01 CST [ETCDConfigureModule] Restart etcd
11:02:06 CST stdout: [ecs-arm-kyllin]
Created symlink /etc/systemd/system/multi-user.target.wants/etcd.service → /etc/systemd/system/etcd.service.
11:02:06 CST success: [ecs-arm-kyllin]
11:02:06 CST [ETCDConfigureModule] Health check on all etcd
11:02:06 CST success: [ecs-arm-kyllin]
11:02:06 CST [ETCDConfigureModule] Refresh etcd.env config to exist mode on all etcd
11:02:07 CST success: [ecs-arm-kyllin]
11:02:07 CST [ETCDConfigureModule] Health check on all etcd
11:02:07 CST success: [ecs-arm-kyllin]
11:02:07 CST [ETCDBackupModule] Backup etcd data regularly
11:02:14 CST success: [ecs-arm-kyllin]
11:02:14 CST [InstallKubeBinariesModule] Synchronize kubernetes binaries
11:02:20 CST success: [ecs-arm-kyllin]
11:02:20 CST [InstallKubeBinariesModule] Synchronize kubelet
11:02:20 CST success: [ecs-arm-kyllin]
11:02:20 CST [InstallKubeBinariesModule] Generate kubelet service
11:02:21 CST success: [ecs-arm-kyllin]
11:02:21 CST [InstallKubeBinariesModule] Enable kubelet service
11:02:21 CST success: [ecs-arm-kyllin]
11:02:21 CST [InstallKubeBinariesModule] Generate kubelet env
11:02:22 CST success: [ecs-arm-kyllin]
11:02:22 CST [InitKubernetesModule] Generate kubeadm config
11:02:23 CST success: [ecs-arm-kyllin]
11:02:23 CST [InitKubernetesModule] Init cluster using kubeadm
11:02:36 CST stdout: [ecs-arm-kyllin]
W0715 11:02:23.270237 29751 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
[init] Using Kubernetes version: v1.21.5
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [ecs-arm-kyllin ecs-arm-kyllin.cluster.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.kubesphere.local localhost] and IPs [10.233.0.1 192.168.0.89 127.0.0.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] External etcd mode: Skipping etcd/ca certificate authority generation
[certs] External etcd mode: Skipping etcd/server certificate generation
[certs] External etcd mode: Skipping etcd/peer certificate generation
[certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation
[certs] External etcd mode: Skipping apiserver-etcd-client certificate generation
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 9.503608 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.21" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node ecs-arm-kyllin as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node ecs-arm-kyllin as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 4z9w96.zlsunvq6enmf3ltf
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:
kubeadm join lb.kubesphere.local:6443 --token 4z9w96.zlsunvq6enmf3ltf \
--discovery-token-ca-cert-hash sha256:5e6c50043871f809c774c99e77b88954de2562ae6c8052a62334cd3e4d493a42 \
--control-plane
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join lb.kubesphere.local:6443 --token 4z9w96.zlsunvq6enmf3ltf \
--discovery-token-ca-cert-hash sha256:5e6c50043871f809c774c99e77b88954de2562ae6c8052a62334cd3e4d493a42
11:02:36 CST success: [ecs-arm-kyllin]
11:02:36 CST [InitKubernetesModule] Copy admin.conf to ~/.kube/config
11:02:37 CST success: [ecs-arm-kyllin]
11:02:37 CST [InitKubernetesModule] Remove master taint
11:02:37 CST stdout: [ecs-arm-kyllin]
node/ecs-arm-kyllin untainted
11:02:38 CST stdout: [ecs-arm-kyllin]
error: taint "node-role.kubernetes.io/control-plane:NoSchedule" not found
11:02:38 CST [WARN] Failed to exec command: sudo -E /bin/bash -c "/usr/local/bin/kubectl taint nodes ecs-arm-kyllin node-role.kubernetes.io/control-plane=:NoSchedule-"
error: taint "node-role.kubernetes.io/control-plane:NoSchedule" not found: Process exited with status 1
11:02:38 CST success: [ecs-arm-kyllin]
11:02:38 CST [InitKubernetesModule] Add worker label
11:02:38 CST stdout: [ecs-arm-kyllin]
node/ecs-arm-kyllin labeled
11:02:38 CST success: [ecs-arm-kyllin]
11:02:38 CST [ClusterDNSModule] Generate coredns service
11:02:39 CST success: [ecs-arm-kyllin]
11:02:39 CST [ClusterDNSModule] Override coredns service
11:02:39 CST stdout: [ecs-arm-kyllin]
service "kube-dns" deleted
11:02:39 CST stdout: [ecs-arm-kyllin]
service/coredns created
Warning: resource clusterroles/system:coredns is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
clusterrole.rbac.authorization.k8s.io/system:coredns configured
11:02:39 CST success: [ecs-arm-kyllin]
11:02:39 CST [ClusterDNSModule] Generate nodelocaldns
11:02:40 CST success: [ecs-arm-kyllin]
11:02:40 CST [ClusterDNSModule] Deploy nodelocaldns
11:02:40 CST stdout: [ecs-arm-kyllin]
serviceaccount/nodelocaldns created
daemonset.apps/nodelocaldns created
11:02:40 CST success: [ecs-arm-kyllin]
11:02:40 CST [ClusterDNSModule] Generate nodelocaldns configmap
11:02:41 CST success: [ecs-arm-kyllin]
11:02:41 CST [ClusterDNSModule] Apply nodelocaldns configmap
11:02:41 CST stdout: [ecs-arm-kyllin]
configmap/nodelocaldns created
11:02:41 CST success: [ecs-arm-kyllin]
11:02:41 CST [KubernetesStatusModule] Get kubernetes cluster status
11:02:42 CST stdout: [ecs-arm-kyllin]
v1.21.5
11:02:42 CST stdout: [ecs-arm-kyllin]
ecs-arm-kyllin v1.21.5 [map[address:192.168.0.89 type:InternalIP] map[address:ecs-arm-kyllin type:Hostname]]
11:02:45 CST stdout: [ecs-arm-kyllin]
I0715 11:02:44.017621 34923 version.go:254] remote version is much newer: v1.24.3; falling back to: stable-1.21
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
f323d0fb28a35549c278df784b10eb208f559eb03cce33fde50cc8ad3d51cc22
11:02:45 CST stdout: [ecs-arm-kyllin]
secret/kubeadm-certs patched
11:02:45 CST stdout: [ecs-arm-kyllin]
secret/kubeadm-certs patched
11:02:45 CST stdout: [ecs-arm-kyllin]
secret/kubeadm-certs patched
11:02:45 CST stdout: [ecs-arm-kyllin]
b2xtr3.v4imhmntd3wtmjhg
11:02:45 CST success: [ecs-arm-kyllin]
11:02:45 CST [JoinNodesModule] Generate kubeadm config
11:02:45 CST skipped: [ecs-arm-kyllin]
11:02:45 CST [JoinNodesModule] Join control-plane node
11:02:45 CST skipped: [ecs-arm-kyllin]
11:02:45 CST [JoinNodesModule] Join worker node
11:02:45 CST skipped: [ecs-arm-kyllin]
11:02:45 CST [JoinNodesModule] Copy admin.conf to ~/.kube/config
11:02:45 CST skipped: [ecs-arm-kyllin]
11:02:45 CST [JoinNodesModule] Remove master taint
11:02:45 CST skipped: [ecs-arm-kyllin]
11:02:45 CST [JoinNodesModule] Add worker label to master
11:02:45 CST skipped: [ecs-arm-kyllin]
11:02:45 CST [JoinNodesModule] Synchronize kube config to worker
11:02:45 CST skipped: [ecs-arm-kyllin]
11:02:45 CST [JoinNodesModule] Add worker label to worker
11:02:45 CST skipped: [ecs-arm-kyllin]
11:02:45 CST [DeployNetworkPluginModule] Generate calico
11:02:46 CST success: [ecs-arm-kyllin]
11:02:46 CST [DeployNetworkPluginModule] Deploy calico
11:02:47 CST stdout: [ecs-arm-kyllin]
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
Warning: policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
poddisruptionbudget.policy/calico-kube-controllers created
11:02:47 CST success: [ecs-arm-kyllin]
11:02:47 CST [ConfigureKubernetesModule] Configure kubernetes
11:02:47 CST success: [ecs-arm-kyllin]
11:02:47 CST [ChownModule] Chown user $HOME/.kube dir
11:02:47 CST success: [ecs-arm-kyllin]
11:02:47 CST [AutoRenewCertsModule] Generate k8s certs renew script
11:02:48 CST success: [ecs-arm-kyllin]
11:02:48 CST [AutoRenewCertsModule] Generate k8s certs renew service
11:02:49 CST success: [ecs-arm-kyllin]
11:02:49 CST [AutoRenewCertsModule] Generate k8s certs renew timer
11:02:49 CST success: [ecs-arm-kyllin]
11:02:49 CST [AutoRenewCertsModule] Enable k8s certs renew service
11:02:49 CST success: [ecs-arm-kyllin]
11:02:49 CST [SaveKubeConfigModule] Save kube config as a configmap
11:02:49 CST success: [LocalHost]
11:02:49 CST [AddonsModule] Install addons
11:02:49 CST success: [LocalHost]
11:02:49 CST [DeployStorageClassModule] Generate OpenEBS manifest
11:02:51 CST success: [ecs-arm-kyllin]
11:02:51 CST [DeployStorageClassModule] Deploy OpenEBS as cluster default StorageClass
11:02:52 CST success: [ecs-arm-kyllin]
11:02:52 CST [DeployKubeSphereModule] Generate KubeSphere ks-installer crd manifests
11:02:52 CST success: [ecs-arm-kyllin]
11:02:52 CST [DeployKubeSphereModule] Apply ks-installer
11:02:53 CST stdout: [ecs-arm-kyllin]
namespace/kubesphere-system created
serviceaccount/ks-installer created
customresourcedefinition.apiextensions.k8s.io/clusterconfigurations.installer.kubesphere.io created
clusterrole.rbac.authorization.k8s.io/ks-installer created
clusterrolebinding.rbac.authorization.k8s.io/ks-installer created
deployment.apps/ks-installer created
11:02:53 CST success: [ecs-arm-kyllin]
11:02:53 CST [DeployKubeSphereModule] Add config to ks-installer manifests
11:02:53 CST success: [ecs-arm-kyllin]
11:02:53 CST [DeployKubeSphereModule] Create the kubesphere namespace
11:02:53 CST success: [ecs-arm-kyllin]
11:02:53 CST [DeployKubeSphereModule] Setup ks-installer config
11:02:53 CST stdout: [ecs-arm-kyllin]
secret/kube-etcd-client-certs created
11:02:54 CST success: [ecs-arm-kyllin]
11:02:54 CST [DeployKubeSphereModule] Apply ks-installer
11:02:55 CST stdout: [ecs-arm-kyllin]
namespace/kubesphere-system unchanged
serviceaccount/ks-installer unchanged
customresourcedefinition.apiextensions.k8s.io/clusterconfigurations.installer.kubesphere.io unchanged
clusterrole.rbac.authorization.k8s.io/ks-installer unchanged
clusterrolebinding.rbac.authorization.k8s.io/ks-installer unchanged
deployment.apps/ks-installer unchanged
clusterconfiguration.installer.kubesphere.io/ks-installer created
11:02:55 CST success: [ecs-arm-kyllin]
Please wait for the installation to complete: >>—>
13:02:55 CST failed: [ecs-arm-kyllin]
error: Pipeline[CreateClusterPipeline] execute failed: Module[CheckResultModule] exec failed:
failed: [ecs-arm-kyllin] execute task timeout, Timeout=7200000000000
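Since kubeadm init succeeded earlier in the log, the Kubernetes cluster itself is up, and the timeout most likely means some KubeSphere pods never became ready, e.g. because an image is unavailable for this arm64 node. A quick sketch of the follow-up checks (standard kubectl commands; POD_NAME is a hypothetical placeholder for whichever pod turns out to be stuck):

```bash
# Show nodes with architecture/OS info (this host is an ARM machine, so arm64 image availability matters)
kubectl get nodes -o wide

# List pods that are not Running (ImagePullBackOff, Pending, CrashLoopBackOff, ...)
kubectl get pods -A | grep -v Running

# Inspect the events of a stuck pod (replace POD_NAME with the actual pod name)
kubectl describe pod -n kubesphere-system POD_NAME
```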