When creating a deployment issue, please follow the template below. The more information you provide, the easier it is to get a timely answer. Admins may close issues that do not follow the template.
Keep the post clear and readable; format commands and logs with markdown code blocks.
If you spend only one minute writing a question, you cannot expect someone else to spend half an hour answering it.
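
For example, a pasted command and its output can be wrapped in a fenced code block like this:

````markdown
```bash
kubectl version
```
````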

Operating system information
Baidu Cloud lightweight application server (LS), 2 cores / 4 GB RAM / 80 GB disk / 6 Mbps bandwidth

CentOS Linux release 7.6.1810 (Core)

Kubernetes version information
Paste the output of `kubectl version` below.

[root@ls ~]# kubectl version

Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.12", GitCommit:"b058e1760c79f46a834ba59bd7a3486ecf28237d", GitTreeState:"clean", BuildDate:"2022-07-13T14:59:18Z", GoVersion:"go1.16.15", Compiler:"gc", Platform:"linux/amd64"}

Server Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.12", GitCommit:"b058e1760c79f46a834ba59bd7a3486ecf28237d", GitTreeState:"clean", BuildDate:"2022-07-13T14:53:39Z", GoVersion:"go1.16.15", Compiler:"gc", Platform:"linux/amd64"}

Container runtime
Paste the output of `docker version` / `crictl version` / `nerdctl version` below.

[root@ls ~]# docker version / crictl version / nerdctl version

"docker version" accepts no arguments.

See 'docker version --help'.

Usage: docker version [OPTIONS]

Show the Docker version information
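
The error above comes from pasting the whole template hint as a single command: the `/` separators in the template just delimit alternatives and are not shell syntax. A sketch of running them one at a time, guarding against runtimes that are not installed:

```shell
# Run each runtime CLI separately; the "/" in the template
# only separates alternatives, it is not part of any command.
docker version 2>/dev/null || echo "docker: not available"
crictl version 2>/dev/null || echo "crictl: not available"
nerdctl version 2>/dev/null || echo "nerdctl: not available"
```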

KubeSphere version information
For example: v2.1.1 / v3.0.0. Offline or online installation? Installed on an existing Kubernetes cluster, or with kk?

./kk create cluster --with-kubernetes v1.22.12 --with-kubesphere v3.3.1

What is the problem
What does the error log say? Screenshots are best.

The installation reported an error; I don't know whether it actually succeeded or whether some configuration is missing.
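
One way to check whether the cluster actually came up despite the error (assuming kubectl and the admin kubeconfig are in place on the node) is to compare the node name the API server registered with the machine's hostname:

```shell
# Hypothetical check: list the registered nodes and compare their
# names with the local hostname; a case mismatch between the two
# would explain a 'nodes "..." not found' error from kubectl.
export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl get nodes -o wide 2>/dev/null || echo "apiserver not reachable"
hostname
```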

[root@ls backup]# ./kk create cluster --with-kubernetes v1.22.12 --with-kubesphere v3.3.1

 _   __      _          _   __           
| | / /     | |        | | / /           
| |/ /  _   | |__   ___| |/ /  ___ _   _ 
|    \| | | | '_ \ / _ \    \ / _ \ | | |
| |\  \ |_| | |_) |  __/ |\  \  __/ |_| |
\_| \_/\__,_|_.__/ \___\_| \_/\___|\__, |
                                    __/ |
                                   |___/

16:19:25 CST [GreetingsModule] Greetings

16:19:25 CST message: [ls.mIRQmxZ6]

Greetings, KubeKey!

16:19:25 CST success: [ls.mIRQmxZ6]

16:19:25 CST [NodePreCheckModule] A pre-check on nodes

16:19:26 CST success: [ls.mIRQmxZ6]

16:19:26 CST [ConfirmModule] Display confirmation form

+-------------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+
| name        | sudo | curl | openssl | ebtables | socat | ipset | ipvsadm | conntrack | chrony | docker | containerd | nfs client | ceph client | glusterfs client | time         |
+-------------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+
| ls.mIRQmxZ6 | y    | y    | y       | y        | y     | y     |         | y         | y      |        |            |            |             |                  | CST 16:19:26 |
+-------------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+

This is a simple check of your environment.

Before installation, ensure that your machines meet all requirements specified at

https://github.com/kubesphere/kubekey#requirements-and-recommendations

Continue this installation? [yes/no]: yes

16:19:32 CST success: [LocalHost]

16:19:32 CST [NodeBinariesModule] Download installation binaries

16:19:32 CST message: [localhost]

downloading amd64 kubeadm v1.22.12 …

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current

                                 Dload  Upload   Total   Spent    Left  Speed

100 43.7M  100 43.7M    0     0   945k      0  0:00:47  0:00:47 --:--:-- 1025k

16:20:19 CST message: [localhost]

downloading amd64 kubelet v1.22.12 …

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current

                                 Dload  Upload   Total   Spent    Left  Speed

100  115M  100  115M    0     0   894k      0  0:02:11  0:02:11 --:--:-- 1026k

16:22:32 CST message: [localhost]

downloading amd64 kubectl v1.22.12 …

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current

                                 Dload  Upload   Total   Spent    Left  Speed

100 44.7M  100 44.7M    0     0   994k      0  0:00:46  0:00:46 --:--:-- 1034k

16:23:19 CST message: [localhost]

downloading amd64 helm v3.9.0 …

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current

                                 Dload  Upload   Total   Spent    Left  Speed

100 44.0M  100 44.0M    0     0  1010k      0  0:00:44  0:00:44 --:--:--  989k

16:24:03 CST message: [localhost]

downloading amd64 kubecni v0.9.1 …

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current

                                 Dload  Upload   Total   Spent    Left  Speed

100 37.9M  100 37.9M    0     0   988k      0  0:00:39  0:00:39 --:--:-- 1034k

16:24:43 CST message: [localhost]

downloading amd64 crictl v1.24.0 …

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current

                                 Dload  Upload   Total   Spent    Left  Speed

100 13.8M  100 13.8M    0     0   993k      0  0:00:14  0:00:14 --:--:--  974k

16:24:57 CST message: [localhost]

downloading amd64 etcd v3.4.13 …

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current

                                 Dload  Upload   Total   Spent    Left  Speed

100 16.5M  100 16.5M    0     0   931k      0  0:00:18  0:00:18 --:--:--  956k

16:25:16 CST message: [localhost]

downloading amd64 docker 20.10.8 …

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current

                                 Dload  Upload   Total   Spent    Left  Speed

  4 58.1M    4 2785k    0     0   113k      0  0:08:43  0:00:24  0:08:19  111k

  4 58.1M    4 2894k    0     0   113k      0  0:08:44  0:00:25  0:08:19  111k

57 58.1M   57 33.1M    0     0   115k      0  0:08:35  0:04:54  0:03:41  114k

84 58.1M   84 49.0M    0     0   114k      0  0:08:39  0:07:18  0:01:21  116k

100 58.1M  100 58.1M    0     0   114k      0  0:08:39  0:08:39 --:--:--  110k

16:33:55 CST success: [LocalHost]

16:33:55 CST [ConfigureOSModule] Get OS release

16:33:55 CST success: [ls.mIRQmxZ6]

16:33:55 CST [ConfigureOSModule] Prepare to init OS

16:33:56 CST success: [ls.mIRQmxZ6]

16:33:56 CST [ConfigureOSModule] Generate init os script

16:33:56 CST success: [ls.mIRQmxZ6]

16:33:56 CST [ConfigureOSModule] Exec init os script

16:33:57 CST stdout: [ls.mIRQmxZ6]

setenforce: SELinux is disabled

Disabled

net.ipv4.ip_forward = 1

net.bridge.bridge-nf-call-arptables = 1

net.bridge.bridge-nf-call-ip6tables = 1

net.bridge.bridge-nf-call-iptables = 1

net.ipv4.ip_local_reserved_ports = 30000-32767

vm.max_map_count = 262144

vm.swappiness = 1

fs.inotify.max_user_instances = 524288

kernel.pid_max = 65535

16:33:57 CST success: [ls.mIRQmxZ6]

16:33:57 CST [ConfigureOSModule] configure the ntp server for each node

16:33:57 CST skipped: [ls.mIRQmxZ6]

16:33:57 CST [KubernetesStatusModule] Get kubernetes cluster status

16:33:57 CST success: [ls.mIRQmxZ6]

16:33:57 CST [InstallContainerModule] Sync docker binaries

16:34:00 CST success: [ls.mIRQmxZ6]

16:34:00 CST [InstallContainerModule] Generate docker service

16:34:00 CST success: [ls.mIRQmxZ6]

16:34:00 CST [InstallContainerModule] Generate docker config

16:34:00 CST success: [ls.mIRQmxZ6]

16:34:00 CST [InstallContainerModule] Enable docker

16:34:02 CST success: [ls.mIRQmxZ6]

16:34:02 CST [InstallContainerModule] Add auths to container runtime

16:34:02 CST skipped: [ls.mIRQmxZ6]

16:34:02 CST [PullModule] Start to pull images on all nodes

16:34:02 CST message: [ls.mIRQmxZ6]

downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.5

16:34:04 CST message: [ls.mIRQmxZ6]

downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.22.12

16:34:32 CST message: [ls.mIRQmxZ6]

downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.22.12

16:35:00 CST message: [ls.mIRQmxZ6]

downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.22.12

16:35:15 CST message: [ls.mIRQmxZ6]

downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.22.12

16:35:40 CST message: [ls.mIRQmxZ6]

downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.8.0

16:35:53 CST message: [ls.mIRQmxZ6]

downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache:1.15.12

16:36:16 CST message: [ls.mIRQmxZ6]

downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers:v3.23.2

16:36:47 CST message: [ls.mIRQmxZ6]

downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.23.2

16:38:03 CST message: [ls.mIRQmxZ6]

downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.23.2

16:39:13 CST message: [ls.mIRQmxZ6]

downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.23.2

16:39:21 CST success: [ls.mIRQmxZ6]

16:39:21 CST [ETCDPreCheckModule] Get etcd status

16:39:21 CST success: [ls.mIRQmxZ6]

16:39:21 CST [CertsModule] Fetch etcd certs

16:39:21 CST success: [ls.mIRQmxZ6]

16:39:21 CST [CertsModule] Generate etcd Certs

[certs] Generating "ca" certificate and key

[certs] admin-ls.mIRQmxZ6 serving cert is signed for DNS names [etcd etcd.kube-system etcd.kube-system.svc etcd.kube-system.svc.cluster.local lb.kubesphere.local localhost ls.mIRQmxZ6] and IPs [127.0.0.1 ::1 192.168.0.4]

[certs] member-ls.mIRQmxZ6 serving cert is signed for DNS names [etcd etcd.kube-system etcd.kube-system.svc etcd.kube-system.svc.cluster.local lb.kubesphere.local localhost ls.mIRQmxZ6] and IPs [127.0.0.1 ::1 192.168.0.4]

[certs] node-ls.mIRQmxZ6 serving cert is signed for DNS names [etcd etcd.kube-system etcd.kube-system.svc etcd.kube-system.svc.cluster.local lb.kubesphere.local localhost ls.mIRQmxZ6] and IPs [127.0.0.1 ::1 192.168.0.4]

16:39:22 CST success: [LocalHost]

16:39:22 CST [CertsModule] Synchronize certs file

16:39:24 CST success: [ls.mIRQmxZ6]

16:39:24 CST [CertsModule] Synchronize certs file to master

16:39:24 CST skipped: [ls.mIRQmxZ6]

16:39:24 CST [InstallETCDBinaryModule] Install etcd using binary

16:39:25 CST success: [ls.mIRQmxZ6]

16:39:25 CST [InstallETCDBinaryModule] Generate etcd service

16:39:26 CST success: [ls.mIRQmxZ6]

16:39:26 CST [InstallETCDBinaryModule] Generate access address

16:39:26 CST success: [ls.mIRQmxZ6]

16:39:26 CST [ETCDConfigureModule] Health check on exist etcd

16:39:26 CST skipped: [ls.mIRQmxZ6]

16:39:26 CST [ETCDConfigureModule] Generate etcd.env config on new etcd

16:39:26 CST success: [ls.mIRQmxZ6]

16:39:26 CST [ETCDConfigureModule] Refresh etcd.env config on all etcd

16:39:26 CST success: [ls.mIRQmxZ6]

16:39:26 CST [ETCDConfigureModule] Restart etcd

16:39:27 CST stdout: [ls.mIRQmxZ6]

Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /etc/systemd/system/etcd.service.

16:39:27 CST success: [ls.mIRQmxZ6]

16:39:27 CST [ETCDConfigureModule] Health check on all etcd

16:39:27 CST success: [ls.mIRQmxZ6]

16:39:27 CST [ETCDConfigureModule] Refresh etcd.env config to exist mode on all etcd

16:39:28 CST success: [ls.mIRQmxZ6]

16:39:28 CST [ETCDConfigureModule] Health check on all etcd

16:39:28 CST success: [ls.mIRQmxZ6]

16:39:28 CST [ETCDBackupModule] Backup etcd data regularly

16:39:28 CST success: [ls.mIRQmxZ6]

16:39:28 CST [ETCDBackupModule] Generate backup ETCD service

16:39:28 CST success: [ls.mIRQmxZ6]

16:39:28 CST [ETCDBackupModule] Generate backup ETCD timer

16:39:28 CST success: [ls.mIRQmxZ6]

16:39:28 CST [ETCDBackupModule] Enable backup etcd service

16:39:28 CST success: [ls.mIRQmxZ6]

16:39:28 CST [InstallKubeBinariesModule] Synchronize kubernetes binaries

16:39:35 CST success: [ls.mIRQmxZ6]

16:39:35 CST [InstallKubeBinariesModule] Synchronize kubelet

16:39:35 CST success: [ls.mIRQmxZ6]

16:39:35 CST [InstallKubeBinariesModule] Generate kubelet service

16:39:35 CST success: [ls.mIRQmxZ6]

16:39:35 CST [InstallKubeBinariesModule] Enable kubelet service

16:39:36 CST success: [ls.mIRQmxZ6]

16:39:36 CST [InstallKubeBinariesModule] Generate kubelet env

16:39:36 CST success: [ls.mIRQmxZ6]

16:39:36 CST [InitKubernetesModule] Generate kubeadm config

16:39:36 CST success: [ls.mIRQmxZ6]

16:39:36 CST [InitKubernetesModule] Init cluster using kubeadm

16:39:49 CST stdout: [ls.mIRQmxZ6]

W1114 16:39:36.642263   36200 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]

[init] Using Kubernetes version: v1.22.12

[preflight] Running pre-flight checks

[preflight] Pulling images required for setting up a Kubernetes cluster

[preflight] This might take a minute or two, depending on the speed of your internet connection

[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'

[certs] Using certificateDir folder "/etc/kubernetes/pki"

[certs] Generating "ca" certificate and key

[certs] Generating "apiserver" certificate and key

[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.kubesphere.local localhost ls.mirqmxz6 ls.mirqmxz6.cluster.local] and IPs [10.233.0.1 192.168.0.4 127.0.0.1]

[certs] Generating "apiserver-kubelet-client" certificate and key

[certs] Generating "front-proxy-ca" certificate and key

[certs] Generating "front-proxy-client" certificate and key

[certs] External etcd mode: Skipping etcd/ca certificate authority generation

[certs] External etcd mode: Skipping etcd/server certificate generation

[certs] External etcd mode: Skipping etcd/peer certificate generation

[certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation

[certs] External etcd mode: Skipping apiserver-etcd-client certificate generation

[certs] Generating "sa" key and public key

[kubeconfig] Using kubeconfig folder "/etc/kubernetes"

[kubeconfig] Writing "admin.conf" kubeconfig file

[kubeconfig] Writing "kubelet.conf" kubeconfig file

[kubeconfig] Writing "controller-manager.conf" kubeconfig file

[kubeconfig] Writing "scheduler.conf" kubeconfig file

[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"

[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"

[kubelet-start] Starting the kubelet

[control-plane] Using manifest folder "/etc/kubernetes/manifests"

[control-plane] Creating static Pod manifest for "kube-apiserver"

[control-plane] Creating static Pod manifest for "kube-controller-manager"

[control-plane] Creating static Pod manifest for "kube-scheduler"

[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s

[apiclient] All control plane components are healthy after 9.002890 seconds

[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace

[kubelet] Creating a ConfigMap "kubelet-config-1.22" in namespace kube-system with the configuration for the kubelets in the cluster

[upload-certs] Skipping phase. Please see --upload-certs

[mark-control-plane] Marking the node ls.mirqmxz6 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]

[mark-control-plane] Marking the node ls.mirqmxz6 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]

[bootstrap-token] Using token: mrk68c.5db4ila5dqmumoib

[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles

[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes

[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials

[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token

[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster

[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace

[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key

[addons] Applied essential addon: CoreDNS

[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube

  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.

Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:

  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities

and service account keys on each node and then running the following as root:

  kubeadm join lb.kubesphere.local:6443 --token mrk68c.5db4ila5dqmumoib \

--discovery-token-ca-cert-hash sha256:dac3a2315219dff930b4464e601e3f81ccc4a9742e4ac439adbd740318c72d41 \

--control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join lb.kubesphere.local:6443 --token mrk68c.5db4ila5dqmumoib \

--discovery-token-ca-cert-hash sha256:dac3a2315219dff930b4464e601e3f81ccc4a9742e4ac439adbd740318c72d41

16:39:49 CST success: [ls.mIRQmxZ6]

16:39:49 CST [InitKubernetesModule] Copy admin.conf to ~/.kube/config

16:39:49 CST success: [ls.mIRQmxZ6]

16:39:49 CST [InitKubernetesModule] Remove master taint

16:39:49 CST stdout: [ls.mIRQmxZ6]

Error from server (NotFound): nodes "ls.mIRQmxZ6" not found

16:39:49 CST [WARN] Failed to exec command: sudo -E /bin/bash -c "/usr/local/bin/kubectl taint nodes ls.mIRQmxZ6 node-role.kubernetes.io/master=:NoSchedule-"

Error from server (NotFound): nodes "ls.mIRQmxZ6" not found: Process exited with status 1

16:39:50 CST stdout: [ls.mIRQmxZ6]

Error from server (NotFound): nodes "ls.mIRQmxZ6" not found

16:39:50 CST [WARN] Failed to exec command: sudo -E /bin/bash -c "/usr/local/bin/kubectl taint nodes ls.mIRQmxZ6 node-role.kubernetes.io/control-plane=:NoSchedule-"

Error from server (NotFound): nodes "ls.mIRQmxZ6" not found: Process exited with status 1

16:39:50 CST success: [ls.mIRQmxZ6]

16:39:50 CST [InitKubernetesModule] Add worker label

16:39:50 CST stdout: [ls.mIRQmxZ6]

Error from server (NotFound): nodes "ls.mIRQmxZ6" not found

16:39:50 CST message: [ls.mIRQmxZ6]

add worker label failed: Failed to exec command: sudo -E /bin/bash -c "/usr/local/bin/kubectl label --overwrite node ls.mIRQmxZ6 node-role.kubernetes.io/worker="

Error from server (NotFound): nodes "ls.mIRQmxZ6" not found: Process exited with status 1

16:39:50 CST retry: [ls.mIRQmxZ6]

16:39:55 CST stdout: [ls.mIRQmxZ6]

Error from server (NotFound): nodes "ls.mIRQmxZ6" not found

16:39:55 CST message: [ls.mIRQmxZ6]

add worker label failed: Failed to exec command: sudo -E /bin/bash -c "/usr/local/bin/kubectl label --overwrite node ls.mIRQmxZ6 node-role.kubernetes.io/worker="

Error from server (NotFound): nodes "ls.mIRQmxZ6" not found: Process exited with status 1

16:39:55 CST retry: [ls.mIRQmxZ6]

16:40:00 CST stdout: [ls.mIRQmxZ6]

Error from server (NotFound): nodes "ls.mIRQmxZ6" not found

16:40:00 CST message: [ls.mIRQmxZ6]

add worker label failed: Failed to exec command: sudo -E /bin/bash -c "/usr/local/bin/kubectl label --overwrite node ls.mIRQmxZ6 node-role.kubernetes.io/worker="

Error from server (NotFound): nodes "ls.mIRQmxZ6" not found: Process exited with status 1

16:40:00 CST retry: [ls.mIRQmxZ6]

16:40:05 CST stdout: [ls.mIRQmxZ6]

Error from server (NotFound): nodes "ls.mIRQmxZ6" not found

16:40:05 CST message: [ls.mIRQmxZ6]

add worker label failed: Failed to exec command: sudo -E /bin/bash -c "/usr/local/bin/kubectl label --overwrite node ls.mIRQmxZ6 node-role.kubernetes.io/worker="

Error from server (NotFound): nodes "ls.mIRQmxZ6" not found: Process exited with status 1

16:40:05 CST retry: [ls.mIRQmxZ6]

16:40:10 CST stdout: [ls.mIRQmxZ6]

Error from server (NotFound): nodes "ls.mIRQmxZ6" not found

16:40:10 CST message: [ls.mIRQmxZ6]

add worker label failed: Failed to exec command: sudo -E /bin/bash -c "/usr/local/bin/kubectl label --overwrite node ls.mIRQmxZ6 node-role.kubernetes.io/worker="

Error from server (NotFound): nodes "ls.mIRQmxZ6" not found: Process exited with status 1

16:40:10 CST failed: [ls.mIRQmxZ6]

error: Pipeline[CreateClusterPipeline] execute failed: Module[InitKubernetesModule] exec failed:

failed: [ls.mIRQmxZ6] [AddWorkerLabel] exec failed after 5 retires: add worker label failed: Failed to exec command: sudo -E /bin/bash -c "/usr/local/bin/kubectl label --overwrite node ls.mIRQmxZ6 node-role.kubernetes.io/worker="

Error from server (NotFound): nodes "ls.mIRQmxZ6" not found: Process exited with status 1

docker and glusterfs were missing; after installing them with yum, the pre-check passed, and I reran the installer:

[root@ls backup]# ./kk create cluster --with-kubernetes v1.22.12 --with-kubesphere v3.3.1

WARN[0000] Failed to decode the keys ["storage.options.override_kernel_check"] from "/etc/containers/storage.conf".

 _   __      _          _   __           
| | / /     | |        | | / /           
| |/ /  _   | |__   ___| |/ /  ___ _   _ 
|    \| | | | '_ \ / _ \    \ / _ \ | | |
| |\  \ |_| | |_) |  __/ |\  \  __/ |_| |
\_| \_/\__,_|_.__/ \___\_| \_/\___|\__, |
                                    __/ |
                                   |___/

08:51:43 CST [GreetingsModule] Greetings

08:51:43 CST message: [ls.mirqmxz6]

Greetings, KubeKey!

08:51:43 CST success: [ls.mirqmxz6]

08:51:43 CST [NodePreCheckModule] A pre-check on nodes

08:51:44 CST success: [ls.mirqmxz6]

08:51:44 CST [ConfirmModule] Display confirmation form

+-------------+------+------+---------+----------+-------+-------+---------+-----------+--------+---------+------------+------------+-------------+------------------+--------------+
| name        | sudo | curl | openssl | ebtables | socat | ipset | ipvsadm | conntrack | chrony | docker  | containerd | nfs client | ceph client | glusterfs client | time         |
+-------------+------+------+---------+----------+-------+-------+---------+-----------+--------+---------+------------+------------+-------------+------------------+--------------+
| ls.mirqmxz6 | y    | y    | y       | y        | y     | y     |         | y         | y      | 20.10.8 | v1.4.9     |            |             |                  | CST 08:51:44 |
+-------------+------+------+---------+----------+-------+-------+---------+-----------+--------+---------+------------+------------+-------------+------------------+--------------+

This is a simple check of your environment.

Before installation, ensure that your machines meet all requirements specified at

https://github.com/kubesphere/kubekey#requirements-and-recommendations

Continue this installation? [yes/no]: yes

08:51:47 CST success: [LocalHost]

08:51:47 CST [NodeBinariesModule] Download installation binaries

08:51:47 CST message: [localhost]

downloading amd64 kubeadm v1.23.10 …

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current

                                 Dload  Upload   Total   Spent    Left  Speed

100 43.1M  100 43.1M    0     0  1116k      0  0:00:39  0:00:39 --:--:-- 1210k

08:52:27 CST message: [localhost]

downloading amd64 kubelet v1.23.10 …

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current

                                 Dload  Upload   Total   Spent    Left  Speed

100  118M  100  118M    0     0  1032k      0  0:01:57  0:01:57 --:--:-- 1114k

08:54:25 CST message: [localhost]

downloading amd64 kubectl v1.23.10 …

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current

                                 Dload  Upload   Total   Spent    Left  Speed

100 44.4M  100 44.4M    0     0   928k      0  0:00:48  0:00:48 --:--:-- 1240k

08:55:14 CST message: [localhost]

downloading amd64 helm v3.9.0 …

08:55:14 CST message: [localhost]

helm is existed

08:55:14 CST message: [localhost]

downloading amd64 kubecni v0.9.1 …

08:55:15 CST message: [localhost]

kubecni is existed

08:55:15 CST message: [localhost]

downloading amd64 crictl v1.24.0 …

08:55:15 CST message: [localhost]

crictl is existed

08:55:15 CST message: [localhost]

downloading amd64 etcd v3.4.13 …

08:55:15 CST message: [localhost]

etcd is existed

08:55:15 CST message: [localhost]

downloading amd64 docker 20.10.8 …

08:55:15 CST message: [localhost]

docker is existed

08:55:15 CST success: [LocalHost]

08:55:15 CST [ConfigureOSModule] Get OS release

08:55:15 CST success: [ls.mirqmxz6]

08:55:15 CST [ConfigureOSModule] Prepare to init OS

08:55:16 CST success: [ls.mirqmxz6]

08:55:16 CST [ConfigureOSModule] Generate init os script

08:55:16 CST success: [ls.mirqmxz6]

08:55:16 CST [ConfigureOSModule] Exec init os script

08:55:17 CST stdout: [ls.mirqmxz6]

setenforce: SELinux is disabled

Disabled

net.ipv4.ip_forward = 1

net.bridge.bridge-nf-call-arptables = 1

net.bridge.bridge-nf-call-ip6tables = 1

net.bridge.bridge-nf-call-iptables = 1

net.ipv4.ip_local_reserved_ports = 30000-32767

vm.max_map_count = 262144

vm.swappiness = 1

fs.inotify.max_user_instances = 524288

kernel.pid_max = 65535

08:55:17 CST success: [ls.mirqmxz6]

08:55:17 CST [ConfigureOSModule] configure the ntp server for each node

08:55:17 CST skipped: [ls.mirqmxz6]

08:55:17 CST [KubernetesStatusModule] Get kubernetes cluster status

08:55:17 CST stdout: [ls.mirqmxz6]

v1.22.12

08:55:18 CST stdout: [ls.mirqmxz6]

ls.mirqmxz6   v1.22.12   [map[address:192.168.0.4 type:InternalIP] map[address:ls.mirqmxz6 type:Hostname]]

08:55:22 CST stdout: [ls.mirqmxz6]

I1115 08:55:21.671466   44452 version.go:255] remote version is much newer: v1.25.4; falling back to: stable-1.22

[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace

[upload-certs] Using certificate key:

39b69246644bfdfde3c0f4919b685689c78b30d9f088d0498b23ec5373dced6d

08:55:22 CST stdout: [ls.mirqmxz6]

secret/kubeadm-certs patched

08:55:23 CST stdout: [ls.mirqmxz6]

secret/kubeadm-certs patched

08:55:23 CST stdout: [ls.mirqmxz6]

secret/kubeadm-certs patched

08:55:23 CST stdout: [ls.mirqmxz6]

cugdwi.kg3i5hglpnx7e4kk

08:55:23 CST success: [ls.mirqmxz6]

08:55:23 CST [InstallContainerModule] Sync docker binaries

08:55:23 CST skipped: [ls.mirqmxz6]

08:55:23 CST [InstallContainerModule] Generate docker service

08:55:23 CST skipped: [ls.mirqmxz6]

08:55:23 CST [InstallContainerModule] Generate docker config

08:55:23 CST skipped: [ls.mirqmxz6]

08:55:23 CST [InstallContainerModule] Enable docker

08:55:23 CST skipped: [ls.mirqmxz6]

08:55:23 CST [InstallContainerModule] Add auths to container runtime

08:55:23 CST skipped: [ls.mirqmxz6]

08:55:23 CST [PullModule] Start to pull images on all nodes

08:55:23 CST message: [ls.mirqmxz6]

downloading image: kubesphere/pause:3.6

08:55:30 CST message: [ls.mirqmxz6]

downloading image: kubesphere/kube-apiserver:v1.23.10

08:56:05 CST message: [ls.mirqmxz6]

downloading image: kubesphere/kube-controller-manager:v1.23.10

08:56:36 CST message: [ls.mirqmxz6]

downloading image: kubesphere/kube-scheduler:v1.23.10

08:57:00 CST message: [ls.mirqmxz6]

downloading image: kubesphere/kube-proxy:v1.23.10

08:57:42 CST message: [ls.mirqmxz6]

downloading image: coredns/coredns:1.8.6

08:58:00 CST message: [ls.mirqmxz6]

downloading image: kubesphere/k8s-dns-node-cache:1.15.12

08:58:03 CST message: [ls.mirqmxz6]

downloading image: calico/kube-controllers:v3.23.2

08:58:07 CST message: [ls.mirqmxz6]

downloading image: calico/cni:v3.23.2

08:58:10 CST message: [ls.mirqmxz6]

downloading image: calico/node:v3.23.2

08:58:14 CST message: [ls.mirqmxz6]

downloading image: calico/pod2daemon-flexvol:v3.23.2

08:58:16 CST success: [ls.mirqmxz6]

08:58:16 CST [ETCDPreCheckModule] Get etcd status

08:58:16 CST stdout: [ls.mirqmxz6]

ETCD_NAME=etcd-ls.mIRQmxZ6

08:58:16 CST success: [ls.mirqmxz6]

08:58:16 CST [CertsModule] Fetch etcd certs

08:58:16 CST success: [ls.mirqmxz6]

08:58:16 CST [CertsModule] Generate etcd Certs

[certs] Using existing ca certificate authority

[certs] admin-ls.mirqmxz6 serving cert is signed for DNS names [etcd etcd.kube-system etcd.kube-system.svc etcd.kube-system.svc.cluster.local lb.kubesphere.local localhost ls.mirqmxz6] and IPs [127.0.0.1 ::1 192.168.0.4]

[certs] member-ls.mirqmxz6 serving cert is signed for DNS names [etcd etcd.kube-system etcd.kube-system.svc etcd.kube-system.svc.cluster.local lb.kubesphere.local localhost ls.mirqmxz6] and IPs [127.0.0.1 ::1 192.168.0.4]

[certs] node-ls.mirqmxz6 serving cert is signed for DNS names [etcd etcd.kube-system etcd.kube-system.svc etcd.kube-system.svc.cluster.local lb.kubesphere.local localhost ls.mirqmxz6] and IPs [127.0.0.1 ::1 192.168.0.4]

08:58:17 CST success: [LocalHost]

08:58:17 CST [CertsModule] Synchronize certs file

08:58:18 CST success: [ls.mirqmxz6]

08:58:18 CST [CertsModule] Synchronize certs file to master

08:58:18 CST skipped: [ls.mirqmxz6]

08:58:18 CST [InstallETCDBinaryModule] Install etcd using binary

08:58:19 CST success: [ls.mirqmxz6]

08:58:19 CST [InstallETCDBinaryModule] Generate etcd service

08:58:20 CST success: [ls.mirqmxz6]

08:58:20 CST [InstallETCDBinaryModule] Generate access address

08:58:20 CST success: [ls.mirqmxz6]

08:58:20 CST [ETCDConfigureModule] Health check on exist etcd

08:58:20 CST success: [ls.mirqmxz6]

08:58:20 CST [ETCDConfigureModule] Generate etcd.env config on new etcd

08:58:20 CST skipped: [ls.mirqmxz6]

08:58:20 CST [ETCDConfigureModule] Join etcd member

08:58:20 CST skipped: [ls.mirqmxz6]

08:58:20 CST [ETCDConfigureModule] Restart etcd

08:58:20 CST skipped: [ls.mirqmxz6]

08:58:20 CST [ETCDConfigureModule] Health check on new etcd

08:58:20 CST skipped: [ls.mirqmxz6]

08:58:20 CST [ETCDConfigureModule] Check etcd member

08:58:20 CST skipped: [ls.mirqmxz6]

08:58:20 CST [ETCDConfigureModule] Refresh etcd.env config on all etcd

08:58:20 CST success: [ls.mirqmxz6]

08:58:20 CST [ETCDConfigureModule] Health check on all etcd

08:58:20 CST success: [ls.mirqmxz6]

08:58:20 CST [ETCDBackupModule] Backup etcd data regularly

08:58:20 CST success: [ls.mirqmxz6]

08:58:20 CST [ETCDBackupModule] Generate backup ETCD service

08:58:20 CST success: [ls.mirqmxz6]

08:58:20 CST [ETCDBackupModule] Generate backup ETCD timer

08:58:20 CST success: [ls.mirqmxz6]

08:58:20 CST [ETCDBackupModule] Enable backup etcd service

08:58:20 CST success: [ls.mirqmxz6]

08:58:20 CST [InstallKubeBinariesModule] Synchronize kubernetes binaries

08:58:20 CST skipped: [ls.mirqmxz6]

08:58:20 CST [InstallKubeBinariesModule] Synchronize kubelet

08:58:20 CST skipped: [ls.mirqmxz6]

08:58:20 CST [InstallKubeBinariesModule] Generate kubelet service

08:58:20 CST skipped: [ls.mirqmxz6]

08:58:20 CST [InstallKubeBinariesModule] Enable kubelet service

08:58:20 CST skipped: [ls.mirqmxz6]

08:58:20 CST [InstallKubeBinariesModule] Generate kubelet env

08:58:20 CST skipped: [ls.mirqmxz6]

08:58:20 CST [InitKubernetesModule] Generate kubeadm config

08:58:20 CST skipped: [ls.mirqmxz6]

08:58:20 CST [InitKubernetesModule] Init cluster using kubeadm

08:58:20 CST skipped: [ls.mirqmxz6]

08:58:20 CST [InitKubernetesModule] Copy admin.conf to ~/.kube/config

08:58:20 CST skipped: [ls.mirqmxz6]

08:58:20 CST [InitKubernetesModule] Remove master taint

08:58:20 CST skipped: [ls.mirqmxz6]

08:58:20 CST [InitKubernetesModule] Add worker label

08:58:20 CST skipped: [ls.mirqmxz6]

08:58:20 CST [ClusterDNSModule] Generate coredns service

08:58:21 CST success: [ls.mirqmxz6]

08:58:21 CST [ClusterDNSModule] Override coredns service

08:58:21 CST stdout: [ls.mirqmxz6]

service "kube-dns" deleted

08:58:22 CST stdout: [ls.mirqmxz6]

service/coredns created

Warning: resource clusterroles/system:coredns is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.

clusterrole.rbac.authorization.k8s.io/system:coredns configured

08:58:22 CST success: [ls.mirqmxz6]

08:58:22 CST [ClusterDNSModule] Generate nodelocaldns

08:58:22 CST success: [ls.mirqmxz6]

08:58:22 CST [ClusterDNSModule] Deploy nodelocaldns

08:58:22 CST stdout: [ls.mirqmxz6]

serviceaccount/nodelocaldns created

daemonset.apps/nodelocaldns created

08:58:22 CST success: [ls.mirqmxz6]

08:58:22 CST [ClusterDNSModule] Generate nodelocaldns configmap

08:58:23 CST success: [ls.mirqmxz6]

08:58:23 CST [ClusterDNSModule] Apply nodelocaldns configmap

08:58:23 CST stdout: [ls.mirqmxz6]

configmap/nodelocaldns created

08:58:23 CST success: [ls.mirqmxz6]

08:58:23 CST [KubernetesStatusModule] Get kubernetes cluster status

08:58:23 CST stdout: [ls.mirqmxz6]

v1.22.12

08:58:23 CST stdout: [ls.mirqmxz6]

ls.mirqmxz6   v1.22.12   [map[address:192.168.0.4 type:InternalIP] map[address:ls.mirqmxz6 type:Hostname]]

08:58:25 CST stdout: [ls.mirqmxz6]

I1115 08:58:24.714655   49033 version.go:255] remote version is much newer: v1.25.4; falling back to: stable-1.22

[upload-certs] Storing the certificates in Secret “kubeadm-certs” in the “kube-system” Namespace

[upload-certs] Using certificate key:

cfd0d84016ca34e7f569a638b7a8db1d33d97263ad35df17c282e043dc3f9841

08:58:25 CST stdout: [ls.mirqmxz6]

secret/kubeadm-certs patched

08:58:25 CST stdout: [ls.mirqmxz6]

secret/kubeadm-certs patched

08:58:25 CST stdout: [ls.mirqmxz6]

secret/kubeadm-certs patched

08:58:25 CST stdout: [ls.mirqmxz6]

gt6drz.q41qm4cg1up2uv76

08:58:25 CST success: [ls.mirqmxz6]

08:58:25 CST [JoinNodesModule] Generate kubeadm config

08:58:25 CST skipped: [ls.mirqmxz6]

08:58:25 CST [JoinNodesModule] Join control-plane node

08:58:25 CST skipped: [ls.mirqmxz6]

08:58:25 CST [JoinNodesModule] Join worker node

08:58:25 CST skipped: [ls.mirqmxz6]

08:58:25 CST [JoinNodesModule] Copy admin.conf to ~/.kube/config

08:58:25 CST skipped: [ls.mirqmxz6]

08:58:25 CST [JoinNodesModule] Remove master taint

08:58:25 CST skipped: [ls.mirqmxz6]

08:58:25 CST [JoinNodesModule] Add worker label to master

08:58:25 CST skipped: [ls.mirqmxz6]

08:58:25 CST [JoinNodesModule] Synchronize kube config to worker

08:58:25 CST skipped: [ls.mirqmxz6]

08:58:25 CST [JoinNodesModule] Add worker label to worker

08:58:25 CST skipped: [ls.mirqmxz6]

08:58:25 CST [DeployNetworkPluginModule] Generate calico

08:58:26 CST success: [ls.mirqmxz6]

08:58:26 CST [DeployNetworkPluginModule] Deploy calico

08:58:27 CST stdout: [ls.mirqmxz6]

configmap/calico-config created

customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created

clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created

clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created

clusterrole.rbac.authorization.k8s.io/calico-node created

clusterrolebinding.rbac.authorization.k8s.io/calico-node created

daemonset.apps/calico-node created

serviceaccount/calico-node created

deployment.apps/calico-kube-controllers created

serviceaccount/calico-kube-controllers created

poddisruptionbudget.policy/calico-kube-controllers created

08:58:27 CST success: [ls.mirqmxz6]

08:58:27 CST [ConfigureKubernetesModule] Configure kubernetes

08:58:27 CST success: [ls.mirqmxz6]

08:58:27 CST [ChownModule] Chown user $HOME/.kube dir

08:58:27 CST success: [ls.mirqmxz6]

08:58:27 CST [AutoRenewCertsModule] Generate k8s certs renew script

08:58:27 CST success: [ls.mirqmxz6]

08:58:27 CST [AutoRenewCertsModule] Generate k8s certs renew service

08:58:27 CST success: [ls.mirqmxz6]

08:58:27 CST [AutoRenewCertsModule] Generate k8s certs renew timer

08:58:27 CST success: [ls.mirqmxz6]

08:58:27 CST [AutoRenewCertsModule] Enable k8s certs renew service

08:58:28 CST success: [ls.mirqmxz6]

08:58:28 CST [SaveKubeConfigModule] Save kube config as a configmap

08:58:28 CST success: [LocalHost]

08:58:28 CST [AddonsModule] Install addons

08:58:28 CST success: [LocalHost]

08:58:28 CST Pipeline[CreateClusterPipeline] execute successfully

Installation is complete.

Please check the result using the command:

        kubectl get pod -A

```
[root@ls backup]# kubectl get pod -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-69d878584c-ssxcz   0/1     Pending   0          17s
kube-system   calico-node-2cw4r                          0/1     Running   0          18s
kube-system   coredns-5495dd7c88-7bg4p                   0/1     Pending   0          16h
kube-system   coredns-5495dd7c88-brbnb                   0/1     Pending   0          16h
kube-system   kube-apiserver-ls.mirqmxz6                 1/1     Running   0          16h
kube-system   kube-controller-manager-ls.mirqmxz6        1/1     Running   0          16h
kube-system   kube-proxy-879hs                           1/1     Running   0          16h
kube-system   kube-scheduler-ls.mirqmxz6                 1/1     Running   0          16h
kube-system   nodelocaldns-qgc82                         1/1     Running   0          22s
[root@ls backup]#
```
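从上面的输出看,`coredns` 和 `calico-kube-controllers` 处于 Pending,`calico-node` 是 0/1 Running,且没有任何 `ks-installer` 相关 Pod,说明 KubeSphere 本身可能还没开始部署。下面是几条通用的排查命令草稿(标签 `k8s-app=kube-dns`、`k8s-app=calico-node` 是 kubeadm/Calico 的常规标签,`kubesphere-system` 命名空间假设 KubeSphere 按默认方式部署,实际环境请以自己集群为准):

```shell
# Pending 一般是调度器无法放置 Pod;describe 末尾的 Events
# 会给出具体原因(例如节点 taint,或 CNI 未就绪导致 network not ready)。
kubectl describe pod -n kube-system -l k8s-app=kube-dns | tail -n 20

# 单节点集群常见原因:control-plane 的 NoSchedule taint 未去掉。
kubectl describe node ls.mirqmxz6 | grep -i taint

# calico-node 0/1 Running:看它的日志确认 readiness 为什么不通过。
kubectl logs -n kube-system -l k8s-app=calico-node -c calico-node --tail=50

# 确认 KubeSphere 安装器是否被部署(上面的列表里没有 ks-installer Pod)。
kubectl get pod -n kubesphere-system
```

如果 `kubesphere-system` 命名空间为空,可以先解决 CNI/调度问题,再重新执行 kk 或单独应用 ks-installer。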