Operating system information
VM: Ubuntu 22.04, 4C/8G
Kubernetes version information
v1.31.0, single node
Container runtime
It should be containerd; this is a freshly installed Ubuntu Server, set up entirely with KubeKey.
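For what it's worth, the runtime can be double-checked on the node like this (a minimal sketch; the CRI socket path is the same one that appears in the log below):

# print the containerd version that KubeKey installed
containerd --version
# query the CRI endpoint that kubelet/kubeadm will talk to
sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock version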
KubeSphere version information
None; I have not gotten to that step yet.
What is the problem
While installing by following this document:
https://dev-guide.kubesphere.io/extension-dev-guide/zh/quickstart/prepare-development-environment/
I ran:
kk create cluster --with-local-storage --with-kubernetes v1.31.0 --container-manager containerd -y
and it got stuck pulling images. The VM is networked in NAT mode, so I am not sure whether the VM simply cannot reach the external network, or whether something else is wrong, such as an OS setting I have not configured (I sketch the checks I plan to run after the log). The full error log follows.
07:05:31 UTC [InitKubernetesModule] Init cluster using kubeadm
07:30:38 UTC stdout: [hjx]
W0319 07:05:31.548513 4792 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
W0319 07:05:31.549308 4792 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
W0319 07:05:31.551375 4792 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
[init] Using Kubernetes version: v1.31.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
W0319 07:05:31.667968 4792 checks.go:846] detected that the sandbox image "kubesphere/pause:3.9" of the container runtime is inconsistent with that used by kubeadm.It is recommended to use "kubesphere/pause:3.10" as the CRI sandbox image.
[WARNING ImagePull]: failed to pull image kubesphere/kube-apiserver:v1.31.0: failed to pull image kubesphere/kube-apiserver:v1.31.0: failed to pull and unpack image "docker.io/kubesphere/kube-apiserver:v1.31.0": failed to resolve reference "docker.io/kubesphere/kube-apiserver:v1.31.0": failed to do request: Head "https://registry-1.docker.io/v2/kubesphere/kube-apiserver/manifests/v1.31.0": dial tcp 202.160.128.40:443: connect: connection refused
[WARNING ImagePull]: failed to pull image kubesphere/kube-controller-manager:v1.31.0: failed to pull image kubesphere/kube-controller-manager:v1.31.0: failed to pull and unpack image "docker.io/kubesphere/kube-controller-manager:v1.31.0": failed to resolve reference "docker.io/kubesphere/kube-controller-manager:v1.31.0": failed to do request: Head "https://registry-1.docker.io/v2/kubesphere/kube-controller-manager/manifests/v1.31.0": dial tcp 103.226.246.99:443: connect: connection refused
[WARNING ImagePull]: failed to pull image kubesphere/kube-scheduler:v1.31.0: failed to pull image kubesphere/kube-scheduler:v1.31.0: failed to pull and unpack image "docker.io/kubesphere/kube-scheduler:v1.31.0": failed to resolve reference "docker.io/kubesphere/kube-scheduler:v1.31.0": failed to do request: Head "https://registry-1.docker.io/v2/kubesphere/kube-scheduler/manifests/v1.31.0": dial tcp 69.63.186.31:443: connect: connection refused
[WARNING ImagePull]: failed to pull image kubesphere/kube-proxy:v1.31.0: failed to pull image kubesphere/kube-proxy:v1.31.0: failed to pull and unpack image "docker.io/kubesphere/kube-proxy:v1.31.0": failed to resolve reference "docker.io/kubesphere/kube-proxy:v1.31.0": failed to do request: Head "https://registry-1.docker.io/v2/kubesphere/kube-proxy/manifests/v1.31.0": dial tcp 208.101.21.43:443: connect: connection refused
[WARNING ImagePull]: failed to pull image coredns/coredns:1.9.3: failed to pull image coredns/coredns:1.9.3: failed to pull and unpack image "docker.io/coredns/coredns:1.9.3": failed to resolve reference "docker.io/coredns/coredns:1.9.3": failed to do request: Head "https://registry-1.docker.io/v2/coredns/coredns/manifests/1.9.3": dial tcp 199.59.149.204:443: connect: connection refused
[WARNING ImagePull]: failed to pull image kubesphere/pause:3.10: failed to pull image kubesphere/pause:3.10: failed to pull and unpack image "docker.io/kubesphere/pause:3.10": failed to resolve reference "docker.io/kubesphere/pause:3.10": failed to do request: Head "https://registry-1.docker.io/v2/kubesphere/pause/manifests/3.10": dial tcp 103.97.3.19:443: connect: connection refused
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [hjx hjx.cluster.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.kubesphere.local localhost] and IPs [10.233.0.1 192.168.137.129 127.0.0.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] External etcd mode: Skipping etcd/ca certificate authority generation
[certs] External etcd mode: Skipping etcd/server certificate generation
[certs] External etcd mode: Skipping etcd/peer certificate generation
[certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation
[certs] External etcd mode: Skipping apiserver-etcd-client certificate generation
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 502.093485ms
[api-check] Waiting for a healthy API server. This can take up to 4m0s
[api-check] The API server is not healthy after 4m0.001001287s
Unfortunately, an error has occurred:
context deadline exceeded
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
error execution phase wait-control-plane: could not initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
07:30:38 UTC stdout: [hjx]
[reset] Reading configuration from the cluster...
[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W0319 07:30:38.928749 4999 reset.go:123] [reset] Unable to fetch the kubeadm-config ConfigMap from cluster: failed to get config map: Get "https://lb.kubesphere.local:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config?timeout=10s": dial tcp 192.168.137.129:6443: connect: connection refused
[preflight] Running pre-flight checks
W0319 07:30:38.928910 4999 removeetcdmember.go:106] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] Deleted contents of the etcd data directory: /var/lib/etcd
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/super-admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.
The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
07:30:38 UTC message: [hjx]
init kubernetes cluster failed: Failed to exec command: sudo -E /bin/bash -c "/usr/local/bin/kubeadm init --config=/etc/kubernetes/kubeadm-config.yaml --ignore-preflight-errors=FileExisting-crictl,ImagePull"
(... identical kubeadm output repeated, omitted ...): Process exited with status 1
07:30:38 UTC retry: [hjx]
07:30:59 UTC stdout: [hjx]
W0319 07:30:44.029989 5011 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
W0319 07:30:44.030931 5011 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
W0319 07:30:44.032528 5011 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
[init] Using Kubernetes version: v1.31.0
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR ExternalEtcdVersion]: Get "https://192.168.137.129:2379/version": dial tcp 192.168.137.129:2379: connect: connection refused
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
07:30:59 UTC stdout: [hjx]
[preflight] Running pre-flight checks
W0319 07:30:59.200384 5040 removeetcdmember.go:106] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] Deleted contents of the etcd data directory: /var/lib/etcd
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/super-admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.
The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
07:30:59 UTC message: [hjx]
init kubernetes cluster failed: Failed to exec command: sudo -E /bin/bash -c "/usr/local/bin/kubeadm init --config=/etc/kubernetes/kubeadm-config.yaml --ignore-preflight-errors=FileExisting-crictl,ImagePull"
(... identical kubeadm output repeated, omitted ...): Process exited with status 1
07:30:59 UTC retry: [hjx]
07:31:19 UTC stdout: [hjx]
W0319 07:31:04.299754 5056 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
W0319 07:31:04.301883 5056 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
W0319 07:31:04.304394 5056 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
[init] Using Kubernetes version: v1.31.0
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR ExternalEtcdVersion]: Get "https://192.168.137.129:2379/version": dial tcp 192.168.137.129:2379: connect: connection refused
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
07:31:19 UTC stdout: [hjx]
[preflight] Running pre-flight checks
W0319 07:31:19.488250 5084 removeetcdmember.go:106] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] Deleted contents of the etcd data directory: /var/lib/etcd
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/super-admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.
The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
07:31:19 UTC message: [hjx]
init kubernetes cluster failed: Failed to exec command: sudo -E /bin/bash -c "/usr/local/bin/kubeadm init --config=/etc/kubernetes/kubeadm-config.yaml --ignore-preflight-errors=FileExisting-crictl,ImagePull"
(... identical kubeadm output repeated, omitted ...): Process exited with status 1
07:31:19 UTC failed: [hjx]
error: Pipeline[CreateClusterPipeline] execute failed: Module[InitKubernetesModule] exec failed:
failed: [hjx] [KubeadmInit] exec failed after 3 retries: init kubernetes cluster failed: Failed to exec command: sudo -E /bin/bash -c "/usr/local/bin/kubeadm init --config=/etc/kubernetes/kubeadm-config.yaml --ignore-preflight-errors=FileExisting-crictl,ImagePull"
(... identical kubeadm output repeated, omitted ...): Process exited with status 1
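Supplement 1: to narrow down whether this is a NAT/connectivity problem, the plan is to test reachability of Docker Hub directly from the node. A minimal sketch, assuming curl and nslookup are available on the VM:

# does registry-1.docker.io resolve to a sane address?
nslookup registry-1.docker.io
# raw HTTPS reachability; an HTTP 401 response means connectivity is fine,
# while "connection refused" or a timeout points at the network side
curl -v https://registry-1.docker.io/v2/
# try pulling one of the failing images by hand through the CRI
sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock pull docker.io/kubesphere/pause:3.10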
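Supplement 2: if the node really cannot reach docker.io, my understanding is that KubeKey can pull through a registry mirror declared in a cluster config file instead of the one-line command above. A sketch of the relevant section, where the mirror URL is only a placeholder I would substitute:

# generate a config file first: kk create config --with-kubernetes v1.31.0
spec:
  registry:
    privateRegistry: ""
    registryMirrors: ["https://mirror.example.com"]   # placeholder, not a real mirror
    insecureRegistries: []
# then create the cluster from it:
# kk create cluster -f config-sample.yaml --with-local-storage -y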