• Installation & Deployment
  • Installing etcd with kk: etcd fails to start after etcd.env is updated

When creating a deployment issue, please follow the template below; the more information you provide, the easier it is to get a timely answer.
You can't expect others to spend half an hour answering a question you spent only one minute creating.
Before posting, click the Preview (👀) button next to Post Topic to make sure the post is formatted correctly.

Operating system information
e.g. VM/bare metal, CentOS 7.5/Ubuntu 18.04, 4C/8G

VM, Ubuntu 24.04

What is the problem
What is the error log? Screenshots are best.

16:55:14 CST [ETCDConfigureModule] Generate etcd.env config on new etcd

16:55:14 CST success: [k8s-master]

16:55:14 CST [ETCDConfigureModule] Refresh etcd.env config on all etcd

16:55:14 CST success: [k8s-master]

16:55:14 CST [ETCDConfigureModule] Restart etcd

16:55:14 CST stdout: [k8s-master]

Job for etcd.service failed because the control process exited with error code.

See "systemctl status etcd.service" and "journalctl -xeu etcd.service" for details.

16:55:14 CST message: [k8s-master]

start etcd failed: Failed to exec command: sudo -E /bin/bash -c "systemctl daemon-reload && systemctl restart etcd && systemctl enable etcd"

Cauchy

I've looked into it; the configuration in the generated etcd.env file is the problem.

ETCD_LISTEN_CLIENT_URLS

ETCD_LISTEN_PEER_URLS

The listen IPs in these two fields are the cause. After changing them to 0.0.0.0, etcd starts normally.
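For reference, a minimal sketch of what the two changed lines in etcd.env might look like after the fix described above (the port numbers and the rest of the file are assumptions based on etcd defaults; keep everything else kk generated as-is, and note that the advertise URLs must remain real node addresses, not 0.0.0.0):

```
# /etc/etcd.env (fragment) -- listen on all interfaces instead of one node IP
# Illustrative values only; 2379/2380 are the default etcd client/peer ports.
ETCD_LISTEN_CLIENT_URLS=https://0.0.0.0:2379
ETCD_LISTEN_PEER_URLS=https://0.0.0.0:2380
```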

But even after etcd starts, k8s still fails to come up.

[kubelet-start] Starting the kubelet

[control-plane] Using manifest folder "/etc/kubernetes/manifests"

[control-plane] Creating static Pod manifest for "kube-apiserver"

[control-plane] Creating static Pod manifest for "kube-controller-manager"

[control-plane] Creating static Pod manifest for "kube-scheduler"

[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s

[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
	timed out waiting for the condition

This error is likely caused by:
	- The kubelet is not running
	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	- 'systemctl status kubelet'
	- 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
	- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
	Once you have found the failing container, you can inspect its logs with:
	- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
Process exited with status 1
15:18:27 CST failed: [k8s-master]