afrojewelz
Why would migrating the CRI need kk? kk currently has no CRI-migration feature, and I think this kind of operation may involve evicting or stopping and starting the containers on the node being migrated, all of which the user has to do manually.
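For context, the manual per-node flow being described usually looks something like the sketch below. This is not a kk feature, and flag names and file locations vary by version and distro, so treat it as an outline only:

```bash
# Rough per-node sketch of switching a node's CRI by hand (not something kk does for you).
# Assumes kubectl access to a working control plane and that containerd is already installed.
kubectl drain <node> --ignore-daemonsets --delete-emptydir-data   # evict the node's containers first
systemctl stop kubelet                                            # stop kubelet before touching the runtime
# Point the kubelet at the new CRI socket, e.g. via /var/lib/kubelet/kubeadm-flags.env:
#   --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock
systemctl restart containerd && systemctl start kubelet
kubectl uncordon <node>                                           # let workloads schedule back
```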
24sama Thanks for the reply. I'm not using kk to migrate the runtime.
I'm just using kk to deploy a newer version of k8s in my homelab, and manual migration isn't much of a problem for me, especially when the kubelet is running normally; then it really is fine. The issue is that the runtime is currently behaving strangely and not working properly, and kk's detection may have a case-sensitivity problem.
afrojewelz
According to the official K8s docs, SystemdCgroup here is capitalized:
https://kubernetes.io/docs/setup/production-environment/container-runtimes/#containerd-systemd
Is there any reference that supports the systemdCgroup spelling?
24sama The docs do indeed treat the key as case-sensitive, but for a deployment script I don't think adding some tolerance would be a bad thing.
Also, I've now mostly succeeded in switching to containerd manually, and I'm using a private key. Pushing the etcd certificates and so on went smoothly, but then scp failed again when pushing kubeadm.cfg and the kubelet cfg. My head is spinning T..T
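For reference, this is the key in question from the linked containerd documentation, plus a quick way to check what the running containerd actually has (a sketch; the section path assumes containerd's default CRI runc options):

```bash
# What kk greps for; -i here is only for diagnosis, the TOML key itself is case-sensitive:
containerd config dump | grep -i systemdcgroup
# Per the linked docs, /etc/containerd/config.toml should contain:
#   [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
#     SystemdCgroup = true
systemctl restart containerd   # apply after editing the file
```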
[ETCDConfigureModule] Health check on all etcd
14:14:57 +08 success: [pvesc]
14:14:57 +08 success: [ryzenpve]
14:14:57 +08 success: [h170i]
14:14:57 +08 [ETCDBackupModule] Backup etcd data regularly
14:15:04 +08 success: [pvesc]
14:15:04 +08 success: [ryzenpve]
14:15:04 +08 success: [h170i]
14:15:04 +08 [InstallKubeBinariesModule] Synchronize kubernetes binaries
14:15:20 +08 success: [pvesc]
14:15:20 +08 success: [ryzenpve]
14:15:20 +08 success: [neopve]
14:15:20 +08 success: [h170i]
14:15:20 +08 success: [qm77prx]
14:15:20 +08 [InstallKubeBinariesModule] Synchronize kubelet
14:15:20 +08 success: [pvesc]
14:15:20 +08 success: [ryzenpve]
14:15:20 +08 success: [h170i]
14:15:20 +08 success: [neopve]
14:15:20 +08 success: [qm77prx]
14:15:20 +08 [InstallKubeBinariesModule] Generate kubelet service
14:15:22 +08 success: [pvesc]
14:15:22 +08 success: [ryzenpve]
14:15:22 +08 success: [h170i]
14:15:22 +08 success: [neopve]
14:15:22 +08 success: [qm77prx]
14:15:22 +08 [InstallKubeBinariesModule] Enable kubelet service
14:15:23 +08 success: [pvesc]
14:15:23 +08 success: [h170i]
14:15:23 +08 success: [ryzenpve]
14:15:23 +08 success: [qm77prx]
14:15:23 +08 success: [neopve]
14:15:23 +08 [InstallKubeBinariesModule] Generate kubelet env
14:15:25 +08 success: [pvesc]
14:15:25 +08 success: [ryzenpve]
14:15:25 +08 success: [h170i]
14:15:25 +08 success: [neopve]
14:15:25 +08 success: [qm77prx]
14:15:25 +08 [InitKubernetesModule] Generate kubeadm config
14:15:26 +08 message: [pvesc]
scp file /root/kubekey/pvesc/kubeadm-config.yaml to remote /etc/kubernetes/kubeadm-config.yaml failed: Failed to exec command: sudo -E /bin/bash -c "mv -f /tmp/kubekey/etc/kubernetes/kubeadm-config.yaml /etc/kubernetes/kubeadm-config.yaml"
mv: cannot stat '/tmp/kubekey/etc/kubernetes/kubeadm-config.yaml': No such file or directory: Process exited with status 1
24sama root@pvesc:~# tree /tmp/kubekey/
/tmp/kubekey/
0 directories, 0 files
With lsof -D I couldn't find any process other than kk reading or writing this directory. There really is nothing under /tmp/kubekey, yet under ~/kubekey the cfg files and everything else are all there.
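One way to narrow this down is to watch the staging directory on the failing node while kk runs; the later logs show kk scp-ing files into /tmp/kubekey and then mv-ing them into place, so it helps to see whether the file ever arrives and when the directory gets wiped. A simple sketch:

```bash
# Run on the failing node during `kk create cluster`; kk stages files under /tmp/kubekey
# before mv-ing them to their final path, so the directory should briefly fill up.
watch -n 0.5 'ls -laR /tmp/kubekey 2>/dev/null'
```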
afrojewelz
Could it be a permissions issue? If you're using a privateKey, it's recommended to use the root user on the remote machines.
24sama
I'm on PVE Debian and run everything as root, so in theory there shouldn't be a permissions issue.
afrojewelz
Try ./kk delete cluster -f config.yaml, then delete the entire ~/kubekey directory and try again.
24sama
Tried that; the situation is the same, scp still doesn't go through.
afrojewelz
This is indeed very strange, because before this scp operation there are other scp operations as well, and the logs show all of them succeeding.
First, make sure the directory you run kk from is not under /tmp; second, make sure the ./kubekey directory under the working directory is clean; then add the --debug flag when running to get more detailed logs.
24sama
First, make sure the directory you run kk from is not under /tmp: check.
Second, make sure the ./kubekey directory under the working directory is clean: this varies, sometimes I run it from ~, sometimes from .kube, it isn't fixed.
Then add the --debug flag when running to see more detailed logs: OK, I'll add it and take a look at the detailed problem.
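For reference, a typical re-run with verbose logging might look like this (assuming the config file is called config.yaml and kk is executed from a normal working directory, not under /tmp):

```bash
rm -rf ./kubekey                            # start from a clean local work directory
./kk create cluster -f config.yaml --debug  # verbose logs for the failing step
```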
```
Failed to exec command: sudo -E /bin/bash -c "/usr/local/bin/kubectl --no-headers=true get nodes -o custom-columns=:metadata.name,:status.nodeInfo.kubeletVersion,:status.addresses"
The connection to the server lb.kubesphere.local:6443 was refused - did you specify the right host or port?: Process exited with status 1
16:02:32 +08 stdout: [pvesc]
The connection to the server lb.kubesphere.local:6443 was refused - did you specify the right host or port?
16:02:32 +08 message: [pvesc]
get kubernetes cluster info failed: Failed to exec command: sudo -E /bin/bash -c "/usr/local/bin/kubectl --no-headers=true get nodes -o custom-columns=:metadata.name,:status.nodeInfo.kubeletVersion,:status.addresses"
The connection to the server lb.kubesphere.local:6443 was refused - did you specify the right host or port?: Process exited with status 1
16:02:32 +08 check remote file exist: false
16:02:32 +08 check remote file exist: false
16:02:32 +08 [ERRO] exec remoteFileCommand sudo -E /bin/bash -c "ls -l /etc/kubernetes/admin.conf 2>/dev/null |wc -l" err: failed to get SSH session: ssh: unexpected packet in response to channel open: <nil>
16:02:32 +08 check remote file exist: false
16:02:32 +08 [ERRO] exec remoteFileCommand sudo -E /bin/bash -c "ls -l /etc/kubernetes/admin.conf 2>/dev/null |wc -l" err: failed to get SSH session: read tcp 192.168.50.6:48388->192.168.50.10:22: read: connection reset by peer
16:02:32 +08 check remote file exist: false
16:02:32 +08 failed: [pvesc]
16:02:32 +08 failed: [pvesc]
16:02:32 +08 success: [ryzenpve]
16:02:32 +08 success: [ryzenpve]
16:02:32 +08 success: [h170i]
16:02:32 +08 success: [h170i]
error: Pipeline[CreateClusterPipeline] execute failed: Module[KubernetesStatusModule] exec failed:
failed: [pvesc] [GetClusterStatus] exec failed after 3 retires: get kubernetes cluster info failed: Failed to exec command: sudo -E /bin/bash -c "/usr/local/bin/kubectl --no-headers=true get nodes -o custom-columns=:metadata.name,:status.nodeInfo.kubeletVersion,:status.addresses"
The connection to the server lb.kubesphere.local:6443 was refused - did you specify the right host or port?: Process exited with status 1
failed: [pvesc] [GetClusterStatus] exec failed after 3 retires: get kubernetes cluster info failed: Failed to exec command: sudo -E /bin/bash -c "/usr/local/bin/kubectl --no-headers=true get nodes -o custom-columns=:metadata.name,:status.nodeInfo.kubeletVersion,:status.addresses"
The connection to the server lb.kubesphere.local:6443 was refused - did you specify the right host or port?: Process exited with status 1
root@pvesc:~# kubectl --no-headers=true get nodes -o custom-columns=:metadata.name
The connection to the server lb.kubesphere.local:6443 was refused - did you specify the right host or port?
root@pvesc:~# journalctl -xeu kubelet
Jun 23 16:07:19 pvesc kubelet[63114]: E0623 16:07:19.065631 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:19 pvesc kubelet[63114]: E0623 16:07:19.166649 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:19 pvesc kubelet[63114]: E0623 16:07:19.267734 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:19 pvesc kubelet[63114]: E0623 16:07:19.368104 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:19 pvesc kubelet[63114]: E0623 16:07:19.416159 63114 remote_runtime.go:116] “RunPodSandbox from runtime service failed” err="rpc error: code = Unknown desc = failed to reserve sandbox name \"kube-controll>
Jun 23 16:07:19 pvesc kubelet[63114]: E0623 16:07:19.416188 63114 kuberuntime_sandbox.go:70] “Failed to create sandbox for pod” err="rpc error: code = Unknown desc = failed to reserve sandbox name \"kube-controller-ma>
Jun 23 16:07:19 pvesc kubelet[63114]: E0623 16:07:19.416207 63114 kuberuntime_manager.go:819] “CreatePodSandbox for pod failed” err="rpc error: code = Unknown desc = failed to reserve sandbox name \"kube-controller-ma>
Jun 23 16:07:19 pvesc kubelet[63114]: E0623 16:07:19.416251 63114 pod_workers.go:951] “Error syncing pod, skipping” err="failed to \“CreatePodSandbox\” for \"kube-controller-manager-pvesc_kube-system(bce721e0b9bbac731>
Jun 23 16:07:19 pvesc kubelet[63114]: E0623 16:07:19.468772 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:19 pvesc kubelet[63114]: E0623 16:07:19.569863 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:19 pvesc kubelet[63114]: E0623 16:07:19.658234 63114 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://lb.kubesphere.lo>
Jun 23 16:07:19 pvesc kubelet[63114]: E0623 16:07:19.670804 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:19 pvesc kubelet[63114]: E0623 16:07:19.771265 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:19 pvesc kubelet[63114]: E0623 16:07:19.872198 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:19 pvesc kubelet[63114]: E0623 16:07:19.972341 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:20 pvesc kubelet[63114]: E0623 16:07:20.073244 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:20 pvesc kubelet[63114]: E0623 16:07:20.173387 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:20 pvesc kubelet[63114]: E0623 16:07:20.274472 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:20 pvesc kubelet[63114]: E0623 16:07:20.374764 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:20 pvesc kubelet[63114]: E0623 16:07:20.475737 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:20 pvesc kubelet[63114]: E0623 16:07:20.576714 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:20 pvesc kubelet[63114]: E0623 16:07:20.677711 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:20 pvesc kubelet[63114]: E0623 16:07:20.778824 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:20 pvesc kubelet[63114]: E0623 16:07:20.879343 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:20 pvesc kubelet[63114]: E0623 16:07:20.980246 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:21 pvesc kubelet[63114]: E0623 16:07:21.080610 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:21 pvesc kubelet[63114]: E0623 16:07:21.181621 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:21 pvesc kubelet[63114]: E0623 16:07:21.281779 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:21 pvesc kubelet[63114]: E0623 16:07:21.382111 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:21 pvesc kubelet[63114]: E0623 16:07:21.482159 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:21 pvesc kubelet[63114]: E0623 16:07:21.574510 63114 kubelet.go:2376] “Container runtime network not ready” networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns err>
Jun 23 16:07:21 pvesc kubelet[63114]: E0623 16:07:21.582848 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:21 pvesc kubelet[63114]: E0623 16:07:21.683652 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:21 pvesc kubelet[63114]: E0623 16:07:21.784319 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:21 pvesc kubelet[63114]: E0623 16:07:21.885216 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:21 pvesc kubelet[63114]: E0623 16:07:21.945159 63114 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://lb.kubesphere.local:6443/apis/coordination.k8s.io/v1/namespaces>
Jun 23 16:07:21 pvesc kubelet[63114]: E0623 16:07:21.985794 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:22 pvesc kubelet[63114]: E0623 16:07:22.086781 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:22 pvesc kubelet[63114]: I0623 16:07:22.087500 63114 kubelet_node_status.go:71] “Attempting to register node” node=“pvesc”
Jun 23 16:07:22 pvesc kubelet[63114]: E0623 16:07:22.087775 63114 kubelet_node_status.go:93] “Unable to register node with API server” err="Post \“https://lb.kubesphere.local:6443/api/v1/nodes\”: dial tcp 192.168.50.6>
Jun 23 16:07:22 pvesc kubelet[63114]: E0623 16:07:22.187903 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:22 pvesc kubelet[63114]: E0623 16:07:22.287952 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:22 pvesc kubelet[63114]: E0623 16:07:22.389116 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:22 pvesc kubelet[63114]: E0623 16:07:22.490307 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:22 pvesc kubelet[63114]: E0623 16:07:22.591156 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:22 pvesc kubelet[63114]: E0623 16:07:22.692169 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:22 pvesc kubelet[63114]: E0623 16:07:22.792719 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:22 pvesc kubelet[63114]: E0623 16:07:22.893710 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:22 pvesc kubelet[63114]: E0623 16:07:22.994070 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:23 pvesc kubelet[63114]: E0623 16:07:23.095045 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:23 pvesc kubelet[63114]: E0623 16:07:23.196094 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:23 pvesc kubelet[63114]: E0623 16:07:23.296709 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:23 pvesc kubelet[63114]: E0623 16:07:23.397737 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:23 pvesc kubelet[63114]: E0623 16:07:23.498759 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:23 pvesc kubelet[63114]: E0623 16:07:23.599411 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
root@pvesc:~# systemctl status -l kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: active (running) since Thu 2022-06-23 16:01:20 +08; 6min ago
Docs: http://kubernetes.io/docs/
Main PID: 63114 (kubelet)
Tasks: 27 (limit: 9830)
Memory: 44.7M
CPU: 6.206s
CGroup: /system.slice/kubelet.service
└─63114 /usr/local/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cgroup-driver=systemd --contain>
Jun 23 16:07:31 pvesc kubelet[63114]: E0623 16:07:31.149366 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:31 pvesc kubelet[63114]: E0623 16:07:31.250069 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:31 pvesc kubelet[63114]: E0623 16:07:31.350598 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:31 pvesc kubelet[63114]: E0623 16:07:31.451677 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:31 pvesc kubelet[63114]: E0623 16:07:31.551702 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:31 pvesc kubelet[63114]: E0623 16:07:31.576248 63114 kubelet.go:2376] “Container runtime network not ready” networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns err>
Jun 23 16:07:31 pvesc kubelet[63114]: E0623 16:07:31.652201 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:31 pvesc kubelet[63114]: E0623 16:07:31.752493 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:31 pvesc kubelet[63114]: E0623 16:07:31.853365 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:31 pvesc kubelet[63114]: E0623 16:07:31.954240 63114 kubelet.go:2451] "Error getting node" err="node \"pvesc\" not found"
```
Let's ignore the scp to the other machines for now…
On the first master node where kk runs, the kubelet can start, but it keeps trying to bring up the CNI environment and never succeeds:
runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
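The "cni plugin not initialized" message is expected until a network plugin is actually deployed, so a quick sanity check on the node is whether any CNI config or binaries exist yet (a sketch):

```bash
ls -l /etc/cni/net.d/             # plugin config; written once calico/flannel pods are running
ls /opt/cni/bin/                  # CNI binaries; kk extracts cni-plugins-*.tgz here
journalctl -u containerd -n 100 | grep -i cni   # any containerd-side CNI errors
```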
afrojewelz
Did you perhaps not run kk delete cluster -f config.yaml first to clean up the environment?
24sama I did run it and cleaned up. I also manually checked whether those directories were still there, and rm -rf'd any that were.
afrojewelz
/etc/kubernetes/admin.conf
Is this file still there? After cleaning up the environment, it should no longer exist.
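A hedged checklist of the usual leftovers worth verifying on every node after a cleanup, before re-running kk (these are the standard kubeadm/etcd/CNI paths, nothing kk-specific):

```bash
ls /etc/kubernetes/admin.conf 2>/dev/null                                        # old kubeconfig
ls -d /etc/kubernetes /var/lib/kubelet /var/lib/etcd /etc/cni/net.d 2>/dev/null  # leftover state dirs
crictl ps -a 2>/dev/null                                                         # stale kube-* containers
```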
24sama That file is already gone.
24sama
After some more cleanup, the scp commands seem to go through now and the configs have mostly been pushed to the member nodes.
But the kubelet still won't start. One interesting detail: the pause image version listed in the cache is 3.6, while the pause tag in my logs is still 3.5; I don't know whether that matters.
Getting a multi-master HA homelab configured is really hard; I'm almost tempted to run kubeadm init by hand.
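On the pause 3.5 vs 3.6 question: the sandbox image the kubelet actually uses is whatever containerd's CRI config names, so a cached 3.6 alongside a 3.5 in the logs is usually harmless as long as the referenced tag is present or can be pulled. A quick way to compare (a sketch):

```bash
containerd config dump | grep sandbox_image   # what containerd will use for pod sandboxes
crictl images | grep pause                    # what the CRI sees in the image store
ctr -n k8s.io images ls | grep pause          # same store, via containerd's own CLI
```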
```
sudo -E /bin/bash -c "rm -rf /tmp/kubekey/"
00:40:56 +08 stdout: [h170i]
/root/.bashrc: line 22: /usr/local/bin/kubectl: Permission denied
/root/.bashrc: line 22: /usr/local/bin/kubectl: Permission denied
00:40:56 +08 command: [neopve]
sudo -E /bin/bash -c “rm -rf /tmp/kubekey/”
00:40:56 +08 stdout: [neopve]
/root/.bashrc: line 23: /usr/local/bin/kubectl: Permission denied
/root/.bashrc: line 23: /usr/local/bin/kubectl: Permission denied
00:40:56 +08 command: [h170i]
sudo -E /bin/bash -c “chmod +x /usr/local/bin/kubectl”
00:40:56 +08 stdout: [h170i]
/root/.bashrc: line 22: /usr/local/bin/kubectl: Permission denied
/root/.bashrc: line 22: /usr/local/bin/kubectl: Permission denied
00:40:56 +08 command: [ryzenpve]
sudo -E /bin/bash -c “tar -zxf /tmp/kubekey/cni-plugins-linux-amd64-v0.9.1.tgz -C /opt/cni/bin”
00:40:56 +08 command: [neopve]
sudo -E /bin/bash -c “chmod +x /usr/local/bin/kubectl”
00:40:56 +08 stdout: [neopve]
/root/.bashrc: line 23: /usr/local/bin/kubectl: Permission denied
/root/.bashrc: line 23: /usr/local/bin/kubectl: Permission denied
00:40:57 +08 scp local file /root/.kube/kubekey/helm/v3.6.3/amd64/helm to remote /tmp/kubekey/usr/local/bin/helm success
00:40:57 +08 scp local file /root/.kube/kubekey/helm/v3.6.3/amd64/helm to remote /tmp/kubekey/usr/local/bin/helm success
00:40:57 +08 scp local file /root/.kube/kubekey/kube/v1.22.10/amd64/kubectl to remote /tmp/kubekey/usr/local/bin/kubectl success
00:40:58 +08 command: [h170i]
sudo -E /bin/bash -c “mv -f /tmp/kubekey/usr/local/bin/helm /usr/local/bin/helm”
00:40:58 +08 command: [neopve]
sudo -E /bin/bash -c “mv -f /tmp/kubekey/usr/local/bin/helm /usr/local/bin/helm”
00:40:58 +08 command: [h170i]
sudo -E /bin/bash -c “rm -rf /tmp/kubekey/”
00:40:58 +08 command: [qm77prx]
sudo -E /bin/bash -c “mv -f /tmp/kubekey/usr/local/bin/kubectl /usr/local/bin/kubectl”
00:40:58 +08 command: [qm77prx]
sudo -E /bin/bash -c “rm -rf /tmp/kubekey/”
00:40:58 +08 stdout: [qm77prx]
/root/.bashrc: line 23: /usr/local/bin/kubectl: Permission denied
/root/.bashrc: line 23: /usr/local/bin/kubectl: Permission denied
00:40:58 +08 command: [h170i]
sudo -E /bin/bash -c “chmod +x /usr/local/bin/helm”
00:40:58 +08 command: [neopve]
sudo -E /bin/bash -c “rm -rf /tmp/kubekey/”
00:40:58 +08 command: [qm77prx]
sudo -E /bin/bash -c “chmod +x /usr/local/bin/kubectl”
00:40:58 +08 stdout: [qm77prx]
/root/.bashrc: line 23: /usr/local/bin/kubectl: Permission denied
/root/.bashrc: line 23: /usr/local/bin/kubectl: Permission denied
00:40:58 +08 command: [neopve]
sudo -E /bin/bash -c “chmod +x /usr/local/bin/helm”
00:40:59 +08 scp local file /root/.kube/kubekey/cni/v0.9.1/amd64/cni-plugins-linux-amd64-v0.9.1.tgz to remote /tmp/kubekey/cni-plugins-linux-amd64-v0.9.1.tgz success
00:41:00 +08 scp local file /root/.kube/kubekey/cni/v0.9.1/amd64/cni-plugins-linux-amd64-v0.9.1.tgz to remote /tmp/kubekey/cni-plugins-linux-amd64-v0.9.1.tgz success
00:41:00 +08 scp local file /root/.kube/kubekey/helm/v3.6.3/amd64/helm to remote /tmp/kubekey/usr/local/bin/helm success
00:41:00 +08 command: [h170i]
sudo -E /bin/bash -c “tar -zxf /tmp/kubekey/cni-plugins-linux-amd64-v0.9.1.tgz -C /opt/cni/bin”
00:41:01 +08 command: [qm77prx]
sudo -E /bin/bash -c “mv -f /tmp/kubekey/usr/local/bin/helm /usr/local/bin/helm”
00:41:01 +08 command: [neopve]
sudo -E /bin/bash -c “tar -zxf /tmp/kubekey/cni-plugins-linux-amd64-v0.9.1.tgz -C /opt/cni/bin”
00:41:01 +08 command: [qm77prx]
sudo -E /bin/bash -c “rm -rf /tmp/kubekey/”
00:41:01 +08 command: [qm77prx]
sudo -E /bin/bash -c “chmod +x /usr/local/bin/helm”
00:41:03 +08 scp local file /root/.kube/kubekey/cni/v0.9.1/amd64/cni-plugins-linux-amd64-v0.9.1.tgz to remote /tmp/kubekey/cni-plugins-linux-amd64-v0.9.1.tgz success
00:41:04 +08 command: [qm77prx]
sudo -E /bin/bash -c “tar -zxf /tmp/kubekey/cni-plugins-linux-amd64-v0.9.1.tgz -C /opt/cni/bin”
00:41:04 +08 success: [pvesc]
00:41:04 +08 success: [ryzenpve]
00:41:04 +08 success: [h170i]
00:41:04 +08 success: [neopve]
00:41:04 +08 success: [qm77prx]
00:41:04 +08 [InstallKubeBinariesModule] Synchronize kubelet
00:41:04 +08 command: [pvesc]
sudo -E /bin/bash -c “chmod +x /usr/local/bin/kubelet”
00:41:04 +08 command: [ryzenpve]
sudo -E /bin/bash -c “chmod +x /usr/local/bin/kubelet”
00:41:04 +08 command: [h170i]
sudo -E /bin/bash -c “chmod +x /usr/local/bin/kubelet”
00:41:04 +08 command: [neopve]
sudo -E /bin/bash -c “chmod +x /usr/local/bin/kubelet”
00:41:04 +08 command: [qm77prx]
sudo -E /bin/bash -c “chmod +x /usr/local/bin/kubelet”
00:41:04 +08 success: [pvesc]
00:41:04 +08 success: [ryzenpve]
00:41:04 +08 success: [h170i]
00:41:04 +08 success: [neopve]
00:41:04 +08 success: [qm77prx]
00:41:04 +08 [InstallKubeBinariesModule] Generate kubelet service
00:41:04 +08 scp local file /root/.kube/kubekey/pvesc/kubelet.service to remote /tmp/kubekey/etc/systemd/system/kubelet.service success
00:41:04 +08 scp local file /root/.kube/kubekey/ryzenpve/kubelet.service to remote /tmp/kubekey/etc/systemd/system/kubelet.service success
00:41:04 +08 scp local file /root/.kube/kubekey/h170i/kubelet.service to remote /tmp/kubekey/etc/systemd/system/kubelet.service success
00:41:04 +08 command: [pvesc]
sudo -E /bin/bash -c “mv -f /tmp/kubekey/etc/systemd/system/kubelet.service /etc/systemd/system/kubelet.service”
00:41:05 +08 scp local file /root/.kube/kubekey/neopve/kubelet.service to remote /tmp/kubekey/etc/systemd/system/kubelet.service success
00:41:05 +08 scp local file /root/.kube/kubekey/qm77prx/kubelet.service to remote /tmp/kubekey/etc/systemd/system/kubelet.service success
00:41:05 +08 command: [ryzenpve]
sudo -E /bin/bash -c “mv -f /tmp/kubekey/etc/systemd/system/kubelet.service /etc/systemd/system/kubelet.service”
00:41:05 +08 command: [pvesc]
sudo -E /bin/bash -c “rm -rf /tmp/kubekey/”
00:41:05 +08 command: [h170i]
sudo -E /bin/bash -c “mv -f /tmp/kubekey/etc/systemd/system/kubelet.service /etc/systemd/system/kubelet.service”
00:41:05 +08 command: [ryzenpve]
sudo -E /bin/bash -c “rm -rf /tmp/kubekey/”
00:41:05 +08 command: [h170i]
sudo -E /bin/bash -c “rm -rf /tmp/kubekey/”
00:41:05 +08 command: [neopve]
sudo -E /bin/bash -c “mv -f /tmp/kubekey/etc/systemd/system/kubelet.service /etc/systemd/system/kubelet.service”
00:41:05 +08 command: [qm77prx]
sudo -E /bin/bash -c “mv -f /tmp/kubekey/etc/systemd/system/kubelet.service /etc/systemd/system/kubelet.service”
00:41:05 +08 command: [neopve]
sudo -E /bin/bash -c “rm -rf /tmp/kubekey/”
00:41:05 +08 command: [qm77prx]
sudo -E /bin/bash -c “rm -rf /tmp/kubekey/”
00:41:05 +08 success: [pvesc]
00:41:05 +08 success: [ryzenpve]
00:41:05 +08 success: [h170i]
00:41:05 +08 success: [neopve]
00:41:05 +08 success: [qm77prx]
00:41:05 +08 [InstallKubeBinariesModule] Enable kubelet service
00:41:06 +08 command: [pvesc]
sudo -E /bin/bash -c “systemctl disable kubelet && systemctl enable kubelet && ln -snf /usr/local/bin/kubelet /usr/bin/kubelet”
00:41:06 +08 stdout: [pvesc]
Removed /etc/systemd/system/multi-user.target.wants/kubelet.service.
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /etc/systemd/system/kubelet.service.
00:41:06 +08 command: [h170i]
sudo -E /bin/bash -c “systemctl disable kubelet && systemctl enable kubelet && ln -snf /usr/local/bin/kubelet /usr/bin/kubelet”
00:41:06 +08 stdout: [h170i]
Removed /etc/systemd/system/multi-user.target.wants/kubelet.service.
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /etc/systemd/system/kubelet.service.
00:41:06 +08 command: [ryzenpve]
sudo -E /bin/bash -c “systemctl disable kubelet && systemctl enable kubelet && ln -snf /usr/local/bin/kubelet /usr/bin/kubelet”
00:41:06 +08 stdout: [ryzenpve]
Removed /etc/systemd/system/multi-user.target.wants/kubelet.service.
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /etc/systemd/system/kubelet.service.
00:41:06 +08 command: [neopve]
sudo -E /bin/bash -c “systemctl disable kubelet && systemctl enable kubelet && ln -snf /usr/local/bin/kubelet /usr/bin/kubelet”
00:41:06 +08 stdout: [neopve]
Removed /etc/systemd/system/multi-user.target.wants/kubelet.service.
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /etc/systemd/system/kubelet.service.
00:41:06 +08 command: [qm77prx]
sudo -E /bin/bash -c “systemctl disable kubelet && systemctl enable kubelet && ln -snf /usr/local/bin/kubelet /usr/bin/kubelet”
00:41:06 +08 stdout: [qm77prx]
Removed /etc/systemd/system/multi-user.target.wants/kubelet.service.
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /etc/systemd/system/kubelet.service.
00:41:06 +08 success: [pvesc]
00:41:06 +08 success: [h170i]
00:41:06 +08 success: [ryzenpve]
00:41:06 +08 success: [neopve]
00:41:06 +08 success: [qm77prx]
00:41:06 +08 [InstallKubeBinariesModule] Generate kubelet env
00:41:07 +08 scp local file /root/.kube/kubekey/pvesc/10-kubeadm.conf to remote /tmp/kubekey/etc/systemd/system/kubelet.service.d/10-kubeadm.conf success
00:41:07 +08 scp local file /root/.kube/kubekey/ryzenpve/10-kubeadm.conf to remote /tmp/kubekey/etc/systemd/system/kubelet.service.d/10-kubeadm.conf success
00:41:07 +08 scp local file /root/.kube/kubekey/h170i/10-kubeadm.conf to remote /tmp/kubekey/etc/systemd/system/kubelet.service.d/10-kubeadm.conf success
00:41:07 +08 scp local file /root/.kube/kubekey/neopve/10-kubeadm.conf to remote /tmp/kubekey/etc/systemd/system/kubelet.service.d/10-kubeadm.conf success
00:41:07 +08 command: [pvesc]
sudo -E /bin/bash -c “mv -f /tmp/kubekey/etc/systemd/system/kubelet.service.d/10-kubeadm.conf /etc/systemd/system/kubelet.service.d/10-kubeadm.conf”
00:41:07 +08 scp local file /root/.kube/kubekey/qm77prx/10-kubeadm.conf to remote /tmp/kubekey/etc/systemd/system/kubelet.service.d/10-kubeadm.conf success
00:41:07 +08 command: [ryzenpve]
sudo -E /bin/bash -c “mv -f /tmp/kubekey/etc/systemd/system/kubelet.service.d/10-kubeadm.conf /etc/systemd/system/kubelet.service.d/10-kubeadm.conf”
00:41:07 +08 command: [pvesc]
sudo -E /bin/bash -c “rm -rf /tmp/kubekey/”
00:41:07 +08 command: [h170i]
sudo -E /bin/bash -c “mv -f /tmp/kubekey/etc/systemd/system/kubelet.service.d/10-kubeadm.conf /etc/systemd/system/kubelet.service.d/10-kubeadm.conf”
00:41:07 +08 command: [ryzenpve]
sudo -E /bin/bash -c “rm -rf /tmp/kubekey/”
00:41:07 +08 command: [h170i]
sudo -E /bin/bash -c “rm -rf /tmp/kubekey/”
00:41:08 +08 command: [neopve]
sudo -E /bin/bash -c “mv -f /tmp/kubekey/etc/systemd/system/kubelet.service.d/10-kubeadm.conf /etc/systemd/system/kubelet.service.d/10-kubeadm.conf”
00:41:08 +08 command: [qm77prx]
sudo -E /bin/bash -c “mv -f /tmp/kubekey/etc/systemd/system/kubelet.service.d/10-kubeadm.conf /etc/systemd/system/kubelet.service.d/10-kubeadm.conf”
00:41:08 +08 command: [neopve]
sudo -E /bin/bash -c “rm -rf /tmp/kubekey/”
00:41:08 +08 command: [qm77prx]
sudo -E /bin/bash -c “rm -rf /tmp/kubekey/”
00:41:08 +08 success: [pvesc]
00:41:08 +08 success: [ryzenpve]
00:41:08 +08 success: [h170i]
00:41:08 +08 success: [neopve]
00:41:08 +08 success: [qm77prx]
00:41:08 +08 [InitKubernetesModule] Generate kubeadm config
00:41:08 +08 command: [pvesc]
sudo -E /bin/bash -c "containerd config dump | grep SystemdCgroup"
00:41:08 +08 stdout: [pvesc]
SystemdCgroup = true
00:41:08 +08 pauseTag: 3.5, corednsTag: 1.8.0
00:41:08 +08 pauseTag: 3.5, corednsTag: 1.8.0
00:41:08 +08 pauseTag: 3.5, corednsTag: 1.8.0
00:41:08 +08 pauseTag: 3.5, corednsTag: 1.8.0
00:41:08 +08 pauseTag: 3.5, corednsTag: 1.8.0
00:41:08 +08 command: [pvesc]
sudo -E /bin/bash -c "containerd config dump | grep SystemdCgroup"
00:41:08 +08 stdout: [pvesc]
SystemdCgroup = true
00:41:08 +08 Set kubeletConfiguration: %vmap[cgroupDriver:systemd clusterDNS:[169.254.25.10] clusterDomain:cluster.local containerLogMaxFiles:3 containerLogMaxSize:5Mi evictionHard:map[memory.available:5% pid.available:5%] evictionMaxPodGracePeriod:120 evictionPressureTransitionPeriod:30s evictionSoft:map[memory.available:10%] evictionSoftGracePeriod:map[memory.available:2m] featureGates:map[CSIStorageCapacity:true ExpandCSIVolumes:true RotateKubeletServerCertificate:true TTLAfterFinished:true] kubeReserved:map[cpu:200m memory:250Mi] maxPods:110 rotateCertificates:true systemReserved:map[cpu:200m memory:250Mi]]
00:41:09 +08 scp local file /root/.kube/kubekey/pvesc/kubeadm-config.yaml to remote /tmp/kubekey/etc/kubernetes/kubeadm-config.yaml success
00:41:09 +08 command: [pvesc]
sudo -E /bin/bash -c “mv -f /tmp/kubekey/etc/kubernetes/kubeadm-config.yaml /etc/kubernetes/kubeadm-config.yaml”
00:41:09 +08 command: [pvesc]
sudo -E /bin/bash -c “rm -rf /tmp/kubekey/*”
00:41:09 +08 skipped: [h170i]
00:41:09 +08 skipped: [ryzenpve]
00:41:09 +08 success: [pvesc]
00:41:09 +08 [InitKubernetesModule] Init cluster using kubeadm
00:45:11 +08 command: [pvesc]
sudo -E /bin/bash -c "/usr/local/bin/kubeadm init --config=/etc/kubernetes/kubeadm-config.yaml --ignore-preflight-errors=FileExisting-crictl"
00:45:11 +08 stdout: [pvesc]
W0627 00:41:09.878828 34506 utils.go:69] The recommended value for “clusterDNS” in “KubeletConfiguration” is: [192.168.50.10]; the provided value is: [169.254.25.10]
[init] Using Kubernetes version: v1.22.10
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using ‘kubeadm config images pull’
[certs] Using certificateDir folder “/etc/kubernetes/pki”
[certs] Generating “ca” certificate and key
[certs] Generating “apiserver” certificate and key
[certs] apiserver serving cert is signed for DNS names [h170i h170i.cluster.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.kubesphere.local localhost neopve neopve.cluster.local pvesc pvesc.cluster.local qm77prx qm77prx.cluster.local ryzenpve ryzenpve.cluster.local sdb2640m sdb2640m.cluster.local] and IPs [192.168.50.1 192.168.50.6 127.0.0.1 192.168.50.10 192.168.50.20 192.168.50.23 192.168.50.40 192.168.50.253]
[certs] Generating “apiserver-kubelet-client” certificate and key
[certs] Generating “front-proxy-ca” certificate and key
[certs] Generating “front-proxy-client” certificate and key
[certs] External etcd mode: Skipping etcd/ca certificate authority generation
[certs] External etcd mode: Skipping etcd/server certificate generation
[certs] External etcd mode: Skipping etcd/peer certificate generation
[certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation
[certs] External etcd mode: Skipping apiserver-etcd-client certificate generation
[certs] Generating “sa” key and public key
[kubeconfig] Using kubeconfig folder “/etc/kubernetes”
[kubeconfig] Writing “admin.conf” kubeconfig file
[kubeconfig] Writing “kubelet.conf” kubeconfig file
[kubeconfig] Writing “controller-manager.conf” kubeconfig file
[kubeconfig] Writing “scheduler.conf” kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file “/var/lib/kubelet/kubeadm-flags.env”
[kubelet-start] Writing kubelet configuration to file “/var/lib/kubelet/config.yaml”
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder “/etc/kubernetes/manifests”
[control-plane] Creating static Pod manifest for “kube-apiserver”
[control-plane] Creating static Pod manifest for “kube-controller-manager”
[control-plane] Creating static Pod manifest for “kube-scheduler”
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory “/etc/kubernetes/manifests”. This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
00:45:11 +08 stderr: [pvesc]
Failed to exec command: sudo -E /bin/bash -c "/usr/local/bin/kubeadm init --config=/etc/kubernetes/kubeadm-config.yaml --ignore-preflight-errors=FileExisting-crictl"
W0627 00:41:09.878828 34506 utils.go:69] The recommended value for “clusterDNS” in “KubeletConfiguration” is: [192.168.50.10]; the provided value is: [169.254.25.10]
[init] Using Kubernetes version: v1.22.10
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using ‘kubeadm config images pull’
[certs] Using certificateDir folder “/etc/kubernetes/pki”
[certs] Generating “ca” certificate and key
[certs] Generating “apiserver” certificate and key
[certs] apiserver serving cert is signed for DNS names [h170i h170i.cluster.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.kubesphere.local localhost neopve neopve.cluster.local pvesc pvesc.cluster.local qm77prx qm77prx.cluster.local ryzenpve ryzenpve.cluster.local sdb2640m sdb2640m.cluster.local] and IPs [192.168.50.1 192.168.50.6 127.0.0.1 192.168.50.10 192.168.50.20 192.168.50.23 192.168.50.40 192.168.50.253]
[certs] Generating “apiserver-kubelet-client” certificate and key
[certs] Generating “front-proxy-ca” certificate and key
[certs] Generating “front-proxy-client” certificate and key
[certs] External etcd mode: Skipping etcd/ca certificate authority generation
[certs] External etcd mode: Skipping etcd/server certificate generation
[certs] External etcd mode: Skipping etcd/peer certificate generation
[certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation
[certs] External etcd mode: Skipping apiserver-etcd-client certificate generation
[certs] Generating “sa” key and public key
[kubeconfig] Using kubeconfig folder “/etc/kubernetes”
[kubeconfig] Writing “admin.conf” kubeconfig file
[kubeconfig] Writing “kubelet.conf” kubeconfig file
[kubeconfig] Writing “controller-manager.conf” kubeconfig file
[kubeconfig] Writing “scheduler.conf” kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file “/var/lib/kubelet/kubeadm-flags.env”
[kubelet-start] Writing kubelet configuration to file “/var/lib/kubelet/config.yaml”
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder “/etc/kubernetes/manifests”
[control-plane] Creating static Pod manifest for “kube-apiserver”
[control-plane] Creating static Pod manifest for “kube-controller-manager”
[control-plane] Creating static Pod manifest for “kube-scheduler”
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory “/etc/kubernetes/manifests”. This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher: Process exited with status 1
```
afrojewelz
Generally speaking, installing on a clean environment is the simplest. With an environment that isn't clean, it's unavoidable that different people will have configured different things.
24sama
1. For the permissions, I recursively set 775 on all the locations bash_completion touches, and it still reports permission denied, which is confusing; I don't know which specific permission is missing. ls -la /usr/bin/kubectl shows rwxrwxr-x.
2. The systemctl status -l kubelet output is very long, but searching for the keyword fatal turns up nothing, just a pile of error|warn. The gist, as far as I can tell, is that the kubelet seems to start throwing errors as soon as it writes to the filesystem, and then the CNI components fail to initialize.
3. When I check with crictl ps -a there is nothing at all; neither the 3.5 nor the 3.6 pause image is running.
32816 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvesc.16fc633d6a89ff1c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"pvesc", UID:"pvesc", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"pvesc"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc0a66e0997ab031c, ext:5326209290, loc:(*time.Location)(0x77bb7c0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc0a66e0997ab031c, ext:5326209290, loc:(*time.Location)(0x77bb7c0)}}, Count:1, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://lb.kubesphere.local:6443/api/v1/namespaces/default/events": dial tcp 192.168.50.6:6443: connect: connection refused'(may retry after sleeping)
kubelet.go:2376] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
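On the "Permission denied" lines coming out of /root/.bashrc: they most likely come from a completion or alias line that invokes kubectl while kk has just copied the binary but not yet run chmod +x on it (the chmod appears right after in the log). Note also that the permissions above were checked on /usr/bin/kubectl, while kk installs to /usr/local/bin/kubectl. A quick verification sketch:

```bash
ls -la /usr/local/bin/kubectl /usr/bin/kubectl 2>/dev/null   # kk installs to /usr/local/bin
file /usr/local/bin/kubectl                                  # confirm it is a complete binary, not a partial copy
chmod 0755 /usr/local/bin/kubectl
sed -n '20,25p' /root/.bashrc                                # the line the error message points at
```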