Same here: creating the cluster with docker as the container-manager has worked for me, but with containerd it always fails.
How to switch KubeSphere v3.2.1 from a docker environment to a containerd environment
24sama If docker info on all my nodes shows the cgroup driver is systemd,
has dockershim actually been removed successfully or not?
If it hasn't, is it better to migrate to cri-dockerd or to containerd? I feel there is nothing really wrong with my clusterconfig; rather, kk deploys a bare Kubernetes and then errors out when pushing the clusterconfig to the member nodes. kk itself fails right at the step where it scp's the kube-related cfg files.
But the SSH keys for all my member nodes have been copied, and manual scp works without a password.
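A quick way to double-check that (192.168.50.10 is just one of the member nodes as an example; BatchMode makes ssh fail instead of prompting if keys are not actually set up):
ssh -o BatchMode=yes root@192.168.50.10 true && echo ssh-ok
scp -o BatchMode=yes /etc/hostname root@192.168.50.10:/tmp/ && echo scp-ok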
Let me paste my clustercfg anyway:
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: pvesc, address: 192.168.50.6, internalAddress: 192.168.50.6, user: root, password: "123456789"}
  - {name: h170i, address: 192.168.50.10, internalAddress: 192.168.50.10, user: root, password: "123456789"}
  - {name: ryzenpve, address: 192.168.50.20, internalAddress: 192.168.50.20, user: root, password: "123456789"}
  - {name: neopve, address: 192.168.50.23, internalAddress: 192.168.50.23, user: root, password: "123456789"}
  - {name: qm77prx, address: 192.168.50.40, internalAddress: 192.168.50.40, user: root, password: "123456789"}
  roleGroups:
    etcd:
    - pvesc
    - ryzenpve
    - h170i
    control-plane:
    - pvesc
    - ryzenpve
    - h170i
    master:
    - pvesc
    - ryzenpve
    - h170i
    worker:
    - pvesc
    - h170i
    - ryzenpve
    - neopve
    - qm77prx
  controlPlaneEndpoint:
    # Internal loadbalancer for apiservers
    internalLoadbalancer: haproxy
    domain: lb.kubesphere.local
    address: ""
    port: 6443
  system:
    ntpServers: # The ntp servers of chrony.
    - 192.168.50.253
    - time1.cloud.tencent.com
    - ntp.aliyun.com
    timezone: "Asia/Singapore"
  kubernetes:
    version: v1.22.10
    clusterName: pvesc.lan
    autoRenewCerts: true
    cgroupDriver: systemd
    containerManager: containerd
    proxyMode: ipvs # Specify which proxy mode to use. [Default: ipvs]
    featureGates: # enable featureGates, [Default: {"ExpandCSIVolumes":true,"RotateKubeletServerCertificate": true,"CSIStorageCapacity":true, "TTLAfterFinished":true}]
      CSIStorageCapacity: true
      ExpandCSIVolumes: true
      RotateKubeletServerCertificate: true
      TTLAfterFinished: true
  etcd:
    type: kubekey
  network:
    plugin: calico
    calico:
      ipipMode: Always # IPIP Mode to use for the IPv4 POOL created at start up. If set to a value other than Never, vxlanMode should be set to "Never". [Always | CrossSubnet | Never] [Default: Always]
      vxlanMode: Never # VXLAN Mode to use for the IPv4 POOL created at start up. If set to a value other than Never, ipipMode should be set to "Never". [Always | CrossSubnet | Never] [Default: Never]
      vethMTU: 1440
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 192.168.50.0/24
    # multus support. https://github.com/k8snetworkplumbingwg/multus-cni
    multusCNI:
      enabled: false
  registry:
    privateRegistry: ""
    namespaceOverride: ""
    registryMirrors: [
      "https://docker.mirrors.ustc.edu.cn",
      "https://docker.io"
    ]
    insecureRegistries: []
  addons:
  - name: glusterfs
    namespace: kube-system
    sources:
      yaml:
        path:
        - /root/.kube/gluster-heketi-storageclass.yaml
And here is the container runtime info from all my member nodes:
root@pvesc:~# docker info |grep -E "runtime|river"
Storage Driver: zfs
Logging Driver: json-file
Cgroup Driver: systemd
Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
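The cgroup driver kubelet itself is configured with can be cross-checked too (a sketch; /var/lib/kubelet/config.yaml is the default location, assuming the kubelet config has already been generated):
root@pvesc:~# grep -i cgroupdriver /var/lib/kubelet/config.yaml
cgroupDriver: systemd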
- In the status section of kubectl describe node {{ NODENAME }} you can see whether that machine's CRI is connected to docker or containerd (see the quick check below).
- If you want to use a privateKey to connect to the remote machines, configure them in hosts like this:
{name: node3, address: 172.16.0.4, internalAddress: 172.16.0.4, privateKeyPath: "~/.ssh/id_rsa"}
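For example (pvesc as a sample node name), either of these shows which runtime the kubelet is reporting:
kubectl describe node pvesc | grep "Container Runtime Version"
kubectl get nodes -o wide    # the CONTAINER-RUNTIME column shows docker://x.y.z or containerd://x.y.z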
24sama Thanks for the reply, but kubectl can only be used once kubelet is able to run.
The current state is: Failed to get container runtime cgroup driver.: Failed to exec command: sudo -E /bin/bash -c "containerd config dump | grep SystemdCgroup"
: Process exited with status 1
So even though I have already configured containerd and the systemd cgroup driver, kubelet still won't start, complaining that cgroupfs and systemd don't match.
Perhaps
containerd config dump | grep -E "SystemdCgroup"
needs to be changed to
containerd config dump | grep -Ei "systemdCgroup"
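i.e. something like this (the expected line assumes the CRI runc options were actually written; the indentation is whatever containerd config dump prints):
root@pvesc:~# containerd config dump | grep -iE "systemdcgroup"
            SystemdCgroup = true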
afrojewelz
Why would migrating the CRI need kk at all? kk currently has no CRI-migration feature, and I think that kind of operation may involve evicting or stopping and restarting the containers on the node being migrated, all of which the user has to do manually.
24sama Thanks for the reply, but I'm not using kk to migrate the runtime.
I'm just using kk to deploy a newer k8s in my homelab. Doing the migration by hand is no great difficulty for me, especially once kubelet is working normally; then there's really no problem. It's just that the runtime is behaving strangely right now and not working, and kk's check may have a case-sensitivity problem.
afrojewelz
According to the official K8s documentation, SystemdCgroup here starts with a capital letter:
https://kubernetes.io/docs/setup/production-environment/container-runtimes/#containerd-systemd
Is there any reference for the systemdCgroup form?
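For reference, that page has you set the capitalized key in /etc/containerd/config.toml; a quick way to confirm what is actually on disk (grep -B1 also shows the enclosing table header):
grep -B1 "SystemdCgroup" /etc/containerd/config.toml
#   [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
#     SystemdCgroup = true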
24sama The docs are indeed case-sensitive about that key, but for a deployment tool I don't think a little extra tolerance would be a bad thing.
Anyway, I have now mostly managed the switch to containerd by hand, and the private key is working too. Pushing the etcd certificates and so on went smoothly, but then scp failed again while pushing kubeadm.cfg and the kubelet cfg. What a headache. T..T
[ETCDConfigureModule] Health check on all etcd
14:14:57 +08 success: [pvesc]
14:14:57 +08 success: [ryzenpve]
14:14:57 +08 success: [h170i]
14:14:57 +08 [ETCDBackupModule] Backup etcd data regularly
14:15:04 +08 success: [pvesc]
14:15:04 +08 success: [ryzenpve]
14:15:04 +08 success: [h170i]
14:15:04 +08 [InstallKubeBinariesModule] Synchronize kubernetes binaries
14:15:20 +08 success: [pvesc]
14:15:20 +08 success: [ryzenpve]
14:15:20 +08 success: [neopve]
14:15:20 +08 success: [h170i]
14:15:20 +08 success: [qm77prx]
14:15:20 +08 [InstallKubeBinariesModule] Synchronize kubelet
14:15:20 +08 success: [pvesc]
14:15:20 +08 success: [ryzenpve]
14:15:20 +08 success: [h170i]
14:15:20 +08 success: [neopve]
14:15:20 +08 success: [qm77prx]
14:15:20 +08 [InstallKubeBinariesModule] Generate kubelet service
14:15:22 +08 success: [pvesc]
14:15:22 +08 success: [ryzenpve]
14:15:22 +08 success: [h170i]
14:15:22 +08 success: [neopve]
14:15:22 +08 success: [qm77prx]
14:15:22 +08 [InstallKubeBinariesModule] Enable kubelet service
14:15:23 +08 success: [pvesc]
14:15:23 +08 success: [h170i]
14:15:23 +08 success: [ryzenpve]
14:15:23 +08 success: [qm77prx]
14:15:23 +08 success: [neopve]
14:15:23 +08 [InstallKubeBinariesModule] Generate kubelet env
14:15:25 +08 success: [pvesc]
14:15:25 +08 success: [ryzenpve]
14:15:25 +08 success: [h170i]
14:15:25 +08 success: [neopve]
14:15:25 +08 success: [qm77prx]
14:15:25 +08 [InitKubernetesModule] Generate kubeadm config
14:15:26 +08 message: [pvesc]
scp file /root/kubekey/pvesc/kubeadm-config.yaml to remote /etc/kubernetes/kubeadm-config.yaml failed: Failed to exec command: sudo -E /bin/bash -c "mv -f /tmp/kubekey/etc/kubernetes/kubeadm-config.yaml /etc/kubernetes/kubeadm-config.yaml"
mv: cannot stat '/tmp/kubekey/etc/kubernetes/kubeadm-config.yaml': No such file or directory: Process exited with status 1
- If containerd doesn't care about the case of that key in its configuration, then fair enough. But if containerd is case-sensitive about the field, a lowercase variant would be a parsing error.
- Does the /tmp/kubekey/ directory on the remote machine contain the kubeadm-config file? If the /tmp directory has been cleaned up by some other program, the file won't be found (see the check below).
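For example, on Debian you can check whether a tmpfiles rule or timer purges /tmp (a sketch; the exact paths vary by distro):
systemctl list-timers | grep -i tmpfiles
cat /usr/lib/tmpfiles.d/tmp.conf 2>/dev/null
ls -ld /tmp/kubekey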
24sama root@pvesc:~# tree /tmp/kubekey/
/tmp/kubekey/
0 directories, 0 files
With lsof -D I couldn't find any process other than kk reading or writing this directory. There really is nothing under /tmp/kubekey, but everything (the cfgs and so on) is there under ~/kubekey.
afrojewelz
Could it be a permissions issue? If you're using a privateKey, I'd suggest using the root user on the remote machines.
24sama
I'm on PVE Debian and everything runs as root, so in principle there shouldn't be any permission problem.
afrojewelz
Try ./kk delete cluster -f config.yaml, then delete the entire ~/kubekey directory and run it again.
24sama
Tried that; it's still the same, the scp doesn't go through.
afrojewelz
Your case is indeed strange, because there are other scp operations before this one, and the logs show they all succeeded.
First make sure the directory you run kk from is not under /tmp; second, make sure the ./kubekey directory under that working directory is clean; then add the --debug flag when you run it to get more detailed logs (see the example below).
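Something like this (config-sample.yaml stands in for your actual config file name):
cd /root                       # anywhere outside /tmp
rm -rf ./kubekey               # start from a clean work directory
./kk create cluster -f config-sample.yaml --debug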
24sama
Make sure the directory kk runs from is not under /tmp: check.
Make sure the ./kubekey directory under the working directory is clean: that varies; sometimes I run it from ~, sometimes from under .kube, it isn't fixed.
Add the --debug flag for more detailed logs: OK, I'll add it and see what the detailed output shows.
`Failed to exec command: sudo -E /bin/bash -c "/usr/local/bin/kubectl --no-headers=true get nodes -o custom-columns=:metadata.name,:status.nodeInfo.kubeletVersion,:status.addresses"
The connection to the server lb.kubesphere.local:6443 was refused - did you specify the right host or port?: Process exited with status 1
16:02:32 +08 stdout: [pvesc]
The connection to the server lb.kubesphere.local:6443 was refused - did you specify the right host or port?
16:02:32 +08 message: [pvesc]
get kubernetes cluster info failed: Failed to exec command: sudo -E /bin/bash -c "/usr/local/bin/kubectl --no-headers=true get nodes -o custom-columns=:metadata.name,:status.nodeInfo.kubeletVersion,:status.addresses"
The connection to the server lb.kubesphere.local:6443 was refused - did you specify the right host or port?: Process exited with status 1
16:02:32 +08 check remote file exist: false
16:02:32 +08 check remote file exist: false
16:02:32 +08 [ERRO] exec remoteFileCommand sudo -E /bin/bash -c "ls -l /etc/kubernetes/admin.conf 2>/dev/null |wc -l" err: failed to get SSH session: ssh: unexpected packet in response to channel open: <nil>
16:02:32 +08 check remote file exist: false
16:02:32 +08 [ERRO] exec remoteFileCommand sudo -E /bin/bash -c "ls -l /etc/kubernetes/admin.conf 2>/dev/null |wc -l" err: failed to get SSH session: read tcp 192.168.50.6:48388->192.168.50.10:22: read: connection reset by peer
16:02:32 +08 check remote file exist: false
16:02:32 +08 failed: [pvesc]
16:02:32 +08 failed: [pvesc]
16:02:32 +08 success: [ryzenpve]
16:02:32 +08 success: [ryzenpve]
16:02:32 +08 success: [h170i]
16:02:32 +08 success: [h170i]
error: Pipeline[CreateClusterPipeline] execute failed: Module[KubernetesStatusModule] exec failed:
failed: [pvesc] [GetClusterStatus] exec failed after 3 retires: get kubernetes cluster info failed: Failed to exec command: sudo -E /bin/bash -c "/usr/local/bin/kubectl --no-headers=true get nodes -o custom-columns=:metadata.name,:status.nodeInfo.kubeletVersion,:status.addresses"
The connection to the server lb.kubesphere.local:6443 was refused - did you specify the right host or port?: Process exited with status 1
failed: [pvesc] [GetClusterStatus] exec failed after 3 retires: get kubernetes cluster info failed: Failed to exec command: sudo -E /bin/bash -c "/usr/local/bin/kubectl --no-headers=true get nodes -o custom-columns=:metadata.name,:status.nodeInfo.kubeletVersion,:status.addresses"
The connection to the server lb.kubesphere.local:6443 was refused - did you specify the right host or port?: Process exited with status 1
root@pvesc:~# kubectl --no-headers=true get nodes -o custom-columns=:metadata.name
The connection to the server lb.kubesphere.local:6443 was refused - did you specify the right host or port?
root@pvesc:~# journalctl -xeu kubelet
Jun 23 16:07:19 pvesc kubelet[63114]: E0623 16:07:19.065631 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:19 pvesc kubelet[63114]: E0623 16:07:19.166649 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:19 pvesc kubelet[63114]: E0623 16:07:19.267734 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:19 pvesc kubelet[63114]: E0623 16:07:19.368104 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:19 pvesc kubelet[63114]: E0623 16:07:19.416159 63114 remote_runtime.go:116] “RunPodSandbox from runtime service failed” err="rpc error: code = Unknown desc = failed to reserve sandbox name \"kube-controll>
Jun 23 16:07:19 pvesc kubelet[63114]: E0623 16:07:19.416188 63114 kuberuntime_sandbox.go:70] “Failed to create sandbox for pod” err="rpc error: code = Unknown desc = failed to reserve sandbox name \"kube-controller-ma>
Jun 23 16:07:19 pvesc kubelet[63114]: E0623 16:07:19.416207 63114 kuberuntime_manager.go:819] “CreatePodSandbox for pod failed” err="rpc error: code = Unknown desc = failed to reserve sandbox name \"kube-controller-ma>
Jun 23 16:07:19 pvesc kubelet[63114]: E0623 16:07:19.416251 63114 pod_workers.go:951] “Error syncing pod, skipping” err="failed to \“CreatePodSandbox\” for \"kube-controller-manager-pvesc_kube-system(bce721e0b9bbac731>
Jun 23 16:07:19 pvesc kubelet[63114]: E0623 16:07:19.468772 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:19 pvesc kubelet[63114]: E0623 16:07:19.569863 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:19 pvesc kubelet[63114]: E0623 16:07:19.658234 63114 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://lb.kubesphere.lo>
Jun 23 16:07:19 pvesc kubelet[63114]: E0623 16:07:19.670804 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:19 pvesc kubelet[63114]: E0623 16:07:19.771265 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:19 pvesc kubelet[63114]: E0623 16:07:19.872198 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:19 pvesc kubelet[63114]: E0623 16:07:19.972341 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:20 pvesc kubelet[63114]: E0623 16:07:20.073244 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:20 pvesc kubelet[63114]: E0623 16:07:20.173387 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:20 pvesc kubelet[63114]: E0623 16:07:20.274472 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:20 pvesc kubelet[63114]: E0623 16:07:20.374764 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:20 pvesc kubelet[63114]: E0623 16:07:20.475737 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:20 pvesc kubelet[63114]: E0623 16:07:20.576714 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:20 pvesc kubelet[63114]: E0623 16:07:20.677711 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:20 pvesc kubelet[63114]: E0623 16:07:20.778824 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:20 pvesc kubelet[63114]: E0623 16:07:20.879343 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:20 pvesc kubelet[63114]: E0623 16:07:20.980246 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:21 pvesc kubelet[63114]: E0623 16:07:21.080610 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:21 pvesc kubelet[63114]: E0623 16:07:21.181621 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:21 pvesc kubelet[63114]: E0623 16:07:21.281779 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:21 pvesc kubelet[63114]: E0623 16:07:21.382111 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:21 pvesc kubelet[63114]: E0623 16:07:21.482159 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:21 pvesc kubelet[63114]: E0623 16:07:21.574510 63114 kubelet.go:2376] “Container runtime network not ready” networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns err>
Jun 23 16:07:21 pvesc kubelet[63114]: E0623 16:07:21.582848 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:21 pvesc kubelet[63114]: E0623 16:07:21.683652 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:21 pvesc kubelet[63114]: E0623 16:07:21.784319 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:21 pvesc kubelet[63114]: E0623 16:07:21.885216 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:21 pvesc kubelet[63114]: E0623 16:07:21.945159 63114 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://lb.kubesphere.local:6443/apis/coordination.k8s.io/v1/namespaces>
Jun 23 16:07:21 pvesc kubelet[63114]: E0623 16:07:21.985794 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:22 pvesc kubelet[63114]: E0623 16:07:22.086781 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:22 pvesc kubelet[63114]: I0623 16:07:22.087500 63114 kubelet_node_status.go:71] “Attempting to register node” node=“pvesc”
Jun 23 16:07:22 pvesc kubelet[63114]: E0623 16:07:22.087775 63114 kubelet_node_status.go:93] “Unable to register node with API server” err="Post \“https://lb.kubesphere.local:6443/api/v1/nodes\”: dial tcp 192.168.50.6>
Jun 23 16:07:22 pvesc kubelet[63114]: E0623 16:07:22.187903 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:22 pvesc kubelet[63114]: E0623 16:07:22.287952 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:22 pvesc kubelet[63114]: E0623 16:07:22.389116 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:22 pvesc kubelet[63114]: E0623 16:07:22.490307 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:22 pvesc kubelet[63114]: E0623 16:07:22.591156 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:22 pvesc kubelet[63114]: E0623 16:07:22.692169 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:22 pvesc kubelet[63114]: E0623 16:07:22.792719 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:22 pvesc kubelet[63114]: E0623 16:07:22.893710 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:22 pvesc kubelet[63114]: E0623 16:07:22.994070 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:23 pvesc kubelet[63114]: E0623 16:07:23.095045 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:23 pvesc kubelet[63114]: E0623 16:07:23.196094 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:23 pvesc kubelet[63114]: E0623 16:07:23.296709 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:23 pvesc kubelet[63114]: E0623 16:07:23.397737 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:23 pvesc kubelet[63114]: E0623 16:07:23.498759 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:23 pvesc kubelet[63114]: E0623 16:07:23.599411 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
root@pvesc:~# systemctl status -l kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: active (running) since Thu 2022-06-23 16:01:20 +08; 6min ago
Docs: http://kubernetes.io/docs/
Main PID: 63114 (kubelet)
Tasks: 27 (limit: 9830)
Memory: 44.7M
CPU: 6.206s
CGroup: /system.slice/kubelet.service
└─63114 /usr/local/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cgroup-driver=systemd --contain>
Jun 23 16:07:31 pvesc kubelet[63114]: E0623 16:07:31.149366 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:31 pvesc kubelet[63114]: E0623 16:07:31.250069 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:31 pvesc kubelet[63114]: E0623 16:07:31.350598 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:31 pvesc kubelet[63114]: E0623 16:07:31.451677 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:31 pvesc kubelet[63114]: E0623 16:07:31.551702 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:31 pvesc kubelet[63114]: E0623 16:07:31.576248 63114 kubelet.go:2376] “Container runtime network not ready” networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns err>
Jun 23 16:07:31 pvesc kubelet[63114]: E0623 16:07:31.652201 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:31 pvesc kubelet[63114]: E0623 16:07:31.752493 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:31 pvesc kubelet[63114]: E0623 16:07:31.853365 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"
Jun 23 16:07:31 pvesc kubelet[63114]: E0623 16:07:31.954240 63114 kubelet.go:2451] “Error getting node” err="node \“pvesc\” not found"`
Let's ignore the scp to the other machines for now…
On the first master node, the one kk runs from, kubelet can start, but it keeps trying to bring up the CNI environment and never succeeds:
runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
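A few checks worth running on pvesc for this (a sketch; crictl is assumed to be installed alongside the containerd setup kk uses):
grep lb.kubesphere.local /etc/hosts            # what the internal LB name resolves to
ss -tlnp | grep 6443                           # is anything listening on the apiserver port
crictl ps -a | grep -E "apiserver|controller"  # are the control-plane static pods actually up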
afrojewelz
Did you not run kk delete cluster -f config.yaml first to clean up the environment?
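i.e. a full reset before retrying, roughly like this (config-sample.yaml again stands in for the real file name):
./kk delete cluster -f config-sample.yaml
rm -rf ./kubekey /tmp/kubekey
./kk create cluster -f config-sample.yaml --debug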