afrojewelz
Didn't you first run kk delete cluster -f config.yaml to clean up the environment?
How do I switch KubeSphere v3.2.1 from a Docker environment to a containerd environment?
24sama I did run it and cleaned up, and I also manually checked whether those directories were still there; any that remained I removed with rm -rf.
afrojewelz
/etc/kubernetes/admin.conf
Does this file still exist? After cleaning the environment it should be gone.
24sama That file is gone now.
24sama
After another round of cleanup the scp commands seem to go through, and the configs have basically all been pushed to the member nodes.
But kubelet still will not start. One interesting detail: the image cache lists the pause image at version 3.6, while my logs still reference pause tag 3.5, and I don't know whether that matters.
Setting up a multi-master HA homelab is really hard; I'm almost tempted to run kubeadm init by hand.
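One way to cross-check that pause-tag question from a node is to compare what containerd has cached against the sandbox image it is configured to use (a quick sketch; exact registry and image names depend on the KubeKey image repository in use):

```bash
# Pause images actually cached by containerd
crictl images | grep -i pause
# Sandbox (pause) image containerd is configured to use
crictl info | grep -i sandboxImage
# Same image store as seen by ctr (k8s.io namespace), if ctr is available
ctr -n k8s.io images ls | grep -i pause
```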
```
sudo -E /bin/bash -c "rm -rf /tmp/kubekey/"
00:40:56 +08 stdout: [h170i]
/root/.bashrc:行22: /usr/local/bin/kubectl: 权限不够
/root/.bashrc:行22: /usr/local/bin/kubectl: 权限不够
00:40:56 +08 command: [neopve]
sudo -E /bin/bash -c “rm -rf /tmp/kubekey/”
00:40:56 +08 stdout: [neopve]
/root/.bashrc: line 23: /usr/local/bin/kubectl: Permission denied
/root/.bashrc: line 23: /usr/local/bin/kubectl: Permission denied
00:40:56 +08 command: [h170i]
sudo -E /bin/bash -c “chmod +x /usr/local/bin/kubectl”
00:40:56 +08 stdout: [h170i]
/root/.bashrc:行22: /usr/local/bin/kubectl: 权限不够
/root/.bashrc:行22: /usr/local/bin/kubectl: 权限不够
00:40:56 +08 command: [ryzenpve]
sudo -E /bin/bash -c “tar -zxf /tmp/kubekey/cni-plugins-linux-amd64-v0.9.1.tgz -C /opt/cni/bin”
00:40:56 +08 command: [neopve]
sudo -E /bin/bash -c “chmod +x /usr/local/bin/kubectl”
00:40:56 +08 stdout: [neopve]
/root/.bashrc: line 23: /usr/local/bin/kubectl: Permission denied
/root/.bashrc: line 23: /usr/local/bin/kubectl: Permission denied
00:40:57 +08 scp local file /root/.kube/kubekey/helm/v3.6.3/amd64/helm to remote /tmp/kubekey/usr/local/bin/helm success
00:40:57 +08 scp local file /root/.kube/kubekey/helm/v3.6.3/amd64/helm to remote /tmp/kubekey/usr/local/bin/helm success
00:40:57 +08 scp local file /root/.kube/kubekey/kube/v1.22.10/amd64/kubectl to remote /tmp/kubekey/usr/local/bin/kubectl success
00:40:58 +08 command: [h170i]
sudo -E /bin/bash -c “mv -f /tmp/kubekey/usr/local/bin/helm /usr/local/bin/helm”
00:40:58 +08 command: [neopve]
sudo -E /bin/bash -c “mv -f /tmp/kubekey/usr/local/bin/helm /usr/local/bin/helm”
00:40:58 +08 command: [h170i]
sudo -E /bin/bash -c “rm -rf /tmp/kubekey/”
00:40:58 +08 command: [qm77prx]
sudo -E /bin/bash -c “mv -f /tmp/kubekey/usr/local/bin/kubectl /usr/local/bin/kubectl”
00:40:58 +08 command: [qm77prx]
sudo -E /bin/bash -c “rm -rf /tmp/kubekey/”
00:40:58 +08 stdout: [qm77prx]
/root/.bashrc: line 23: /usr/local/bin/kubectl: Permission denied
/root/.bashrc: line 23: /usr/local/bin/kubectl: Permission denied
00:40:58 +08 command: [h170i]
sudo -E /bin/bash -c “chmod +x /usr/local/bin/helm”
00:40:58 +08 command: [neopve]
sudo -E /bin/bash -c “rm -rf /tmp/kubekey/”
00:40:58 +08 command: [qm77prx]
sudo -E /bin/bash -c “chmod +x /usr/local/bin/kubectl”
00:40:58 +08 stdout: [qm77prx]
/root/.bashrc: line 23: /usr/local/bin/kubectl: Permission denied
/root/.bashrc: line 23: /usr/local/bin/kubectl: Permission denied
00:40:58 +08 command: [neopve]
sudo -E /bin/bash -c “chmod +x /usr/local/bin/helm”
00:40:59 +08 scp local file /root/.kube/kubekey/cni/v0.9.1/amd64/cni-plugins-linux-amd64-v0.9.1.tgz to remote /tmp/kubekey/cni-plugins-linux-amd64-v0.9.1.tgz success
00:41:00 +08 scp local file /root/.kube/kubekey/cni/v0.9.1/amd64/cni-plugins-linux-amd64-v0.9.1.tgz to remote /tmp/kubekey/cni-plugins-linux-amd64-v0.9.1.tgz success
00:41:00 +08 scp local file /root/.kube/kubekey/helm/v3.6.3/amd64/helm to remote /tmp/kubekey/usr/local/bin/helm success
00:41:00 +08 command: [h170i]
sudo -E /bin/bash -c “tar -zxf /tmp/kubekey/cni-plugins-linux-amd64-v0.9.1.tgz -C /opt/cni/bin”
00:41:01 +08 command: [qm77prx]
sudo -E /bin/bash -c “mv -f /tmp/kubekey/usr/local/bin/helm /usr/local/bin/helm”
00:41:01 +08 command: [neopve]
sudo -E /bin/bash -c “tar -zxf /tmp/kubekey/cni-plugins-linux-amd64-v0.9.1.tgz -C /opt/cni/bin”
00:41:01 +08 command: [qm77prx]
sudo -E /bin/bash -c “rm -rf /tmp/kubekey/”
00:41:01 +08 command: [qm77prx]
sudo -E /bin/bash -c “chmod +x /usr/local/bin/helm”
00:41:03 +08 scp local file /root/.kube/kubekey/cni/v0.9.1/amd64/cni-plugins-linux-amd64-v0.9.1.tgz to remote /tmp/kubekey/cni-plugins-linux-amd64-v0.9.1.tgz success
00:41:04 +08 command: [qm77prx]
sudo -E /bin/bash -c “tar -zxf /tmp/kubekey/cni-plugins-linux-amd64-v0.9.1.tgz -C /opt/cni/bin”
00:41:04 +08 success: [pvesc]
00:41:04 +08 success: [ryzenpve]
00:41:04 +08 success: [h170i]
00:41:04 +08 success: [neopve]
00:41:04 +08 success: [qm77prx]
00:41:04 +08 [InstallKubeBinariesModule] Synchronize kubelet
00:41:04 +08 command: [pvesc]
sudo -E /bin/bash -c “chmod +x /usr/local/bin/kubelet”
00:41:04 +08 command: [ryzenpve]
sudo -E /bin/bash -c “chmod +x /usr/local/bin/kubelet”
00:41:04 +08 command: [h170i]
sudo -E /bin/bash -c “chmod +x /usr/local/bin/kubelet”
00:41:04 +08 command: [neopve]
sudo -E /bin/bash -c “chmod +x /usr/local/bin/kubelet”
00:41:04 +08 command: [qm77prx]
sudo -E /bin/bash -c “chmod +x /usr/local/bin/kubelet”
00:41:04 +08 success: [pvesc]
00:41:04 +08 success: [ryzenpve]
00:41:04 +08 success: [h170i]
00:41:04 +08 success: [neopve]
00:41:04 +08 success: [qm77prx]
00:41:04 +08 [InstallKubeBinariesModule] Generate kubelet service
00:41:04 +08 scp local file /root/.kube/kubekey/pvesc/kubelet.service to remote /tmp/kubekey/etc/systemd/system/kubelet.service success
00:41:04 +08 scp local file /root/.kube/kubekey/ryzenpve/kubelet.service to remote /tmp/kubekey/etc/systemd/system/kubelet.service success
00:41:04 +08 scp local file /root/.kube/kubekey/h170i/kubelet.service to remote /tmp/kubekey/etc/systemd/system/kubelet.service success
00:41:04 +08 command: [pvesc]
sudo -E /bin/bash -c “mv -f /tmp/kubekey/etc/systemd/system/kubelet.service /etc/systemd/system/kubelet.service”
00:41:05 +08 scp local file /root/.kube/kubekey/neopve/kubelet.service to remote /tmp/kubekey/etc/systemd/system/kubelet.service success
00:41:05 +08 scp local file /root/.kube/kubekey/qm77prx/kubelet.service to remote /tmp/kubekey/etc/systemd/system/kubelet.service success
00:41:05 +08 command: [ryzenpve]
sudo -E /bin/bash -c “mv -f /tmp/kubekey/etc/systemd/system/kubelet.service /etc/systemd/system/kubelet.service”
00:41:05 +08 command: [pvesc]
sudo -E /bin/bash -c “rm -rf /tmp/kubekey/”
00:41:05 +08 command: [h170i]
sudo -E /bin/bash -c “mv -f /tmp/kubekey/etc/systemd/system/kubelet.service /etc/systemd/system/kubelet.service”
00:41:05 +08 command: [ryzenpve]
sudo -E /bin/bash -c “rm -rf /tmp/kubekey/”
00:41:05 +08 command: [h170i]
sudo -E /bin/bash -c “rm -rf /tmp/kubekey/”
00:41:05 +08 command: [neopve]
sudo -E /bin/bash -c “mv -f /tmp/kubekey/etc/systemd/system/kubelet.service /etc/systemd/system/kubelet.service”
00:41:05 +08 command: [qm77prx]
sudo -E /bin/bash -c “mv -f /tmp/kubekey/etc/systemd/system/kubelet.service /etc/systemd/system/kubelet.service”
00:41:05 +08 command: [neopve]
sudo -E /bin/bash -c “rm -rf /tmp/kubekey/”
00:41:05 +08 command: [qm77prx]
sudo -E /bin/bash -c “rm -rf /tmp/kubekey/”
00:41:05 +08 success: [pvesc]
00:41:05 +08 success: [ryzenpve]
00:41:05 +08 success: [h170i]
00:41:05 +08 success: [neopve]
00:41:05 +08 success: [qm77prx]
00:41:05 +08 [InstallKubeBinariesModule] Enable kubelet service
00:41:06 +08 command: [pvesc]
sudo -E /bin/bash -c “systemctl disable kubelet && systemctl enable kubelet && ln -snf /usr/local/bin/kubelet /usr/bin/kubelet”
00:41:06 +08 stdout: [pvesc]
Removed /etc/systemd/system/multi-user.target.wants/kubelet.service.
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /etc/systemd/system/kubelet.service.
00:41:06 +08 command: [h170i]
sudo -E /bin/bash -c “systemctl disable kubelet && systemctl enable kubelet && ln -snf /usr/local/bin/kubelet /usr/bin/kubelet”
00:41:06 +08 stdout: [h170i]
Removed /etc/systemd/system/multi-user.target.wants/kubelet.service.
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /etc/systemd/system/kubelet.service.
00:41:06 +08 command: [ryzenpve]
sudo -E /bin/bash -c “systemctl disable kubelet && systemctl enable kubelet && ln -snf /usr/local/bin/kubelet /usr/bin/kubelet”
00:41:06 +08 stdout: [ryzenpve]
Removed /etc/systemd/system/multi-user.target.wants/kubelet.service.
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /etc/systemd/system/kubelet.service.
00:41:06 +08 command: [neopve]
sudo -E /bin/bash -c “systemctl disable kubelet && systemctl enable kubelet && ln -snf /usr/local/bin/kubelet /usr/bin/kubelet”
00:41:06 +08 stdout: [neopve]
Removed /etc/systemd/system/multi-user.target.wants/kubelet.service.
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /etc/systemd/system/kubelet.service.
00:41:06 +08 command: [qm77prx]
sudo -E /bin/bash -c “systemctl disable kubelet && systemctl enable kubelet && ln -snf /usr/local/bin/kubelet /usr/bin/kubelet”
00:41:06 +08 stdout: [qm77prx]
Removed /etc/systemd/system/multi-user.target.wants/kubelet.service.
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /etc/systemd/system/kubelet.service.
00:41:06 +08 success: [pvesc]
00:41:06 +08 success: [h170i]
00:41:06 +08 success: [ryzenpve]
00:41:06 +08 success: [neopve]
00:41:06 +08 success: [qm77prx]
00:41:06 +08 [InstallKubeBinariesModule] Generate kubelet env
00:41:07 +08 scp local file /root/.kube/kubekey/pvesc/10-kubeadm.conf to remote /tmp/kubekey/etc/systemd/system/kubelet.service.d/10-kubeadm.conf success
00:41:07 +08 scp local file /root/.kube/kubekey/ryzenpve/10-kubeadm.conf to remote /tmp/kubekey/etc/systemd/system/kubelet.service.d/10-kubeadm.conf success
00:41:07 +08 scp local file /root/.kube/kubekey/h170i/10-kubeadm.conf to remote /tmp/kubekey/etc/systemd/system/kubelet.service.d/10-kubeadm.conf success
00:41:07 +08 scp local file /root/.kube/kubekey/neopve/10-kubeadm.conf to remote /tmp/kubekey/etc/systemd/system/kubelet.service.d/10-kubeadm.conf success
00:41:07 +08 command: [pvesc]
sudo -E /bin/bash -c “mv -f /tmp/kubekey/etc/systemd/system/kubelet.service.d/10-kubeadm.conf /etc/systemd/system/kubelet.service.d/10-kubeadm.conf”
00:41:07 +08 scp local file /root/.kube/kubekey/qm77prx/10-kubeadm.conf to remote /tmp/kubekey/etc/systemd/system/kubelet.service.d/10-kubeadm.conf success
00:41:07 +08 command: [ryzenpve]
sudo -E /bin/bash -c “mv -f /tmp/kubekey/etc/systemd/system/kubelet.service.d/10-kubeadm.conf /etc/systemd/system/kubelet.service.d/10-kubeadm.conf”
00:41:07 +08 command: [pvesc]
sudo -E /bin/bash -c “rm -rf /tmp/kubekey/”
00:41:07 +08 command: [h170i]
sudo -E /bin/bash -c “mv -f /tmp/kubekey/etc/systemd/system/kubelet.service.d/10-kubeadm.conf /etc/systemd/system/kubelet.service.d/10-kubeadm.conf”
00:41:07 +08 command: [ryzenpve]
sudo -E /bin/bash -c “rm -rf /tmp/kubekey/”
00:41:07 +08 command: [h170i]
sudo -E /bin/bash -c “rm -rf /tmp/kubekey/”
00:41:08 +08 command: [neopve]
sudo -E /bin/bash -c “mv -f /tmp/kubekey/etc/systemd/system/kubelet.service.d/10-kubeadm.conf /etc/systemd/system/kubelet.service.d/10-kubeadm.conf”
00:41:08 +08 command: [qm77prx]
sudo -E /bin/bash -c “mv -f /tmp/kubekey/etc/systemd/system/kubelet.service.d/10-kubeadm.conf /etc/systemd/system/kubelet.service.d/10-kubeadm.conf”
00:41:08 +08 command: [neopve]
sudo -E /bin/bash -c “rm -rf /tmp/kubekey/”
00:41:08 +08 command: [qm77prx]
sudo -E /bin/bash -c “rm -rf /tmp/kubekey/”
00:41:08 +08 success: [pvesc]
00:41:08 +08 success: [ryzenpve]
00:41:08 +08 success: [h170i]
00:41:08 +08 success: [neopve]
00:41:08 +08 success: [qm77prx]
00:41:08 +08 [InitKubernetesModule] Generate kubeadm config
00:41:08 +08 command: [pvesc]
sudo -E /bin/bash -c “containerd config dump | grep SystemdCgroup”
00:41:08 +08 stdout: [pvesc]
SystemdCgroup = true
00:41:08 +08 pauseTag: 3.5, corednsTag: 1.8.0
00:41:08 +08 pauseTag: 3.5, corednsTag: 1.8.0
00:41:08 +08 pauseTag: 3.5, corednsTag: 1.8.0
00:41:08 +08 pauseTag: 3.5, corednsTag: 1.8.0
00:41:08 +08 pauseTag: 3.5, corednsTag: 1.8.0
00:41:08 +08 command: [pvesc]
sudo -E /bin/bash -c “containerd config dump | grep SystemdCgroup”
00:41:08 +08 stdout: [pvesc]
SystemdCgroup = true
00:41:08 +08 Set kubeletConfiguration: %vmap[cgroupDriver:systemd clusterDNS:[169.254.25.10] clusterDomain:cluster.local containerLogMaxFiles:3 containerLogMaxSize:5Mi evictionHard:map[memory.available:5% pid.available:5%] evictionMaxPodGracePeriod:120 evictionPressureTransitionPeriod:30s evictionSoft:map[memory.available:10%] evictionSoftGracePeriod:map[memory.available:2m] featureGates:map[CSIStorageCapacity:true ExpandCSIVolumes:true RotateKubeletServerCertificate:true TTLAfterFinished:true] kubeReserved:map[cpu:200m memory:250Mi] maxPods:110 rotateCertificates:true systemReserved:map[cpu:200m memory:250Mi]]
00:41:09 +08 scp local file /root/.kube/kubekey/pvesc/kubeadm-config.yaml to remote /tmp/kubekey/etc/kubernetes/kubeadm-config.yaml success
00:41:09 +08 command: [pvesc]
sudo -E /bin/bash -c “mv -f /tmp/kubekey/etc/kubernetes/kubeadm-config.yaml /etc/kubernetes/kubeadm-config.yaml”
00:41:09 +08 command: [pvesc]
sudo -E /bin/bash -c “rm -rf /tmp/kubekey/*”
00:41:09 +08 skipped: [h170i]
00:41:09 +08 skipped: [ryzenpve]
00:41:09 +08 success: [pvesc]
00:41:09 +08 [InitKubernetesModule] Init cluster using kubeadm
00:45:11 +08 command: [pvesc]
sudo -E /bin/bash -c "/usr/local/bin/kubeadm init --config=/etc/kubernetes/kubeadm-config.yaml --ignore-preflight-errors=FileExisting-crictl"
00:45:11 +08 stdout: [pvesc]
W0627 00:41:09.878828 34506 utils.go:69] The recommended value for “clusterDNS” in “KubeletConfiguration” is: [192.168.50.10]; the provided value is: [169.254.25.10]
[init] Using Kubernetes version: v1.22.10
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using ‘kubeadm config images pull’
[certs] Using certificateDir folder “/etc/kubernetes/pki”
[certs] Generating “ca” certificate and key
[certs] Generating “apiserver” certificate and key
[certs] apiserver serving cert is signed for DNS names [h170i h170i.cluster.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.kubesphere.local localhost neopve neopve.cluster.local pvesc pvesc.cluster.local qm77prx qm77prx.cluster.local ryzenpve ryzenpve.cluster.local sdb2640m sdb2640m.cluster.local] and IPs [192.168.50.1 192.168.50.6 127.0.0.1 192.168.50.10 192.168.50.20 192.168.50.23 192.168.50.40 192.168.50.253]
[certs] Generating “apiserver-kubelet-client” certificate and key
[certs] Generating “front-proxy-ca” certificate and key
[certs] Generating “front-proxy-client” certificate and key
[certs] External etcd mode: Skipping etcd/ca certificate authority generation
[certs] External etcd mode: Skipping etcd/server certificate generation
[certs] External etcd mode: Skipping etcd/peer certificate generation
[certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation
[certs] External etcd mode: Skipping apiserver-etcd-client certificate generation
[certs] Generating “sa” key and public key
[kubeconfig] Using kubeconfig folder “/etc/kubernetes”
[kubeconfig] Writing “admin.conf” kubeconfig file
[kubeconfig] Writing “kubelet.conf” kubeconfig file
[kubeconfig] Writing “controller-manager.conf” kubeconfig file
[kubeconfig] Writing “scheduler.conf” kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file “/var/lib/kubelet/kubeadm-flags.env”
[kubelet-start] Writing kubelet configuration to file “/var/lib/kubelet/config.yaml”
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder “/etc/kubernetes/manifests”
[control-plane] Creating static Pod manifest for “kube-apiserver”
[control-plane] Creating static Pod manifest for “kube-controller-manager”
[control-plane] Creating static Pod manifest for “kube-scheduler”
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory “/etc/kubernetes/manifests”. This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
error execution phase wait-control-plane: couldn’t initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
00:45:11 +08 stderr: [pvesc]
Failed to exec command: sudo -E /bin/bash -c "/usr/local/bin/kubeadm init --config=/etc/kubernetes/kubeadm-config.yaml --ignore-preflight-errors=FileExisting-crictl"
W0627 00:41:09.878828 34506 utils.go:69] The recommended value for “clusterDNS” in “KubeletConfiguration” is: [192.168.50.10]; the provided value is: [169.254.25.10]
[init] Using Kubernetes version: v1.22.10
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using ‘kubeadm config images pull’
[certs] Using certificateDir folder “/etc/kubernetes/pki”
[certs] Generating “ca” certificate and key
[certs] Generating “apiserver” certificate and key
[certs] apiserver serving cert is signed for DNS names [h170i h170i.cluster.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.kubesphere.local localhost neopve neopve.cluster.local pvesc pvesc.cluster.local qm77prx qm77prx.cluster.local ryzenpve ryzenpve.cluster.local sdb2640m sdb2640m.cluster.local] and IPs [192.168.50.1 192.168.50.6 127.0.0.1 192.168.50.10 192.168.50.20 192.168.50.23 192.168.50.40 192.168.50.253]
[certs] Generating “apiserver-kubelet-client” certificate and key
[certs] Generating “front-proxy-ca” certificate and key
[certs] Generating “front-proxy-client” certificate and key
[certs] External etcd mode: Skipping etcd/ca certificate authority generation
[certs] External etcd mode: Skipping etcd/server certificate generation
[certs] External etcd mode: Skipping etcd/peer certificate generation
[certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation
[certs] External etcd mode: Skipping apiserver-etcd-client certificate generation
[certs] Generating “sa” key and public key
[kubeconfig] Using kubeconfig folder “/etc/kubernetes”
[kubeconfig] Writing “admin.conf” kubeconfig file
[kubeconfig] Writing “kubelet.conf” kubeconfig file
[kubeconfig] Writing “controller-manager.conf” kubeconfig file
[kubeconfig] Writing “scheduler.conf” kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file “/var/lib/kubelet/kubeadm-flags.env”
[kubelet-start] Writing kubelet configuration to file “/var/lib/kubelet/config.yaml”
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder “/etc/kubernetes/manifests”
[control-plane] Creating static Pod manifest for “kube-apiserver”
[control-plane] Creating static Pod manifest for “kube-controller-manager”
[control-plane] Creating static Pod manifest for “kube-scheduler”
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory “/etc/kubernetes/manifests”. This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
error execution phase wait-control-plane: couldn’t initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher: Process exited with status 1
```
afrojewelz
Generally speaking, installing on a clean environment is the simplest path; on a non-clean environment different people will inevitably have made different configurations.
- The log shows /usr/local/bin/kubectl reporting insufficient permissions.
- If kubelet won't start, troubleshoot it with systemctl status kubelet (a quick-check sketch follows below this list).
- For k8s v1.22.x the officially recommended pause image is 3.5.
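Roughly those checks as one shell sketch (paths taken from the log above; note the "Permission denied" lines in the log are printed while /root/.bashrc invokes kubectl for completion, so they should stop once the binary is in place and executable; that is an observation from the log, not a confirmed root cause):

```bash
# Quick checks on an affected node (paths as in the KubeKey log above)
ls -l /usr/local/bin/kubectl /usr/local/bin/kubelet   # is the exec bit set?
sudo chmod +x /usr/local/bin/kubectl                  # re-apply it if missing
grep -n kubectl /root/.bashrc                         # the line that prints "Permission denied"
systemctl status kubelet --no-pager                   # is the unit running or crash-looping?
journalctl -xeu kubelet --no-pager | tail -n 50       # most recent kubelet errors
```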
24sama
1. For the permissions, I recursively chmod 775'd every path that bash_completion touches, yet it still reports insufficient permissions, which is confusing since I can't tell which one is lacking; ls -la /usr/bin/kubectl shows rwxrwxr-x.
2. The systemctl status -l kubelet output is very long, but filtering on the keyword fatal turns up nothing, just a pile of error|warn lines. The gist seems to be that kubelet starts reporting errors as soon as it writes to the filesystem, and then the CNI component fails to initialize.
3. crictl ps -a shows nothing at all; neither pause 3.5 nor 3.6 is running (a crictl sketch follows after the log excerpt below).
```
32816 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvesc.16fc633d6a89ff1c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"pvesc", UID:"pvesc", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"pvesc"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc0a66e0997ab031c, ext:5326209290, loc:(*time.Location)(0x77bb7c0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc0a66e0997ab031c, ext:5326209290, loc:(*time.Location)(0x77bb7c0)}}, Count:1, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://lb.kubesphere.local:6443/api/v1/namespaces/default/events": dial tcp 192.168.50.6:6443: connect: connection refused'(may retry after sleeping)
kubelet.go:2376] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
```
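To see what the container runtime itself reports at this point, a few crictl calls can help (a sketch; crictl ps only lists regular containers, so pod sandboxes have to be listed separately):

```bash
crictl pods            # pod sandboxes (pause) known to containerd
crictl ps -a           # all other containers, including exited ones
crictl imagefsinfo     # image-filesystem stats, relevant to the "invalid capacity 0" warning above
```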
- crictl info
- /etc/kubernetes/kubeadm-config.yaml
Check whether the cgroup driver in both of these is systemd.
24sama
```
:~# crictl info | grep -Ei "cgroup"
"ShimCgroup": "",
"SystemdCgroup": true
"systemdCgroup": false,
"disableCgroup": true,
:~# cat /etc/kubernetes/kubeadm-config.yaml | grep -Ei "cgroup"
cgroup-driver: systemd
cgroupDriver: systemd
```
afrojewelz
I suggest not using grep, since with grep you can't see the configuration hierarchy; then paste the content here as a Markdown code block.
- We haven't reached the CNI initialization task yet, so the "cni plugin not initialized" logs can be ignored for now.
24sama OK, doing exactly that:
```
:~# crictl info
{
“status”: {
“conditions”: [
{
“type”: “RuntimeReady”,
“status”: true,
“reason”: "",
“message”: ""
},
{
“type”: “NetworkReady”,
“status”: false,
“reason”: “NetworkPluginNotReady”,
“message”: “Network plugin returns error: cni plugin not initialized”
}
]
},
“cniconfig”: {
“PluginDirs”: [
“/opt/cni/bin”
],
“PluginConfDir”: “/etc/cni/net.d”,
“PluginMaxConfNum”: 1,
“Prefix”: “eth”,
“Networks”: [
{
“Config”: {
“Name”: “cni-loopback”,
“CNIVersion”: “0.3.1”,
“Plugins”: [
{
“Network”: {
“type”: “loopback”,
“ipam”: {},
“dns”: {}
},
“Source”: "{\“type\”:\“loopback\”}"
}
],
“Source”: "{\n\“cniVersion\”: \“0.3.1\”,\n\“name\”: \“cni-loopback\”,\n\“plugins\”: [{\n \“type\”: \“loopback\”\n}]\n}"
},
“IFName”: “lo”
}
]
},
“config”: {
“containerd”: {
“snapshotter”: “zfs”,
“defaultRuntimeName”: “runc”,
“defaultRuntime”: {
“runtimeType”: "",
“runtimePath”: "",
“runtimeEngine”: "",
“PodAnnotations”: [],
“ContainerAnnotations”: [],
“runtimeRoot”: "",
“options”: {},
“privileged_without_host_devices”: false,
“baseRuntimeSpec”: "",
“cniConfDir”: "",
“cniMaxConfNum”: 0
},
“untrustedWorkloadRuntime”: {
“runtimeType”: "",
“runtimePath”: "",
“runtimeEngine”: "",
“PodAnnotations”: [],
“ContainerAnnotations”: [],
“runtimeRoot”: "",
“options”: {},
“privileged_without_host_devices”: false,
“baseRuntimeSpec”: "",
“cniConfDir”: "",
“cniMaxConfNum”: 0
},
“runtimes”: {
“runc”: {
“runtimeType”: “io.containerd.runc.v2”,
“runtimePath”: "",
“runtimeEngine”: "",
“PodAnnotations”: [],
“ContainerAnnotations”: [],
“runtimeRoot”: "",
“options”: {
“BinaryName”: "",
“CriuImagePath”: "",
“CriuPath”: "",
“CriuWorkPath”: "",
“IoGid”: 0,
“IoUid”: 0,
“NoNewKeyring”: false,
“NoPivotRoot”: false,
“Root”: "",
“ShimCgroup”: "",
“SystemdCgroup”: true
},
“privileged_without_host_devices”: false,
“baseRuntimeSpec”: "",
“cniConfDir”: "",
“cniMaxConfNum”: 0
}
},
“noPivot”: false,
“disableSnapshotAnnotations”: true,
“discardUnpackedLayers”: false,
“ignoreRdtNotEnabledErrors”: false
},
“cni”: {
“binDir”: “/opt/cni/bin”,
“confDir”: “/etc/cni/net.d”,
“maxConfNum”: 1,
“confTemplate”: "",
“ipPref”: ""
},
“registry”: {
“configPath”: "",
“mirrors”: {},
“configs”: {},
“auths”: {},
“headers”: {
“User-Agent”: [
“containerd/v1.6.4”
]
}
},
“imageDecryption”: {
“keyModel”: “node”
},
“disableTCPService”: true,
“streamServerAddress”: “127.0.0.1”,
“streamServerPort”: “0”,
“streamIdleTimeout”: “4h0m0s”,
“enableSelinux”: false,
“selinuxCategoryRange”: 1024,
“sandboxImage”: “k8s.gcr.io/pause:3.6”,
“statsCollectPeriod”: 10,
“systemdCgroup”: false,
“enableTLSStreaming”: false,
“x509KeyPairStreaming”: {
“tlsCertFile”: "",
“tlsKeyFile”: ""
},
“maxContainerLogSize”: 16384,
“disableCgroup”: true,
“disableApparmor”: false,
“restrictOOMScoreAdj”: false,
“maxConcurrentDownloads”: 3,
“disableProcMount”: false,
“unsetSeccompProfile”: "",
“tolerateMissingHugetlbController”: true,
“disableHugetlbController”: true,
“device_ownership_from_security_context”: false,
“ignoreImageDefinedVolumes”: false,
“netnsMountsUnderStateDir”: false,
“enableUnprivilegedPorts”: false,
“enableUnprivilegedICMP”: false,
“containerdRootDir”: “/var/lib/containerd”,
“containerdEndpoint”: “/run/containerd/containerd.sock”,
“rootDir”: “/var/lib/containerd/io.containerd.grpc.v1.cri”,
“stateDir”: “/run/containerd/io.containerd.grpc.v1.cri”
},
“golang”: “go1.17.9”,
“lastCNILoadStatus”: “cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config”,
“lastCNILoadStatus.default”: “cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config”
}
```
```
:~# cat /etc/kubernetes/kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
etcd:
  external:
    endpoints:
    - https://192.168.50.6:2379
    - https://192.168.50.20:2379
    - https://192.168.50.10:2379
    caFile: /etc/ssl/etcd/ssl/ca.pem
    certFile: /etc/ssl/etcd/ssl/node-pvesc.pem
    keyFile: /etc/ssl/etcd/ssl/node-pvesc-key.pem
dns:
  type: CoreDNS
  imageRepository: registry.cn-beijing.aliyuncs.com/kubesphereio
  imageTag: 1.8.0
imageRepository: registry.cn-beijing.aliyuncs.com/kubesphereio
kubernetesVersion: v1.22.10
certificatesDir: /etc/kubernetes/pki
clusterName: pvesc.lan
controlPlaneEndpoint: lb.kubesphere.local:6443
networking:
  dnsDomain: cluster.local
  podSubnet: 10.233.64.0/18
  serviceSubnet: 192.168.50.0/24
apiServer:
  extraArgs:
    audit-log-maxage: "30"
    audit-log-maxbackup: "10"
    audit-log-maxsize: "100"
    bind-address: 0.0.0.0
    feature-gates: CSIStorageCapacity=true,ExpandCSIVolumes=true,RotateKubeletServerCertificate=true,TTLAfterFinished=true
  certSANs:
  - kubernetes
  - kubernetes.default
  - kubernetes.default.svc
  - kubernetes.default.svc.cluster.local
  - localhost
  - 127.0.0.1
  - lb.kubesphere.local
  - 192.168.50.6
  - pvesc
  - pvesc.cluster.local
  - h170i
  - h170i.cluster.local
  - 192.168.50.10
  - ryzenpve
  - ryzenpve.cluster.local
  - 192.168.50.20
  - neopve
  - neopve.cluster.local
  - 192.168.50.23
  - qm77prx
  - qm77prx.cluster.local
  - 192.168.50.40
  - sdb2640m
  - sdb2640m.cluster.local
  - 192.168.50.253
  - 192.168.50.1
controllerManager:
  extraArgs:
    node-cidr-mask-size: "24"
    bind-address: 0.0.0.0
    experimental-cluster-signing-duration: 87600h
    feature-gates: TTLAfterFinished=true,CSIStorageCapacity=true,ExpandCSIVolumes=true,RotateKubeletServerCertificate=true
  extraVolumes:
  - name: host-time
    hostPath: /etc/localtime
    mountPath: /etc/localtime
    readOnly: true
scheduler:
  extraArgs:
    bind-address: 0.0.0.0
    feature-gates: CSIStorageCapacity=true,ExpandCSIVolumes=true,RotateKubeletServerCertificate=true,TTLAfterFinished=true
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.50.6
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  kubeletExtraArgs:
    cgroup-driver: systemd
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: 10.233.64.0/18
iptables:
  masqueradeAll: false
  masqueradeBit: 14
  minSyncPeriod: 0s
  syncPeriod: 30s
mode: ipvs
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
clusterDNS:
- 169.254.25.10
clusterDomain: cluster.local
containerLogMaxFiles: 3
containerLogMaxSize: 5Mi
evictionHard:
  memory.available: 5%
  pid.available: 5%
evictionMaxPodGracePeriod: 120
evictionPressureTransitionPeriod: 30s
evictionSoft:
  memory.available: 10%
evictionSoftGracePeriod:
  memory.available: 2m
featureGates:
  CSIStorageCapacity: true
  ExpandCSIVolumes: true
  RotateKubeletServerCertificate: true
  TTLAfterFinished: true
kubeReserved:
  cpu: 200m
  memory: 250Mi
maxPods: 110
rotateCertificates: true
systemReserved:
  cpu: 200m
  memory: 250Mi
```
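Both cgroupDriver values above are systemd, while the crictl info dump shows "SystemdCgroup": true under the runc options but "systemdCgroup": false and "disableCgroup": true at the CRI-plugin level; those knobs live in different sections of /etc/containerd/config.toml. A sketch of where the corresponding keys sit (containerd 1.6 layout; path and values assumed):

```bash
# Inspect the relevant keys in the on-disk containerd config (path assumed)
grep -nE 'disable_cgroup|systemd_cgroup|SystemdCgroup|sandbox_image' /etc/containerd/config.toml

# They map roughly to (containerd 1.6 CRI plugin layout):
#   [plugins."io.containerd.grpc.v1.cri"]
#     disable_cgroup = true                    # "disableCgroup" in crictl info above
#     sandbox_image  = "k8s.gcr.io/pause:3.6"  # "sandboxImage"
#   [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
#     SystemdCgroup = true                     # the runc-level flag kubeadm cares about

sudo systemctl restart containerd   # required after editing config.toml
```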
24sama Let me try that.
After setting disable_cgroup = false in the containerd config.toml,
running journalctl -xebf -u kubelet again, the errors are still in the same place:
```
dynamic_cafile_content.go:155] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jun 27 17:47:15 pvesc kubelet[4700]: E0627 17:47:15.733127 4700 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://lb.kubesphere.local:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 192.168.50.6:6443: connect: connection refused
Jun 27 17:47:17 pvesc kubelet[4700]: E0627 17:47:17.903097 4700 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://lb.kubesphere.local:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 192.168.50.6:6443: connect: connection refused
Jun 27 17:47:20 pvesc kubelet[4700]: I0627 17:47:20.734292 4700 server.go:687] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jun 27 17:47:20 pvesc kubelet[4700]: I0627 17:47:20.734429 4700 container_manager_linux.go:280] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jun 27 17:47:20 pvesc kubelet[4700]: I0627 17:47:20.734494 4700 container_manager_linux.go:285] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[cpu:{i:{value:200 scale:-3} d:{Dec:<nil>} s:200m Format:DecimalSI} memory:{i:{value:262144000 scale:0} d:{Dec:<nil>} s:250Mi Format:BinarySI}] SystemReserved:map[cpu:{i:{value:200 scale:-3} d:{Dec:<nil>} s:200m Format:DecimalSI} memory:{i:{value:262144000 scale:0} d:{Dec:<nil>} s:250Mi Format:BinarySI}] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:pid.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
Jun 27 17:47:20 pvesc kubelet[4700]: I0627 17:47:20.734518 4700 topology_manager.go:133] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
Jun 27 17:47:20 pvesc kubelet[4700]: I0627 17:47:20.734535 4700 container_manager_linux.go:320] "Creating device plugin manager" devicePluginEnabled=true
Jun 27 17:47:20 pvesc kubelet[4700]: I0627 17:47:20.734654 4700 state_mem.go:36] "Initialized new in-memory state store"
Jun 27 17:47:20 pvesc kubelet[4700]: I0627 17:47:20.734838 4700 kubelet.go:418] "Attempting to sync node with API server"
Jun 27 17:47:20 pvesc kubelet[4700]: I0627 17:47:20.734857 4700 kubelet.go:279] "Adding static pod path" path="/etc/kubernetes/manifests"
Jun 27 17:47:20 pvesc kubelet[4700]: I0627 17:47:20.734879 4700 kubelet.go:290] "Adding apiserver pod source"
Jun 27 17:47:20 pvesc kubelet[4700]: I0627 17:47:20.734902 4700 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jun 27 17:47:20 pvesc kubelet[4700]: E0627 17:47:20.735905 4700 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://lb.kubesphere.local:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.50.6:6443: connect: connection refused
Jun 27 17:47:20 pvesc kubelet[4700]: E0627 17:47:20.735944 4700 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://lb.kubesphere.local:6443/api/v1/nodes?fieldSelector=metadata.name%3Dpvesc&limit=500&resourceVersion=0": dial tcp 192.168.50.6:6443: connect: connection refused
Jun 27 17:47:20 pvesc kubelet[4700]: I0627 17:47:20.736577 4700 kuberuntime_manager.go:246] "Container runtime initialized" containerRuntime="containerd" version="v1.6.4" apiVersion="v1alpha2"
Jun 27 17:47:20 pvesc kubelet[4700]: I0627 17:47:20.736923 4700 server.go:1213] "Started kubelet"
Jun 27 17:47:20 pvesc kubelet[4700]: I0627 17:47:20.736991 4700 server.go:149] "Starting to listen" address="0.0.0.0" port=10250
Jun 27 17:47:20 pvesc kubelet[4700]: E0627 17:47:20.737234 4700 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvesc.16fc71363d58d8cd", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"pvesc", UID:"pvesc", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"pvesc"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc0a67d0a2bec48cd, ext:5320743307, loc:(*time.Location)(0x77bb7c0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc0a67d0a2bec48cd, ext:5320743307, loc:(*time.Location)(0x77bb7c0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://lb.kubesphere.local:6443/api/v1/namespaces/default/events": dial tcp 192.168.50.6:6443: connect: connection refused'(may retry after sleeping)
Jun 27 17:47:20 pvesc kubelet[4700]: I0627 17:47:20.737423 4700 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jun 27 17:47:20 pvesc kubelet[4700]: E0627 17:47:20.737428 4700 cri_stats_provider.go:372] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.zfs"
Jun 27 17:47:20 pvesc kubelet[4700]: E0627 17:47:20.737473 4700 kubelet.go:1343] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jun 27 17:47:20 pvesc kubelet[4700]: I0627 17:47:20.737522 4700 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jun 27 17:47:20 pvesc kubelet[4700]: I0627 17:47:20.737558 4700 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jun 27 17:47:20 pvesc kubelet[4700]: E0627 17:47:20.737855 4700 controller.go:144] failed to ensure lease exists, will retry in 200ms, error: Get "https://lb.kubesphere.local:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pvesc?timeout=10s": dial tcp 192.168.50.6:6443: connect: connection refused
Jun 27 17:47:20 pvesc kubelet[4700]: E0627 17:47:20.737946 4700 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://lb.kubesphere.local:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.50.6:6443: connect: connection refused
Jun 27 17:47:20 pvesc kubelet[4700]: I0627 17:47:20.737950 4700 server.go:409] "Adding debug handlers to kubelet server"
Jun 27 17:47:20 pvesc kubelet[4700]: E0627 17:47:20.738018 4700 kubelet.go:2376] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jun 27 17:47:20 pvesc kubelet[4700]: I0627 17:47:20.746600 4700 kubelet_network_linux.go:56] "Initialized protocol iptables rules." protocol=IPv4
Jun 27 17:47:20 pvesc kubelet[4700]: I0627 17:47:20.752306 4700 kubelet_network_linux.go:56] "Initialized protocol iptables rules." protocol=IPv6
Jun 27 17:47:20 pvesc kubelet[4700]: I0627 17:47:20.752328 4700 status_manager.go:160] "Starting to sync pod status with apiserver"
Jun 27 17:47:20 pvesc kubelet[4700]: I0627 17:47:20.752346 4700 kubelet.go:2006] "Starting kubelet main sync loop"
Jun 27 17:47:20 pvesc kubelet[4700]: E0627 17:47:20.752397 4700 kubelet.go:2030] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jun 27 17:47:20 pvesc kubelet[4700]: E0627 17:47:20.752779 4700 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://lb.kubesphere.local:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.50.6:6443: connect: connection refused
Jun 27 17:47:20 pvesc kubelet[4700]: E0627 17:47:20.837884 4700 kubelet.go:2451] "Error getting node" err="node \"pvesc\" not found"
Jun 27 17:47:20 pvesc kubelet[4700]: I0627 17:47:20.839199 4700 kubelet_node_status.go:71] "Attempting to register node" node="pvesc"
Jun 27 17:47:20 pvesc kubelet[4700]: E0627 17:47:20.839560 4700 kubelet_node_status.go:93] "Unable to register node with API server" err="Post \"https://lb.kubesphere.local:6443/api/v1/nodes\": dial tcp 192.168.50.6:6443: connect: connection refused" node="pvesc"
Jun 27 17:47:20 pvesc kubelet[4700]: I0627 17:47:20.846421 4700 cpu_manager.go:209] "Starting CPU manager" policy="none"
Jun 27 17:47:20 pvesc kubelet[4700]: I0627 17:47:20.846434 4700 cpu_manager.go:210] "Reconciling" reconcilePeriod="10s"
Jun 27 17:47:20 pvesc kubelet[4700]: I0627 17:47:20.846452 4700 state_mem.go:36] "Initialized new in-memory state store"
Jun 27 17:47:20 pvesc kubelet[4700]: E0627 17:47:20.852871 4700 kubelet.go:2030] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jun 27 17:47:20 pvesc kubelet[4700]: I0627 17:47:20.853801 4700 policy_none.go:49] "None policy: Start"
Jun 27 17:47:20 pvesc kubelet[4700]: I0627 17:47:20.854351 4700 memory_manager.go:168] "Starting memorymanager" policy="None"
Jun 27 17:47:20 pvesc kubelet[4700]: I0627 17:47:20.854369 4700 state_mem.go:35] "Initializing new in-memory state store"
Jun 27 17:47:20 pvesc kubelet[4700]: I0627 17:47:20.909187 4700 manager.go:609] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jun 27 17:47:20 pvesc kubelet[4700]: I0627 17:47:20.909336 4700 plugin_manager.go:114] "Starting Kubelet Plugin Manager"
Jun 27 17:47:20 pvesc kubelet[4700]: E0627 17:47:20.909607 4700 eviction_manager.go:255] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"pvesc\" not found"
Jun 27 17:47:20 pvesc kubelet[4700]: E0627 17:47:20.938644 4700 kubelet.go:2451] "Error getting node" err="node \"pvesc\" not found"
Jun 27 17:47:20 pvesc kubelet[4700]: E0627 17:47:20.938954 4700 controller.go:144] failed to ensure lease exists, will retry in 400ms, error: Get "https://lb.kubesphere.local:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pvesc?timeout=10s": dial tcp 192.168.50.6:6443: connect: connection refused
Jun 27 17:47:21 pvesc kubelet[4700]: E0627 17:47:21.038678 4700 kubelet.go:2451] "Error getting node" err="node \"pvesc\" not found"
Jun 27 17:47:21 pvesc kubelet[4700]: I0627 17:47:21.040631 4700 kubelet_node_status.go:71] "Attempting to register node" node="pvesc"
Jun 27 17:47:21 pvesc kubelet[4700]: E0627 17:47:21.040902 4700 kubelet_node_status.go:93] "Unable to register node with API server" err="Post \"https://lb.kubesphere.local:6443/api/v1/nodes\": dial tcp 192.168.50.6:6443: connect: connection refused" node="pvesc"
Jun 27 17:47:21 pvesc kubelet[4700]: I0627 17:47:21.053139 4700 topology_manager.go:200] "Topology Admit Handler"
Jun 27 17:47:21 pvesc kubelet[4700]: I0627 17:47:21.053889 4700 topology_manager.go:200] "Topology Admit Handler"
Jun 27 17:47:21 pvesc kubelet[4700]: I0627 17:47:21.054681 4700 topology_manager.go:200] "Topology Admit Handler"
Jun 27 17:47:21 pvesc kubelet[4700]: I0627 17:47:21.054907 4700 status_manager.go:661] "Failed to get status for pod" podUID=daa1370554337bdc1de1d8ec063a910d pod="kube-system/kube-apiserver-pvesc" err="Get \"https://lb.kubesphere.local:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-pvesc\": dial tcp 192.168.50.6:6443: connect: connection refused"
Jun 27 17:47:21 pvesc kubelet[4700]: I0627 17:47:21.055686 4700 status_manager.go:661] "Failed to get status for pod" podUID=3931ed9df0f2df0c3ef1bde5295b3b58 pod="kube-system/kube-controller-manager-pvesc" err="Get \"https://lb.kubesphere.local:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pvesc\": dial tcp 192.168.50.6:6443: connect: connection refused"
Jun 27 17:47:21 pvesc kubelet[4700]: I0627 17:47:21.056399 4700 status_manager.go:661] "Failed to get status for pod" podUID=3f55819df1e5fbb5d8f9152903979a92 pod="kube-system/kube-scheduler-pvesc" err="Get \"https://lb.kubesphere.local:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-pvesc\": dial tcp 192.168.50.6:6443: connect: connection refused"
Jun 27 17:47:21 pvesc kubelet[4700]: E0627 17:47:21.138943 4700 kubelet.go:2451] "Error getting node" err="node \"pvesc\" not found"
Jun 27 17:47:21 pvesc kubelet[4700]: I0627 17:47:21.139145 4700 reconciler.go:225] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f55819df1e5fbb5d8f9152903979a92-kubeconfig\") pod \"kube-scheduler-pvesc\" (UID: \"3f55819df1e5fbb5d8f9152903979a92\") "
Jun 27 17:47:21 pvesc kubelet[4700]: I0627 17:47:21.239922 4700 reconciler.go:225] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/daa1370554337bdc1de1d8ec063a910d-usr-local-share-ca-certificates\") pod \"kube-apiserver-pvesc\" (UID: \"daa1370554337bdc1de1d8ec063a910d\") "
Jun 27 17:47:21 pvesc kubelet[4700]: I0627 17:47:21.239957 4700 reconciler.go:225] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3931ed9df0f2df0c3ef1bde5295b3b58-k8s-certs\") pod \"kube-controller-manager-pvesc\" (UID: \"3931ed9df0f2df0c3ef1bde5295b3b58\") "
Jun 27 17:47:21 pvesc kubelet[4700]: E0627 17:47:21.239993 4700 kubelet.go:2451] "Error getting node" err="node \"pvesc\" not found"
Jun 27 17:47:21 pvesc kubelet[4700]: I0627 17:47:21.240002 4700 reconciler.go:225] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3931ed9df0f2df0c3ef1bde5295b3b58-usr-share-ca-certificates\") pod \"kube-controller-manager-pvesc\" (UID: \"3931ed9df0f2df0c3ef1bde5295b3b58\") "
Jun 27 17:47:21 pvesc kubelet[4700]: I0627 17:47:21.240085 4700 reconciler.go:225] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/daa1370554337bdc1de1d8ec063a910d-k8s-certs\") pod \"kube-apiserver-pvesc\" (UID: \"daa1370554337bdc1de1d8ec063a910d\") "
Jun 27 17:47:21 pvesc kubelet[4700]: I0627 17:47:21.240114 4700 reconciler.go:225] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3931ed9df0f2df0c3ef1bde5295b3b58-etc-ca-certificates\") pod \"kube-controller-manager-pvesc\" (UID: \"3931ed9df0f2df0c3ef1bde5295b3b58\") "
Jun 27 17:47:21 pvesc kubelet[4700]: I0627 17:47:21.240146 4700 reconciler.go:225] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-time\" (UniqueName: \"kubernetes.io/host-path/3931ed9df0f2df0c3ef1bde5295b3b58-host-time\") pod \"kube-controller-manager-pvesc\" (UID: \"3931ed9df0f2df0c3ef1bde5295b3b58\") "
Jun 27 17:47:21 pvesc kubelet[4700]: I0627 17:47:21.240173 4700 reconciler.go:225] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs-0\" (UniqueName: \"kubernetes.io/host-path/daa1370554337bdc1de1d8ec063a910d-etcd-certs-0\") pod \"kube-apiserver-pvesc\" (UID: \"daa1370554337bdc1de1d8ec063a910d\") "
Jun 27 17:47:21 pvesc kubelet[4700]: I0627 17:47:21.240200 4700 reconciler.go:225] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/daa1370554337bdc1de1d8ec063a910d-usr-share-ca-certificates\") pod \"kube-apiserver-pvesc\" (UID: \"daa1370554337bdc1de1d8ec063a910d\") "
Jun 27 17:47:21 pvesc kubelet[4700]: I0627 17:47:21.240240 4700 reconciler.go:225] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/daa1370554337bdc1de1d8ec063a910d-ca-certs\") pod \"kube-apiserver-pvesc\" (UID: \"daa1370554337bdc1de1d8ec063a910d\") "
Jun 27 17:47:21 pvesc kubelet[4700]: I0627 17:47:21.240267 4700 reconciler.go:225] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3931ed9df0f2df0c3ef1bde5295b3b58-flexvolume-dir\") pod \"kube-controller-manager-pvesc\" (UID: \"3931ed9df0f2df0c3ef1bde5295b3b58\") "
Jun 27 17:47:21 pvesc kubelet[4700]: I0627 17:47:21.240294 4700 reconciler.go:225] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3931ed9df0f2df0c3ef1bde5295b3b58-kubeconfig\") pod \"kube-controller-manager-pvesc\" (UID: \"3931ed9df0f2df0c3ef1bde5295b3b58\") "
Jun 27 17:47:21 pvesc kubelet[4700]: I0627 17:47:21.240330 4700 reconciler.go:225] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3931ed9df0f2df0c3ef1bde5295b3b58-usr-local-share-ca-certificates\") pod \"kube-controller-manager-pvesc\" (UID: \"3931ed9df0f2df0c3ef1bde5295b3b58\") "
Jun 27 17:47:21 pvesc kubelet[4700]: I0627 17:47:21.240366 4700 reconciler.go:225] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/daa1370554337bdc1de1d8ec063a910d-etc-ca-certificates\") pod \"kube-apiserver-pvesc\" (UID: \"daa1370554337bdc1de1d8ec063a910d\") "
Jun 27 17:47:21 pvesc kubelet[4700]: I0627 17:47:21.240393 4700 reconciler.go:225] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3931ed9df0f2df0c3ef1bde5295b3b58-ca-certs\") pod \"kube-controller-manager-pvesc\" (UID: \"3931ed9df0f2df0c3ef1bde5295b3b58\") "
Jun 27 17:47:21 pvesc kubelet[4700]: E0627 17:47:21.339818 4700 controller.go:144] failed to ensure lease exists, will retry in 800ms, error: Get "https://lb.kubesphere.local:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pvesc?timeout=10s": dial tcp 192.168.50.6:6443: connect: connection refused
Jun 27 17:47:21 pvesc kubelet[4700]: E0627 17:47:21.340986 4700 kubelet.go:2451] "Error getting node" err="node \"pvesc\" not found"
Jun 27 17:47:21 pvesc kubelet[4700]: E0627 17:47:21.409974 4700 remote_runtime.go:116] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to reserve sandbox name \"kube-apiserver-pvesc_kube-system_daa1370554337bdc1de1d8ec063a910d_0\": name \"kube-apiserver-pvesc_kube-system_daa1370554337bdc1de1d8ec063a910d_0\" is reserved for \"310b500ebfc8a5e524688d7b22d4b0d66aedde6df4ebfa01cc9a234592048bc1\""
Jun 27 17:47:21 pvesc kubelet[4700]: E0627 17:47:21.410007 4700 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to reserve sandbox name \"kube-apiserver-pvesc_kube-system_daa1370554337bdc1de1d8ec063a910d_0\": name \"kube-apiserver-pvesc_kube-system_daa1370554337bdc1de1d8ec063a910d_0\" is reserved for \"310b500ebfc8a5e524688d7b22d4b0d66aedde6df4ebfa01cc9a234592048bc1\"" pod="kube-system/kube-apiserver-pvesc"
Jun 27 17:47:21 pvesc kubelet[4700]: E0627 17:47:21.410025 4700 kuberuntime_manager.go:819] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to reserve sandbox name \"kube-apiserver-pvesc_kube-system_daa1370554337bdc1de1d8ec063a910d_0\": name \"kube-apiserver-pvesc_kube-system_daa1370554337bdc1de1d8ec063a910d_0\" is reserved for \"310b500ebfc8a5e524688d7b22d4b0d66aedde6df4ebfa01cc9a234592048bc1\"" pod="kube-system/kube-apiserver-pvesc"
Jun 27 17:47:21 pvesc kubelet[4700]: E0627 17:47:21.410077 4700 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-pvesc_kube-system(daa1370554337bdc1de1d8ec063a910d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-pvesc_kube-system(daa1370554337bdc1de1d8ec063a910d)\\\": rpc error: code = Unknown desc = failed to reserve sandbox name \\\"kube-apiserver-pvesc_kube-system_daa1370554337bdc1de1d8ec063a910d_0\\\": name \\\"kube-apiserver-pvesc_kube-system_daa1370554337bdc1de1d8ec063a910d_0\\\" is reserved for \\\"310b500ebfc8a5e524688d7b22d4b0d66aedde6df4ebfa01cc9a234592048bc1\\\"\"" pod="kube-system/kube-apiserver-pvesc" podUID=daa1370554337bdc1de1d8ec063a910d
Jun 27 17:47:21 pvesc kubelet[4700]: E0627 17:47:21.437767 4700 remote_runtime.go:116] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to reserve sandbox name \"kube-controller-manager-pvesc_kube-system_3931ed9df0f2df0c3ef1bde5295b3b58_0\": name \"kube-controller-manager-pvesc_kube-system_3931ed9df0f2df0c3ef1bde5295b3b58_0\" is reserved for \"f4e8404ddab2e6fd476f29393c494cdc63f22b32284315145c0406192e6542c6\""
Jun 27 17:47:21 pvesc kubelet[4700]: E0627 17:47:21.437795 4700 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to reserve sandbox name \"kube-controller-manager-pvesc_kube-system_3931ed9df0f2df0c3ef1bde5295b3b58_0\": name \"kube-controller-manager-pvesc_kube-system_3931ed9df0f2df0c3ef1bde5295b3b58_0\" is reserved for \"f4e8404ddab2e6fd476f29393c494cdc63f22b32284315145c0406192e6542c6\"" pod="kube-system/kube-controller-manager-pvesc"
Jun 27 17:47:21 pvesc kubelet[4700]: E0627 17:47:21.437815 4700 kuberuntime_manager.go:819] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to reserve sandbox name \"kube-controller-manager-pvesc_kube-system_3931ed9df0f2df0c3ef1bde5295b3b58_0\": name \"kube-controller-manager-pvesc_kube-system_3931ed9df0f2df0c3ef1bde5295b3b58_0\" is reserved for \"f4e8404ddab2e6fd476f29393c494cdc63f22b32284315145c0406192e6542c6\"" pod="kube-system/kube-controller-manager-pvesc"
Jun 27 17:47:21 pvesc kubelet[4700]: E0627 17:47:21.437861 4700 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-controller-manager-pvesc_kube-system(3931ed9df0f2df0c3ef1bde5295b3b58)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-controller-manager-pvesc_kube-system(3931ed9df0f2df0c3ef1bde5295b3b58)\\\": rpc error: code = Unknown desc = failed to reserve sandbox name \\\"kube-controller-manager-pvesc_kube-system_3931ed9df0f2df0c3ef1bde5295b3b58_0\\\": name \\\"kube-controller-manager-pvesc_kube-system_3931ed9df0f2df0c3ef1bde5295b3b58_0\\\" is reserved for \\\"f4e8404ddab2e6fd476f29393c494cdc63f22b32284315145c0406192e6542c6\\\"\"" pod="kube-system/kube-controller-manager-pvesc" podUID=3931ed9df0f2df0c3ef1bde5295b3b58
Jun 27 17:47:21 pvesc kubelet[4700]: E0627 17:47:21.440538 4700 remote_runtime.go:116] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to reserve sandbox name \"kube-scheduler-pvesc_kube-system_3f55819df1e5fbb5d8f9152903979a92_0\": name \"kube-scheduler-pvesc_kube-system_3f55819df1e5fbb5d8f9152903979a92_0\" is reserved for \"69645726b5ac7acb61a660fad2736fd27440736a8648464a59ef92bd3914ebc7\""
Jun 27 17:47:21 pvesc kubelet[4700]: E0627 17:47:21.440565 4700 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to reserve sandbox name \"kube-scheduler-pvesc_kube-system_3f55819df1e5fbb5d8f9152903979a92_0\": name \"kube-scheduler-pvesc_kube-system_3f55819df1e5fbb5d8f9152903979a92_0\" is reserved for \"69645726b5ac7acb61a660fad2736fd27440736a8648464a59ef92bd3914ebc7\"" pod="kube-system/kube-scheduler-pvesc"
Jun 27 17:47:21 pvesc kubelet[4700]: E0627 17:47:21.440580 4700 kuberuntime_manager.go:819] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to reserve sandbox name \"kube-scheduler-pvesc_kube-system_3f55819df1e5fbb5d8f9152903979a92_0\": name \"kube-scheduler-pvesc_kube-system_3f55819df1e5fbb5d8f9152903979a92_0\" is reserved for \"69645726b5ac7acb61a660fad2736fd27440736a8648464a59ef92bd3914ebc7\"" pod="kube-system/kube-scheduler-pvesc"
Jun 27 17:47:21 pvesc kubelet[4700]: E0627 17:47:21.440730 4700 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-scheduler-pvesc_kube-system(3f55819df1e5fbb5d8f9152903979a92)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-scheduler-pvesc_kube-system(3f55819df1e5fbb5d8f9152903979a92)\\\": rpc error: code = Unknown desc = failed to reserve sandbox name \\\"kube-scheduler-pvesc_kube-system_3f55819df1e5fbb5d8f9152903979a92_0\\\": name \\\"kube-scheduler-pvesc_kube-system_3f55819df1e5fbb5d8f9152903979a92_0\\\" is reserved for \\\"69645726b5ac7acb61a660fad2736fd27440736a8648464a59ef92bd3914ebc7\\\"\"" pod="kube-system/kube-scheduler-pvesc" podUID=3f55819df1e5fbb5d8f9152903979a92
Jun 27 17:47:21 pvesc kubelet[4700]: E0627 17:47:21.441709 4700 kubelet.go:2451] "Error getting node" err="node \"pvesc\" not found"
```
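The repeated "failed to reserve sandbox name ... is reserved for <id>" errors usually mean sandboxes from the earlier kubeadm attempt are still registered in containerd. A hedged cleanup sketch before retrying on that node (destructive; adjust as needed):

```bash
sudo kubeadm reset -f                              # tear down the half-initialized control plane
sudo crictl pods -q | xargs -r sudo crictl stopp   # stop any leftover pod sandboxes
sudo crictl pods -q | xargs -r sudo crictl rmp     # remove them so the names are free again
sudo systemctl restart containerd kubelet
```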
afrojewelz
In the end, using exactly the same parameters but only changing containerManager in config-sample.yaml to docker, there was no problem at all and the initialization succeeded right away.
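For reference, that runtime switch is a single field in the KubeKey cluster config; a minimal, hypothetical excerpt of config-sample.yaml (surrounding fields omitted, values illustrative):

```bash
# Hypothetical excerpt of config-sample.yaml (KubeKey cluster spec)
grep -n 'containerManager' config-sample.yaml
#   spec:
#     kubernetes:
#       version: v1.22.10
#       containerManager: docker   # "containerd" was the value used in the failing runs above
```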