[root@k8s bdcsmp]# kubectl get nodes
NAME       STATUS     ROLES           AGE     VERSION
k8s.m102 Ready master,worker 6d v1.18.6
k8s.n188 Ready worker 5d23h v1.18.6
k8s.n189 NotReady worker 35m v1.18.6
k8s.n190 Ready worker 3h13m v1.18.6

Inspecting node 189 shows that kubelet fails to start:
Jan 05 14:49:46 k8s.n189 kubelet[12445]: I0105 14:49:46.790421 12445 client.go:92] Start docker client with request timeout=2m0s
Jan 05 14:49:46 k8s.n189 kubelet[12445]: W0105 14:49:46.791526 12445 docker_service.go:561] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to
Jan 05 14:49:46 k8s.n189 kubelet[12445]: I0105 14:49:46.791554 12445 docker_service.go:238] Hairpin mode set to "hairpin-veth"
Jan 05 14:49:46 k8s.n189 kubelet[12445]: W0105 14:49:46.792055 12445 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
Jan 05 14:49:46 k8s.n189 kubelet[12445]: W0105 14:49:46.797295 12445 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
Jan 05 14:49:46 k8s.n189 kubelet[12445]: I0105 14:49:46.797343 12445 docker_service.go:253] Docker cri networking managed by cni
Jan 05 14:49:46 k8s.n189 kubelet[12445]: W0105 14:49:46.797439 12445 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
Jan 05 14:49:46 k8s.n189 kubelet[12445]: I0105 14:49:46.810627 12445 docker_service.go:258] Docker Info: &{ID:2HMV:WU3G:QPKU:HQ4C:DMZG:IDMF:363P:NVIP:OMB7:ISIH:BW3H:OBR4 Contain
Jan 05 14:49:46 k8s.n189 kubelet[12445]: I0105 14:49:46.810717 12445 docker_service.go:271] Setting cgroupDriver to systemd
Jan 05 14:49:46 k8s.n189 kubelet[12445]: I0105 14:49:46.826419 12445 remote_runtime.go:59] parsed scheme: ""
Jan 05 14:49:46 k8s.n189 kubelet[12445]: I0105 14:49:46.826439 12445 remote_runtime.go:59] scheme "" not registered, fallback to default scheme
Jan 05 14:49:46 k8s.n189 kubelet[12445]: I0105 14:49:46.826473 12445 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock <nil> 0 <nil>}] <ni
Jan 05 14:49:46 k8s.n189 kubelet[12445]: I0105 14:49:46.826491 12445 clientconn.go:933] ClientConn switching balancer to "pick_first"
Jan 05 14:49:46 k8s.n189 kubelet[12445]: I0105 14:49:46.826538 12445 remote_image.go:50] parsed scheme: ""
Jan 05 14:49:46 k8s.n189 kubelet[12445]: I0105 14:49:46.826546 12445 remote_image.go:50] scheme "" not registered, fallback to default scheme
Jan 05 14:49:46 k8s.n189 kubelet[12445]: I0105 14:49:46.826557 12445 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock <nil> 0 <nil>}] <ni
Jan 05 14:49:46 k8s.n189 kubelet[12445]: I0105 14:49:46.826563 12445 clientconn.go:933] ClientConn switching balancer to "pick_first"
Jan 05 14:49:46 k8s.n189 kubelet[12445]: I0105 14:49:46.826597 12445 kubelet.go:292] Adding pod path: /etc/kubernetes/manifests
Jan 05 14:49:46 k8s.n189 kubelet[12445]: I0105 14:49:46.826630 12445 kubelet.go:317] Watching apiserver
Jan 05 14:49:51 k8s.n189 kubelet[12445]: W0105 14:49:51.797636 12445 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
Jan 05 14:49:52 k8s.n189 kubelet[12445]: E0105 14:49:52.121003 12445 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chain. Dep
Jan 05 14:49:52 k8s.n189 kubelet[12445]: For verbose messaging see aws.Config.CredentialsChainVerboseErrors
Jan 05 14:49:52 k8s.n189 kubelet[12445]: I0105 14:49:52.122350 12445 kuberuntime_manager.go:211] Container runtime docker initialized, version: 18.09.9, apiVersion: 1.39.0
Jan 05 14:49:52 k8s.n189 kubelet[12445]: I0105 14:49:52.122899 12445 server.go:1126] Started kubelet
Jan 05 14:49:52 k8s.n189 kubelet[12445]: E0105 14:49:52.123048 12445 kubelet.go:1306] Image garbage collection failed once. Stats initialization may not have completed yet: fail
Jan 05 14:49:52 k8s.n189 kubelet[12445]: I0105 14:49:52.123086 12445 server.go:145] Starting to listen on 0.0.0.0:10250
Jan 05 14:49:52 k8s.n189 kubelet[12445]: I0105 14:49:52.124132 12445 server.go:393] Adding debug handlers to kubelet server.
Jan 05 14:49:52 k8s.n189 kubelet[12445]: I0105 14:49:52.124256 12445 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer
Jan 05 14:49:52 k8s.n189 kubelet[12445]: I0105 14:49:52.128362 12445 volume_manager.go:265] Starting Kubelet Volume Manager
Jan 05 14:49:52 k8s.n189 kubelet[12445]: I0105 14:49:52.128908 12445 desired_state_of_world_populator.go:139] Desired state populator starts to run
Jan 05 14:49:52 k8s.n189 kubelet[12445]: E0105 14:49:52.130487 12445 kubelet.go:2188] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady messag
Jan 05 14:49:52 k8s.n189 kubelet[12445]: I0105 14:49:52.130590 12445 clientconn.go:106] parsed scheme: "unix"
Jan 05 14:49:52 k8s.n189 kubelet[12445]: I0105 14:49:52.130601 12445 clientconn.go:106] scheme "unix" not registered, fallback to default scheme
Jan 05 14:49:52 k8s.n189 kubelet[12445]: I0105 14:49:52.130692 12445 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock <nil>
Jan 05 14:49:52 k8s.n189 kubelet[12445]: I0105 14:49:52.130702 12445 clientconn.go:933] ClientConn switching balancer to "pick_first"
Jan 05 14:49:52 k8s.n189 kubelet[12445]: I0105 14:49:52.154771 12445 status_manager.go:158] Starting to sync pod status with apiserver
Jan 05 14:49:52 k8s.n189 kubelet[12445]: I0105 14:49:52.154810 12445 kubelet.go:1822] Starting kubelet main sync loop.
Jan 05 14:49:52 k8s.n189 kubelet[12445]: E0105 14:49:52.154853 12445 kubelet.go:1846] skipping pod synchronization - [container runtime status check may not have completed yet,
Jan 05 14:49:52 k8s.n189 kubelet[12445]: I0105 14:49:52.205691 12445 cpu_manager.go:184] [cpumanager] starting with none policy
Jan 05 14:49:52 k8s.n189 kubelet[12445]: I0105 14:49:52.205706 12445 cpu_manager.go:185] [cpumanager] reconciling every 10s
Jan 05 14:49:52 k8s.n189 kubelet[12445]: I0105 14:49:52.205728 12445 state_mem.go:36] [cpumanager] initializing new in-memory state store
Jan 05 14:49:52 k8s.n189 kubelet[12445]: I0105 14:49:52.206004 12445 state_mem.go:88] [cpumanager] updated default cpuset: ""
Jan 05 14:49:52 k8s.n189 kubelet[12445]: I0105 14:49:52.206015 12445 state_mem.go:96] [cpumanager] updated cpuset assignments: "map[]"
Jan 05 14:49:52 k8s.n189 kubelet[12445]: I0105 14:49:52.206028 12445 policy_none.go:43] [cpumanager] none policy: Start
Jan 05 14:49:52 k8s.n189 kubelet[12445]: E0105 14:49:52.208631 12445 node_container_manager_linux.go:57] Failed to create ["kubepods"] cgroup
Jan 05 14:49:52 k8s.n189 kubelet[12445]: F0105 14:49:52.208650 12445 kubelet.go:1384] Failed to start ContainerManager Cannot set property TasksAccounting, or unknown property.
Jan 05 14:49:52 k8s.n189 systemd[1]: kubelet.service: main process exited, code=exited, status=255/n/a
Jan 05 14:49:52 k8s.n189 systemd[1]: Unit kubelet.service entered failed state.
Jan 05 14:49:52 k8s.n189 systemd[1]: kubelet.service failed.
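
The log above was presumably captured from the systemd journal on the failing node (kubelet clearly runs as a systemd unit here). A minimal sketch for re-capturing it, or for watching a fresh start attempt live:

```shell
# Print the most recent kubelet log lines on the failing node (k8s.n189).
# --no-pager writes straight to stdout; -n limits output to the last 100 lines.
journalctl -u kubelet --no-pager -n 100

# Or trigger a fresh start and follow the log as it happens:
systemctl restart kubelet && journalctl -u kubelet -f
```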

I don't understand how to deal with this.

    RolandMa1986
    They are all identical CentOS 7 servers; the other nodes are all fine.
    Linux k8s.n188 3.10.0-1160.6.1.el7.x86_64 #1 SMP Tue Nov 17 13:59:11 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux

    Linux k8s.n189 3.10.0-1160.6.1.el7.x86_64 #1 SMP Tue Nov 17 13:59:11 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux

      cadge10 You can try yum update, then systemctl stop kubelet; systemctl start kubelet.
      Was this node added to the cluster separately, later on?
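
For what it's worth, the fatal line "Failed to start ContainerManager Cannot set property TasksAccounting, or unknown property" usually means the systemd on that one node is too old to know the TasksAccounting cgroup property (it was added upstream in systemd 227 and, as far as I know, backported into later CentOS 7 builds of systemd 219). A narrower sketch than a full yum update, assuming the other nodes carry a newer systemd package:

```shell
# Compare the installed systemd version on k8s.n189 against a healthy node
# (e.g. k8s.n188) -- the failing node is likely behind:
rpm -q systemd
systemctl --version | head -n 1

# Update only systemd rather than every package on the node:
yum update -y systemd

# Make PID 1 re-execute the new systemd binary, then restart kubelet:
systemctl daemon-reexec
systemctl restart kubelet
```

If the versions already match across nodes, that points away from systemd and the full yum update suggested above is the safer next step.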