• Installation & Deployment
  • Installing a Kubernetes cluster with KK 2.0.0 fails with: Error getting node" err="node \"sit-k8s-master1\" not found

Operating system information
VM/physical machine, CentOS 7.8, 8C/32G

Kubernetes version
v1.21.5, multi-node: three masters, three workers

Container runtime
Docker, v20.10.9

KubeSphere version
v3.2.1, online installation, full installation.

What is the problem
The error log is below (screenshots would be even better).

[certs] Generating "apiserver-kubelet-client" certificate and key

[certs] Generating "front-proxy-ca" certificate and key

[certs] Generating "front-proxy-client" certificate and key

[certs] External etcd mode: Skipping etcd/ca certificate authority generation

[certs] External etcd mode: Skipping etcd/server certificate generation

[certs] External etcd mode: Skipping etcd/peer certificate generation

[certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation

[certs] External etcd mode: Skipping apiserver-etcd-client certificate generation

[certs] Generating "sa" key and public key

[kubeconfig] Using kubeconfig folder "/etc/kubernetes"

[kubeconfig] Writing "admin.conf" kubeconfig file

[kubeconfig] Writing "kubelet.conf" kubeconfig file

[kubeconfig] Writing "controller-manager.conf" kubeconfig file

[kubeconfig] Writing "scheduler.conf" kubeconfig file

[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"

[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"

[kubelet-start] Starting the kubelet

[control-plane] Using manifest folder "/etc/kubernetes/manifests"

[control-plane] Creating static Pod manifest for "kube-apiserver"

[control-plane] Creating static Pod manifest for "kube-controller-manager"

[control-plane] Creating static Pod manifest for "kube-scheduler"

[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s

[kubelet-check] Initial timeout of 40s passed.

    Unfortunately, an error has occurred:

            timed out waiting for the condition

    This error is likely caused by:

            - The kubelet is not running

            - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

    If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:

            - 'systemctl status kubelet'

            - 'journalctl -xeu kubelet'

    Additionally, a control plane component may have crashed or exited when started by the container runtime.

    To troubleshoot, list all containers using your preferred container runtimes CLI.

    Here is one example how you may list all Kubernetes containers running in docker:

            - 'docker ps -a | grep kube | grep -v pause'

            Once you have found the failing container, you can inspect its logs with:

            - 'docker logs CONTAINERID'

error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster

systemctl status kubelet

● kubelet.service - kubelet: The Kubernetes Node Agent

Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled)

Drop-In: /etc/systemd/system/kubelet.service.d

       └─10-kubeadm.conf

Active: inactive (dead) since Tue 2022-06-07 16:37:07 CST; 1min 3s ago

 Docs: http://kubernetes.io/docs/

Process: 36809 ExecStart=/usr/local/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=0/SUCCESS)

Main PID: 36809 (code=exited, status=0/SUCCESS)

Jun 07 16:37:07 sit-k8s-master1 kubelet[36809]: E0607 16:37:07.127521 36809 kubelet.go:2291] "Error getting node" err="node \"sit-k8s-master1\" not found"

Jun 07 16:37:07 sit-k8s-master1 kubelet[36809]: E0607 16:37:07.228053 36809 kubelet.go:2291] "Error getting node" err="node \"sit-k8s-master1\" not found"

Jun 07 16:37:07 sit-k8s-master1 kubelet[36809]: E0607 16:37:07.328968 36809 kubelet.go:2291] "Error getting node" err="node \"sit-k8s-master1\" not found"

Jun 07 16:37:07 sit-k8s-master1 kubelet[36809]: E0607 16:37:07.429593 36809 kubelet.go:2291] "Error getting node" err="node \"sit-k8s-master1\" not found"

Jun 07 16:37:07 sit-k8s-master1 kubelet[36809]: E0607 16:37:07.530496 36809 kubelet.go:2291] "Error getting node" err="node \"sit-k8s-master1\" not found"

Jun 07 16:37:07 sit-k8s-master1 kubelet[36809]: E0607 16:37:07.631477 36809 kubelet.go:2291] "Error getting node" err="node \"sit-k8s-master1\" not found"

Jun 07 16:37:07 sit-k8s-master1 kubelet[36809]: I0607 16:37:07.632640 36809 cni.go:239] "Unable to update cni config" err="no networks found in /etc/cni/net.d"

Jun 07 16:37:07 sit-k8s-master1 systemd[1]: Stopping kubelet: The Kubernetes Node Agent…

Jun 07 16:37:07 sit-k8s-master1 kubelet[36809]: I0607 16:37:07.728825 36809 dynamic_cafile_content.go:182] Shutting down client-ca-bundle::/etc/kubernetes/pki/ca.crt

Check the network, DNS, and firewall, try whether you can pull the images, and look at the corresponding kubelet logs with journalctl -xeu kubelet to narrow down where the problem is first.
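On the master node, those checks could look something like this (a sketch only; `lb.kubesphere.local` and the port come from the logs in this thread, and each step skips gracefully if the tool is not installed):

```shell
# Troubleshooting sketch for the checks suggested above.
# A missing tool is skipped and a failing check is reported, not fatal.
check() {
  command -v "$1" >/dev/null 2>&1 || { echo "skip: $1 not installed"; return 0; }
  "$@" || echo "check failed: $*"
}

# Does the control-plane endpoint from the logs resolve, and is 6443 listening?
check getent hosts lb.kubesphere.local
check ss -tln

# Kubelet status and recent logs
check systemctl status kubelet --no-pager
check journalctl -u kubelet --no-pager -n 50

# Is the runtime up, and did any control-plane container start and crash?
check docker ps -a
```

If `ss -tln` shows nothing on 6443, the kube-apiserver never came up, which matches the "connection refused" lines later in the thread.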

    ruiyaoOps

    nameserver 10.255.254.88

    nameserver 10.255.255.88

    Partial output from the journalctl -xeu kubelet command:

    Jun 07 17:11:19 sit-k8s-master1 kubelet[53553]: E0607 17:11:19.228801 53553 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://lb.kubesphere.local:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsit-k8s-master1&limit=500&resourceVersion=0": dial tcp 172.30.35.80:6443: connect: connection refused

    Jun 07 17:11:19 sit-k8s-master1 kubelet[53553]: E0607 17:11:19.240807 53553 kubelet.go:2291] "Error getting node" err="node \"sit-k8s-master1\" not found"

    Jun 07 17:11:19 sit-k8s-master1 kubelet[53553]: E0607 17:11:19.341890 53553 kubelet.go:2291] "Error getting node" err="node \"sit-k8s-master1\" not found"

    Jun 07 17:11:19 sit-k8s-master1 kubelet[53553]: I0607 17:11:19.364251 53553 kubelet_node_status.go:71] "Attempting to register node" node="sit-k8s-master1"

    Jun 07 17:11:19 sit-k8s-master1 kubelet[53553]: E0607 17:11:19.365022 53553 kubelet_node_status.go:93] "Unable to register node with API server" err="Post \"https://lb.kubesphere.local:6443/api/v1/nodes\": dial tcp 172.30.35.80:6443: connect: connection refused" node="sit-k8s-master1"

    Jun 07 17:11:19 sit-k8s-master1 kubelet[53553]: E0607 17:11:19.442315 53553 kubelet.go:2291] "Error getting node" err="node \"sit-k8s-master1\" not found"

    Jun 07 17:11:19 sit-k8s-master1 kubelet[53553]: E0607 17:11:19.542858 53553 kubelet.go:2291] "Error getting node" err="node \"sit-k8s-master1\" not found"

    Jun 07 17:11:19 sit-k8s-master1 kubelet[53553]: E0607 17:11:19.643644 53553 kubelet.go:2291] "Error getting node" err="node \"sit-k8s-master1\" not found"

    Jun 07 17:11:19 sit-k8s-master1 kubelet[53553]: E0607 17:11:19.744045 53553 kubelet.go:2291] "Error getting node" err="node \"sit-k8s-master1\" not found"

    Jun 07 17:11:19 sit-k8s-master1 kubelet[53553]: E0607 17:11:19.844490 53553 kubelet.go:2291] "Error getting node" err="node \"sit-k8s-master1\" not found"

    Jun 07 17:11:19 sit-k8s-master1 kubelet[53553]: E0607 17:11:19.944971 53553 kubelet.go:2291] "Error getting node" err="node \"sit-k8s-master1\" not found"

    Jun 07 17:11:20 sit-k8s-master1 kubelet[53553]: E0607 17:11:20.045282 53553 kubelet.go:2291] "Error getting node" err="node \"sit-k8s-master1\" not found"

    Jun 07 17:11:20 sit-k8s-master1 kubelet[53553]: E0607 17:11:20.146175 53553 kubelet.go:2291] "Error getting node" err="node \"sit-k8s-master1\" not found"

    Jun 07 17:11:20 sit-k8s-master1 kubelet[53553]: E0607 17:11:20.205367 53553 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"sit-k8s-master1.16f64b8198e1fca1", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"sit-k8s-master1", UID:"sit-k8s-master1", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node sit-k8s-master1 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"sit-k8s-master1"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc09fe349458bcaa1, ext:6981826228, loc:(*time.Location)(0x74f4aa0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc09fe349458bcaa1, ext:6981826228, loc:(*time.Location)(0x74f4aa0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://lb.kubesphere.local:6443/api/v1/namespaces/default/events": dial tcp 172.30.36.80:6443: connect: connection refused' (may retry after sleeping)
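The repeated "connection refused" lines show the kubelet dialing lb.kubesphere.local on 6443 and finding nothing listening at the control-plane endpoint. On a KubeKey install without an external load balancer, that name is normally resolved through an /etc/hosts entry written during setup, so it may be worth confirming the entry points at the intended address (a sketch; the address shown is only the one appearing in the logs above):

```shell
# Where does the control-plane endpoint resolve on this node?
grep lb.kubesphere.local /etc/hosts || echo "no /etc/hosts entry for lb.kubesphere.local"

# Expected shape (the address is environment-specific, e.g. the first
# master or the load balancer):
# 172.30.35.80  lb.kubesphere.local
```

If the entry is correct, the next step is checking whether the kube-apiserver container started at all (docker ps -a | grep kube | grep -v pause, as the kubeadm output suggests) and inspecting its logs.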