• Kubernetes
  • Collecting reports of Kubernetes nodes going NotReady

This thread collects reports of Kubernetes nodes going NotReady, whether in a test or production environment and whether or not the issue has already been resolved. Record your case here so we can help each other troubleshoot and keep the fixes on record. To report: when a node goes NotReady, run journalctl -u kubelet -f on that node and paste the errors it shows.
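Alongside the kubelet log, a few basic checks usually help narrow a NotReady node down. This is only a minimal sketch; node2 below stands in for whichever node is affected:

kubectl get nodes -o wide          # which nodes are NotReady, plus their IPs and kubelet versions
kubectl describe node node2        # check the Conditions and Events sections for the stated reason
systemctl status kubelet           # on the affected node: is kubelet even running?
journalctl -u kubelet -f           # the log to paste into this thread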

For example, the kubelet log might look like this:
journalctl -u kubelet -f

E0127 10:58:30.888767   20522 reflector.go:282] object-"kube-system"/"coredns-token-zwkzs": Failed to watch *v1.Secret: Get https://lb.kubesphere.local:6443/api/v1/namespaces/kube-system/secrets?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dcoredns-token-zwkzs&resourceVersion=7399132&timeout=7m50s&timeoutSeconds=470&watch=true: dial tcp 192.168.0.3:6443: connect: connection refused
    -- Logs begin at Wed 2021-01-20 10:41:55 CST. --
    Jan 29 14:13:37 node2 kubelet[10259]: Trace[1366589216]: [14.599630846s] [14.599630846s] END
    Jan 29 14:13:37 node2 kubelet[10259]: E0129 14:13:37.606881   10259 reflector.go:153] object-"base"/"seata-config": Failed to list *v1.ConfigMap: Get https://lb.kubesphere.local:6443/api/v1/namespaces/base/configmaps?fieldSelector=metadata.name%3Dseata-config&limit=500&resourceVersion=0: write tcp 192.168.2.52:50744->192.168.2.31:6443: use of closed network connection
    Jan 29 14:13:37 node2 kubelet[10259]: I0129 14:13:37.806851   10259 trace.go:116] Trace[109599865]: "Reflector ListAndWatch" name:object-"istio-system"/"istio-ingressgateway-certs" (started: 2021-01-29 14:13:23.206998312 +0800 CST m=+789417.398975371) (total time: 14.599835986s):
    Jan 29 14:13:37 node2 kubelet[10259]: Trace[109599865]: [14.599835986s] [14.599835986s] END
    Jan 29 14:13:37 node2 kubelet[10259]: E0129 14:13:37.806881   10259 reflector.go:153] object-"istio-system"/"istio-ingressgateway-certs": Failed to list *v1.Secret: Get https://lb.kubesphere.local:6443/api/v1/namespaces/istio-system/secrets?fieldSelector=metadata.name%3Distio-ingressgateway-certs&limit=500&resourceVersion=0: write tcp 192.168.2.52:50744->192.168.2.31:6443: use of closed network connection
    Jan 29 14:13:38 node2 kubelet[10259]: I0129 14:13:38.006970   10259 trace.go:116] Trace[373962950]: "Reflector ListAndWatch" name:object-"istio-system"/"istio.istio-mixer-service-account" (started: 2021-01-29 14:13:23.407134142 +0800 CST m=+789417.599111170) (total time: 14.599815682s):
    Jan 29 14:13:38 node2 kubelet[10259]: Trace[373962950]: [14.599815682s] [14.599815682s] END
    Jan 29 14:13:38 node2 kubelet[10259]: E0129 14:13:38.006990   10259 reflector.go:153] object-"istio-system"/"istio.istio-mixer-service-account": Failed to list *v1.Secret: Get https://lb.kubesphere.local:6443/api/v1/namespaces/istio-system/secrets?fieldSelector=metadata.name%3Distio.istio-mixer-service-account&limit=500&resourceVersion=0: write tcp 192.168.2.52:50744->192.168.2.31:6443: use of closed network connection
    Jan 29 14:13:38 node2 kubelet[10259]: I0129 14:13:38.206892   10259 trace.go:116] Trace[1890837965]: "Reflector ListAndWatch" name:object-"kubesphere-controls-system"/"kubesphere-router-serviceaccount-token-mp7rc" (started: 2021-01-29 14:13:23.607050494 +0800 CST m=+789417.799027525) (total time: 14.59981506s):
    Jan 29 14:13:38 node2 kubelet[10259]: Trace[1890837965]: [14.59981506s] [14.59981506s] END
    Jan 29 14:13:38 node2 kubelet[10259]: E0129 14:13:38.206920   10259 reflector.go:153] object-"kubesphere-controls-system"/"kubesphere-router-serviceaccount-token-mp7rc": Failed to list *v1.Secret: Get https://lb.kubesphere.local:6443/api/v1/namespaces/kubesphere-controls-system/secrets?fieldSelector=metadata.name%3Dkubesphere-router-serviceaccount-token-mp7rc&limit=500&resourceVersion=0: write tcp 192.168.2.52:50744->192.168.2.31:6443: use of closed network connection
    Jan 29 14:13:38 node2 kubelet[10259]: E0129 14:13:38.406836   10259 desired_state_of_world_populator.go:320] Error processing volume "jenkins-home" for pod "ks-jenkins-946b98b99-ts7s8_kubesphere-devops-system(b515ebd1-6036-402d-9ca0-c27695db865e)": error processing PVC kubesphere-devops-system/ks-jenkins: failed to fetch PVC from API server: Get https://lb.kubesphere.local:6443/api/v1/namespaces/kubesphere-devops-system/persistentvolumeclaims/ks-jenkins: write tcp 192.168.2.52:50744->192.168.2.31:6443: use of closed network connection
    Jan 29 14:13:38 node2 kubelet[10259]: I0129 14:13:38.606855   10259 trace.go:116] Trace[962292806]: "Reflector ListAndWatch" name:object-"uat-web"/"hub" (started: 2021-01-29 14:13:23.807069896 +0800 CST m=+789417.999046927) (total time: 14.799769398s):
    Jan 29 14:13:38 node2 kubelet[10259]: Trace[962292806]: [14.799769398s] [14.799769398s] END
    Jan 29 14:13:38 node2 kubelet[10259]: E0129 14:13:38.606878   10259 reflector.go:153] object-"uat-web"/"hub": Failed to list *v1.Secret: Get https://lb.kubesphere.local:6443/api/v1/namespaces/uat-web/secrets?fieldSelector=metadata.name%3Dhub&limit=500&resourceVersion=0: write tcp 192.168.2.52:50744->192.168.2.31:6443: use of closed network connection
    Jan 29 14:13:38 node2 kubelet[10259]: I0129 14:13:38.806843   10259 trace.go:116] Trace[1882652399]: "Reflector ListAndWatch" name:object-"default"/"nginx-ingree-nginx-ingress-token-sppw7" (started: 2021-01-29 14:13:24.00705198 +0800 CST m=+789418.199029010) (total time: 14.799773906s):
    Jan 29 14:13:38 node2 kubelet[10259]: Trace[1882652399]: [14.799773906s] [14.799773906s] END
    Jan 29 14:13:38 node2 kubelet[10259]: E0129 14:13:38.806865   10259 reflector.go:153] object-"default"/"nginx-ingree-nginx-ingress-token-sppw7": Failed to list *v1.Secret: Get https://lb.kubesphere.local:6443/api/v1/namespaces/default/secrets?fieldSelector=metadata.name%3Dnginx-ingree-nginx-ingress-token-sppw7&limit=500&resourceVersion=0: write tcp 192.168.2.52:50744->192.168.2.31:6443: use of closed network connection
    Jan 29 14:13:39 node2 kubelet[10259]: I0129 14:13:39.006884   10259 trace.go:116] Trace[356660762]: "Reflector ListAndWatch" name:object-"kubesphere-system"/"ks-installer-token-qgx5j" (started: 2021-01-29 14:13:24.207002073 +0800 CST m=+789418.398979135) (total time: 14.799864622s):
    Jan 29 14:13:39 node2 kubelet[10259]: Trace[356660762]: [14.799864622s] [14.799864622s] END
    Jan 29 14:13:39 node2 kubelet[10259]: E0129 14:13:39.006912   10259 reflector.go:153] object-"kubesphere-system"/"ks-installer-token-qgx5j": Failed to list *v1.Secret: Get https://lb.kubesphere.local:6443/api/v1/namespaces/kubesphere-system/secrets?fieldSelector=metadata.name%3Dks-installer-token-qgx5j&limit=500&resourceVersion=0: write tcp 192.168.2.52:50744->192.168.2.31:6443: use of closed network connection
    Jan 29 14:13:39 node2 kubelet[10259]: I0129 14:13:39.206887   10259 trace.go:116] Trace[1686412020]: "Reflector ListAndWatch" name:object-"dev"/"default-token-8tnqb" (started: 2021-01-29 14:13:24.406975093 +0800 CST m=+789418.598952121) (total time: 14.799895901s):
    Jan 29 14:13:39 node2 kubelet[10259]: Trace[1686412020]: [14.799895901s] [14.799895901s] END
    Jan 29 14:13:39 node2 kubelet[10259]: E0129 14:13:39.206911   10259 reflector.go:153] object-"dev"/"default-token-8tnqb": Failed to list *v1.Secret: Get https://lb.kubesphere.local:6443/api/v1/namespaces/dev/secrets?fieldSelector=metadata.name%3Ddefault-token-8tnqb&limit=500&resourceVersion=0: write tcp 192.168.2.52:50744->192.168.2.31:6443: use of closed network connection
    Jan 29 14:13:39 node2 kubelet[10259]: I0129 14:13:39.406908   10259 trace.go:116] Trace[1595345123]: "Reflector ListAndWatch" name:object-"kube-system"/"openebs-ndm-config" (started: 2021-01-29 14:13:24.80703304 +0800 CST m=+789418.999010086) (total time: 14.599858276s):
    Jan 29 14:13:39 node2 kubelet[10259]: Trace[1595345123]: [14.599858276s] [14.599858276s] END
    Jan 29 14:13:39 node2 kubelet[10259]: E0129 14:13:39.406931   10259 reflector.go:153] object-"kube-system"/"openebs-ndm-config": Failed to list *v1.ConfigMap: Get https://lb.kubesphere.local:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dopenebs-ndm-config&limit=500&resourceVersion=0: write tcp 192.168.2.52:50744->192.168.2.31:6443: use of closed network connection
    Jan 29 14:13:39 node2 kubelet[10259]: W0129 14:13:39.606846   10259 status_manager.go:530] Failed to get status for pod "ks-controller-manager-5d5bbd57f7-s8dm5_kubesphere-system(85df45aa-4593-4e70-9996-8e549c5f93f8)": Get https://lb.kubesphere.local:6443/api/v1/namespaces/kubesphere-system/pods/ks-controller-manager-5d5bbd57f7-s8dm5: write tcp 192.168.2.52:50744->192.168.2.31:6443: use of closed network connection
    Jan 29 14:13:39 node2 kubelet[10259]: I0129 14:13:39.806870   10259 trace.go:116] Trace[299624423]: "Reflector ListAndWatch" name:object-"kube-system"/"kube-proxy-token-kqw9t" (started: 2021-01-29 14:13:25.007005316 +0800 CST m=+789419.198982365) (total time: 14.799848004s):
    Jan 29 14:13:39 node2 kubelet[10259]: Trace[299624423]: [14.799848004s] [14.799848004s] END
    Jan 29 14:13:39 node2 kubelet[10259]: E0129 14:13:39.806892   10259 reflector.go:153] object-"kube-system"/"kube-proxy-token-kqw9t": Failed to list *v1.Secret: Get https://lb.kubesphere.local:6443/api/v1/namespaces/kube-system/secrets?fieldSelector=metadata.name%3Dkube-proxy-token-kqw9t&limit=500&resourceVersion=0: write tcp 192.168.2.52:50744->192.168.2.31:6443: use of closed network connection
    Jan 29 14:13:40 node2 kubelet[10259]: I0129 14:13:40.006888   10259 trace.go:116] Trace[702181964]: "Reflector ListAndWatch" name:object-"kubesphere-system"/"ks-account-secret" (started: 2021-01-29 14:13:25.207042855 +0800 CST m=+789419.399019941) (total time: 14.799828404s):
    Jan 29 14:13:40 node2 kubelet[10259]: Trace[702181964]: [14.799828404s] [14.799828404s] END
    Jan 29 14:13:40 node2 kubelet[10259]: E0129 14:13:40.006919   10259 reflector.go:153] object-"kubesphere-system"/"ks-account-secret": Failed to list *v1.Secret: Get https://lb.kubesphere.local:6443/api/v1/namespaces/kubesphere-system/secrets?fieldSelector=metadata.name%3Dks-account-secret&limit=500&resourceVersion=0: write tcp 192.168.2.52:50744->192.168.2.31:6443: use of closed network connection
    Jan 29 14:13:40 node2 kubelet[10259]: I0129 14:13:40.206896   10259 trace.go:116] Trace[886293205]: "Reflector ListAndWatch" name:object-"kubesphere-logging-system"/"fluentbit-token-qq8t2" (started: 2021-01-29 14:13:25.407075435 +0800 CST m=+789419.599052476) (total time: 14.799797943s):
    Jan 29 14:13:40 node2 kubelet[10259]: Trace[886293205]: [14.799797943s] [14.799797943s] END
    Jan 29 14:13:40 node2 kubelet[10259]: E0129 14:13:40.206915   10259 reflector.go:153] object-"kubesphere-logging-system"/"fluentbit-token-qq8t2": Failed to list *v1.Secret: Get https://lb.kubesphere.local:6443/api/v1/namespaces/kubesphere-logging-system/secrets?fieldSelector=metadata.name%3Dfluentbit-token-qq8t2&limit=500&resourceVersion=0: write tcp 192.168.2.52:50744->192.168.2.31:6443: use of closed network connection
    Jan 29 14:13:40 node2 kubelet[10259]: I0129 14:13:40.406876   10259 trace.go:116] Trace[1734050160]: "Reflector ListAndWatch" name:object-"uat"/"default-token-bmchv" (started: 2021-01-29 14:13:25.607010409 +0800 CST m=+789419.798987461) (total time: 14.799847641s):
    Jan 29 14:13:40 node2 kubelet[10259]: Trace[1734050160]: [14.799847641s] [14.799847641s] END
    Jan 29 14:13:40 node2 kubelet[10259]: E0129 14:13:40.406902   10259 reflector.go:153] object-"uat"/"default-token-bmchv": Failed to list *v1.Secret: Get https://lb.kubesphere.local:6443/api/v1/namespaces/uat/secrets?fieldSelector=metadata.name%3Ddefault-token-bmchv&limit=500&resourceVersion=0: write tcp 192.168.2.52:50744->192.168.2.31:6443: use of closed network connection
    Jan 29 14:13:40 node2 kubelet[10259]: I0129 14:13:40.606904   10259 trace.go:116] Trace[1236645263]: "Reflector ListAndWatch" name:object-"istio-system"/"istio" (started: 2021-01-29 14:13:26.007001938 +0800 CST m=+789420.198978971) (total time: 14.599882687s):
    Jan 29 14:13:40 node2 kubelet[10259]: Trace[1236645263]: [14.599882687s] [14.599882687s] END
    Jan 29 14:13:40 node2 kubelet[10259]: E0129 14:13:40.606930   10259 reflector.go:153] object-"istio-system"/"istio": Failed to list *v1.ConfigMap: Get https://lb.kubesphere.local:6443/api/v1/namespaces/istio-system/configmaps?fieldSelector=metadata.name%3Distio&limit=500&resourceVersion=0: write tcp 192.168.2.52:50744->192.168.2.31:6443: use of closed network connection
    Jan 29 14:13:40 node2 kubelet[10259]: I0129 14:13:40.806854   10259 trace.go:116] Trace[1375258767]: "Reflector ListAndWatch" name:object-"kubesphere-devops-system"/"ks-jenkins-token-q4294" (started: 2021-01-29 14:13:26.207032547 +0800 CST m=+789420.399009574) (total time: 14.599805418s):
    Jan 29 14:13:40 node2 kubelet[10259]: Trace[1375258767]: [14.599805418s] [14.599805418s] END
    Jan 29 14:13:40 node2 kubelet[10259]: E0129 14:13:40.806879   10259 reflector.go:153] object-"kubesphere-devops-system"/"ks-jenkins-token-q4294": Failed to list *v1.Secret: Get https://lb.kubesphere.local:6443/api/v1/namespaces/kubesphere-devops-system/secrets?fieldSelector=metadata.name%3Dks-jenkins-token-q4294&limit=500&resourceVersion=0: write tcp 192.168.2.52:50744->192.168.2.31:6443: use of closed network connection
    Jan 29 14:13:40 node2 kubelet[10259]: E0129 14:13:40.940596   10259 controller.go:135] failed to ensure node lease exists, will retry in 7s, error: Get https://lb.kubesphere.local:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/node2?timeout=10s: write tcp 192.168.2.52:50744->192.168.2.31:6443: use of closed network connection
    Jan 29 14:13:41 node2 kubelet[10259]: I0129 14:13:41.006876   10259 trace.go:116] Trace[304274737]: "Reflector ListAndWatch" name:object-"kube-system"/"qingcloud" (started: 2021-01-29 14:13:26.406954186 +0800 CST m=+789420.598931215) (total time: 14.599901898s):
    Jan 29 14:13:41 node2 kubelet[10259]: Trace[304274737]: [14.599901898s] [14.599901898s] END
    Jan 29 14:13:41 node2 kubelet[10259]: E0129 14:13:41.006905   10259 reflector.go:153] object-"kube-system"/"qingcloud": Failed to list *v1.Secret: Get https://lb.kubesphere.local:6443/api/v1/namespaces/kube-system/secrets?fieldSelector=metadata.name%3Dqingcloud&limit=500&resourceVersion=0: write tcp 192.168.2.52:50744->192.168.2.31:6443: use of closed network connection
    Jan 29 14:13:41 node2 kubelet[10259]: I0129 14:13:41.206956   10259 trace.go:116] Trace[191011046]: "Reflector ListAndWatch" name:object-"kube-system"/"calico-node-token-vx78n" (started: 2021-01-29 14:13:26.60699188 +0800 CST m=+789420.798968943) (total time: 14.599944508s):
    Jan 29 14:13:41 node2 kubelet[10259]: Trace[191011046]: [14.599944508s] [14.599944508s] END
    Jan 29 14:13:41 node2 kubelet[10259]: E0129 14:13:41.206985   10259 reflector.go:153] object-"kube-system"/"calico-node-token-vx78n": Failed to list *v1.Secret: Get https://lb.kubesphere.local:6443/api/v1/namespaces/kube-system/secrets?fieldSelector=metadata.name%3Dcalico-node-token-vx78n&limit=500&resourceVersion=0: write tcp 192.168.2.52:50744->192.168.2.31:6443: use of closed network connection
    Jan 29 14:13:41 node2 kubelet[10259]: I0129 14:13:41.406892   10259 trace.go:116] Trace[1648834578]: "Reflector ListAndWatch" name:object-"kube-system"/"nfs-client-nfs-client-provisioner-token-jk5ll" (started: 2021-01-29 14:13:26.807018404 +0800 CST m=+789420.998995431) (total time: 14.599858208s):
    Jan 29 14:13:41 node2 kubelet[10259]: Trace[1648834578]: [14.599858208s] [14.599858208s] END
    Jan 29 14:13:41 node2 kubelet[10259]: E0129 14:13:41.406933   10259 reflector.go:153] object-"kube-system"/"nfs-client-nfs-client-provisioner-token-jk5ll": Failed to list *v1.Secret: Get https://lb.kubesphere.local:6443/api/v1/namespaces/kube-system/secrets?fieldSelector=metadata.name%3Dnfs-client-nfs-client-provisioner-token-jk5ll&limit=500&resourceVersion=0: write tcp 192.168.2.52:50744->192.168.2.31:6443: use of closed network connection
    Jan 29 14:13:41 node2 kubelet[10259]: I0129 14:13:41.606898   10259 trace.go:116] Trace[2001788621]: "Reflector ListAndWatch" name:object-"kube-system"/"openebs-maya-operator-token-lclvk" (started: 2021-01-29 14:13:27.006948628 +0800 CST m=+789421.198925655) (total time: 14.599933343s):
    Jan 29 14:13:41 node2 kubelet[10259]: Trace[2001788621]: [14.599933343s] [14.599933343s] END
    Jan 29 14:13:41 node2 kubelet[10259]: E0129 14:13:41.606929   10259 reflector.go:153] object-"kube-system"/"openebs-maya-operator-token-lclvk": Failed to list *v1.Secret: Get https://lb.kubesphere.local:6443/api/v1/namespaces/kube-system/secrets?fieldSelector=metadata.name%3Dopenebs-maya-operator-token-lclvk&limit=500&resourceVersion=0: write tcp 192.168.2.52:50744->192.168.2.31:6443: use of closed network connection
    Jan 29 14:13:41 node2 kubelet[10259]: I0129 14:13:41.806845   10259 trace.go:116] Trace[258111256]: "Reflector ListAndWatch" name:object-"istio-system"/"istio-ingressgateway-service-account-token-pkkq6" (started: 2021-01-29 14:13:27.407046541 +0800 CST m=+789421.599023579) (total time: 14.399783724s):
    Jan 29 14:13:41 node2 kubelet[10259]: Trace[258111256]: [14.399783724s] [14.399783724s] END
    Jan 29 14:13:41 node2 kubelet[10259]: E0129 14:13:41.806872   10259 reflector.go:153] object-"istio-system"/"istio-ingressgateway-service-account-token-pkkq6": Failed to list *v1.Secret: Get https://lb.kubesphere.local:6443/api/v1/namespaces/istio-system/secrets?fieldSelector=metadata.name%3Distio-ingressgateway-service-account-token-pkkq6&limit=500&resourceVersion=0: write tcp 192.168.2.52:50744->192.168.2.31:6443: use of closed network connection
    Jan 29 14:13:42 node2 kubelet[10259]: I0129 14:13:42.006847   10259 trace.go:116] Trace[1573879688]: "Reflector ListAndWatch" name:object-"istio-system"/"istio-sidecar-injector-service-account-token-27bzh" (started: 2021-01-29 14:13:27.607007163 +0800 CST m=+789421.798984201) (total time: 14.399823163s):
    Jan 29 14:13:42 node2 kubelet[10259]: Trace[1573879688]: [14.399823163s] [14.399823163s] END
    Jan 29 14:13:42 node2 kubelet[10259]: E0129 14:13:42.006871   10259 reflector.go:153] object-"istio-system"/"istio-sidecar-injector-service-account-token-27bzh": Failed to list *v1.Secret: Get https://lb.kubesphere.local:6443/api/v1/namespaces/istio-system/secrets?fieldSelector=metadata.name%3Distio-sidecar-injector-service-account-token-27bzh&limit=500&resourceVersion=0: write tcp 192.168.2.52:50744->192.168.2.31:6443: use of closed network connection
    Jan 29 14:13:42 node2 kubelet[10259]: I0129 14:13:42.206878   10259 trace.go:116] Trace[674702320]: "Reflector ListAndWatch" name:object-"base"/"repo" (started: 2021-01-29 14:13:27.807020606 +0800 CST m=+789421.998997640) (total time: 14.399839155s):
    Jan 29 14:13:42 node2 kubelet[10259]: Trace[674702320]: [14.399839155s] [14.399839155s] END
    Jan 29 14:13:42 node2 kubelet[10259]: E0129 14:13:42.206903   10259 reflector.go:153] object-"base"/"repo": Failed to list *v1.Secret: Get https://lb.kubesphere.local:6443/api/v1/namespaces/base/secrets?fieldSelector=metadata.name%3Drepo&limit=500&resourceVersion=0: write tcp 192.168.2.52:50744->192.168.2.31:6443: use of closed network connection
    Jan 29 14:13:42 node2 kubelet[10259]: I0129 14:13:42.406870   10259 trace.go:116] Trace[1456323082]: "Reflector ListAndWatch" name:object-"dev"/"hub" (started: 2021-01-29 14:13:28.007110082 +0800 CST m=+789422.199087132) (total time: 14.399744202s):
    Jan 29 14:13:42 node2 kubelet[10259]: Trace[1456323082]: [14.399744202s] [14.399744202s] END
    Jan 29 14:13:42 node2 kubelet[10259]: E0129 14:13:42.406893   10259 reflector.go:153] object-"dev"/"hub": Failed to list *v1.Secret: Get https://lb.kubesphere.local:6443/api/v1/namespaces/dev/secrets?fieldSelector=metadata.name%3Dhub&limit=500&resourceVersion=0: write tcp 192.168.2.52:50744->192.168.2.31:6443: use of closed network connection
    Jan 29 14:13:42 node2 kubelet[10259]: I0129 14:13:42.606910   10259 trace.go:116] Trace[1572988499]: "Reflector ListAndWatch" name:object-"istio-system"/"jaeger-operator-token-wfvzr" (started: 2021-01-29 14:13:28.20705826 +0800 CST m=+789422.399035299) (total time: 14.399832817s):
    Jan 29 14:13:42 node2 kubelet[10259]: Trace[1572988499]: [14.399832817s] [14.399832817s] END
    Jan 29 14:13:42 node2 kubelet[10259]: E0129 14:13:42.606936   10259 reflector.go:153] object-"istio-system"/"jaeger-operator-token-wfvzr": Failed to list *v1.Secret: Get https://lb.kubesphere.local:6443/api/v1/namespaces/istio-system/secrets?fieldSelector=metadata.name%3Djaeger-operator-token-wfvzr&limit=500&resourceVersion=0: write tcp 192.168.2.52:50744->192.168.2.31:6443: use of closed network connection
    Jan 29 14:13:42 node2 kubelet[10259]: I0129 14:13:42.806884   10259 trace.go:116] Trace[1944656612]: "Reflector ListAndWatch" name:object-"kubesphere-devops-system"/"ks-jenkins" (started: 2021-01-29 14:13:28.40703247 +0800 CST m=+789422.599009535) (total time: 14.399833656s):
    Jan 29 14:13:42 node2 kubelet[10259]: Trace[1944656612]: [14.399833656s] [14.399833656s] END
    Jan 29 14:13:42 node2 kubelet[10259]: E0129 14:13:42.806907   10259 reflector.go:153] object-"kubesphere-devops-system"/"ks-jenkins": Failed to list *v1.ConfigMap: Get https://lb.kubesphere.local:6443/api/v1/namespaces/kubesphere-devops-system/configmaps?fieldSelector=metadata.name%3Dks-jenkins&limit=500&resourceVersion=0: write tcp 192.168.2.52:50744->192.168.2.31:6443: use of closed network connection
    Jan 29 14:13:43 node2 kubelet[10259]: I0129 14:13:43.006904   10259 trace.go:116] Trace[1050698534]: "Reflector ListAndWatch" name:object-"kubesphere-system"/"ks-minio-token-6w6sq" (started: 2021-01-29 14:13:28.607014157 +0800 CST m=+789422.798991190) (total time: 14.399870018s):
    Jan 29 14:13:43 node2 kubelet[10259]: Trace[1050698534]: [14.399870018s] [14.399870018s] END
    Jan 29 14:13:43 node2 kubelet[10259]: E0129 14:13:43.006929   10259 reflector.go:153] object-"kubesphere-system"/"ks-minio-token-6w6sq": Failed to list *v1.Secret: Get https://lb.kubesphere.local:6443/api/v1/namespaces/kubesphere-system/secrets?fieldSelector=metadata.name%3Dks-minio-token-6w6sq&limit=500&resourceVersion=0: write tcp 192.168.2.52:50744->192.168.2.31:6443: use of closed network connection
    Jan 29 14:13:43 node2 kubelet[10259]: I0129 14:13:43.206907   10259 trace.go:116] Trace[26393369]: "Reflector ListAndWatch" name:k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46 (started: 2021-01-29 14:13:28.807022223 +0800 CST m=+789422.998999294) (total time: 14.399862837s):
    Jan 29 14:13:43 node2 kubelet[10259]: Trace[26393369]: [14.399862837s] [14.399862837s] END
    Jan 29 14:13:43 node2 kubelet[10259]: E0129 14:13:43.206928   10259 reflector.go:153] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://lb.kubesphere.local:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dnode2&limit=500&resourceVersion=0: write tcp 192.168.2.52:50744->192.168.2.31:6443: use of closed network connection
    Jan 29 14:13:43 node2 kubelet[10259]: I0129 14:13:43.406901   10259 trace.go:116] Trace[1677552033]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:135 (started: 2021-01-29 14:13:29.006976803 +0800 CST m=+789423.198953855) (total time: 14.399907101s):
    Jan 29 14:13:43 node2 kubelet[10259]: Trace[1677552033]: [14.399907101s] [14.399907101s] END
    Jan 29 14:13:43 node2 kubelet[10259]: E0129 14:13:43.406925   10259 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.RuntimeClass: Get https://lb.kubesphere.local:6443/apis/node.k8s.io/v1beta1/runtimeclasses?limit=500&resourceVersion=0: write tcp 192.168.2.52:50744->192.168.2.31:6443: use of closed network connection
    Jan 29 14:13:43 node2 kubelet[10259]: I0129 14:13:43.606872   10259 trace.go:116] Trace[1553041099]: "Reflector ListAndWatch" name:object-"kubesphere-logging-system"/"fluent-bit-config" (started: 2021-01-29 14:13:29.207040882 +0800 CST m=+789423.399017911) (total time: 14.399810682s):
    Jan 29 14:13:43 node2 kubelet[10259]: Trace[1553041099]: [14.399810682s] [14.399810682s] END
    Jan 29 14:13:43 node2 kubelet[10259]: E0129 14:13:43.606898   10259 reflector.go:153] object-"kubesphere-logging-system"/"fluent-bit-config": Failed to list *v1.ConfigMap: Get https://lb.kubesphere.local:6443/api/v1/namespaces/kubesphere-logging-system/configmaps?fieldSelector=metadata.name%3Dfluent-bit-config&limit=500&resourceVersion=0: write tcp 192.168.2.52:50744->192.168.2.31:6443: use of closed network connection
    Jan 29 14:13:43 node2 kubelet[10259]: I0129 14:13:43.806923   10259 trace.go:116] Trace[412574644]: "Reflector ListAndWatch" name:object-"dev"/"default-token-k7dpp" (started: 2021-01-29 14:13:29.40709729 +0800 CST m=+789423.599074335) (total time: 14.39979083s):
    Jan 29 14:13:43 node2 kubelet[10259]: Trace[412574644]: [14.39979083s] [14.39979083s] END
    Jan 29 14:13:43 node2 kubelet[10259]: E0129 14:13:43.806953   10259 reflector.go:153] object-"dev"/"default-token-k7dpp": Failed to list *v1.Secret: Get https://lb.kubesphere.local:6443/api/v1/namespaces/dev/secrets?fieldSelector=metadata.name%3Ddefault-token-k7dpp&limit=500&resourceVersion=0: write tcp 192.168.2.52:50744->192.168.2.31:6443: use of closed network connection
    Jan 29 14:13:44 node2 kubelet[10259]: I0129 14:13:44.006867   10259 trace.go:116] Trace[1901822405]: "Reflector ListAndWatch" name:object-"istio-system"/"jaeger-token-c7g92" (started: 2021-01-29 14:13:29.60714455 +0800 CST m=+789423.799121585) (total time: 14.399702092s):
    Jan 29 14:13:44 node2 kubelet[10259]: Trace[1901822405]: [14.399702092s] [14.399702092s] END
    Jan 29 14:13:44 node2 kubelet[10259]: E0129 14:13:44.006893   10259 reflector.go:153] object-"istio-system"/"jaeger-token-c7g92": Failed to list *v1.Secret: Get https://lb.kubesphere.local:6443/api/v1/namespaces/istio-system/secrets?fieldSelector=metadata.name%3Djaeger-token-c7g92&limit=500&resourceVersion=0: write tcp 192.168.2.52:50744->192.168.2.31:6443: use of closed network connection
    Jan 29 14:13:44 node2 kubelet[10259]: I0129 14:13:44.206891   10259 trace.go:116] Trace[570227392]: "Reflector ListAndWatch" name:object-"istio-system"/"istio.istio-sidecar-injector-service-account" (started: 2021-01-29 14:13:29.807185551 +0800 CST m=+789423.999162594) (total time: 14.399687447s):
    Jan 29 14:13:44 node2 kubelet[10259]: Trace[570227392]: [14.399687447s] [14.399687447s] END
    Jan 29 14:13:44 node2 kubelet[10259]: E0129 14:13:44.206915   10259 reflector.go:153] object-"istio-system"/"istio.istio-sidecar-injector-service-account": Failed to list *v1.Secret: Get https://lb.kubesphere.local:6443/api/v1/namespaces/istio-system/secrets?fieldSelector=metadata.name%3Distio.istio-sidecar-injector-service-account&limit=500&resourceVersion=0: write tcp 192.168.2.52:50744->192.168.2.31:6443: use of closed network connection
    Jan 29 14:13:44 node2 kubelet[10259]: E0129 14:13:44.383932   10259 kubelet_node_status.go:402] Error updating node status, will retry: error getting node "node2": Get https://lb.kubesphere.local:6443/api/v1/nodes/node2?resourceVersion=0&timeout=10s: write tcp 192.168.2.52:50744->192.168.2.31:6443: use of closed network connection
    Jan 29 14:13:44 node2 kubelet[10259]: E0129 14:13:44.384000   10259 kubelet_node_status.go:402] Error updating node status, will retry: error getting node "node2": Get https://lb.kubesphere.local:6443/api/v1/nodes/node2?timeout=10s: write tcp 192.168.2.52:50744->192.168.2.31:6443: use of closed network connection
    Jan 29 14:13:44 node2 kubelet[10259]: E0129 14:13:44.384038   10259 kubelet_node_status.go:402] Error updating node status, will retry: error getting node "node2": Get https://lb.kubesphere.local:6443/api/v1/nodes/node2?timeout=10s: write tcp 192.168.2.52:50744->192.168.2.31:6443: use of closed network connection
    Jan 29 14:13:44 node2 kubelet[10259]: E0129 14:13:44.384089   10259 kubelet_node_status.go:402] Error updating node status, will retry: error getting node "node2": Get https://lb.kubesphere.local:6443/api/v1/nodes/node2?timeout=10s: write tcp 192.168.2.52:50744->192.168.2.31:6443: use of closed network connection
    Jan 29 14:13:44 node2 kubelet[10259]: E0129 14:13:44.384122   10259 kubelet_node_status.go:402] Error updating node status, will retry: error getting node "node2": Get https://lb.kubesphere.local:6443/api/v1/nodes/node2?timeout=10s: write tcp 192.168.2.52:50744->192.168.2.31:6443: use of closed network connection
    Jan 29 14:13:44 node2 kubelet[10259]: E0129 14:13:44.384130   10259 kubelet_node_status.go:389] Unable to update node status: update node status exceeds retry count
    Jan 29 14:13:44 node2 kubelet[10259]: I0129 14:13:44.406873   10259 trace.go:116] Trace[1498466607]: "Reflector ListAndWatch" name:object-"kubesphere-logging-system"/"default-token-8q6qt" (started: 2021-01-29 14:13:30.007034077 +0800 CST m=+789424.199011139) (total time: 14.399817464s):
    Jan 29 14:13:44 node2 kubelet[10259]: Trace[1498466607]: [14.399817464s] [14.399817464s] END
    Jan 29 14:13:44 node2 kubelet[10259]: E0129 14:13:44.406916   10259 reflector.go:153] object-"kubesphere-logging-system"/"default-token-8q6qt": Failed to list *v1.Secret: Get https://lb.kubesphere.local:6443/api/v1/namespaces/kubesphere-logging-system/secrets?fieldSelector=metadata.name%3Ddefault-token-8q6qt&limit=500&resourceVersion=0: write tcp 192.168.2.52:50744->192.168.2.31:6443: use of closed network connection
    Jan 29 14:13:44 node2 kubelet[10259]: I0129 14:13:44.606887   10259 trace.go:116] Trace[676006857]: "Reflector ListAndWatch" name:object-"kubesphere-alerting-system"/"default-token-4djft" (started: 2021-01-29 14:13:30.207026489 +0800 CST m=+789424.399003553) (total time: 14.39984129s):
    Jan 29 14:13:44 node2 kubelet[10259]: Trace[676006857]: [14.39984129s] [14.39984129s] END
    Jan 29 14:13:44 node2 kubelet[10259]: E0129 14:13:44.606915   10259 reflector.go:153] object-"kubesphere-alerting-system"/"default-token-4djft": Failed to list *v1.Secret: Get https://lb.kubesphere.local:6443/api/v1/namespaces/kubesphere-alerting-system/secrets?fieldSelector=metadata.name%3Ddefault-token-4djft&limit=500&resourceVersion=0: write tcp 192.168.2.52:50744->192.168.2.31:6443: use of closed network connection
    Jan 29 14:13:44 node2 kubelet[10259]: I0129 14:13:44.806895   10259 trace.go:116] Trace[1155964153]: "Reflector ListAndWatch" name:object-"kubesphere-system"/"minio" (started: 2021-01-29 14:13:30.407296762 +0800 CST m=+789424.599273812) (total time: 14.39956867s):
    Jan 29 14:13:44 node2 kubelet[10259]: Trace[1155964153]: [14.39956867s] [14.39956867s] END
    Jan 29 14:13:44 node2 kubelet[10259]: E0129 14:13:44.806926   10259 reflector.go:153] object-"kubesphere-system"/"minio": Failed to list *v1.Secret: Get https://lb.kubesphere.local:6443/api/v1/namespaces/kubesphere-system/secrets?fieldSelector=metadata.name%3Dminio&limit=500&resourceVersion=0: write tcp 192.168.2.52:50744->192.168.2.31:6443: use of closed network connection
    Jan 29 14:13:45 node2 kubelet[10259]: I0129 14:13:45.006944   10259 trace.go:116] Trace[1353326887]: "Reflector ListAndWatch" name:object-"istio-system"/"jaeger-sampling-configuration" (started: 2021-01-29 14:13:30.607062394 +0800 CST m=+789424.799039437) (total time: 14.399860771s):
    Jan 29 14:13:45 node2 kubelet[10259]: Trace[1353326887]: [14.399860771s] [14.399860771s] END
    Jan 29 14:13:45 node2 kubelet[10259]: E0129 14:13:45.006967   10259 reflector.go:153] object-"istio-system"/"jaeger-sampling-configuration": Failed to list *v1.ConfigMap: Get https://lb.kubesphere.local:6443/api/v1/namespaces/istio-system/configmaps?fieldSelector=metadata.name%3Djaeger-sampling-configuration&limit=500&resourceVersion=0: write tcp 192.168.2.52:50744->192.168.2.31:6443: use of closed network connection
    Jan 29 14:13:45 node2 kubelet[10259]: I0129 14:13:45.206984   10259 trace.go:116] Trace[779911798]: "Reflector ListAndWatch" name:object-"istio-system"/"istio-mixer-service-account-token-2lhmw" (started: 2021-01-29 14:13:30.807057332 +0800 CST m=+789424.999034386) (total time: 14.399904262s):
    Jan 29 14:13:45 node2 kubelet[10259]: Trace[779911798]: [14.399904262s] [14.399904262s] END
    Jan 29 14:13:45 node2 kubelet[10259]: E0129 14:13:45.207013   10259 reflector.go:153] object-"istio-system"/"istio-mixer-service-account-token-2lhmw": Failed to list *v1.Secret: Get https://lb.kubesphere.local:6443/api/v1/namespaces/istio-system/secrets?fieldSelector=metadata.name%3Distio-mixer-service-account-token-2lhmw&limit=500&resourceVersion=0: write tcp 192.168.2.52:50744->192.168.2.31:6443: use of closed network connection
    Jan 29 14:13:45 node2 kubelet[10259]: I0129 14:13:45.406925   10259 trace.go:116] Trace[732459024]: "Reflector ListAndWatch" name:object-"base"/"default-token-wcqc7" (started: 2021-01-29 14:13:31.00702093 +0800 CST m=+789425.198997957) (total time: 14.399877036s):
    Jan 29 14:13:45 node2 kubelet[10259]: Trace[732459024]: [14.399877036s] [14.399877036s] END
    Jan 29 14:13:45 node2 kubelet[10259]: E0129 14:13:45.406956   10259 reflector.go:153] object-"base"/"default-token-wcqc7": Failed to list *v1.Secret: Get https://lb.kubesphere.local:6443/api/v1/namespaces/base/secrets?fieldSelector=metadata.name%3Ddefault-token-wcqc7&limit=500&resourceVersion=0: write tcp 192.168.2.52:50744->192.168.2.31:6443: use of closed network connection
    Jan 29 14:13:45 node2 kubelet[10259]: I0129 14:13:45.606917   10259 trace.go:116] Trace[2133826520]: "Reflector ListAndWatch" name:object-"kubesphere-devops-system"/"qingcloud" (started: 2021-01-29 14:13:31.207012845 +0800 CST m=+789425.398989904) (total time: 14.399883948s):
    Jan 29 14:13:45 node2 kubelet[10259]: Trace[2133826520]: [14.399883948s] [14.399883948s] END
    Jan 29 14:13:45 node2 kubelet[10259]: E0129 14:13:45.606939   10259 reflector.go:153] object-"kubesphere-devops-system"/"qingcloud": Failed to list *v1.Secret: Get https://lb.kubesphere.local:6443/api/v1/namespaces/kubesphere-devops-system/secrets?fieldSelector=metadata.name%3Dqingcloud&limit=500&resourceVersion=0: write tcp 192.168.2.52:50744->192.168.2.31:6443: use of closed network connection
    Jan 29 14:13:45 node2 kubelet[10259]: I0129 14:13:45.806886   10259 trace.go:116] Trace[503801735]: "Reflector ListAndWatch" name:object-"kube-system"/"nodelocaldns-token-ptcr8" (started: 2021-01-29 14:13:31.407057421 +0800 CST m=+789425.599034477) (total time: 14.399811768s):
    Jan 29 14:13:45 node2 kubelet[10259]: Trace[503801735]: [14.399811768s] [14.399811768s] END
    Jan 29 14:13:45 node2 kubelet[10259]: E0129 14:13:45.806908   10259 reflector.go:153] object-"kube-system"/"nodelocaldns-token-ptcr8": Failed to list *v1.Secret: Get https://lb.kubesphere.local:6443/api/v1/namespaces/kube-system/secrets?fieldSelector=metadata.name%3Dnodelocaldns-token-ptcr8&limit=500&resourceVersion=0: write tcp 192.168.2.52:50744->192.168.2.31:6443: use of closed network connection
    Jan 29 14:13:46 node2 kubelet[10259]: I0129 14:13:46.006958   10259 trace.go:116] Trace[155850665]: "Reflector ListAndWatch" name:object-"istio-system"/"istio-sidecar-injector" (started: 2021-01-29 14:13:31.607052083 +0800 CST m=+789425.799029110) (total time: 14.399887572s):
    Jan 29 14:13:46 node2 kubelet[10259]: Trace[155850665]: [14.399887572s] [14.399887572s] END
    Jan 29 14:13:46 node2 kubelet[10259]: E0129 14:13:46.006980   10259 reflector.go:153] object-"istio-system"/"istio-sidecar-injector": Failed to list *v1.ConfigMap: Get https://lb.kubesphere.local:6443/api/v1/namespaces/istio-system/configmaps?fieldSelector=metadata.name%3Distio-sidecar-injector&limit=500&resourceVersion=0: write tcp 192.168.2.52:50744->192.168.2.31:6443: use of closed network connection
    Jan 29 14:13:46 node2 kubelet[10259]: I0129 14:13:46.206865   10259 trace.go:116] Trace[1660788593]: "Reflector ListAndWatch" name:object-"istio-system"/"istio-pilot-service-account-token-tlcjw" (started: 2021-01-29 14:13:31.806973375 +0800 CST m=+789425.998950403) (total time: 14.399871636s):
    Jan 29 14:13:46 node2 kubelet[10259]: Trace[1660788593]: [14.399871636s] [14.399871636s] END
    Jan 29 14:13:46 node2 kubelet[10259]: E0129 14:13:46.206889   10259 reflector.go:153] object-"istio-system"/"istio-pilot-service-account-token-tlcjw": Failed to list *v1.Secret: Get https://lb.kubesphere.local:6443/api/v1/namespaces/istio-system/secrets?fieldSelector=metadata.name%3Distio-pilot-service-account-token-tlcjw&limit=500&resourceVersion=0: write tcp 192.168.2.52:50744->192.168.2.31:6443: use of closed network connection
    Jan 29 14:13:46 node2 kubelet[10259]: I0129 14:13:46.406869   10259 trace.go:116] Trace[139629054]: "Reflector ListAndWatch" name:object-"kubesphere-monitoring-system"/"kube-state-metrics-token-wh858" (started: 2021-01-29 14:13:32.007160332 +0800 CST m=+789426.199137394) (total time: 14.399690414s):
    Jan 29 14:13:46 node2 kubelet[10259]: Trace[139629054]: [14.399690414s] [14.399690414s] END
    Jan 29 14:13:46 node2 kubelet[10259]: E0129 14:13:46.406901   10259 reflector.go:153] object-"kubesphere-monitoring-system"/"kube-state-metrics-token-wh858": Failed to list *v1.Secret: Get https://lb.kubesphere.local:6443/api/v1/namespaces/kubesphere-monitoring-system/secrets?fieldSelector=metadata.name%3Dkube-state-metrics-token-wh858&limit=500&resourceVersion=0: write tcp 192.168.2.52:50744->192.168.2.31:6443: use of closed network connection
    Jan 29 14:13:46 node2 kubelet[10259]: I0129 14:13:46.606896   10259 trace.go:116] Trace[2026113351]: "Reflector ListAndWatch" name:object-"kubesphere-logging-system"/"fluent-bit-app-config" (started: 2021-01-29 14:13:32.207029823 +0800 CST m=+789426.399006850) (total time: 14.399846805s):
    Jan 29 14:13:46 node2 kubelet[10259]: Trace[2026113351]: [14.399846805s] [14.399846805s] END
    Jan 29 14:13:46 node2 kubelet[10259]: E0129 14:13:46.606921   10259 reflector.go:153] object-"kubesphere-logging-system"/"fluent-bit-app-config": Failed to list *v1.ConfigMap: Get https://lb.kubesphere.local:6443/api/v1/namespaces/kubesphere-logging-system/configmaps?fieldSelector=metadata.name%3Dfluent-bit-app-config&limit=500&resourceVersion=0: write tcp 192.168.2.52:50744->192.168.2.31:6443: use of closed network connection
    Jan 29 14:13:46 node2 kubelet[10259]: I0129 14:13:46.806910   10259 trace.go:116] Trace[38344608]: "Reflector ListAndWatch" name:object-"uat-web"/"default-token-rnwhl" (started: 2021-01-29 14:13:32.407058288 +0800 CST m=+789426.599035332) (total time: 14.39982301s):
    Jan 29 14:13:46 node2 kubelet[10259]: Trace[38344608]: [14.39982301s] [14.39982301s] END
    Jan 29 14:13:46 node2 kubelet[10259]: E0129 14:13:46.806930   10259 reflector.go:153] object-"uat-web"/"default-token-rnwhl": Failed to list *v1.Secret: Get https://lb.kubesphere.local:6443/api/v1/namespaces/uat-web/secrets?fieldSelector=metadata.name%3Ddefault-token-rnwhl&limit=500&resourceVersion=0: write tcp 192.168.2.52:50744->192.168.2.31:6443: use of closed network connection
    Jan 29 14:13:47 node2 kubelet[10259]: I0129 14:13:47.006911   10259 trace.go:116] Trace[1880428121]: "Reflector ListAndWatch" name:object-"kubesphere-system"/"kubesphere-config" (started: 2021-01-29 14:13:32.607027908 +0800 CST m=+789426.799004961) (total time: 14.399865753s):
    Jan 29 14:13:47 node2 kubelet[10259]: Trace[1880428121]: [14.399865753s] [14.399865753s] END
    Jan 29 14:13:47 node2 kubelet[10259]: E0129 14:13:47.006938   10259 reflector.go:153] object-"kubesphere-system"/"kubesphere-config": Failed to list *v1.ConfigMap: Get https://lb.kubesphere.local:6443/api/v1/namespaces/kubesphere-system/configmaps?fieldSelector=metadata.name%3Dkubesphere-config&limit=500&resourceVersion=0: write tcp 192.168.2.52:50744->192.168.2.31:6443: use of closed network connection
    Jan 29 14:13:47 node2 kubelet[10259]: I0129 14:13:47.206887   10259 trace.go:116] Trace[1874961488]: "Reflector ListAndWatch" name:object-"kubesphere-controls-system"/"istio.kubesphere-router-serviceaccount" (started: 2021-01-29 14:13:32.807146844 +0800 CST m=+789426.999123886) (total time: 14.399710292s):
    Jan 29 14:13:47 node2 kubelet[10259]: Trace[1874961488]: [14.399710292s] [14.399710292s] END
    Jan 29 14:13:47 node2 kubelet[10259]: E0129 14:13:47.206908   10259 reflector.go:153] object-"kubesphere-controls-system"/"istio.kubesphere-router-serviceaccount": Failed to list *v1.Secret: Get https://lb.kubesphere.local:6443/api/v1/namespaces/kubesphere-controls-system/secrets?fieldSelector=metadata.name%3Distio.kubesphere-router-serviceaccount&limit=500&resourceVersion=0: write tcp 192.168.2.52:50744->192.168.2.31:6443: use of closed network connection
    Jan 29 14:13:47 node2 kubelet[10259]: I0129 14:13:47.406928   10259 trace.go:116] Trace[1248474603]: "Reflector ListAndWatch" name:object-"kube-system"/"calico-config" (started: 2021-01-29 14:13:33.007037192 +0800 CST m=+789427.199014240) (total time: 14.399850734s):
    Jan 29 14:13:47 node2 kubelet[10259]: Trace[1248474603]: [14.399850734s] [14.399850734s] END
    Jan 29 14:13:47 node2 kubelet[10259]: E0129 14:13:47.406957   10259 reflector.go:153] object-"kube-system"/"calico-config": Failed to list *v1.ConfigMap: Get https://lb.kubesphere.local:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dcalico-config&limit=500&resourceVersion=0: write tcp 192.168.2.52:50744->192.168.2.31:6443: use of closed network connection
    Jan 29 14:13:47 node2 kubelet[10259]: I0129 14:13:47.606882   10259 trace.go:116] Trace[2095817170]: "Reflector ListAndWatch" name:object-"kubesphere-devops-system"/"default-token-zj48q" (started: 2021-01-29 14:13:33.207000998 +0800 CST m=+789427.398978061) (total time: 14.399860516s):
    Jan 29 14:13:47 node2 kubelet[10259]: Trace[2095817170]: [14.399860516s] [14.399860516s] END
    Jan 29 14:13:47 node2 kubelet[10259]: E0129 14:13:47.606909   10259 reflector.go:153] object-"kubesphere-devops-system"/"default-token-zj48q": Failed to list *v1.Secret: Get https://lb.kubesphere.local:6443/api/v1/namespaces/kubesphere-devops-system/secrets?fieldSelector=metadata.name%3Ddefault-token-zj48q&limit=500&resourceVersion=0: write tcp 192.168.2.52:50744->192.168.2.31:6443: use of closed network connection
    Jan 29 14:13:47 node2 kubelet[10259]: I0129 14:13:47.806838   10259 trace.go:116] Trace[576787682]: "Reflector ListAndWatch" name:object-"kube-system"/"kube-proxy" (started: 2021-01-29 14:13:33.406941062 +0800 CST m=+789427.598918089) (total time: 14.399881033s):
    Jan 29 14:13:47 node2 kubelet[10259]: Trace[576787682]: [14.399881033s] [14.399881033s] END
    Jan 29 14:13:47 node2 kubelet[10259]: E0129 14:13:47.806864   10259 reflector.go:153] object-"kube-system"/"kube-proxy": Failed to list *v1.ConfigMap: Get https://lb.kubesphere.local:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-proxy&limit=500&resourceVersion=0: write tcp 192.168.2.52:50744->192.168.2.31:6443: use of closed network connection
    Jan 29 14:13:47 node2 kubelet[10259]: E0129 14:13:47.940880   10259 controller.go:135] failed to ensure node lease exists, will retry in 7s, error: Get https://lb.kubesphere.local:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/node2?timeout=10s: write tcp 192.168.2.52:50744->192.168.2.31:6443: use of closed network connection
    Jan 29 15:04:54 node4 kubelet[8999]: Trace[997594896]: [10.599665834s] [10.599665834s] END
    Jan 29 15:04:54 node4 kubelet[8999]: E0129 15:04:54.731713    8999 reflector.go:153] object-"kubesphere-system"/"ks-account-secret": Failed to list *v1.Secret: Get https://lb.kubesphere.local:6443/api/v1/namespaces/kubesphere-system/secrets?fieldSelector=metadata.name%3Dks-account-secret&limit=500&resourceVersion=0: write tcp 192.168.2.54:57996->192.168.2.33:6443: use of closed network connection
    Jan 29 15:04:54 node4 kubelet[8999]: I0129 15:04:54.931676    8999 trace.go:116] Trace[1022874621]: "Reflector ListAndWatch" name:object-"kubesphere-system"/"redis-ha-configmap" (started: 2021-01-29 15:04:44.332404222 +0800 CST m=+792502.276612503) (total time: 10.599253951s):
    Jan 29 15:04:54 node4 kubelet[8999]: Trace[1022874621]: [10.599253951s] [10.599253951s] END
    Jan 29 15:04:54 node4 kubelet[8999]: E0129 15:04:54.931699    8999 reflector.go:153] object-"kubesphere-system"/"redis-ha-configmap": Failed to list *v1.ConfigMap: Get https://lb.kubesphere.local:6443/api/v1/namespaces/kubesphere-system/configmaps?fieldSelector=metadata.name%3Dredis-ha-configmap&limit=500&resourceVersion=0: write tcp 192.168.2.54:57996->192.168.2.33:6443: use of closed network connection
    Jan 29 15:04:55 node4 kubelet[8999]: I0129 15:04:55.131666    8999 trace.go:116] Trace[1967479719]: "Reflector ListAndWatch" name:object-"openpitrix-system"/"default-token-tdb98" (started: 2021-01-29 15:04:44.531852512 +0800 CST m=+792502.476060820) (total time: 10.599796114s):
    Jan 29 15:04:55 node4 kubelet[8999]: Trace[1967479719]: [10.599796114s] [10.599796114s] END
    Jan 29 15:04:55 node4 kubelet[8999]: E0129 15:04:55.131687    8999 reflector.go:153] object-"openpitrix-system"/"default-token-tdb98": Failed to list *v1.Secret: Get https://lb.kubesphere.local:6443/api/v1/namespaces/openpitrix-system/secrets?fieldSelector=metadata.name%3Ddefault-token-tdb98&limit=500&resourceVersion=0: write tcp 192.168.2.54:57996->192.168.2.33:6443: use of closed network connection
    Jan 29 15:04:55 node4 kubelet[8999]: E0129 15:04:55.258226    8999 controller.go:135] failed to ensure node lease exists, will retry in 7s, error: Get https://lb.kubesphere.local:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/node4?timeout=10s: write tcp 192.168.2.54:57996->192.168.2.33:6443: use of closed network connection
    Jan 29 15:04:55 node4 kubelet[8999]: I0129 15:04:55.331668    8999 trace.go:116] Trace[1081878084]: "Reflector ListAndWatch" name:object-"kubesphere-system"/"default-token-qxwzn" (started: 2021-01-29 15:04:44.73176635 +0800 CST m=+792502.675974636) (total time: 10.599875664s):
    Jan 29 15:04:55 node4 kubelet[8999]: Trace[1081878084]: [10.599875664s] [10.599875664s] END

      zcho Is the NotReady issue resolved now? The error is: use of closed network connection

      • zcho replied to this post

        Forest-L This one is easy to deal with; it just tends to show up whenever there is a bit of network jitter.
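        For the "use of closed network connection" case specifically, the kubelet appears to keep reusing a dead connection to the API server (via lb.kubesphere.local) after a network blip, so the node stays NotReady until the connection is rebuilt. A commonly reported workaround, sketched below, is simply restarting kubelet on the affected node; it recovers the node but does not address the underlying network flap:

        # on the NotReady node:
        systemctl restart kubelet
        journalctl -u kubelet -f      # the list/watch "use of closed network connection" errors should stop
        kubectl get nodes             # the node should come back to Ready shortly afterwards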

          1 month later

          The node goes NotReady intermittently: it recovers, then fails again, over and over.
          -- Logs begin at 二 2021-03-09 08:38:16 CST. --
          3月 11 15:40:55 njscsjfwzx2 kubelet[29558]: E0311 15:40:55.226240 29558 file_linux.go:60] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
          3月 11 15:40:56 njscsjfwzx2 kubelet[29558]: E0311 15:40:56.226425 29558 file_linux.go:60] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
          3月 11 15:40:57 njscsjfwzx2 kubelet[29558]: E0311 15:40:57.226627 29558 file_linux.go:60] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
          3月 11 15:40:57 njscsjfwzx2 kubelet[29558]: E0311 15:40:57.271222 29558 controller.go:178] failed to update node lease, error: Put https://10.43.0.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/njscsjfwzx2?timeout=10s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
          3月 11 15:40:57 njscsjfwzx2 kubelet[29558]: E0311 15:40:57.310180 29558 kubelet_node_status.go:402] Error updating node status, will retry: error getting node "njscsjfwzx2": Get https://10.43.0.49:6443/api/v1/nodes/njscsjfwzx2?resourceVersion=0&timeout=10s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
          3月 11 15:40:58 njscsjfwzx2 kubelet[29558]: E0311 15:40:58.226833 29558 file_linux.go:60] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
          3月 11 15:40:59 njscsjfwzx2 kubelet[29558]: E0311 15:40:59.227019 29558 file_linux.go:60] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
          3月 11 15:41:00 njscsjfwzx2 kubelet[29558]: E0311 15:41:00.227215 29558 file_linux.go:60] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
          3月 11 15:41:00 njscsjfwzx2 kubelet[29558]: E0311 15:41:00.690111 29558 file.go:104] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
          3月 11 15:41:01 njscsjfwzx2 kubelet[29558]: E0311 15:41:01.227384 29558 file_linux.go:60] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
          3月 11 15:41:02 njscsjfwzx2 kubelet[29558]: E0311 15:41:02.227538 29558 file_linux.go:60] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
          3月 11 15:41:03 njscsjfwzx2 kubelet[29558]: E0311 15:41:03.227721 29558 file_linux.go:60] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
          3月 11 15:41:04 njscsjfwzx2 kubelet[29558]: E0311 15:41:04.227910 29558 file_linux.go:60] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
          3月 11 15:41:05 njscsjfwzx2 kubelet[29558]: E0311 15:41:05.228124 29558 file_linux.go:60] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
          3月 11 15:41:06 njscsjfwzx2 kubelet[29558]: E0311 15:41:06.228322 29558 file_linux.go:60] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring

          The cause of the error:
          3月 11 15:50:03 njscsjfwzx2 kubelet[29558]: E0311 15:50:03.808576 29558 controller.go:178] failed to update node lease, error: Put https://10.43.0.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/njscsjfwzx2?timeout=10s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
          3月 11 15:50:04 njscsjfwzx2 kubelet[29558]: E0311 15:50:04.810025 29558 kubelet_node_status.go:402] Error updating node status, will retry: error getting node "njscsjfwzx2": Get https://10.43.0.49:6443/api/v1/nodes/njscsjfwzx2?timeout=10s: context deadline exceeded
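          Two different things are mixed together in this log. The "Unable to read config path /etc/kubernetes/manifests" lines are usually harmless noise on a worker node where that directory simply does not exist; the intermittent NotReady comes from the lease and node-status updates to https://10.43.0.49:6443 timing out. A hedged sketch of what to check from the affected node:

          mkdir -p /etc/kubernetes/manifests            # optional: silences the "path does not exist" noise
          ping -c 3 10.43.0.49                          # rough check for packet loss/latency towards the API server address
          curl -k https://10.43.0.49:6443/healthz       # does the API server endpoint answer from this node at all?
          journalctl -u kubelet --since "10 min ago" | grep -E "lease|node_status"   # how often the timeouts recur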

            1 month later
            14 days later

            2021-04-20T00:30:42.760100+08:00 ccq02045 daemon.info kubelet[52017]: E0420 00:30:42.759293 52017 controller.go:177] failed to update node lease, error: Put https://lb.kubesphere.local:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ccq02045?timeout=10s: unexpected EOF
            2021-04-20T00:30:42.764944+08:00 ccq02045 daemon.info kubelet[52017]: E0420 00:30:42.759826 52017 event.go:272] Unable to write event: 'Patch https://lb.kubesphere.local:6443/api/v1/namespaces/kube-system/events/kube-scheduler-ccq02045.1674dab75f88a5b9: unexpected EOF' (may retry after sleeping)
            2021-04-20T00:30:42.765561+08:00 ccq02045 daemon.info kubelet[52017]: W0420 00:30:42.759884 52017 status_manager.go:530] Failed to get status for pod "kube-scheduler-ccq02045_kube-system(ebfd2fd6ad6578e9c4f9371af10daeea)": Get https://lb.kubesphere.local:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ccq02045: unexpected EOF
            2021-04-20T00:30:43.917750+08:00 ccq02045 daemon.info kubelet[52017]: E0420 00:30:43.917674 52017 kubelet_volumes.go:154] orphaned pod "27c5b1f4-ebeb-4ff7-bc40-86cac78ba101" found, but volume paths are still present on disk : There were a total of 1 errors similar to this. Turn up verbosity to see them.
            2021-04-20T00:30:44.726984+08:00 ccq02045 daemon.info etcd[18056]: 2021-04-19 16:30:44.726703 W | etcdserver: read-only range request "key:\"/registry/leases/kube-node-lease/ccq02068\" " with result "error:context canceled" took too long (7.338434749s) to execute
            2021-04-20T00:30:44.727539+08:00 ccq02045 daemon.info etcd[18056]: 2021-04-19 16:30:44.726847 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-controller-manager-ccq02068\" " with result "error:context canceled" took too long (7.200896297s) to execute
            2021-04-20T00:30:45.901656+08:00 ccq02045 daemon.info kubelet[52017]: E0420 00:30:45.901563 52017 kubelet_volumes.go:154] orphaned pod "27c5b1f4-ebeb-4ff7-bc40-86cac78ba101" found, but volume paths are still present on disk : There were a total of 1 errors similar to this. Turn up verbosity to see them.
            2021-04-20T00:30:47.304229+08:00 ccq02045 daemon.info etcd[18056]: 2021-04-19 16:30:47.303611 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "error:context canceled" took too long (1.99985912s) to execute
            2021-04-20T00:30:47.502387+08:00 ccq02045 daemon.info etcd[18056]: 2021-04-19 16:30:47.501884 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/kube-scheduler\" " with result "error:context canceled" took too long (5.001937856s) to execute
            2021-04-20T00:30:47.914210+08:00 ccq02045 daemon.info kubelet[52017]: E0420 00:30:47.914144 52017 kubelet_volumes.go:154] orphaned pod "27c5b1f4-ebeb-4ff7-bc40-86cac78ba101" found, but volume paths are still present on disk : There were a total of 1 errors similar to this. Turn up verbosity to see them.
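            The etcd "read-only range request ... took too long" warnings, together with a lease read that needed over 7 seconds, point at etcd being slow (typically disk latency or an overloaded control-plane node) rather than at the worker itself. A sketch of how to check, with the certificate paths left as placeholders since they differ between installers:

            ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
              --cacert=<ca.pem> --cert=<client.pem> --key=<client-key.pem> \
              endpoint status --write-out=table          # leader, DB size, raft term per member
            iostat -x 1 5                                # look for high await/%util on the etcd data disk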

            The cluster was running normally when, all of a sudden, every node except the master went NotReady.

            bglab@master:/var/log$ journalctl -u kubelet -f
            -- Logs begin at Fri 2021-01-15 10:29:53 CST. --
            Apr 22 10:11:08 master kubelet[15647]: E0422 10:11:08.277000   15647 kubelet_volumes.go:154] orphaned pod "4d3ac79b-b462-45e7-b0ba-24c955e95722" found, but volume paths are still present on disk : There were a total of 1 errors similar to this. Turn up verbosity to see them.
            Apr 22 10:11:10 master kubelet[15647]: E0422 10:11:10.198625   15647 pod_workers.go:191] Error syncing pod 300158eb-8c86-4589-991e-f10521779b49 ("mysql-mysql-master-57c8cd75dc-hl2w2_test(300158eb-8c86-4589-991e-f10521779b49)"), skipping: failed to "StartContainer" for "mysql-mysql-master" with CrashLoopBackOff: "back-off 5m0s restarting failed container=mysql-mysql-master pod=mysql-mysql-master-57c8cd75dc-hl2w2_test(300158eb-8c86-4589-991e-f10521779b49)"
            Apr 22 10:11:10 master kubelet[15647]: E0422 10:11:10.288426   15647 kubelet_volumes.go:154] orphaned pod "4d3ac79b-b462-45e7-b0ba-24c955e95722" found, but volume paths are still present on disk : There were a total of 1 errors similar to this. Turn up verbosity to see them.
            Apr 22 10:11:10 master kubelet[15647]: W0422 10:11:10.380264   15647 prober.go:108] No ref for container "docker://76c5cfe4436a05c065669efa8be836bcfe6ba19991db0a593db624d96c77a3fb" (apollo-test-cs-56bdcc8b48-9grxk_cp-project(bb4f8766-7a7b-43f2-b881-61df4eedf014):apollo-test-cs)
            Apr 22 10:11:10 master kubelet[15647]: W0422 10:11:10.390722   15647 prober.go:108] No ref for container "docker://e79fffefbd8301ba8200ca24da3ef7eeee23bb60f6e9b4294e5ff00ec80585b8" (apollo-dev-cs-57b7686d6f-p9nt8_cp-project(24f3bde0-3722-417a-b564-4b72d7186b68):apollo-dev-cs)
            Apr 22 10:11:11 master kubelet[15647]: W0422 10:11:11.198285   15647 kubelet_pods.go:866] Unable to retrieve pull secret cp-project/nexus-secret for cp-project/statefulset-rocketmq-namesrv-prod-0 due to secret "nexus-secret" not found.  The image pull may not succeed.
            Apr 22 10:11:11 master kubelet[15647]: E0422 10:11:11.198763   15647 pod_workers.go:191] Error syncing pod d6b80c5e-d518-41ee-b1c1-dc69cdd70133 ("nginx-mysql-master-68df85cccc-hfzh8_test(d6b80c5e-d518-41ee-b1c1-dc69cdd70133)"), skipping: failed to "StartContainer" for "nginx-mysql-master" with CrashLoopBackOff: "back-off 1m20s restarting failed container=nginx-mysql-master pod=nginx-mysql-master-68df85cccc-hfzh8_test(d6b80c5e-d518-41ee-b1c1-dc69cdd70133)"
            Apr 22 10:11:11 master kubelet[15647]: E0422 10:11:11.198870   15647 pod_workers.go:191] Error syncing pod 5d4e9ae3-9696-47dc-888f-df3c42e78319 ("nginx-mysql-slave-578b9b9fcf-mqn4c_test(5d4e9ae3-9696-47dc-888f-df3c42e78319)"), skipping: failed to "StartContainer" for "nginx-mysql-slave" with CrashLoopBackOff: "back-off 2m40s restarting failed container=nginx-mysql-slave pod=nginx-mysql-slave-578b9b9fcf-mqn4c_test(5d4e9ae3-9696-47dc-888f-df3c42e78319)"
            Apr 22 10:11:11 master kubelet[15647]: E0422 10:11:11.199708   15647 pod_workers.go:191] Error syncing pod c53af16d-08c2-490e-a9ba-88bf68362fa6 ("jaeger-es-index-cleaner-1617897300-d98b5_istio-system(c53af16d-08c2-490e-a9ba-88bf68362fa6)"), skipping: failed to "StartContainer" for "jaeger-es-index-cleaner" with ImagePullBackOff: "Back-off pulling image \"jaegertracing/jaeger-es-index-cleaner:1.17.1\""
            Apr 22 10:11:11 master kubelet[15647]: E0422 10:11:11.199782   15647 pod_workers.go:191] Error syncing pod 3c47355e-66dd-4035-94bf-fefcde203f8f ("recycler-for-mysql-pv2_default(3c47355e-66dd-4035-94bf-fefcde203f8f)"), skipping: failed to "StartContainer" for "pv-recycler" with ImagePullBackOff: "Back-off pulling image \"busybox:1.27\""
            Apr 22 10:11:12 master kubelet[15647]: E0422 10:11:12.198773   15647 pod_workers.go:191] Error syncing pod b00f2112-3b20-4b97-a889-90a07b18223b ("mysql-mysql-slave-948bdd5c-4znpf_test(b00f2112-3b20-4b97-a889-90a07b18223b)"), skipping: failed to "StartContainer" for "mysql-mysql-slave" with CrashLoopBackOff: "back-off 5m0s restarting failed container=mysql-mysql-slave pod=mysql-mysql-slave-948bdd5c-4znpf_test(b00f2112-3b20-4b97-a889-90a07b18223b)"
            Apr 22 10:11:12 master kubelet[15647]: E0422 10:11:12.199871   15647 pod_workers.go:191] Error syncing pod 7e4ccd5c-c22d-474f-932b-002d5834e45d ("jaeger-es-index-cleaner-1617810900-6g72b_istio-system(7e4ccd5c-c22d-474f-932b-002d5834e45d)"), skipping: failed to "StartContainer" for "jaeger-es-index-cleaner" with ImagePullBackOff: "Back-off pulling image \"jaegertracing/jaeger-es-index-cleaner:1.17.1\""
            Apr 22 10:11:12 master kubelet[15647]: E0422 10:11:12.245110   15647 kubelet_volumes.go:154] orphaned pod "4d3ac79b-b462-45e7-b0ba-24c955e95722" found, but volume paths are still present on disk : There were a total of 1 errors similar to this. Turn up verbosity to see them.
            Apr 22 10:11:12 master kubelet[15647]: W0422 10:11:12.458974   15647 prober.go:108] No ref for container "docker://7082b2de0ab7ebb30618a2c76c634a5dd3ddf2c45e02aee517c3055d04569962" (apollo-dev-as-697cbccd-rhx5q_cp-project(d4312dfe-85ae-4fc6-bad9-7a80bf63ce8c):apollo-dev-as)
            Apr 22 10:11:13 master kubelet[15647]: E0422 10:11:13.199490   15647 pod_workers.go:191] Error syncing pod 6bf832fd-3e21-410d-9ab3-de315b2d64f5 ("jaeger-es-index-cleaner-1617724500-f8mdr_istio-system(6bf832fd-3e21-410d-9ab3-de315b2d64f5)"), skipping: failed to "StartContainer" for "jaeger-es-index-cleaner" with ImagePullBackOff: "Back-off pulling image \"jaegertracing/jaeger-es-index-cleaner:1.17.1\""
            Apr 22 10:11:13 master kubelet[15647]: E0422 10:11:13.199524   15647 pod_workers.go:191] Error syncing pod ab508745-b2ae-4b33-9e6c-335817aea4ed ("jaeger-es-index-cleaner-1618761300-g5n4d_istio-system(ab508745-b2ae-4b33-9e6c-335817aea4ed)"), skipping: failed to "StartContainer" for "jaeger-es-index-cleaner" with ImagePullBackOff: "Back-off pulling image \"jaegertracing/jaeger-es-index-cleaner:1.17.1\""
            Apr 22 10:11:14 master kubelet[15647]: E0422 10:11:14.198595   15647 pod_workers.go:191] Error syncing pod db24ebb0-9377-4d42-ab36-bdc64cb0c3aa ("ils-wms-web1-56d8d8bb84-6mhd4_wms-project(db24ebb0-9377-4d42-ab36-bdc64cb0c3aa)"), skipping: failed to "StartContainer" for "wms-web" with CrashLoopBackOff: "back-off 5m0s restarting failed container=wms-web pod=ils-wms-web1-56d8d8bb84-6mhd4_wms-project(db24ebb0-9377-4d42-ab36-bdc64cb0c3aa)"
            Apr 22 10:11:14 master kubelet[15647]: E0422 10:11:14.278442   15647 kubelet_volumes.go:154] orphaned pod "4d3ac79b-b462-45e7-b0ba-24c955e95722" found, but volume paths are still present on disk : There were a total of 1 errors similar to this. Turn up verbosity to see them.
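            The CrashLoopBackOff / ImagePullBackOff lines above are per-pod failures rather than node-level problems, but the repeated "orphaned pod ... volume paths are still present on disk" error means the pod object is gone while its directory under /var/lib/kubelet/pods/ was left behind. A minimal cleanup sketch, assuming the pod with that UID really no longer exists anywhere in the cluster:

            # confirm no pod still uses this UID (the UID is taken from the log line above)
            kubectl get pods --all-namespaces -o custom-columns=UID:.metadata.uid | grep 4d3ac79b-b462-45e7-b0ba-24c955e95722
            # inspect what is left on disk, then remove it only if it contains nothing but stale volume directories
            ls /var/lib/kubelet/pods/4d3ac79b-b462-45e7-b0ba-24c955e95722/volumes/
            rm -rf /var/lib/kubelet/pods/4d3ac79b-b462-45e7-b0ba-24c955e95722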

            2. Is this behavior normal?
            https://kubesphere.com.cn/forum/d/4214-k8sarping-mastermac

            kubectl -n kube-system logs -f kube-apiserver-master

            E0422 02:32:14.576919       1 writers.go:105] apiserver was unable to write a JSON response: write tcp 10.34.76.244:6443->10.34.76.242:41665: write: connection reset by peer
            E0422 02:32:14.576951       1 status.go:71] apiserver received an error that is not an metav1.Status: &net.OpError{Op:"write", Net:"tcp", Source:(*net.TCPAddr)(0xc02d40e9c0), Addr:(*net.TCPAddr)(0xc02d40e9f0), Err:(*os.SyscallError)(0xc011338f80)}
            E0422 02:32:14.577101       1 writers.go:105] apiserver was unable to write a JSON response: client disconnected
            E0422 02:32:14.578071       1 writers.go:118] apiserver was unable to write a fallback JSON response: write tcp 10.34.76.244:6443->10.34.76.242:41665: write: connection reset by peer
            E0422 02:32:14.579103       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"client disconnected"}
            E0422 02:32:14.580187       1 writers.go:118] apiserver was unable to write a fallback JSON response: client disconnected
            I0422 02:32:33.431230       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
            I0422 02:32:46.186505       1 trace.go:116] Trace[1273677765]: "List" url:/api/v1/secrets,user-agent:manager/v0.0.0 (linux/amd64) kubernetes/$Format,client:10.34.76.242 (started: 2021-04-22 02:32:45.532860112 +0000 UTC m=+1353738.994550695) (total time: 653.602242ms):
            Trace[1273677765]: [653.601264ms] [653.33086ms] Writing http response done count:344
            E0422 02:32:54.419012       1 watch.go:256] unable to encode watch object *v1.WatchEvent: client disconnected (&streaming.encoder{writer:(*framer.lengthDelimitedFrameWriter)(0xc02cf77800), encoder:(*versioning.codec)(0xc035c143c0), buf:(*bytes.Buffer)(0xc02a07b1d0)})
            E0422 02:32:54.419105       1 watch.go:256] unable to encode watch object *v1.WatchEvent: client disconnected (&streaming.encoder{writer:(*http2.responseWriter)(0xc02e2d8130), encoder:(*versioning.codec)(0xc01df93720), buf:(*bytes.Buffer)(0xc0108398f0)})
            E0422 02:32:56.466972       1 watch.go:256] unable to encode watch object *v1.WatchEvent: write tcp 10.34.76.244:6443->10.34.76.242:5481: write: connection reset by peer (&streaming.encoder{writer:(*http2.responseWriter)(0xc02b440970), encoder:(*versioning.codec)(0xc034eedd60), buf:(*bytes.Buffer)(0xc035448540)})
            E0422 02:32:56.467042       1 watch.go:256] unable to encode watch object *v1.WatchEvent: write tcp 10.34.76.244:6443->10.34.76.242:5481: write: connection reset by peer (&streaming.encoder{writer:(*http2.responseWriter)(0xc02d59a3b0), encoder:(*versioning.codec)(0xc037978460), buf:(*bytes.Buffer)(0xc025b9ed50)})
            E0422 02:33:00.562772       1 watch.go:256] unable to encode watch object *v1.WatchEvent: http2: stream closed (&streaming.encoder{writer:(*http2.responseWriter)(0xc02ff5bb58), encoder:(*versioning.codec)(0xc024c89b80), buf:(*bytes.Buffer)(0xc0295fe9f0)})
            E0422 02:33:00.562921       1 watch.go:256] unable to encode watch object *v1.WatchEvent: client disconnected (&streaming.encoder{writer:(*framer.lengthDelimitedFrameWriter)(0xc009348c80), encoder:(*versioning.codec)(0xc037ddb180), buf:(*bytes.Buffer)(0xc0206c26c0)})
            I0422 02:33:33.433466       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
            E0422 02:33:45.619058       1 watch.go:256] unable to encode watch object *v1.WatchEvent: http2: stream closed (&streaming.encoder{writer:(*framer.lengthDelimitedFrameWriter)(0xc021d37e40), encoder:(*versioning.codec)(0xc024018000), buf:(*bytes.Buffer)(0xc02ea8d4a0)})
            I0422 02:34:33.435666       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
            I0422 02:35:28.124209       1 trace.go:116] Trace[466173950]: "List" url:/api/v1/secrets,user-agent:manager/v0.0.0 (linux/amd64) kubernetes/$Format,client:10.34.76.242 (started: 2021-04-22 02:35:27.470738067 +0000 UTC m=+1353900.932428649) (total time: 653.443965ms):
            Trace[466173950]: [653.44329ms] [653.162519ms] Writing http response done count:344
            I0422 02:35:33.438363       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
            I0422 02:36:01.334544       1 trace.go:116] Trace[145971987]: "List" url:/api/v1/secrets,user-agent:manager/v0.0.0 (linux/amd64) kubernetes/$Format,client:10.34.76.242 (started: 2021-04-22 02:36:00.655007322 +0000 UTC m=+1353934.116697911) (total time: 679.475739ms):
            Trace[145971987]: [679.473258ms] [679.206344ms] Writing http response done count:344
            I0422 02:36:33.440402       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
            I0422 02:36:34.026533       1 trace.go:116] Trace[1519823269]: "List" url:/api/v1/secrets,user-agent:manager/v0.0.0 (linux/amd64) kubernetes/$Format,client:10.34.76.242 (started: 2021-04-22 02:36:33.357927253 +0000 UTC m=+1353966.819617834) (total time: 668.574287ms):
            Trace[1519823269]: [668.573687ms] [668.311798ms] Writing http response done count:344
            I0422 02:37:30.450398       1 trace.go:116] Trace[22691570]: "List" url:/api/v1/secrets,user-agent:manager/v0.0.0 (linux/amd64) kubernetes/$Format,client:10.34.76.242 (started: 2021-04-22 02:37:29.680731049 +0000 UTC m=+1354023.142421634) (total time: 769.618807ms):
            Trace[22691570]: [769.616068ms] [769.357215ms] Writing http response done count:344
            I0422 02:37:33.442344       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
            E0422 02:37:45.234957       1 watch.go:256] unable to encode watch object *v1.WatchEvent: http2: stream closed (&streaming.encoder{writer:(*http2.responseWriter)(0xc00a235750), encoder:(*versioning.codec)(0xc037168aa0), buf:(*bytes.Buffer)(0xc033cd6e40)})
            E0422 02:37:45.234961       1 watch.go:256] unable to encode watch object *v1.WatchEvent: http2: stream closed (&streaming.encoder{writer:(*http2.responseWriter)(0xc00a235770), encoder:(*versioning.codec)(0xc037483900), buf:(*bytes.Buffer)(0xc033b96ab0)})
            E0422 02:37:53.427187       1 watch.go:256] unable to encode watch object *v1.WatchEvent: http2: stream closed (&streaming.encoder{writer:(*http2.responseWriter)(0xc009291738), encoder:(*versioning.codec)(0xc033cb7ea0), buf:(*bytes.Buffer)(0xc021ae45d0)})
            I0422 02:38:33.444789       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
            E0422 02:38:37.775129       1 writers.go:105] apiserver was unable to write a JSON response: write tcp 10.34.76.244:6443->10.34.76.242:33476: write: connection reset by peer
            E0422 02:38:37.775151       1 status.go:71] apiserver received an error that is not an metav1.Status: &net.OpError{Op:"write", Net:"tcp", Source:(*net.TCPAddr)(0xc021d62000), Addr:(*net.TCPAddr)(0xc021d62090), Err:(*os.SyscallError)(0xc0281d1860)}
            E0422 02:38:37.776283       1 writers.go:118] apiserver was unable to write a fallback JSON response: write tcp 10.34.76.244:6443->10.34.76.242:33476: write: connection reset by peer
            I0422 02:38:37.777501       1 trace.go:116] Trace[304584112]: "List" url:/api/v1/secrets,user-agent:manager/v0.0.0 (linux/amd64) kubernetes/$Format,client:10.34.76.242 (started: 2021-04-22 02:38:37.238688264 +0000 UTC m=+1354090.700378846) (total time: 538.754541ms):
            Trace[304584112]: [538.752917ms] [538.413609ms] Writing http response done count:344
            I0422 02:38:39.227423       1 trace.go:116] Trace[984664201]: "List" url:/api/v1/secrets,user-agent:manager/v0.0.0 (linux/amd64) kubernetes/$Format,client:10.34.76.242 (started: 2021-04-22 02:38:38.634764617 +0000 UTC m=+1354092.096455198) (total time: 592.6272ms):
            Trace[984664201]: [592.626264ms] [592.350671ms] Writing http response done count:344
            E0422 02:39:12.871034       1 writers.go:105] apiserver was unable to write a JSON response: write tcp 10.34.76.244:6443->10.34.76.242:1821: write: connection reset by peer
            E0422 02:39:12.871066       1 status.go:71] apiserver received an error that is not an metav1.Status: &net.OpError{Op:"write", Net:"tcp", Source:(*net.TCPAddr)(0xc024affdd0), Addr:(*net.TCPAddr)(0xc024affe00), Err:(*os.SyscallError)(0xc00e89e720)}
            E0422 02:39:12.872153       1 writers.go:118] apiserver was unable to write a fallback JSON response: write tcp 10.34.76.244:6443->10.34.76.242:1821: write: connection reset by peer
            I0422 02:39:12.873324       1 trace.go:116] Trace[1650306753]: "List" url:/api/v1/secrets,user-agent:manager/v0.0.0 (linux/amd64) kubernetes/$Format,client:10.34.76.242 (started: 2021-04-22 02:39:12.33638925 +0000 UTC m=+1354125.798079833) (total time: 536.906447ms):
            Trace[1650306753]: [536.905169ms] [536.628201ms] Writing http response done count:344
            E0422 02:39:13.298968       1 watch.go:256] unable to encode watch object *v1.WatchEvent: http2: stream closed (&streaming.encoder{writer:(*framer.lengthDelimitedFrameWriter)(0xc0115d6660), encoder:(*versioning.codec)(0xc031cd9b80), buf:(*bytes.Buffer)(0xc031af2ae0)})
            I0422 02:39:14.449427       1 trace.go:116] Trace[314152759]: "List" url:/api/v1/secrets,user-agent:manager/v0.0.0 (linux/amd64) kubernetes/$Format,client:10.34.76.242 (started: 2021-04-22 02:39:13.832304276 +0000 UTC m=+1354127.293994863) (total time: 617.08917ms):
            Trace[314152759]: [617.088181ms] [616.810862ms] Writing http response done count:344
            I0422 02:39:33.447159       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
            I0422 02:40:33.449067       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
            I0422 02:41:33.451519       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
            I0422 02:42:33.453910       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
            I0422 02:43:33.456115       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
            I0422 02:44:33.458640       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
            I0422 02:45:33.461402       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
            I0422 02:46:33.463153       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
            E0422 02:47:22.770572       1 watch.go:256] unable to encode watch object *v1.WatchEvent: write tcp 10.34.76.244:6443->10.34.76.242:41863: write: no route to host (&streaming.encoder{writer:(*http2.responseWriter)(0xc02a787820), encoder:(*versioning.codec)(0xc02b2b4dc0), buf:(*bytes.Buffer)(0xc01c51a930)})
            E0422 02:47:22.770652       1 watch.go:256] unable to encode watch object *v1.WatchEvent: client disconnected (&streaming.encoder{writer:(*framer.lengthDelimitedFrameWriter)(0xc038ff3aa0), encoder:(*versioning.codec)(0xc029134f00), buf:(*bytes.Buffer)(0xc031af2c30)})
            I0422 02:47:33.465357       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
            I0422 02:48:33.467785       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
            I0422 02:49:33.470356       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
            I0422 02:50:33.472485       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
            E0422 02:51:14.194655       1 watch.go:256] unable to encode watch object *v1.WatchEvent: write tcp 10.34.76.244:6443->10.34.76.242:47161: write: no route to host (&streaming.encoder{writer:(*http2.responseWriter)(0xc02c37f8c8), encoder:(*versioning.codec)(0xc032f183c0), buf:(*bytes.Buffer)(0xc027117ec0)})
            E0422 02:51:14.194673       1 watch.go:256] unable to encode watch object *v1.WatchEvent: write tcp 10.34.76.244:6443->10.34.76.242:47161: write: no route to host (&streaming.encoder{writer:(*http2.responseWriter)(0xc02c37f8d0), encoder:(*versioning.codec)(0xc031cd8be0), buf:(*bytes.Buffer)(0xc028b298c0)})
            E0422 02:51:16.242610       1 watch.go:256] unable to encode watch object *v1.WatchEvent: write tcp 10.34.76.244:6443->10.34.76.242:10299: write: no route to host (&streaming.encoder{writer:(*http2.responseWriter)(0xc02c0986f8), encoder:(*versioning.codec)(0xc031684c80), buf:(*bytes.Buffer)(0xc0257d6cc0)})
            E0422 02:51:16.242944       1 watch.go:256] unable to encode watch object *v1.WatchEvent: client disconnected (&streaming.encoder{writer:(*framer.lengthDelimitedFrameWriter)(0xc00c55c760), encoder:(*versioning.codec)(0xc038d73a40), buf:(*bytes.Buffer)(0xc02adb7ef0)})
            E0422 02:51:16.242967       1 watch.go:256] unable to encode watch object *v1.WatchEvent: client disconnected (&streaming.encoder{writer:(*framer.lengthDelimitedFrameWriter)(0xc02e2206c0), encoder:(*versioning.codec)(0xc031a66140), buf:(*bytes.Buffer)(0xc025311d10)})
            I0422 02:51:33.474810       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
            I0422 02:52:33.477240       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
            I0422 02:53:33.479398       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
            E0422 02:53:51.890670       1 watch.go:256] unable to encode watch object *v1.WatchEvent: client disconnected (&streaming.encoder{writer:(*http2.responseWriter)(0xc00018dfb8), encoder:(*versioning.codec)(0xc02ac35900), buf:(*bytes.Buffer)(0xc0266454a0)})
            I0422 02:54:33.481910       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
            E0422 02:55:01.522570       1 watch.go:256] unable to encode watch object *v1.WatchEvent: write tcp 10.34.76.244:6443->10.34.76.242:2379: write: no route to host (&streaming.encoder{writer:(*http2.responseWriter)(0xc02de90df0), encoder:(*versioning.codec)(0xc00b68fcc0), buf:(*bytes.Buffer)(0xc0225ef650)})
            E0422 02:55:01.522654       1 watch.go:256] unable to encode watch object *v1.WatchEvent: client disconnected (&streaming.encoder{writer:(*http2.responseWriter)(0xc007883520), encoder:(*versioning.codec)(0xc0113c4320), buf:(*bytes.Buffer)(0xc02e45c2d0)})
            E0422 02:55:01.522656       1 watch.go:256] unable to encode watch object *v1.WatchEvent: http2: stream closed (&streaming.encoder{writer:(*http2.responseWriter)(0xc00070bc08), encoder:(*versioning.codec)(0xc008ecc1e0), buf:(*bytes.Buffer)(0xc02671f440)})
            E0422 02:55:01.522738       1 watch.go:256] unable to encode watch object *v1.WatchEvent: client disconnected (&streaming.encoder{writer:(*framer.lengthDelimitedFrameWriter)(0xc0257065a0), encoder:(*versioning.codec)(0xc00a376780), buf:(*bytes.Buffer)(0xc0206c2a50)})
            E0422 02:55:15.858598       1 watch.go:256] unable to encode watch object *v1.WatchEvent: write tcp 10.34.76.244:6443->10.34.76.242:36194: write: no route to host (&streaming.encoder{writer:(*framer.lengthDelimitedFrameWriter)(0xc0222bbbe0), encoder:(*versioning.codec)(0xc0202de140), buf:(*bytes.Buffer)(0xc020ab4780)})
            E0422 02:55:28.146644       1 watch.go:256] unable to encode watch object *v1.WatchEvent: http2: stream closed (&streaming.encoder{writer:(*http2.responseWriter)(0xc02a881900), encoder:(*versioning.codec)(0xc0202b4e60), buf:(*bytes.Buffer)(0xc024e67aa0)})
            I0422 02:55:33.484310       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
            E0422 02:55:44.530635       1 watch.go:256] unable to encode watch object *v1.WatchEvent: write tcp 10.34.76.244:6443->10.34.76.242:49900: write: no route to host (&streaming.encoder{writer:(*http2.responseWriter)(0xc006840d40), encoder:(*versioning.codec)(0xc0294d4140), buf:(*bytes.Buffer)(0xc0256e8120)})
            E0422 02:55:44.530677       1 watch.go:256] unable to encode watch object *v1.WatchEvent: client disconnected (&streaming.encoder{writer:(*http2.responseWriter)(0xc00a26f638), encoder:(*versioning.codec)(0xc026751a40), buf:(*bytes.Buffer)(0xc0223b6a80)})
            E0422 02:55:44.530740       1 watch.go:256] unable to encode watch object *v1.WatchEvent: write tcp 10.34.76.244:6443->10.34.76.242:49900: write: no route to host (&streaming.encoder{writer:(*framer.lengthDelimitedFrameWriter)(0xc02c6f2580), encoder:(*versioning.codec)(0xc02ca9e640), buf:(*bytes.Buffer)(0xc020f1b650)})
            E0422 02:55:48.626812       1 watch.go:256] unable to encode watch object *v1.WatchEvent: http2: stream closed (&streaming.encoder{writer:(*framer.lengthDelimitedFrameWriter)(0xc02216bd00), encoder:(*versioning.codec)(0xc0205a1900), buf:(*bytes.Buffer)(0xc0175e8b40)})
            E0422 02:56:31.634566       1 watch.go:256] unable to encode watch object *v1.WatchEvent: write tcp 10.34.76.244:6443->10.34.76.242:7043: write: no route to host (&streaming.encoder{writer:(*http2.responseWriter)(0xc03079a158), encoder:(*versioning.codec)(0xc02dda8e60), buf:(*bytes.Buffer)(0xc027c546c0)})
            I0422 02:56:33.486457       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
            E0422 02:56:48.018741       1 watch.go:256] unable to encode watch object *v1.WatchEvent: http2: stream closed (&streaming.encoder{writer:(*framer.lengthDelimitedFrameWriter)(0xc028131e40), encoder:(*versioning.codec)(0xc037f2adc0), buf:(*bytes.Buffer)(0xc036e5e8d0)})
            E0422 02:56:48.018766       1 watch.go:256] unable to encode watch object *v1.WatchEvent: http2: stream closed (&streaming.encoder{writer:(*http2.responseWriter)(0xc02e55c0e8), encoder:(*versioning.codec)(0xc02dda86e0), buf:(*bytes.Buffer)(0xc032692ba0)})
            E0422 02:56:48.018770       1 watch.go:256] unable to encode watch object *v1.WatchEvent: client disconnected (&streaming.encoder{writer:(*http2.responseWriter)(0xc03079a080), encoder:(*versioning.codec)(0xc0242b3220), buf:(*bytes.Buffer)(0xc02b15a6c0)})
            E0422 02:56:58.258708       1 writers.go:105] apiserver was unable to write a JSON response: write tcp 10.34.76.244:6443->10.34.76.242:51610: write: no route to host
            E0422 02:56:58.258736       1 status.go:71] apiserver received an error that is not an metav1.Status: &net.OpError{Op:"write", Net:"tcp", Source:(*net.TCPAddr)(0xc02efe3770), Addr:(*net.TCPAddr)(0xc02efe37a0), Err:(*os.SyscallError)(0xc0312a8560)}
            E0422 02:56:58.258749       1 watch.go:256] unable to encode watch object *v1.WatchEvent: write tcp 10.34.76.244:6443->10.34.76.242:51610: write: no route to host (&streaming.encoder{writer:(*http2.responseWriter)(0xc007d5a438), encoder:(*versioning.codec)(0xc037f2bc20), buf:(*bytes.Buffer)(0xc01f9059b0)})
            E0422 02:56:58.258768       1 runtime.go:78] Observed a panic: &errors.errorString{s:"killing connection/stream because serving request timed out and response had been started"} (killing connection/stream because serving request timed out and response had been started)
            goroutine 576734935 [running]:
            k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x3bc1a80, 0xc000506580)
                    /workspace/anago-v1.17.9-rc.0.37+d1c2f63bd4fc89/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0xa3
            k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0xc017b07c90, 0x1, 0x1)
                    /workspace/anago-v1.17.9-rc.0.37+d1c2f63bd4fc89/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x82
            panic(0x3bc1a80, 0xc000506580)
                    /usr/local/go/src/runtime/panic.go:679 +0x1b2
            k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).timeout(0xc00ed09640, 0xc0352aec80)
                    /workspace/anago-v1.17.9-rc.0.37+d1c2f63bd4fc89/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:257 +0x1cf
            k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP(0xc0094ef140, 0x4fe0260, 0xc02853d0a0, 0xc03633c600)
                    /workspace/anago-v1.17.9-rc.0.37+d1c2f63bd4fc89/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:141 +0x310
            k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithWaitGroup.func1(0x4fe0260, 0xc02853d0a0, 0xc03633c500)
                    /workspace/anago-v1.17.9-rc.0.37+d1c2f63bd4fc89/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/waitgroup.go:47 +0x10f
            net/http.HandlerFunc.ServeHTTP(0xc009830ea0, 0x4fe0260, 0xc02853d0a0, 0xc03633c500)
                    /usr/local/go/src/net/http/server.go:2007 +0x44
            k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithRequestInfo.func1(0x4fe0260, 0xc02853d0a0, 0xc03633c400)
                    /workspace/anago-v1.17.9-rc.0.37+d1c2f63bd4fc89/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/requestinfo.go:39 +0x274
            net/http.HandlerFunc.ServeHTTP(0xc009830ed0, 0x4fe0260, 0xc02853d0a0, 0xc03633c400)
                    /usr/local/go/src/net/http/server.go:2007 +0x44
            k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithCacheControl.func1(0x4fe0260, 0xc02853d0a0, 0xc03633c400)
                    /workspace/anago-v1.17.9-rc.0.37+d1c2f63bd4fc89/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/cachecontrol.go:31 +0xa8
            net/http.HandlerFunc.ServeHTTP(0xc0094ef160, 0x4fe0260, 0xc02853d0a0, 0xc03633c400)
                    /usr/local/go/src/net/http/server.go:2007 +0x44
            k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.WithLogging.func1(0x4fd3260, 0xc03808e348, 0xc03633c300)
                    /workspace/anago-v1.17.9-rc.0.37+d1c2f63bd4fc89/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog/httplog.go:89 +0x2ca
            net/http.HandlerFunc.ServeHTTP(0xc0094ef180, 0x4fd3260, 0xc03808e348, 0xc03633c300)
                    /usr/local/go/src/net/http/server.go:2007 +0x44
            k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.withPanicRecovery.func1(0x4fd3260, 0xc03808e348, 0xc03633c300)
                    /workspace/anago-v1.17.9-rc.0.37+d1c2f63bd4fc89/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/wrap.go:51 +0x13e
            net/http.HandlerFunc.ServeHTTP(0xc0094ef1a0, 0x4fd3260, 0xc03808e348, 0xc03633c300)
                    /usr/local/go/src/net/http/server.go:2007 +0x44
            k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.(*APIServerHandler).ServeHTTP(0xc009830f00, 0x4fd3260, 0xc03808e348, 0xc03633c300)
                    /workspace/anago-v1.17.9-rc.0.37+d1c2f63bd4fc89/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/handler.go:189 +0x51
            net/http.serverHandler.ServeHTTP(0xc00aed8000, 0x4fd3260, 0xc03808e348, 0xc03633c300)
                    /usr/local/go/src/net/http/server.go:2802 +0xa4
            net/http.initNPNRequest.ServeHTTP(0x4fecf20, 0xc02efe37d0, 0xc029e86380, 0xc00aed8000, 0x4fd3260, 0xc03808e348, 0xc03633c300)
                    /usr/local/go/src/net/http/server.go:3366 +0x8d
            k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*serverConn).runHandler(0xc02b17ed80, 0xc03808e348, 0xc03633c300, 0xc00ed094c0)
                    /workspace/anago-v1.17.9-rc.0.37+d1c2f63bd4fc89/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/server.go:2149 +0x9f
            created by k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*serverConn).processHeaders
                    /workspace/anago-v1.17.9-rc.0.37+d1c2f63bd4fc89/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/server.go:1883 +0x4eb
            E0422 02:56:58.258817       1 wrap.go:39] apiserver panic'd on GET /api/v1/secrets?limit=500&resourceVersion=0
            I0422 02:56:58.258888       1 log.go:172] http2: panic serving 10.34.76.242:51610: killing connection/stream because serving request timed out and response had been started
            goroutine 576734935 [running]:
            k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*serverConn).runHandler.func1(0xc03808e348, 0xc017b07f67, 0xc02b17ed80)
                    /workspace/anago-v1.17.9-rc.0.37+d1c2f63bd4fc89/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/server.go:2142 +0x16b
            panic(0x3bc1a80, 0xc000506580)
                    /usr/local/go/src/runtime/panic.go:679 +0x1b2
            k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0xc017b07c90, 0x1, 0x1)
                    /workspace/anago-v1.17.9-rc.0.37+d1c2f63bd4fc89/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:55 +0x105
            panic(0x3bc1a80, 0xc000506580)
                    /usr/local/go/src/runtime/panic.go:679 +0x1b2
            k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).timeout(0xc00ed09640, 0xc0352aec80)
                    /workspace/anago-v1.17.9-rc.0.37+d1c2f63bd4fc89/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:257 +0x1cf
            k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP(0xc0094ef140, 0x4fe0260, 0xc02853d0a0, 0xc03633c600)
                    /workspace/anago-v1.17.9-rc.0.37+d1c2f63bd4fc89/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:141 +0x310
            k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithWaitGroup.func1(0x4fe0260, 0xc02853d0a0, 0xc03633c500)
                    /workspace/anago-v1.17.9-rc.0.37+d1c2f63bd4fc89/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/waitgroup.go:47 +0x10f
            net/http.HandlerFunc.ServeHTTP(0xc009830ea0, 0x4fe0260, 0xc02853d0a0, 0xc03633c500)
                    /usr/local/go/src/net/http/server.go:2007 +0x44
            k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithRequestInfo.func1(0x4fe0260, 0xc02853d0a0, 0xc03633c400)
                    /workspace/anago-v1.17.9-rc.0.37+d1c2f63bd4fc89/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/requestinfo.go:39 +0x274
            net/http.HandlerFunc.ServeHTTP(0xc009830ed0, 0x4fe0260, 0xc02853d0a0, 0xc03633c400)
                    /usr/local/go/src/net/http/server.go:2007 +0x44
            k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithCacheControl.func1(0x4fe0260, 0xc02853d0a0, 0xc03633c400)
                    /workspace/anago-v1.17.9-rc.0.37+d1c2f63bd4fc89/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/cachecontrol.go:31 +0xa8
            net/http.HandlerFunc.ServeHTTP(0xc0094ef160, 0x4fe0260, 0xc02853d0a0, 0xc03633c400)
                    /usr/local/go/src/net/http/server.go:2007 +0x44
            k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.WithLogging.func1(0x4fd3260, 0xc03808e348, 0xc03633c300)
                    /workspace/anago-v1.17.9-rc.0.37+d1c2f63bd4fc89/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog/httplog.go:89 +0x2ca
            net/http.HandlerFunc.ServeHTTP(0xc0094ef180, 0x4fd3260, 0xc03808e348, 0xc03633c300)
                    /usr/local/go/src/net/http/server.go:2007 +0x44
            k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.withPanicRecovery.func1(0x4fd3260, 0xc03808e348, 0xc03633c300)
                    /workspace/anago-v1.17.9-rc.0.37+d1c2f63bd4fc89/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/wrap.go:51 +0x13e
            net/http.HandlerFunc.ServeHTTP(0xc0094ef1a0, 0x4fd3260, 0xc03808e348, 0xc03633c300)
                    /usr/local/go/src/net/http/server.go:2007 +0x44
            k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.(*APIServerHandler).ServeHTTP(0xc009830f00, 0x4fd3260, 0xc03808e348, 0xc03633c300)
                    /workspace/anago-v1.17.9-rc.0.37+d1c2f63bd4fc89/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/handler.go:189 +0x51
            net/http.serverHandler.ServeHTTP(0xc00aed8000, 0x4fd3260, 0xc03808e348, 0xc03633c300)
                    /usr/local/go/src/net/http/server.go:2802 +0xa4
            net/http.initNPNRequest.ServeHTTP(0x4fecf20, 0xc02efe37d0, 0xc029e86380, 0xc00aed8000, 0x4fd3260, 0xc03808e348, 0xc03633c300)
                    /usr/local/go/src/net/http/server.go:3366 +0x8d
            k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*serverConn).runHandler(0xc02b17ed80, 0xc03808e348, 0xc03633c300, 0xc00ed094c0)
                    /workspace/anago-v1.17.9-rc.0.37+d1c2f63bd4fc89/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/server.go:2149 +0x9f
            created by k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*serverConn).processHeaders
                    /workspace/anago-v1.17.9-rc.0.37+d1c2f63bd4fc89/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/server.go:1883 +0x4eb
            E0422 02:56:58.259854       1 writers.go:118] apiserver was unable to write a fallback JSON response: write tcp 10.34.76.244:6443->10.34.76.242:51610: write: no route to host
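            Most of these errors ("connection reset by peer", "client disconnected", and later "no route to host") are the apiserver failing to write responses back to one client, 10.34.76.242, which is repeatedly listing /api/v1/secrets; the panic at the end is just the timeout filter killing a request whose response had already started. That pattern points at the network path between the master and that node rather than at the apiserver itself. A quick connectivity check, as a sketch (the IPs are taken from the log above):

            # from the master (10.34.76.244)
            ping -c 3 10.34.76.242
            ip route get 10.34.76.242
            # from 10.34.76.242, confirm the apiserver is still reachable
            curl -k https://10.34.76.244:6443/healthz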

            These are the logs from the master node. Apart from the master, the other two nodes are both NotReady.
            -- Logs begin at Wed 2021-04-21 21:45:40 CST. --
            Apr 22 21:23:54 k8s-master kubelet[114611]: I0422 21:23:54.195395 114611 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-bk422" (UniqueName: "kubernetes.io/secret/27893b83-a9b3-41ea-9f0c-8835a8e41457-coredns-token-bk422") pod "coredns-66bff467f8-kqxhz" (UID: "27893b83-a9b3-41ea-9f0c-8835a8e41457")
            Apr 22 21:23:54 k8s-master kubelet[114611]: I0422 21:23:54.195455 114611 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "flannel-token-vmx8s" (UniqueName: "kubernetes.io/secret/a74054e6-ccd3-44fc-8d38-b24f7ef1dacd-flannel-token-vmx8s") pod "kube-flannel-ds-ndf9q" (UID: "a74054e6-ccd3-44fc-8d38-b24f7ef1dacd")
            Apr 22 21:23:54 k8s-master kubelet[114611]: I0422 21:23:54.195498 114611 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5c487e41-4b26-49d8-93a8-9e1b576d1f03-config-volume") pod "coredns-66bff467f8-qdnkk" (UID: "5c487e41-4b26-49d8-93a8-9e1b576d1f03")
            Apr 22 21:23:54 k8s-master kubelet[114611]: I0422 21:23:54.195516 114611 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "cni" (UniqueName: "kubernetes.io/host-path/a74054e6-ccd3-44fc-8d38-b24f7ef1dacd-cni") pod "kube-flannel-ds-ndf9q" (UID: "a74054e6-ccd3-44fc-8d38-b24f7ef1dacd")
            Apr 22 21:23:54 k8s-master kubelet[114611]: I0422 21:23:54.195534 114611 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "flannel-cfg" (UniqueName: "kubernetes.io/configmap/a74054e6-ccd3-44fc-8d38-b24f7ef1dacd-flannel-cfg") pod "kube-flannel-ds-ndf9q" (UID: "a74054e6-ccd3-44fc-8d38-b24f7ef1dacd")
            Apr 22 21:23:54 k8s-master kubelet[114611]: I0422 21:23:54.195553 114611 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "run" (UniqueName: "kubernetes.io/host-path/a74054e6-ccd3-44fc-8d38-b24f7ef1dacd-run") pod "kube-flannel-ds-ndf9q" (UID: "a74054e6-ccd3-44fc-8d38-b24f7ef1dacd")
            Apr 22 21:23:54 k8s-master kubelet[114611]: I0422 21:23:54.195599 114611 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-bk422" (UniqueName: "kubernetes.io/secret/5c487e41-4b26-49d8-93a8-9e1b576d1f03-coredns-token-bk422") pod "coredns-66bff467f8-qdnkk" (UID: "5c487e41-4b26-49d8-93a8-9e1b576d1f03")
            Apr 22 21:23:54 k8s-master kubelet[114611]: I0422 21:23:54.195630 114611 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/27893b83-a9b3-41ea-9f0c-8835a8e41457-config-volume") pod "coredns-66bff467f8-kqxhz" (UID: "27893b83-a9b3-41ea-9f0c-8835a8e41457")
            Apr 22 21:23:54 k8s-master kubelet[114611]: I0422 21:23:54.195657 114611 reconciler.go:157] Reconciler: start to sync state
            Apr 22 21:23:54 k8s-master kubelet[114611]: I0422 21:23:54.408891 114611 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 4de37adac6a4359ae923677f526bbb154e880aa8d8e8be4b17553ccca7cb1fa5
            Below are the error logs from the two worker nodes; the errors say the network plugin is not ready (see the notes after these logs). Shouldn't a node be able to share the master's images after joining the cluster? Why is there no image information on my side?
            node01
            -- Logs begin at Thu 2021-04-22 08:02:03 CST. --
            Apr 22 21:32:00 k8s-node01 kubelet[19448]: W0422 21:32:00.297004 19448 cni.go:237] Unable to update cni config: no valid networks found in /etc/cni/net.d
            Apr 22 21:32:00 k8s-node01 kubelet[19448]: E0422 21:32:00.701308 19448 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed pulling image "k8s.gcr.io/pause:3.2": Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
            Apr 22 21:32:00 k8s-node01 kubelet[19448]: E0422 21:32:00.701336 19448 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "kube-proxy-l9qs5_kube-system(9151cac7-082d-422d-9a00-0a8014768937)" failed: rpc error: code = Unknown desc = failed pulling image "k8s.gcr.io/pause:3.2": Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
            Apr 22 21:32:00 k8s-node01 kubelet[19448]: E0422 21:32:00.701349 19448 kuberuntime_manager.go:727] createPodSandbox for pod "kube-proxy-l9qs5_kube-system(9151cac7-082d-422d-9a00-0a8014768937)" failed: rpc error: code = Unknown desc = failed pulling image "k8s.gcr.io/pause:3.2": Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
            Apr 22 21:32:00 k8s-node01 kubelet[19448]: E0422 21:32:00.701384 19448 pod_workers.go:191] Error syncing pod 9151cac7-082d-422d-9a00-0a8014768937 ("kube-proxy-l9qs5_kube-system(9151cac7-082d-422d-9a00-0a8014768937)"), skipping: failed to "CreatePodSandbox" for "kube-proxy-l9qs5_kube-system(9151cac7-082d-422d-9a00-0a8014768937)" with CreatePodSandboxError: "CreatePodSandbox for pod \"kube-proxy-l9qs5_kube-system(9151cac7-082d-422d-9a00-0a8014768937)\" failed: rpc error: code = Unknown desc = failed pulling image \"k8s.gcr.io/pause:3.2\": Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
            Apr 22 21:32:00 k8s-node01 kubelet[19448]: E0422 21:32:00.794467 19448 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
            Apr 22 21:32:02 k8s-node01 kubelet[19448]: E0422 21:32:02.573106 19448 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed pulling image "k8s.gcr.io/pause:3.2": Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
            Apr 22 21:32:02 k8s-node01 kubelet[19448]: E0422 21:32:02.573134 19448 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "kube-flannel-ds-rk4wl_kube-system(8cf5c178-fd6f-4992-bada-30b54513c6c3)" failed: rpc error: code = Unknown desc = failed pulling image "k8s.gcr.io/pause:3.2": Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
            Apr 22 21:32:02 k8s-node01 kubelet[19448]: E0422 21:32:02.573146 19448 kuberuntime_manager.go:727] createPodSandbox for pod "kube-flannel-ds-rk4wl_kube-system(8cf5c178-fd6f-4992-bada-30b54513c6c3)" failed: rpc error: code = Unknown desc = failed pulling image "k8s.gcr.io/pause:3.2": Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
            Apr 22 21:32:02 k8s-node01 kubelet[19448]: E0422 21:32:02.573184 19448 pod_workers.go:191] Error syncing pod 8cf5c178-fd6f-4992-bada-30b54513c6c3 ("kube-flannel-ds-rk4wl_kube-system(8cf5c178-fd6f-4992-bada-30b54513c6c3)"), skipping: failed to "CreatePodSandbox" for "kube-flannel-ds-rk4wl_kube-system(8cf5c178-fd6f-4992-bada-30b54513c6c3)" with CreatePodSandboxError: "CreatePodSandbox for pod \"kube-flannel-ds-rk4wl_kube-system(8cf5c178-fd6f-4992-bada-30b54513c6c3)\" failed: rpc error: code = Unknown desc = failed pulling image \"k8s.gcr.io/pause:3.2\": Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
            Apr 22 21:32:05 k8s-node01 kubelet[19448]: W0422 21:32:05.299931 19448 cni.go:202] Error validating CNI config list {"cniVersion":"","name":"cbr0","plugins":[{"delegate":{"isDefaultGateway":true},"name":"cbr0","type":"flannel"}]}: [plugin flannel does not support config version ""]
            Apr 22 21:32:05 k8s-node01 kubelet[19448]: W0422 21:32:05.299945 19448 cni.go:237] Unable to update cni config: no valid networks found in /etc/cni/net.d
            Apr 22 21:32:05 k8s-node01 kubelet[19448]: E0422 21:32:05.805564 19448 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
            Apr 22 21:32:10 k8s-node01 kubelet[19448]: W0422 21:32:10.302550 19448 cni.go:202] Error validating CNI config list {"cniVersion":"","name":"cbr0","plugins":[{"delegate":{"isDefaultGateway":true},"name":"cbr0","type":"flannel"}]}: [plugin flannel does not support config version ""]
            Apr 22 21:32:10 k8s-node01 kubelet[19448]: W0422 21:32:10.302563 19448 cni.go:237] Unable to update cni config: no valid networks found in /etc/cni/net.d
            Apr 22 21:32:10 k8s-node01 kubelet[19448]: E0422 21:32:10.815240 19448 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
            Apr 22 21:32:14 k8s-node01 kubelet[19448]: E0422 21:32:14.175155 19448 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chain. Deprecated.
            Apr 22 21:32:14 k8s-node01 kubelet[19448]: For verbose messaging see aws.Config.CredentialsChainVerboseErrors
            Apr 22 21:32:15 k8s-node01 kubelet[19448]: W0422 21:32:15.304877 19448 cni.go:202] Error validating CNI config list {"cniVersion":"","name":"cbr0","plugins":[{"delegate":{"isDefaultGateway":true},"name":"cbr0","type":"flannel"}]}: [plugin flannel does not support config version ""]
            Apr 22 21:32:15 k8s-node01 kubelet[19448]: W0422 21:32:15.304891 19448 cni.go:237] Unable to update cni config: no valid networks found in /etc/cni/net.d
            Apr 22 21:32:15 k8s-node01 kubelet[19448]: E0422 21:32:15.826068 19448 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
            Apr 22 21:32:16 k8s-node01 kubelet[19448]: E0422 21:32:16.208728 19448 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chain. Deprecated.
            Apr 22 21:32:16 k8s-node01 kubelet[19448]: For verbose messaging see aws.Config.CredentialsChainVerboseErrors
            Apr 22 21:32:20 k8s-node01 kubelet[19448]: W0422 21:32:20.307716 19448 cni.go:202] Error validating CNI config list {"cniVersion":"","name":"cbr0","plugins":[{"delegate":{"isDefaultGateway":true},"name":"cbr0","type":"flannel"}]}: [plugin flannel does not support config version ""]
            Apr 22 21:32:20 k8s-node01 kubelet[19448]: W0422 21:32:20.307729 19448 cni.go:237] Unable to update cni config: no valid networks found in /etc/cni/net.d
            Apr 22 21:32:20 k8s-node01 kubelet[19448]: E0422 21:32:20.838115 19448 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
            Apr 22 21:32:25 k8s-node01 kubelet[19448]: W0422 21:32:25.310531 19448 cni.go:202] Error validating CNI config list {"cniVersion":"","name":"cbr0","plugins":[{"delegate":{"isDefaultGateway":true},"name":"cbr0","type":"flannel"}]}: [plugin flannel does not support config version ""]
            Apr 22 21:32:25 k8s-node01 kubelet[19448]: W0422 21:32:25.310546 19448 cni.go:237] Unable to update cni config: no valid networks found in /etc/cni/net.d
            Apr 22 21:32:25 k8s-node01 kubelet[19448]: E0422 21:32:25.850370 19448 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
            Apr 22 21:32:29 k8s-node01 kubelet[19448]: E0422 21:32:29.557205 19448 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed pulling image "k8s.gcr.io/pause:3.2": Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
            Apr 22 21:32:29 k8s-node01 kubelet[19448]: E0422 21:32:29.557256 19448 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "kube-proxy-l9qs5_kube-system(9151cac7-082d-422d-9a00-0a8014768937)" failed: rpc error: code = Unknown desc = failed pulling image "k8s.gcr.io/pause:3.2": Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
            Apr 22 21:32:29 k8s-node01 kubelet[19448]: E0422 21:32:29.557272 19448 kuberuntime_manager.go:727] createPodSandbox for pod "kube-proxy-l9qs5_kube-system(9151cac7-082d-422d-9a00-0a8014768937)" failed: rpc error: code = Unknown desc = failed pulling image "k8s.gcr.io/pause:3.2": Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
            Apr 22 21:32:29 k8s-node01 kubelet[19448]: E0422 21:32:29.557310 19448 pod_workers.go:191] Error syncing pod 9151cac7-082d-422d-9a00-0a8014768937 ("kube-proxy-l9qs5_kube-system(9151cac7-082d-422d-9a00-0a8014768937)"), skipping: failed to "CreatePodSandbox" for "kube-proxy-l9qs5_kube-system(9151cac7-082d-422d-9a00-0a8014768937)" with CreatePodSandboxError: "CreatePodSandbox for pod \"kube-proxy-l9qs5_kube-system(9151cac7-082d-422d-9a00-0a8014768937)\" failed: rpc error: code = Unknown desc = failed pulling image \"k8s.gcr.io/pause:3.2\": Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
            Apr 22 21:32:30 k8s-node01 kubelet[19448]: W0422 21:32:30.315943 19448 cni.go:202] Error validating CNI config list {"cniVersion":"","name":"cbr0","plugins":[{"delegate":{"isDefaultGateway":true},"name":"cbr0","type":"flannel"}]}: [plugin flannel does not support config version ""]
            Apr 22 21:32:30 k8s-node01 kubelet[19448]: W0422 21:32:30.315966 19448 cni.go:237] Unable to update cni config: no valid networks found in /etc/cni/net.d
            Apr 22 21:32:30 k8s-node01 kubelet[19448]: E0422 21:32:30.861974 19448 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
            Apr 22 21:32:31 k8s-node01 kubelet[19448]: E0422 21:32:31.588465 19448 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed pulling image "k8s.gcr.io/pause:3.2": Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
            Apr 22 21:32:31 k8s-node01 kubelet[19448]: E0422 21:32:31.588494 19448 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "kube-flannel-ds-rk4wl_kube-system(8cf5c178-fd6f-4992-bada-30b54513c6c3)" failed: rpc error: code = Unknown desc = failed pulling image "k8s.gcr.io/pause:3.2": Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
            Apr 22 21:32:31 k8s-node01 kubelet[19448]: E0422 21:32:31.588507 19448 kuberuntime_manager.go:727] createPodSandbox for pod "kube-flannel-ds-rk4wl_kube-system(8cf5c178-fd6f-4992-bada-30b54513c6c3)" failed: rpc error: code = Unknown desc = failed pulling image "k8s.gcr.io/pause:3.2": Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
            Apr 22 21:32:31 k8s-node01 kubelet[19448]: E0422 21:32:31.588543 19448 pod_workers.go:191] Error syncing pod 8cf5c178-fd6f-4992-bada-30b54513c6c3 ("kube-flannel-ds-rk4wl_kube-system(8cf5c178-fd6f-4992-bada-30b54513c6c3)"), skipping: failed to "CreatePodSandbox" for "kube-flannel-ds-rk4wl_kube-system(8cf5c178-fd6f-4992-bada-30b54513c6c3)" with CreatePodSandboxError: "CreatePodSandbox for pod \"kube-flannel-ds-rk4wl_kube-system(8cf5c178-fd6f-4992-bada-30b54513c6c3)\" failed: rpc error: code = Unknown desc = failed pulling image \"k8s.gcr.io/pause:3.2\": Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
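            Worker nodes do not share the master's local image cache; each node's Docker daemon pulls images on its own, so the failing pull of k8s.gcr.io/pause:3.2 has to be fixed on every node. A possible workaround, assuming the nodes cannot reach k8s.gcr.io directly and that the Aliyun mirror carries this tag:

            # pull the pause image from a mirror registry and retag it so the kubelet finds it locally
            docker pull registry.aliyuncs.com/google_containers/pause:3.2
            docker tag registry.aliyuncs.com/google_containers/pause:3.2 k8s.gcr.io/pause:3.2

            For the "plugin flannel does not support config version" warning, adding an explicit "cniVersion" (for example "0.3.1") to the flannel conflist under /etc/cni/net.d usually clears it; the warning itself is harmless once the pause image can be pulled and the flannel pod starts.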

            9 months later

            [root@k8s-node4 ~]# journalctl -u kubelet -f

            -- Logs begin at Thu 2022-01-27 07:52:57 CST. --
            Jan 27 21:46:12 k8s-node4 kubelet[40297]: E0127 21:46:12.300188 40297 kubelet.go:2291] "Error getting node" err="node \"k8s-node4\" not found"
            Jan 27 21:46:12 k8s-node4 kubelet[40297]: E0127 21:46:12.401027 40297 kubelet.go:2291] "Error getting node" err="node \"k8s-node4\" not found"
            Jan 27 21:46:12 k8s-node4 kubelet[40297]: E0127 21:46:12.501119 40297 kubelet.go:2291] "Error getting node" err="node \"k8s-node4\" not found"
            Jan 27 21:46:12 k8s-node4 kubelet[40297]: E0127 21:46:12.602746 40297 kubelet.go:2291] "Error getting node" err="node \"k8s-node4\" not found"
            Jan 27 21:46:12 k8s-node4 kubelet[40297]: E0127 21:46:12.703370 40297 kubelet.go:2291] "Error getting node" err="node \"k8s-node4\" not found"
            Jan 27 21:46:12 k8s-node4 kubelet[40297]: E0127 21:46:12.804184 40297 kubelet.go:2291] "Error getting node" err="node \"k8s-node4\" not found"
            Jan 27 21:46:12 k8s-node4 kubelet[40297]: E0127 21:46:12.904746 40297 kubelet.go:2291] "Error getting node" err="node \"k8s-node4\" not found"
            Jan 27 21:46:13 k8s-node4 kubelet[40297]: E0127 21:46:13.005405 40297 kubelet.go:2291] "Error getting node" err="node \"k8s-node4\" not found"
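            "Error getting node ... not found" usually means the kubelet is running but no Node object named k8s-node4 exists in the cluster, either because registration with the apiserver is failing or because the name the kubelet reports does not match the registered one. A few checks, as a sketch (paths assume a kubeadm install):

            # on a working control-plane node: is k8s-node4 registered at all?
            kubectl get nodes -o wide
            # on k8s-node4: the hostname the kubelet will register with
            hostname
            # the apiserver endpoint the kubelet is configured to use, then test it from the node
            grep server /etc/kubernetes/kubelet.conf
            curl -k https://<server-address-from-kubelet.conf>/healthz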

              2 years later

              clh-cod I ran into the same problem, with identical logs. Did you ever find out what exactly was causing it?