After upgrading KubeSphere to v2.1, one of the cluster's nodes goes down every day. While investigating, I found that the apiserver started having problems after the upgrade.

# kubectl describe pod kube-apiserver-master0 -n kube-system
..............................................................
QoS Class:         Burstable
Node-Selectors:    <none>
Tolerations:       :NoExecute
Events:
  Type     Reason     Age                   From              Message
  ----     ------     ----                  ----              -------
  Warning  Unhealthy  14m (x409 over 6d4h)  kubelet, master0  Liveness probe failed: HTTP probe failed with statuscode: 500

In the end, both master0 and master1 show the same event: "Liveness probe failed: HTTP probe failed with statuscode: 500".

Could anyone help analyze the cause? Thanks.
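
To narrow down which health check is returning the 500, one option (a diagnostic sketch, not something from the thread itself) is to ask the apiserver for a verbose health report, which lists every sub-check individually:

# kubectl get --raw='/healthz?verbose'

If the `etcd` line is the one failing, the problem sits between the apiserver and etcd rather than in the apiserver itself.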

    1735802356

    The log line highlighted in red on master0 is as follows:

    Oct 28 15:03:34 master0 kubelet: I1028 15:03:34.913340    1987 kubelet_getters.go:172] status for pod kube-scheduler-master0 updated to {Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-19 23:55:37 +0800 CST  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-10-28 10:40:03 +0800 CST  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-10-28 10:40:03 +0800 CST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-19 23:55:37 +0800 CST  }]    192.168.3.2 192.168.3.2 [{192.168.3.2}] 2020-10-19 23:55:37 +0800 CST [] [{kube-scheduler {nil &ContainerStateRunning{StartedAt:2020-10-28 10:40:02 +0800 CST,} nil} {nil nil &ContainerStateTerminated{ExitCode:255,Signal:0,Reason:Error,Message:,StartedAt:2020-10-28 06:41:31 +0800 CST,FinishedAt:2020-10-28 10:39:58 +0800 CST,ContainerID:docker://c05cb0a99b3919f6ced1d5a99b6cbf2a064f004543154d659cdbed75b22191ea,}} true 37 kubesphere/hyperkube:v1.16.7 docker-pullable://kubesphere/hyperkube@sha256:b4285fd78d62c5bc9ef28dac4a88b2914727ddc8c82a32003d6a2ef2dd0caf3c docker://7e381593fd35dcb730d068e70264ef801a192338949a93f22b67b5e5a7643260 0xc002f6b250}] Burstable []}

    The error log from the node that went down is as follows:

    Oct 28 06:41:32 node2 kubelet: E1028 06:41:32.473548    1933 kubelet_node_status.go:388] Error updating node status, will retry: failed to patch status "{\"status\":{\"$setElementOrder/conditions\":[{\"type\":\"NetworkUnavailable\"},{\"type\":\"MemoryPressure\"},{\"type\":\"DiskPressure\"},{\"type\":\"PIDPressure\"},{\"type\":\"Ready\"}],\"conditions\":[{\"lastHeartbeatTime\":\"2020-10-27T22:41:22Z\",\"type\":\"MemoryPressure\"},{\"lastHeartbeatTime\":\"2020-10-27T22:41:22Z\",\"type\":\"DiskPressure\"},{\"lastHeartbeatTime\":\"2020-10-27T22:41:22Z\",\"type\":\"PIDPressure\"},{\"lastHeartbeatTime\":\"2020-10-27T22:41:22Z\",\"type\":\"Ready\"}]}}" for node "node2": Patch https://192.168.3.5:6443/api/v1/nodes/node2/status?timeout=10s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
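
    The patch above is timing out against https://192.168.3.5:6443. A quick check (a sketch; run it from the failing node) is to time a plain health request against that same endpoint, to separate a network problem from a slow apiserver:

    # time curl -k https://192.168.3.5:6443/healthz

    If this regularly takes close to the kubelet's 10s client timeout, node status updates will keep failing, the node will flap to NotReady, and its workloads will be evicted, which would match a node "dying" every day.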

    1735802356 Could you show the pod's logs?
    The pod logs are as follows:

    # kubectl logs -f kube-apiserver-master0 -n kube-system
    I1028 07:43:48.849155       1 trace.go:116] Trace[302023365]: "List" url:/apis/batch/v1/jobs (started: 2020-10-28 07:43:44.911400628 +0000 UTC m=+542894.581290892) (total time: 3.937553399s):
    Trace[302023365]: [3.936264064s] [3.936203707s] Listing from storage done
    I1028 07:43:48.850327       1 trace.go:116] Trace[465987386]: "Get" url:/api/v1/namespaces/kubesphere-controls-system/configmaps/kubesphere-router-crm-yunwei-nginx (started: 2020-10-28 07:43:43.125638704 +0000 UTC m=+542892.795528996) (total time: 5.724655588s):
    Trace[465987386]: [5.724607277s] [5.724589848s] About to write a response
    I1028 07:43:48.850517       1 trace.go:116] Trace[1140572667]: "List etcd3" key:/minions,resourceVersion:,limit:0,continue: (started: 2020-10-28 07:43:47.057457351 +0000 UTC m=+542896.727347665) (total time: 1.793043804s):
    Trace[1140572667]: [1.793043804s] [1.793043804s] END
    I1028 07:43:48.850606       1 trace.go:116] Trace[1305675272]: "Get" url:/api/v1/namespaces/kube-system/endpoints/kube-controller-manager (started: 2020-10-28 07:43:44.081697489 +0000 UTC m=+542893.751587792) (total time: 4.76888305s):
    Trace[1305675272]: [4.76883691s] [4.768769385s] About to write a response
    I1028 07:43:48.850842       1 trace.go:116] Trace[965883403]: "Get" url:/api/v1/namespaces/kubesphere-controls-system/configmaps/kubesphere-router-mts-pro-nginx (started: 2020-10-28 07:43:45.850958474 +0000 UTC m=+542895.520848786) (total time: 2.99986112s):
    Trace[965883403]: [2.999824403s] [2.999807938s] About to write a response
    I1028 07:43:48.850853       1 trace.go:116] Trace[654818441]: "List" url:/api/v1/nodes (started: 2020-10-28 07:43:47.057427781 +0000 UTC m=+542896.727318126) (total time: 1.793412667s):
    Trace[654818441]: [1.793096473s] [1.793076074s] Listing from storage done
    I1028 07:44:29.075418       1 trace.go:116] Trace[202883044]: "Get" url:/api/v1/namespaces/kube-system/pods/kube-apiserver-master0/log (started: 2020-10-28 07:44:27.585915684 +0000 UTC m=+542937.255805976) (total time: 1.489478043s):
    Trace[202883044]: [1.489477167s] [1.485480804s] Transformed response object
    I1028 07:44:32.918692       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
    I1028 07:45:32.920304       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
    I1028 07:45:51.537195       1 trace.go:116] Trace[382828506]: "Get" url:/api/v1/namespaces/kube-system/pods/kube-apiserver-master0/log (started: 2020-10-28 07:44:38.608638804 +0000 UTC m=+542948.278529065) (total time: 1m12.928505347s):
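
    These traces (multi-second List/Get calls, and one request taking over a minute) suggest the apiserver is spending its time waiting on etcd. One way to confirm (a sketch; metric names vary between Kubernetes versions) is to look at the apiserver's own etcd latency histograms:

    # kubectl get --raw /metrics | grep etcd_request_duration_seconds

    Large counts in the multi-second buckets would point at a slow etcd backend rather than at the apiserver itself.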

    1735802356
    No.
    This is a problem that appeared after upgrading to KubeSphere v2.1. I also went in and checked after the upgrade: firewalld has stayed disabled the whole time.
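    A trivial way to re-verify that on each node (a sketch):

    # systemctl is-active firewalld

    It should print "inactive" on every master and worker.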

    I found the following line in the pod's logs:
    E1028 02:40:19.333201 1 status.go:71] apiserver received an error that is not an metav1.Status: rpctypes.EtcdError{code:0xe, desc:"etcdserver: request timed out"}

    I don't know whether this is a bug in etcdserver.
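
    "etcdserver: request timed out" usually means etcd is overloaded or starved for disk I/O rather than hitting a bug. One way to check etcd directly (a sketch; the endpoint and certificate paths below are assumptions based on a typical KubeSphere/kubeadm layout and must be adjusted to your installation):

    # export ETCDCTL_API=3
    # etcdctl --endpoints=https://127.0.0.1:2379 \
        --cacert=/etc/ssl/etcd/ssl/ca.pem \
        --cert=/etc/ssl/etcd/ssl/admin-master0.pem \
        --key=/etc/ssl/etcd/ssl/admin-master0-key.pem \
        endpoint status -w table
    # etcdctl --endpoints=https://127.0.0.1:2379 \
        --cacert=/etc/ssl/etcd/ssl/ca.pem \
        --cert=/etc/ssl/etcd/ssl/admin-master0.pem \
        --key=/etc/ssl/etcd/ssl/admin-master0-key.pem \
        endpoint health

    `endpoint status` shows the DB size and current leader; `endpoint health` reports each member's round-trip time. Round-trips in the hundreds of milliseconds, or outright timeouts, point at a slow disk under etcd (the `etcd_disk_wal_fsync_duration_seconds` metric tells the same story if you scrape etcd's metrics).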