• Installation & Deployment
  • whizard-telemetry-apiserver abnormal after deploying KubeSphere 4.1.1

When creating a deployment issue, please follow the template below. The more information you provide, the easier it is to get a timely answer. Administrators may close issues that do not follow the template.
Make sure the post is clear and readable, and format code with markdown code block syntax.
You cannot expect someone to spend half an hour answering a question you spent only one minute writing.

Operating system information
Physical machine, Kylin V10, 64 cores, 128 GB RAM

Kubernetes version information
1.28.8

Container runtime
containerd 1.7.16

KubeSphere version information
4.1.1, offline (air-gapped) deployment; Kubernetes 1.28.8 was deployed separately.

What is the problem

The whizard-telemetry-apiserver-678b94d976-z4fgz container keeps entering an abnormal state. After redeploying it runs normally, but after a while it becomes abnormal again. I cannot figure out why it can run fine and then fail on its own.

Sometimes it simply never comes up at all:

```
Error: failed to init kubesphere client: failed to read token file "/var/run/secrets/kubesphere.io/serviceaccount/token": open /var/run/secrets/kubesphere.io/serviceaccount/token: no such file or directory

2024/12/11 11:02:02 failed to init kubesphere client: failed to read token file "/var/run/secrets/kubesphere.io/serviceaccount/token": open /var/run/secrets/kubesphere.io/serviceaccount/token: no such file or directory
```
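A quick way to confirm whether the pod spec declares the KubeSphere token volume at all (a sketch using the pod name above; the jsonpath queries are illustrative, not from the original thread):

```
# Sketch: list the volumes and mount paths of the failing pod to see
# whether anything provides /var/run/secrets/kubesphere.io/serviceaccount.
kubectl get pod -n extension-whizard-telemetry \
  whizard-telemetry-apiserver-678b94d976-z4fgz \
  -o jsonpath='{.spec.volumes[*].name}{"\n"}'

kubectl get pod -n extension-whizard-telemetry \
  whizard-telemetry-apiserver-678b94d976-z4fgz \
  -o jsonpath='{range .spec.containers[0].volumeMounts[*]}{.mountPath}{"\n"}{end}'
```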


Below is the runtime status and describe output:

```
[root@master1 ~]# kubectl get pod -n extension-whizard-telemetry -o wide
NAME                                           READY   STATUS             RESTARTS         AGE   IP              NODE      NOMINATED NODE   READINESS GATES
whizard-telemetry-apiserver-678b94d976-z4fgz   0/1     CrashLoopBackOff   10 (2m23s ago)   28m   10.244.180.12   master2   <none>           <none>

[root@master1 ~]# kubectl describe pod -n extension-whizard-telemetry whizard-telemetry-apiserver-678b94d976-z4fgz
Name:             whizard-telemetry-apiserver-678b94d976-z4fgz
Namespace:        extension-whizard-telemetry
Priority:         0
Service Account:  whizard-telemetry
Node:             master2/192.168.160.2
Start Time:       Wed, 11 Dec 2024 10:40:55 +0800
Labels:           app=whizard-telemetry-apiserver
                  app.kubernetes.io/instance=whizard-telemetry
                  app.kubernetes.io/name=whizard-telemetry
                  pod-template-hash=678b94d976
                  tier=backend
Annotations:      cni.projectcalico.org/containerID: 5c525371ee30de0c237f1c0ee28ad59b8a7e632bd97f0346d9adf6fd7b63695c
                  cni.projectcalico.org/podIP: 10.244.180.12/32
                  cni.projectcalico.org/podIPs: 10.244.180.12/32
                  kubesphere.io/restartedAt: 2024-12-10T03:25:48.324Z
                  kubesphere.io/serviceaccount-name: whizard-telemetry
Status:           Running
IP:               10.244.180.12
IPs:
  IP:  10.244.180.12
Controlled By:  ReplicaSet/whizard-telemetry-apiserver-678b94d976
Containers:
  whizard-telemetry:
    Container ID:  containerd://b7a5bd576fb323639260cd0fd329829ff5e8bb4a71f0484de7d7f7898ba9cc66
    Image:         docker.io/kubesphere/whizard-telemetry-apiserver:v1.2.2
    Image ID:      sha256:0366ef744c3f850fd60abe5ac081608978a47b0c5d08128203200d65cc071081
    Port:          9090/TCP
    Host Port:     0/TCP
    Command:
      apiserver
      --logtostderr=true
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Wed, 11 Dec 2024 11:07:03 +0800
      Finished:     Wed, 11 Dec 2024 11:07:03 +0800
    Ready:          False
    Restart Count:  10
    Limits:
      cpu:     1
      memory:  1Gi
    Requests:
      cpu:        20m
      memory:     50Mi
    Environment:  <none>
    Mounts:
      /etc/localtime from host-time (ro)
      /etc/whizard-telemetry/ from whizard-telemetry-config (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-spnpg (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  whizard-telemetry-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      whizard-telemetry-config
    Optional:  false
  host-time:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/localtime
    HostPathType:
  kube-api-access-spnpg:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 1s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 1s
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  28m                    default-scheduler  Successfully assigned extension-whizard-telemetry/whizard-telemetry-apiserver-678b94d976-z4fgz to master2
  Normal   Pulled     27m (x5 over 28m)      kubelet            Container image "docker.io/kubesphere/whizard-telemetry-apiserver:v1.2.2" already present on machine
  Normal   Created    27m (x5 over 28m)      kubelet            Created container whizard-telemetry
  Normal   Started    27m (x5 over 28m)      kubelet            Started container whizard-telemetry
  Warning  BackOff    3m31s (x118 over 28m)  kubelet            Back-off restarting failed container whizard-telemetry in pod whizard-telemetry-apiserver-678b94d976-z4fgz_extension-whizard-telemetry(c743f99a-0a48-4660-8e13-36b86d83ff7e)

[root@master1 ~]# kubectl logs -n extension-whizard-telemetry whizard-telemetry-apiserver-678b94d976-z4fgz
Error: failed to init kubesphere client: failed to read token file "/var/run/secrets/kubesphere.io/serviceaccount/token": open /var/run/secrets/kubesphere.io/serviceaccount/token: no such file or directory
2024/12/11 11:07:03 failed to init kubesphere client: failed to read token file "/var/run/secrets/kubesphere.io/serviceaccount/token": open /var/run/secrets/kubesphere.io/serviceaccount/token: no such file or directory
[root@master1 ~]#
```

Logs after redeployment

After redeploying, it runs normally again, but after a while the problem comes back.

Container logs:

```
2024-12-11T11:10:10.279795740+08:00 I1211 11:10:10.279617 1 apiserver.go:89] Register /kapis/monitoring.kubesphere.io/v1beta1
2024-12-11T11:10:10.279848700+08:00 I1211 11:10:10.279710 1 apiserver.go:91] GET /kapis/monitoring.kubesphere.io/v1beta1/cluster_metrics
2024-12-11T11:10:10.279858640+08:00 I1211 11:10:10.279719 1 apiserver.go:91] GET /kapis/monitoring.kubesphere.io/v1beta1/node_metrics
2024-12-11T11:10:10.279864280+08:00 I1211 11:10:10.279725 1 apiserver.go:91] GET /kapis/monitoring.kubesphere.io/v1beta1/workspace_metrics
2024-12-11T11:10:10.279869680+08:00 I1211 11:10:10.279732 1 apiserver.go:91] GET /kapis/monitoring.kubesphere.io/v1beta1/namespace_metrics
2024-12-11T11:10:10.279875080+08:00 I1211 11:10:10.279739 1 apiserver.go:91] GET /kapis/monitoring.kubesphere.io/v1beta1/workload_metrics
2024-12-11T11:10:10.279880420+08:00 I1211 11:10:10.279745 1 apiserver.go:91] GET /kapis/monitoring.kubesphere.io/v1beta1/pod_metrics
2024-12-11T11:10:10.279903880+08:00 I1211 11:10:10.279751 1 apiserver.go:91] GET /kapis/monitoring.kubesphere.io/v1beta1/container_metrics
2024-12-11T11:10:10.279911460+08:00 I1211 11:10:10.279757 1 apiserver.go:91] GET /kapis/monitoring.kubesphere.io/v1beta1/persistentvolumeclaim_metrics
2024-12-11T11:10:10.279916280+08:00 I1211 11:10:10.279764 1 apiserver.go:91] GET /kapis/monitoring.kubesphere.io/v1beta1/component_metrics
2024-12-11T11:10:10.279921280+08:00 I1211 11:10:10.279769 1 apiserver.go:91] GET /kapis/monitoring.kubesphere.io/v1beta1/targets/query
2024-12-11T11:10:10.279926240+08:00 I1211 11:10:10.279776 1 apiserver.go:91] GET /kapis/monitoring.kubesphere.io/v1beta1/targets/metadata
2024-12-11T11:10:10.279930900+08:00 I1211 11:10:10.279783 1 apiserver.go:91] GET /kapis/monitoring.kubesphere.io/v1beta1/targets/labelvalues
2024-12-11T11:10:10.279935460+08:00 I1211 11:10:10.279789 1 apiserver.go:91] GET /kapis/monitoring.kubesphere.io/v1beta1/targets/labelsets
2024-12-11T11:10:10.279940060+08:00 I1211 11:10:10.279795 1 apiserver.go:89] Register /kapis/logging.kubesphere.io/v1alpha2
2024-12-11T11:10:10.279944600+08:00 I1211 11:10:10.279802 1 apiserver.go:91] GET /kapis/logging.kubesphere.io/v1alpha2/logs
2024-12-11T11:10:10.279949200+08:00 I1211 11:10:10.279812 1 apiserver.go:91] GET /kapis/logging.kubesphere.io/v1alpha2/events
2024-12-11T11:10:10.279988960+08:00 I1211 11:10:10.279819 1 apiserver.go:89] Register /kapis/notification.kubesphere.io/v2beta2
2024-12-11T11:10:10.279999540+08:00 I1211 11:10:10.279825 1 apiserver.go:91] POST /kapis/notification.kubesphere.io/v2beta2/verification
2024-12-11T11:10:10.280005960+08:00 I1211 11:10:10.279834 1 apiserver.go:91] POST /kapis/notification.kubesphere.io/v2beta2/users/{user}/verification
2024-12-11T11:10:10.280011000+08:00 I1211 11:10:10.279843 1 apiserver.go:91] GET /kapis/notification.kubesphere.io/v2beta2/{resources}
2024-12-11T11:10:10.280015880+08:00 I1211 11:10:10.279850 1 apiserver.go:91] GET /kapis/notification.kubesphere.io/v2beta2/{resources}/{name}
2024-12-11T11:10:10.280021020+08:00 I1211 11:10:10.279855 1 apiserver.go:91] POST /kapis/notification.kubesphere.io/v2beta2/{resources}
2024-12-11T11:10:10.280026380+08:00 I1211 11:10:10.279861 1 apiserver.go:91] PUT /kapis/notification.kubesphere.io/v2beta2/{resources}/{name}
2024-12-11T11:10:10.280031380+08:00 I1211 11:10:10.279868 1 apiserver.go:91] PATCH /kapis/notification.kubesphere.io/v2beta2/{resources}/{name}
2024-12-11T11:10:10.280036320+08:00 I1211 11:10:10.279874 1 apiserver.go:91] DELETE /kapis/notification.kubesphere.io/v2beta2/{resources}/{name}
2024-12-11T11:10:10.280041600+08:00 I1211 11:10:10.279879 1 apiserver.go:91] GET /kapis/notification.kubesphere.io/v2beta2/users/{user}/{resources}
2024-12-11T11:10:10.280075040+08:00 I1211 11:10:10.279886 1 apiserver.go:91] GET /kapis/notification.kubesphere.io/v2beta2/users/{user}/{resources}/{name}
2024-12-11T11:10:10.280085680+08:00 I1211 11:10:10.279893 1 apiserver.go:91] POST /kapis/notification.kubesphere.io/v2beta2/users/{user}/{resources}
2024-12-11T11:10:10.280091860+08:00 I1211 11:10:10.279902 1 apiserver.go:91] PUT /kapis/notification.kubesphere.io/v2beta2/users/{user}/{resources}/{name}
2024-12-11T11:10:10.280097220+08:00 I1211 11:10:10.279908 1 apiserver.go:91] PATCH /kapis/notification.kubesphere.io/v2beta2/users/{user}/{resources}/{name}
2024-12-11T11:10:10.280102560+08:00 I1211 11:10:10.279915 1 apiserver.go:91] DELETE /kapis/notification.kubesphere.io/v2beta2/users/{user}/{resources}/{name}
2024-12-11T11:10:10.280142680+08:00 I1211 11:10:10.279953 1 apiserver.go:140] Start listening on :9090
```

    frezes

    Is there any component in your environment that restricts KubeSphere token injection? The error in the log is that the token file cannot be found; this token is created and mounted by KubeSphere. You can exec into the container, or use docker inspect, to check whether the corresponding directory is actually mounted in the container.

    If the problem was caused by accidentally deleting resources, try reinstalling the extension.
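A minimal sketch of those two checks, assuming the pod and container IDs shown earlier; since the runtime here is containerd rather than Docker, crictl stands in for docker inspect:

```
# 1) Exec into the running container and check the expected token path
#    (only possible while the container is up):
kubectl exec -n extension-whizard-telemetry \
  whizard-telemetry-apiserver-678b94d976-z4fgz -- \
  ls -l /var/run/secrets/kubesphere.io/serviceaccount/

# 2) On the node running the pod (master2), inspect the container's mounts;
#    the container ID comes from the describe output above:
crictl inspect b7a5bd576fb323639260cd0fd329829ff5e8bb4a71f0484de7d7f7898ba9cc66 \
  | grep -A 3 'kubesphere.io'
```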

      6 months later

      frezes Hi, I only just saw this message. All the components were installed from the KubeSphere 4.1.1 extension center, so there should be no additional components.

      The problem still occurs after reinstalling, which is strange.

      It runs normally for a while, and then the problem appears again.

      Is there a source code project for kubesphere/whizard-telemetry-apiserver? docker inspect does not show all the information, and I wanted to try rebuilding the image, but I could not find the code project. Is it available?
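As an aside on the inspect limitation: against a containerd runtime, the image config (entrypoint, env, layers) can be dumped without the source, e.g. (a sketch):

```
# Sketch: dump the full image config as JSON via the CRI runtime.
crictl inspecti docker.io/kubesphere/whizard-telemetry-apiserver:v1.2.2
```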