When creating a deployment issue, please follow the template below. The more information you provide, the easier it is to get a timely answer. Moderators may close issues that do not follow the template.
Make sure the post is clearly formatted and readable; use markdown code block syntax to format code blocks.
If you spend only one minute writing the question, you cannot expect others to spend half an hour answering it.
Operating system
Physical machines (the original 1 master + 2 nodes) plus virtual machines, CentOS 7.9
Kubernetes version
Paste the output of `kubectl version` below.
```
[19:37]:[shutang@dna028:~]$ kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.20", GitCommit:"1f3e19b7beb1cc0110255668c4238ed63dadb7ad", GitTreeState:"clean", BuildDate:"2021-06-16T12:58:51Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.20", GitCommit:"1f3e19b7beb1cc0110255668c4238ed63dadb7ad", GitTreeState:"clean", BuildDate:"2021-06-16T12:51:17Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
```
Container runtime
Paste the output of `docker version` / `crictl version` / `nerdctl version` below.
```
[19:52]:[shutang@dna028:~]$ docker version
Client: Docker Engine - Community
 Version:           19.03.10
 API version:       1.40
 Go version:        go1.13.10
 Git commit:        9424aeaee9
 Built:             Thu May 28 22:18:06 2020
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.10
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.13.10
  Git commit:       9424aeaee9
  Built:            Thu May 28 22:16:43 2020
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.2.6
  GitCommit:        894b81a4b802e4eb2a91d1ce216b8817763c29fb
 runc:
  Version:          1.0.0-rc8
  GitCommit:        425e105d5a03fabd737a126ad93d62a9eeede87f
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683
```
KubeSphere version
v3.1.1, installed offline on an existing Kubernetes cluster.
What is the problem
What is the error log? Screenshots are best.
```
[19:36]:[shutang@dna028:~]$ kubectl get all -n kubesphere-system
NAME                                    READY   STATUS     RESTARTS   AGE
pod/ks-installer-7568684bbc-4hjq8       1/1     Running    0          2d21h
pod/redis-ha-haproxy-54dc5bcd44-bk4r8   1/1     Running    0          2d21h
pod/redis-ha-haproxy-54dc5bcd44-hppqd   1/1     Running    0          2d21h
pod/redis-ha-haproxy-54dc5bcd44-jc7ff   1/1     Running    0          2d21h
pod/redis-ha-server-0                   2/2     Running    0          2d21h
pod/redis-ha-server-1                   0/2     Init:0/1   0          2d21h

NAME                            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)              AGE
service/ks-apiserver            ClusterIP   10.97.10.46      <none>        80/TCP               2d21h
service/ks-console              NodePort    10.104.110.244   <none>        80:30880/TCP         2d21h
service/ks-controller-manager   ClusterIP   10.105.101.9     <none>        443/TCP              2d21h
service/redis                   ClusterIP   10.98.68.249     <none>        6379/TCP             2d21h
service/redis-ha                ClusterIP   None             <none>        6379/TCP,26379/TCP   2d21h
service/redis-ha-announce-0     ClusterIP   10.104.67.72     <none>        6379/TCP,26379/TCP   2d21h
service/redis-ha-announce-1     ClusterIP   10.111.249.229   <none>        6379/TCP,26379/TCP   2d21h
service/redis-ha-announce-2     ClusterIP   10.103.238.218   <none>        6379/TCP,26379/TCP   2d21h

NAME                                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/ks-apiserver            0/3     0            0           2d21h
deployment.apps/ks-console              0/3     0            0           2d21h
deployment.apps/ks-controller-manager   0/3     0            0           2d21h
deployment.apps/ks-installer            1/1     1            1           2d21h
deployment.apps/redis-ha-haproxy        3/3     3            3           2d21h

NAME                                               DESIRED   CURRENT   READY   AGE
replicaset.apps/ks-apiserver-78f4f9dd87            1         0         0       2d21h
replicaset.apps/ks-apiserver-95694464f             1         0         0       2d21h
replicaset.apps/ks-apiserver-cff475669             1         0         0       2d21h
replicaset.apps/ks-console-7494896c94              3         0         0       2d21h
replicaset.apps/ks-controller-manager-6c5d8dc595   1         0         0       2d21h
replicaset.apps/ks-controller-manager-6d5fd4d4c6   1         0         0       2d21h
replicaset.apps/ks-controller-manager-79976bf678   1         0         0       2d21h
replicaset.apps/ks-installer-7568684bbc            1         1         1       2d21h
replicaset.apps/redis-ha-haproxy-54dc5bcd44        3         3         3       2d21h

NAME                               READY   AGE
statefulset.apps/redis-ha-server   1/3     2d21h
```
```
[19:36]:[shutang@dna028:~]$ kubectl describe pod/redis-ha-server-1 -n kubesphere-system
Name:           redis-ha-server-1
Namespace:      kubesphere-system
Priority:       0
Node:           lona/192.17.100.2
Start Time:     Sat, 18 Jun 2022 21:40:11 -0700
Labels:         app=redis-ha
                controller-revision-hash=redis-ha-server-7479df6d4b
                release=ks-redis
                statefulset.kubernetes.io/pod-name=redis-ha-server-1
Annotations:    checksum/init-config: 3536da7869ab409971a5b4fb4f3624e40b5484be59cba5ce0e741d5f28752cf7
Status:         Pending
IP:
IPs:            <none>
Controlled By:  StatefulSet/redis-ha-server
Init Containers:
  config-init:
    Container ID:
    Image:         redis:5.0.12-alpine
    Image ID:
    Port:          <none>
    Host Port:     <none>
    Command:
      sh
    Args:
      /readonly-config/init.sh
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Environment:
      SENTINEL_ID_0:  76570abc73c20d3c0e6c21105777ed9b0898cb75
      SENTINEL_ID_1:  0c5b5dae5039679890d11c4d6b6fb66a08625c08
      SENTINEL_ID_2:  0b174d8f2a622ce4e7f303c67ce788c35729251d
    Mounts:
      /data from data (rw)
      /readonly-config from config (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from redis-ha-token-7nw5m (ro)
Containers:
  redis:
    Container ID:
    Image:         redis:5.0.12-alpine
    Image ID:
    Port:          6379/TCP
    Host Port:     0/TCP
    Command:
      redis-server
    Args:
      /data/conf/redis.conf
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Liveness:       tcp-socket :6379 delay=15s timeout=1s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /data from data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from redis-ha-token-7nw5m (ro)
  sentinel:
    Container ID:
    Image:         redis:5.0.12-alpine
    Image ID:
    Port:          26379/TCP
    Host Port:     0/TCP
    Command:
      redis-sentinel
    Args:
      /data/conf/sentinel.conf
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Liveness:       tcp-socket :26379 delay=15s timeout=1s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /data from data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from redis-ha-token-7nw5m (ro)
Conditions:
  Type              Status
  Initialized       False
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  data-redis-ha-server-1
    ReadOnly:   false
  config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      redis-ha-configmap
    Optional:  false
  redis-ha-token-7nw5m:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  redis-ha-token-7nw5m
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     CriticalAddonsOnly
                 node-role.kubernetes.io/master:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason       Age                       From     Message
  ----     ------       ----                      ----     -------
  Warning  FailedMount  27m (x193 over 2d21h)     kubelet  Unable to attach or mount volumes: unmounted volumes=[data], unattached volumes=[data redis-ha-token-7nw5m config]: timed out waiting for the condition
  Warning  FailedMount  6m42s (x2601 over 2d21h)  kubelet  (combined from similar events): MountVolume.SetUp failed for volume "pvc-59c2f2b1-883d-464a-ba3f-08c40b8f1359" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/d173f123-608d-47c7-b315-43e0316a07a7/volumes/kubernetes.io~nfs/pvc-59c2f2b1-883d-464a-ba3f-08c40b8f1359 --scope -- mount -t nfs 192.168.10.100/data03/nfs/data/kubesphere-system-data-redis-ha-server-1-pvc-59c2f2b1-883d-464a-ba3f-08c40b8f1359 /var/lib/kubelet/pods/d173f123-608d-47c7-b315-43e0316a07a7/volumes/kubernetes.io~nfs/pvc-59c2f2b1-883d-464a-ba3f-08c40b8f1359
Output: Running scope as unit run-107933.scope.
mount: wrong fs type, bad option, bad superblock on 192.168.10.100:/data03/nfs/data/kubesphere-system-data-redis-ha-server-1-pvc-59c2f2b1-883d-464a-ba3f-08c40b8f1359,
       missing codepage or helper program, or other error
       (for several filesystems (e.g. nfs, cifs) you might
       need a /sbin/mount.<type> helper program)
       In some cases useful info is found in syslog - try
       dmesg | tail or so.
  Warning  FailedMount  2m1s (x897 over 2d21h)    kubelet  Unable to attach or mount volumes: unmounted volumes=[data], unattached volumes=[config data redis-ha-token-7nw5m]: timed out waiting for the condition
```
My cluster is now 3 masters and 2 nodes. It was originally deployed with kubeadm as 1 master + 2 nodes, with NFS for storage and Calico for networking; the filesystem on those three machines is xfs, and KubeSphere ran normally at first. Later my boss brought in two more machines, which are on a different subnet from the earlier ones. After I joined them to the cluster, the cluster still showed as healthy, but after the machines were rebooted, KubeSphere seems to have been redeployed in cluster (HA) form, bringing up the Redis high-availability components. One of the Redis pods starts normally on the original master, but the other two Redis pods fail to start on the two newly added nodes; their pod description shows the output above.

I checked and found that the filesystem of the / directory on the two newly added machines is btrfs rather than xfs, and I don't know whether that is related. I originally wanted to ask the company's network admins to change the filesystem, but they now tell me it cannot be changed. Since I'm not sure whether the / filesystem type is really the cause, I'd appreciate help from the experts in pinning down the root cause. Thanks 🙏
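One thing I can check myself, based on the "you might need a /sbin/mount.&lt;type&gt; helper program" hint in the error above: whether the NFS client helper (`mount.nfs`, from the `nfs-utils` package on CentOS) is actually installed on the two new nodes. This is only my own guess at the cause, not a confirmed diagnosis; as far as I understand, mounting an NFS export does not depend on the node's local root filesystem type, so recording the btrfs/xfs detail is just extra context. A minimal check script to run on each new node:

```shell
#!/bin/sh
# Run on each newly added node. Assumption: CentOS/RHEL-family hosts,
# matching the original three machines on CentOS 7.9.

# 1. Is the NFS mount helper present? "wrong fs type ... missing codepage
#    or helper program" is the classic symptom of a missing mount.nfs.
if command -v mount.nfs >/dev/null 2>&1; then
  nfs_client="present"
else
  nfs_client="missing"   # on CentOS this would be fixed by: sudo yum install -y nfs-utils
fi
echo "mount.nfs helper: ${nfs_client}"

# 2. Record the root filesystem type (the btrfs-vs-xfs concern) for the report.
root_fstype=$(df -T / | awk 'NR==2 {print $2}')
echo "root filesystem: ${root_fstype}"
```

If the helper is missing on exactly the two nodes where `redis-ha-server-1` and `redis-ha-server-2` are scheduled, that would explain why only those pods fail to mount their NFS-backed PVCs.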