When opening a deployment issue, please follow the template below: the more information you provide, the easier it is to get a timely answer. Moderators may close issues that do not follow the template.
Keep the post clean and readable, and format code with markdown code block syntax.
If you spend only one minute writing the question, you cannot expect others to spend half an hour answering it.
Operating system
Virtual machine, CentOS 7
Kubernetes version
1.24.16
Container runtime
Docker
KubeSphere version
v3.4.0, installed on an existing Kubernetes cluster
What is the problem
My Kubernetes cluster's default StorageClass is glusterfs. After installing KubeSphere I enabled the DevOps component, and the cluster automatically deployed an openldap service that uses the default glusterfs StorageClass for persistence. However, openldap fails to start, with the following error log:
```
[root@k8s18-27 ldap-config]# kubectl logs openldap-0 -n kubesphere-system
Backing up /etc/ldap/slapd.d in /var/backups/slapd-2.4.50+dfsg-1~bpo10+1… cp: ‘/etc/ldap/slapd.d’: Operation not supported
*** /container/run/startup/slapd failed with status 1
```
Deleting the PVC and letting it be recreated did not resolve the problem either.
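One possible reading of the `Operation not supported` error above (an assumption, not confirmed from the logs): the container's startup script copies `/etc/ldap/slapd.d` while preserving extended attributes, and some glusterfs mounts reject `user.*` xattrs. A minimal probe for that capability, to be pointed at the volume's mount path on the node, might look like this (`/tmp` below is only a placeholder path):

```python
import errno
import os
import tempfile

def supports_user_xattr(path: str) -> bool:
    """Probe whether the filesystem at `path` accepts user.* extended
    attributes -- the capability `cp` needs when preserving attributes."""
    fd, probe = tempfile.mkstemp(dir=path)
    try:
        os.setxattr(probe, "user.xattr_probe", b"1")
        return True
    except OSError as e:
        # ENOTSUP corresponds to the "Operation not supported" in the pod log.
        if e.errno == errno.ENOTSUP:
            return False
        raise
    finally:
        os.close(fd)
        os.remove(probe)

# Placeholder path; on the node, use the glusterfs mount backing the PVC instead.
print(supports_user_xattr("/tmp"))
```

If this returns `False` on the glusterfs mount but `True` on local disk, that would point to the volume rather than the openldap image itself.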
Pod details:
```
[root@k8s18-27 ldap-config]# kubectl describe pod openldap-0 -n kubesphere-system
Name:             openldap-0
Namespace:        kubesphere-system
Priority:         0
Node:             k8s18-200/10.90.18.200
Start Time:       Mon, 04 Sep 2023 15:26:11 +0800
Labels:           app.kubernetes.io/instance=ks-openldap
                  app.kubernetes.io/name=openldap-ha
                  controller-revision-hash=openldap-77fdd6d7c7
                  statefulset.kubernetes.io/pod-name=openldap-0
Annotations:      cni.projectcalico.org/containerID: 546aabecf5d9b161c3bf535fc6b8bf204ef81c26f1719822892e93f8d1535980
                  cni.projectcalico.org/podIP: 10.224.119.168/32
                  cni.projectcalico.org/podIPs: 10.224.119.168/32
                  kubesphere.io/restartedAt: 2023-09-04T07:18:11.128Z
Status:           Running
IP:               10.224.119.168
IPs:
  IP:  10.224.119.168
Controlled By:  StatefulSet/openldap
Containers:
  openldap-ha:
    Container ID:  docker://6f92dd66553baca0b3cda5122b70d85d1c8167cfa1b3646cdb64e2232653397e
    Image:         osixia/openldap:1.3.0
    Image ID:      docker-pullable://osixia/openldap@sha256:66bf8dafc3c47a387dfa9d87425acab96acd8a3f2a62a8f6393584c27777cb41
    Port:          389/TCP
    Host Port:     0/TCP
    Args:
      --copy-service
      --loglevel=warning
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Mon, 04 Sep 2023 16:28:14 +0800
      Finished:     Mon, 04 Sep 2023 16:28:15 +0800
    Ready:          False
    Restart Count:  17
    Liveness:       tcp-socket :389 delay=30s timeout=1s period=15s #success=1 #failure=3
    Readiness:      tcp-socket :389 delay=30s timeout=1s period=15s #success=1 #failure=3
    Environment:
      LDAP_ORGANISATION:               kubesphere
      LDAP_DOMAIN:                     kubesphere.io
      LDAP_CONFIG_PASSWORD:            admin
      LDAP_ADMIN_PASSWORD:             admin
      LDAP_REPLICATION:                false
      LDAP_TLS:                        false
      LDAP_REMOVE_CONFIG_AFTER_SETUP:  false
      MY_POD_NAME:                     openldap-0 (v1:metadata.name)
      HOSTNAME:                        $(MY_POD_NAME).openldap
    Mounts:
      /container/service/slapd/assets/config/bootstrap/ldif/custom from openldap-bootstrap (rw)
      /etc/ldap/slapd.d from openldap-pvc (rw,path="ldap-config")
      /var/lib/ldap from openldap-pvc (rw,path="ldap-data")
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-q8nh2 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  openldap-pvc:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  openldap-pvc-openldap-0
    ReadOnly:   false
  openldap-bootstrap:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      openldap-bootstrap
    Optional:  false
  kube-api-access-q8nh2:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 CriticalAddonsOnly op=Exists
                             node-role.kubernetes.io/master:NoSchedule
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason   Age                  From     Message
  Warning  BackOff  91s (x330 over 66m)  kubelet  Back-off restarting failed container
```
PVC details:
```
[root@k8s18-27 ldap-config]# kubectl describe pvc openldap-pvc-openldap-0 -n kubesphere-system
Name:          openldap-pvc-openldap-0
Namespace:     kubesphere-system
StorageClass:  glusterfs
Status:        Bound
Volume:        pvc-1d3ad80c-4f29-4045-acf4-32c103f3534a
Labels:        app.kubernetes.io/instance=ks-openldap
               app.kubernetes.io/name=openldap-ha
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/glusterfs
               volume.kubernetes.io/storage-provisioner: kubernetes.io/glusterfs
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      2Gi
Access Modes:  RWO
VolumeMode:    Filesystem
Used By:       openldap-0
Events:        <none>
```