These two pods have been stuck in the ContainerCreating state.

[root@k8s ~]# kubectl get sc
NAME                        PROVISIONER          AGE
rook-ceph-block (default)   ceph.rook.io/block   
[root@k8s ~]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                                                           STORAGECLASS      REASON   AGE
pvc-9b713a5d-0743-11ea-9961-fa163ef7a64f   3Gi        RWO            Retain           Bound    default/claim1                                                                  rook-ceph-block            4h11m
pvc-b11ca1c2-06c4-11ea-9961-fa163ef7a64f   20Gi       RWO            Delete           Bound    kubesphere-monitoring-system/prometheus-k8s-db-prometheus-k8s-0                 rook-ceph-block            4h16m
pvc-b14af121-06c4-11ea-9961-fa163ef7a64f   20Gi       RWO            Delete           Bound    kubesphere-monitoring-system/prometheus-k8s-system-db-prometheus-k8s-system-0   rook-ceph-block            4h16m
pvc-c175abe6-0743-11ea-9961-fa163ef7a64f   2Gi        RWO            Retain           Bound    kubesphere-system/redis-pvc                                                     rook-ceph-block            4h10m
pvc-c3b8afbe-0743-11ea-9961-fa163ef7a64f   2Gi        RWO            Retain           Bound    kubesphere-system/openldap-pvc-openldap-0                                       rook-ceph-block            4h10m

kubectl describe pods -n kubesphere-system redis-69c99ffd67-wls9l
The describe output contains this warning:

Volumes:
  redis-pvc:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  redis-pvc
    ReadOnly:   false
  default-token-mtr2c:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-mtr2c
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     CriticalAddonsOnly
                 node-role.kubernetes.io/master:NoSchedule
Events:
  Type     Reason       Age                   From                    Message
  ----     ------       ----                  ----                    -------
  Warning  FailedMount  88s (x92 over 3h27m)  kubelet, 10.228.141.41  Unable to mount volumes for pod "redis-69c99ffd67-wls9l_kubesphere-system(18d3f015-074a-11ea-9961-fa163ef7a64f)": timeout expired waiting for volumes to attach or mount for pod "kubesphere-system"/"redis-69c99ffd67-wls9l". list of unmounted volumes=[redis-pvc]. list of unattached volumes=[redis-pvc default-token-mtr2c]

Everything here looks fine:

[root@k8s ~]# kubectl get pvc -n kubesphere-system
NAME                      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
openldap-pvc-openldap-0   Bound    pvc-c3b8afbe-0743-11ea-9961-fa163ef7a64f   2Gi        RWO            rook-ceph-block   4h16m
redis-pvc                 Bound    pvc-c175abe6-0743-11ea-9961-fa163ef7a64f   2Gi        RWO            rook-ceph-block   4h16m

When using Ceph, do I need to install ceph-common on the machines?
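(In case it's useful, a quick check on a yum-based node, purely as an illustration:)

rpm -q ceph-common || yum install -y ceph-common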

    Cauchy ceph-common is installed on every node in the cluster.
    Manually creating a PV/PVC also works fine (roughly the kind of test claim sketched below).
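    (For illustration, a test claim against the rook-ceph-block StorageClass shown above; the name test-claim is just a placeholder:)

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: test-claim
    spec:
      storageClassName: rook-ceph-block
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi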

    The Ceph cluster is healthy as well (checked roughly as sketched below).
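    (One way to confirm that, assuming the Rook toolbox is deployed with its standard app=rook-ceph-tools label:)

    TOOLS_POD=$(kubectl -n rook-ceph get pod -l app=rook-ceph-tools -o jsonpath='{.items[0].metadata.name}')
    kubectl -n rook-ceph exec -it "$TOOLS_POD" -- ceph status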

    Could it be that my previous uninstall didn't clean everything up?
    kubectl delete -f https://raw.githubusercontent.com/kubesphere/ks-installer/master/kubesphere-minimal.yaml
    Is this the right way to uninstall?

    Is there anything storage-related that needs to be cleaned up?

    It looks like the PVC itself is fine, but it cannot be mounted into the pod. Could the storage permissions be misconfigured? You could check the logs on the Ceph management/server side (a rough sketch of where to look is below).
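    (A rough sketch of the log checks, assuming the default rook-ceph namespace and standard Rook labels:)

    # Rook operator log (provisioning / driver errors)
    kubectl -n rook-ceph logs -l app=rook-ceph-operator --tail=100
    # flex-driver mounts go through the rook-ceph-agent DaemonSet on each node
    kubectl -n rook-ceph logs -l app=rook-ceph-agent --tail=100
    # kubelet log on the node that reported FailedMount (10.228.141.41)
    journalctl -u kubelet | grep -i redis-pvc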

    6 days later

    After troubleshooting, this turned out to be a rook-ceph issue. The latest Rook release uses the CSI driver by default,
    while the StorageClass I configured uses the flex driver, so the two conflicted.
    Set ROOK_ENABLE_FLEX_DRIVER to true in operator.yaml

    and re-apply; that fixed it (a minimal sketch is below).
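    (A minimal sketch of the change; in Rook v1.x the file lives under cluster/examples/kubernetes/ceph/operator.yaml in the Rook repo, and the exact path may differ between versions:)

    # operator.yaml, env section of the rook-ceph-operator Deployment
    - name: ROOK_ENABLE_FLEX_DRIVER
      value: "true"   # default is "false" (CSI only); "true" re-enables the legacy flex driver

    kubectl apply -f operator.yaml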

      15 days later

      for-mat Hi, I ran into the same problem. Where is this operator.yaml config file located? Many thanks.

        21 days later