After creating a snapshot, it just shows that the creation failed, with the message "rook-ceph-block" not found. I created a VolumeSnapshotClass named csi-cephfsplugin-snapclass, so why does creating a snapshot through KubeSphere still complain about "rook-ceph-block"? "rook-ceph-block" is the name of the StorageClass.
Also, how do I get the StorageClass to show that snapshots are supported?

I have already created the ProvisionerCapability following the method from the forum, but snapshot support is still shown as "No".
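
For reference, a rough way to check what actually exists on the cluster (a sketch; the provisionercapabilities resource name follows the forum guide and may differ in your KubeSphere version):

    kubectl get provisionercapabilities        # capability object from the forum guide (resource name assumed)
    kubectl get volumesnapshotclass            # should list csi-cephfsplugin-snapclass
    kubectl get sc rook-ceph-block -o yaml     # the StorageClass the error message refers to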


Creating the snapshot with kubectl, with volumeSnapshotClassName set to "csi-cephfsplugin-snapclass", also fails, although the error returned is different.
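
The kubectl attempt used a manifest roughly like the following (a sketch; the PVC name is hypothetical and the snapshot API is assumed to be v1beta1):

    apiVersion: snapshot.storage.k8s.io/v1beta1
    kind: VolumeSnapshot
    metadata:
      name: test-snapshot
      namespace: rook-ceph
    spec:
      volumeSnapshotClassName: csi-cephfsplugin-snapclass
      source:
        persistentVolumeClaimName: my-pvc   # hypothetical; the PVC to snapshot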


The earlier failure turned out to be because the PVC was not mounted; after mounting it, the snapshot is now stuck in the Creating state T_T.

    yyt6200 KubeSphere creates snapshots with a VolumeSnapshotClass that has the same name as the StorageClass. Follow https://kubesphere.com.cn/forum/d/3057-kubespherek8s117kubesphere/3 to create the storageclasscapabilities object; after that, the same-named VolumeSnapshotClass will be created automatically.
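
    For the rook-ceph-block StorageClass, for example, the expected same-named VolumeSnapshotClass would look roughly like this (a sketch assuming a default Rook install, i.e. RBD driver rook-ceph.rbd.csi.ceph.com and provisioner secret rook-csi-rbd-provisioner in the rook-ceph namespace):

    apiVersion: snapshot.storage.k8s.io/v1beta1
    kind: VolumeSnapshotClass
    metadata:
      name: rook-ceph-block              # same name as the StorageClass
    driver: rook-ceph.rbd.csi.ceph.com   # assumed default Rook RBD CSI driver name
    parameters:
      clusterID: rook-ceph
      csi.storage.k8s.io/snapshotter-secret-name: rook-csi-rbd-provisioner
      csi.storage.k8s.io/snapshotter-secret-namespace: rook-ceph
    deletionPolicy: Delete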

    stoneshi-yunify Thanks for the reply. I've taken some screenshots for you.
    This is the detailed information of the VolumeSnapshotContent:

    This is the detailed information of the VolumeSnapshot:

    yyt6200 You need to check the csi-cephfsplugin-provisioner log; first see whether you have that pod. e.g. kubectl logs csi-cephfsplugin-provisioner-fdc787595-2zwq8 -c csi-cephfsplugin

      stoneshi-yunify Could you help me look at another issue? I created the CephFS StorageClass and filesystem following the official procedure, but PVCs created with CephFS stay in Pending forever. Creating PVCs with RBD works fine. Surely CephFS and RBD can coexist?
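
      The PVC in question is essentially the following (a sketch; the access mode and size are illustrative):

      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: cephfs-test
        namespace: rook-ceph
      spec:
        storageClassName: rook-cephfs
        accessModes:
          - ReadWriteMany        # illustrative; adjust to your workload
        resources:
          requests:
            storage: 1Gi         # illustrative size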

      This is the csi-provisioner log:

      
      I0205 02:18:25.387577 1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"rook-ceph", Name:"cephfs-test", UID:"54cadc93-f0b9-44dd-a8ad-b02d187cf5a0", APIVersion:"v1", ResourceVersion:"11508873", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "rook-ceph/cephfs-test"
      W0205 02:18:25.391258 1 controller.go:943] Retrying syncing claim "54cadc93-f0b9-44dd-a8ad-b02d187cf5a0", failure 189
      E0205 02:18:25.391289 1 controller.go:966] error syncing claim "54cadc93-f0b9-44dd-a8ad-b02d187cf5a0": failed to provision volume with StorageClass "rook-cephfs": rpc error: code = Aborted desc = an operation with the given Volume ID pvc-54cadc93-f0b9-44dd-a8ad-b02d187cf5a0 already exists
      I0205 02:18:25.391315 1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"rook-ceph", Name:"cephfs-test", UID:"54cadc93-f0b9-44dd-a8ad-b02d187cf5a0", APIVersion:"v1", ResourceVersion:"11508873", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "rook-cephfs": rpc error: code = Aborted desc = an operation with the given Volume ID pvc-54cadc93-f0b9-44dd-a8ad-b02d187cf5a0 already exists

      This is the csi-cephfsplugin log:

      
      E0205 03:05:45.244402 1 utils.go:136] ID: 10927 Req-ID: pvc-54cadc93-f0b9-44dd-a8ad-b02d187cf5a0 GRPC error: rpc error: code = Internal desc = rados: ret=-22, Invalid argument: "Traceback (most recent call last):\n File \"/usr/share/ceph/mgr/volumes/fs/operations/volume.py\", line 165, in get_fs_handle\n conn.connect()\n File \"/usr/share/ceph/mgr/volumes/fs/operations/volume.py\", line 88, in connect\n self.fs.mount(filesystem_name=self.fs_name.encode('utf-8'))\n File \"cephfs.pyx\", line 739, in cephfs.LibCephFS.mount\ncephfs.Error: error calling ceph_mount: Connection timed out [Errno 110]\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/usr/share/ceph/mgr/mgr_module.py\", line 1177, in _handle_command\n return self.handle_command(inbuf, cmd)\n File \"/usr/share/ceph/mgr/volumes/module.py\", line 426, in handle_command\n return handler(inbuf, cmd)\n File \"/usr/share/ceph/mgr/volumes/module.py\", line 34, in wrap\n return f(self, inbuf, cmd)\n File \"/usr/share/ceph/mgr/volumes/module.py\", line 452, in _cmd_fs_subvolumegroup_create\n uid=cmd.get('uid', None), gid=cmd.get('gid', None))\n File \"/usr/share/ceph/mgr/volumes/fs/volume.py\", line 480, in create_subvolume_group\n with open_volume(self, volname) as fs_handle:\n File \"/lib64/python3.6/contextlib.py\", line 81, in __enter__\n return next(self.gen)\n File \"/usr/share/ceph/mgr/volumes/fs/operations/volume.py\", line 316, in open_volume\n fs_handle = vc.connection_pool.get_fs_handle(volname)\n File \"/usr/share/ceph/mgr/volumes/fs/operations/volume.py\", line 171, in get_fs_handle\n raise VolumeException(-e.args[0], e.args[1])\nTypeError: bad operand type for unary -: 'str'\n"

      Log from one of the rook-ceph-mds pods, picked at random:

      
      debug 2021-02-05T03:04:57.036+0000 ffff96abd300 1 mds.myfs-c asok_command: status {prefix=status} (starting...)
      debug 2021-02-05T03:05:07.016+0000 ffff96abd300 1 mds.myfs-c asok_command: status {prefix=status} (starting...)
      debug 2021-02-05T03:05:16.956+0000 ffff96abd300 1 mds.myfs-c asok_command: status {prefix=status} (starting...)
      debug 2021-02-05T03:05:26.956+0000 ffff96abd300 1 mds.myfs-c asok_command: status {prefix=status} (starting...)
      debug 2021-02-05T03:05:36.966+0000 ffff96abd300 1 mds.myfs-c asok_command: status {prefix=status} (starting...)
      debug 2021-02-05T03:05:46.966+0000 ffff96abd300 1 mds.myfs-c asok_command: status {prefix=status} (starting...)
      debug 2021-02-05T03:05:56.976+0000 ffff96abd300 1 mds.myfs-c asok_command: status {prefix=status} (starting...)
      debug 2021-02-05T03:06:06.996+0000 ffff96abd300 1 mds.myfs-c asok_command: status {prefix=status} (starting...)
      debug 2021-02-05T03:06:16.966+0000 ffff96abd300 1 mds.myfs-c asok_command: status {prefix=status} (starting...)
      debug 2021-02-05T03:06:26.966+0000 ffff96abd300 1 mds.myfs-c asok_command: status {prefix=status} (starting...)
      debug 2021-02-05T03:06:36.967+0000 ffff96abd300 1 mds.myfs-c asok_command: status {prefix=status} (starting...)
      debug 2021-02-05T03:06:46.977+0000 ffff96abd300 1 mds.myfs-c asok_command: status {prefix=status} (starting...)
      debug 2021-02-05T03:06:56.977+0000 ffff96abd300 1 mds.myfs-c asok_command: status {prefix=status} (starting...)
      debug 2021-02-05T03:07:07.017+0000 ffff96abd300 1 mds.myfs-c asok_command: status {prefix=status} (starting...)
      debug 2021-02-05T03:07:16.977+0000 ffff96abd300 1 mds.myfs-c asok_command: status {prefix=status} (starting...)
      debug 2021-02-05T03:07:26.957+0000 ffff96abd300 1 mds.myfs-c asok_command: status {prefix=status} (starting...)

      The rook-ceph-mgr log:

      
      debug 2021-02-05T03:05:37.872+0000 ffff81aac700 0 log_channel(cluster) log [DBG] : pgmap v305988: 97 pgs: 97 active+clean; 15 MiB data, 59 MiB used, 1.5 TiB / 1.5 TiB avail; 3.3 KiB/s rd, 6 op/s
      debug 2021-02-05T03:05:39.882+0000 ffff81aac700 0 log_channel(cluster) log [DBG] : pgmap v305989: 97 pgs: 97 active+clean; 15 MiB data, 59 MiB used, 1.5 TiB / 1.5 TiB avail; 2.5 KiB/s rd, 4 op/s
      debug 2021-02-05T03:05:41.882+0000 ffff81aac700 0 log_channel(cluster) log [DBG] : pgmap v305990: 97 pgs: 97 active+clean; 15 MiB data, 59 MiB used, 1.5 TiB / 1.5 TiB avail; 3.7 KiB/s rd, 7 op/s
      debug 2021-02-05T03:05:43.883+0000 ffff81aac700 0 log_channel(cluster) log [DBG] : pgmap v305991: 97 pgs: 97 active+clean; 15 MiB data, 59 MiB used, 1.5 TiB / 1.5 TiB avail; 2.5 KiB/s rd, 4 op/s
      debug 2021-02-05T03:05:45.173+0000 ffff8027c700 0 [volumes ERROR volumes.module] Failed _cmd_fs_subvolumegroup_create(format:json, group_name:csi, prefix:fs subvolumegroup create, vol_name:myfs) < "":
      Traceback (most recent call last):
        File "/usr/share/ceph/mgr/volumes/fs/operations/volume.py", line 165, in get_fs_handle
          conn.connect()
        File "/usr/share/ceph/mgr/volumes/fs/operations/volume.py", line 88, in connect
          self.fs.mount(filesystem_name=self.fs_name.encode('utf-8'))
        File "cephfs.pyx", line 739, in cephfs.LibCephFS.mount
      cephfs.Error: error calling ceph_mount: Connection timed out [Errno 110]

      During handling of the above exception, another exception occurred:

      Traceback (most recent call last):
        File "/usr/share/ceph/mgr/volumes/module.py", line 34, in wrap
          return f(self, inbuf, cmd)
        File "/usr/share/ceph/mgr/volumes/module.py", line 452, in _cmd_fs_subvolumegroup_create
          uid=cmd.get('uid', None), gid=cmd.get('gid', None))
        File "/usr/share/ceph/mgr/volumes/fs/volume.py", line 480, in create_subvolume_group
          with open_volume(self, volname) as fs_handle:
        File "/lib64/python3.6/contextlib.py", line 81, in __enter__
          return next(self.gen)
        File "/usr/share/ceph/mgr/volumes/fs/operations/volume.py", line 316, in open_volume
          fs_handle = vc.connection_pool.get_fs_handle(volname)
        File "/usr/share/ceph/mgr/volumes/fs/operations/volume.py", line 171, in get_fs_handle
          raise VolumeException(-e.args[0], e.args[1])
      TypeError: bad operand type for unary -: 'str'
      debug 2021-02-05T03:05:45.173+0000 ffff8027c700 -1 mgr handle_command module 'volumes' command handler threw exception: bad operand type for unary -: 'str'
      debug 2021-02-05T03:05:45.173+0000 ffff8027c700 -1 mgr.server reply reply (22) Invalid argument Traceback (most recent call last):
        File "/usr/share/ceph/mgr/volumes/fs/operations/volume.py", line 165, in get_fs_handle
          conn.connect()
        File "/usr/share/ceph/mgr/volumes/fs/operations/volume.py", line 88, in connect
          self.fs.mount(filesystem_name=self.fs_name.encode('utf-8'))
        File "cephfs.pyx", line 739, in cephfs.LibCephFS.mount
      cephfs.Error: error calling ceph_mount: Connection timed out [Errno 110]

      During handling of the above exception, another exception occurred:

      Traceback (most recent call last):
        File "/usr/share/ceph/mgr/mgr_module.py", line 1177, in _handle_command
          return self.handle_command(inbuf, cmd)
        File "/usr/share/ceph/mgr/volumes/module.py", line 426, in handle_command
          return handler(inbuf, cmd)
        File "/usr/share/ceph/mgr/volumes/module.py", line 34, in wrap
          return f(self, inbuf, cmd)
        File "/usr/share/ceph/mgr/volumes/module.py", line 452, in _cmd_fs_subvolumegroup_create
          uid=cmd.get('uid', None), gid=cmd.get('gid', None))
        File "/usr/share/ceph/mgr/volumes/fs/volume.py", line 480, in create_subvolume_group
          with open_volume(self, volname) as fs_handle:
        File "/lib64/python3.6/contextlib.py", line 81, in __enter__
          return next(self.gen)
        File "/usr/share/ceph/mgr/volumes/fs/operations/volume.py", line 316, in open_volume
          fs_handle = vc.connection_pool.get_fs_handle(volname)
        File "/usr/share/ceph/mgr/volumes/fs/operations/volume.py", line 171, in get_fs_handle
          raise VolumeException(-e.args[0], e.args[1])
      TypeError: bad operand type for unary -: 'str'
      debug 2021-02-05T03:05:45.273+0000 ffff8027c700 -1 client.0 error registering admin socket command: (17) File exists
      debug 2021-02-05T03:05:45.273+0000 ffff8027c700 -1 client.0 error registering admin socket command: (17) File exists
      debug 2021-02-05T03:05:45.273+0000 ffff8027c700 -1 client.0 error registering admin socket command: (17) File exists
      debug 2021-02-05T03:05:45.273+0000 ffff8027c700 -1 client.0 error registering admin socket command: (17) File exists
      debug 2021-02-05T03:05:45.273+0000 ffff8027c700 -1 client.0 error registering admin socket command: (17) File exists
      10.244.1.1 - - [05/Feb/2021:03:05:45] "GET / HTTP/1.1" 200 176 "" "kube-probe/1.18"
      debug 2021-02-05T03:05:45.883+0000 ffff81aac700 0 log_channel(cluster) log [DBG] : pgmap v305992: 97 pgs: 97 active+clean; 15 MiB data, 59 MiB used, 1.5 TiB / 1.5 TiB avail; 2.9 KiB/s rd, 0 B/s wr, 6 op/s
      debug 2021-02-05T03:05:47.883+0000 ffff81aac700 0 log_channel(cluster) log [DBG] : pgmap v305993: 97 pgs: 97 active+clean; 15 MiB data, 59 MiB used, 1.5 TiB / 1.5 TiB avail; 3.3 KiB/s rd, 0 B/s wr, 6 op/s
      debug 2021-02-05T03:05:49.883+0000 ffff81aac700 0 log_channel(cluster) log [DBG] : pgmap v305994: 97 pgs: 97 active+clean; 15 MiB data, 59 MiB used, 1.5 TiB / 1.5 TiB avail; 2.5 KiB/s rd, 0 B/s wr, 5 op/s
      debug 2021-02-05T03:05:51.883+0000 ffff81aac700 0 log_channel(cluster) log [DBG] : pgmap v305995: 97 pgs: 97 active+clean; 15 MiB data, 59 MiB used, 1.5 TiB / 1.5 TiB avail; 3.7 KiB/s rd, 0 B/s wr, 7 op/s
      debug 2021-02-05T03:05:53.883+0000 ffff81aac700 0 log_channel(cluster) log [DBG] : pgmap v305996: 97 pgs: 97 active+clean; 15 MiB data, 59 MiB used, 1.5 TiB / 1.5 TiB avail; 2.5 KiB/s rd, 0 B/s wr, 5 op/s

      The running rook-ceph pods:

      NAME                                                    READY   STATUS      RESTARTS   AGE
      csi-cephfsplugin-6xbjq                                  3/3     Running     0          7d2h
      csi-cephfsplugin-c625v                                  3/3     Running     0          7d2h
      csi-cephfsplugin-gjlfv                                  3/3     Running     0          7d2h
      csi-cephfsplugin-provisioner-64c8898766-446c2           6/6     Running     0          7d2h
      csi-cephfsplugin-provisioner-64c8898766-ntwlg           6/6     Running     0          7d2h
      csi-cephfsplugin-qd9fc                                  3/3     Running     0          7d2h
      csi-cephfsplugin-qfp6n                                  3/3     Running     0          7d2h
      csi-cephfsplugin-sz9cn                                  3/3     Running     0          7d2h
      csi-rbdplugin-bj84w                                     3/3     Running     0          7d2h
      csi-rbdplugin-dmjds                                     3/3     Running     0          7d2h
      csi-rbdplugin-jngq5                                     3/3     Running     0          7d2h
      csi-rbdplugin-provisioner-5ddffb7c49-c8glh              6/6     Running     0          7d2h
      csi-rbdplugin-provisioner-5ddffb7c49-fzr8f              6/6     Running     0          7d2h
      csi-rbdplugin-sf5tl                                     3/3     Running     0          7d2h
      csi-rbdplugin-tl6lr                                     3/3     Running     0          7d2h
      csi-rbdplugin-z55sc                                     3/3     Running     0          7d2h
      rook-ceph-crashcollector-k8s-master2-6c8c79c7df-nprlg   1/1     Running     0          7d2h
      rook-ceph-crashcollector-k8s-node1-67cf7c86cd-qm449     1/1     Running     0          20h
      rook-ceph-crashcollector-k8s-node2-d6d5b59c5-fzwpd      1/1     Running     0          20h
      rook-ceph-crashcollector-k8s-node3-84675f58f8-95bps     1/1     Running     0          7d2h
      rook-ceph-mds-myfs-a-6ddfdfdbd9-slx75                   1/1     Running     0          20h
      rook-ceph-mds-myfs-b-7978679f6c-v8vhl                   1/1     Running     0          20h
      rook-ceph-mds-myfs-c-5dbcb88f-c5fwr                     1/1     Running     0          20h
      rook-ceph-mds-myfs-d-7497f6cd7f-xtqcl                   1/1     Running     0          20h
      rook-ceph-mds-myfs-e-5dc4f4b5-zkx7t                     1/1     Running     0          20h
      rook-ceph-mds-myfs-f-7f5fcdfb8c-gpgg2                   1/1     Running     0          20h
      rook-ceph-mgr-a-75bd86d795-bjfm4                        1/1     Running     0          7d2h
      rook-ceph-mon-a-777f4f9646-vrx9h                        1/1     Running     2          6d21h
      rook-ceph-mon-b-d6977c76d-rhqr8                         1/1     Running     0          6d21h
      rook-ceph-mon-c-5767c7c4c-nvl97                         1/1     Running     0          6d21h
      rook-ceph-operator-6cd478f99-nk8jj                      1/1     Running     0          6d23h
      rook-ceph-osd-0-6868c5d98b-8gzxk                        1/1     Running     0          7d2h
      rook-ceph-osd-1-67485b97bc-4kw2h                        1/1     Running     0          7d1h
      rook-ceph-osd-2-65d5884f4c-ctbtm                        1/1     Running     0          7d1h
      rook-ceph-osd-prepare-k8s-node1-b86r2                   0/1     Completed   0          6d23h
      rook-ceph-osd-prepare-k8s-node2-999vd                   0/1     Completed   0          6d23h
      rook-ceph-osd-prepare-k8s-node3-6qhjg                   0/1     Completed   0          6d23h
      rook-ceph-tools-6b4889fdfd-7mbnn                        1/1     Running     0          20h

      ceph-storageclass.yaml

      apiVersion: storage.k8s.io/v1
      kind: StorageClass
      metadata:
        name: rook-cephfs
      provisioner: rook-ceph.cephfs.csi.ceph.com # driver:namespace:operator
      parameters:
        # clusterID is the namespace where operator is deployed.
        clusterID: rook-ceph # namespace:cluster
      
        # CephFS filesystem name into which the volume shall be created
        fsName: myfs
      
        # Ceph pool into which the volume shall be created
        # Required for provisionVolume: "true"
        pool: myfs-data0
      
        # Root path of an existing CephFS volume
        # Required for provisionVolume: "false"
        # rootPath: /absolute/path
      
        # The secrets contain Ceph admin credentials. These are generated automatically by the operator
        # in the same namespace as the cluster.
        csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
        csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph # namespace:cluster
        csi.storage.k8s.io/controller-expand-secret-name: rook-csi-cephfs-provisioner
        csi.storage.k8s.io/controller-expand-secret-namespace: rook-ceph # namespace:cluster
        csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
        csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph # namespace:cluster
      
        # (optional) The driver can use either ceph-fuse (fuse) or ceph kernel client (kernel)
        # If omitted, default volume mounter will be used - this is determined by probing for ceph-fuse
        # or by setting the default mounter explicitly via --volumemounter command-line argument.
        # mounter: kernel
      reclaimPolicy: Delete
      allowVolumeExpansion: true
      mountOptions:
        # uncomment the following line for debugging
        #- debug

      cephfs-filesystem.yaml

      # Create a filesystem with settings for a test environment where only a single OSD is required.
      #  kubectl create -f filesystem-test.yaml
      #################################################################################################################
      
      apiVersion: ceph.rook.io/v1
      kind: CephFilesystem
      metadata:
        name: myfs
        namespace: rook-ceph # namespace:cluster
      spec:
        metadataPool:
          replicated:
            size: 3
            requireSafeReplicaSize: false
        dataPools:
          - failureDomain: osd
            replicated:
              size: 3
              requireSafeReplicaSize: false
        preserveFilesystemOnDelete: false
        metadataServer:
          activeCount: 3
          activeStandby: true              

      PVC details:

      NAME          STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
      ceph-test-1   Bound     pvc-98e46d1b-d7fb-42b4-a6ab-cb6078e7f2b8   2Gi        RWO            rook-ceph-block   2d1h
      cephfs-test   Pending                                                                        rook-cephfs       20h
      [root@k8s-master1 rook+ceph]# kubectl describe pvc cephfs-test -n rook-ceph
      Name:          cephfs-test
      Namespace:     rook-ceph
      StorageClass:  rook-cephfs
      Status:        Pending
      Volume:        
      Labels:        <none>
      Annotations:   kubesphere.io/creator: admin
                     volume.beta.kubernetes.io/storage-provisioner: rook-ceph.cephfs.csi.ceph.com
      Finalizers:    [kubernetes.io/pvc-protection]
      Capacity:      
      Access Modes:  
      VolumeMode:    Filesystem
      Mounted By:    <none>
      Events:
        Type     Reason                Age                    From                                                                                                              Message
        ----     ------                ----                   ----                                                                                                              -------
        Warning  ProvisioningFailed    9m40s (x113 over 20h)  rook-ceph.cephfs.csi.ceph.com_csi-cephfsplugin-provisioner-64c8898766-ntwlg_fb0796d7-86a7-4d1a-98f8-6225b9eded7a  failed to provision volume with StorageClass "rook-cephfs": rpc error: code = DeadlineExceeded desc = context deadline exceeded
        Normal   Provisioning          4m40s (x202 over 20h)  rook-ceph.cephfs.csi.ceph.com_csi-cephfsplugin-provisioner-64c8898766-ntwlg_fb0796d7-86a7-4d1a-98f8-6225b9eded7a  External provisioner is provisioning volume for claim "rook-ceph/cephfs-test"
        Normal   ExternalProvisioning  37s (x5002 over 20h)   persistentvolume-controller                                                                                       waiting for a volume to be created, either by external provisioner "rook-ceph.cephfs.csi.ceph.com" or manually created by system administrator

        yyt6200 This looks like a problem on the Ceph side. My configuration is the same as yours and CephFS PVCs provision normally; CephFS and RBD can definitely coexist. Install the ceph toolbox, exec into it, and try creating a volume with the ceph CLI. If that fails too, then it is a problem at the Ceph layer.
        e.g.

        [root@rook-ceph-tools-685bccc695-jnmcf /]# ceph fs subvolume create myfs stone-test-2 1073741824 csi
        [root@rook-ceph-tools-685bccc695-jnmcf /]# ceph fs subvolume info myfs stone-test-2 csi
        {
            "atime": "2021-02-05 05:40:21",
            "bytes_pcent": "0.00",
            "bytes_quota": 1073741824,
            "bytes_used": 0,
            "created_at": "2021-02-05 05:40:21",
            "ctime": "2021-02-05 05:40:21",
            "data_pool": "myfs-data0",
            "gid": 0,
            "mode": 16877,
            "mon_addrs": [
                "10.233.47.3:6789",
                "10.233.40.60:6789",
                "10.233.31.175:6789"
            ],
            "mtime": "2021-02-05 05:40:21",
            "path": "/volumes/csi/stone-test-2/09bf7d1b-ac09-46b1-a643-99cac1ce3d08",
            "pool_namespace": "",
            "type": "subvolume",
            "uid": 0
        }
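
        If the subvolume commands hang as well, it may also be worth checking basic cluster and MDS health from the toolbox first, e.g.:

        ceph status                 # overall health, mon/mgr/osd state
        ceph fs status myfs         # active/standby MDS ranks for the myfs filesystem
        ceph health detail          # warnings that might explain the ceph_mount timeout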

          stoneshi-yunify When I run ceph fs subvolume create myfs stone-test-2 1073741824 csi there is no response at all... I have to Ctrl+C to force it to stop.

            stoneshi-yunify Which pod should I restart? I also left a message on the community forum but haven't gotten a reply yet. By the way, my cluster is deployed on Kunpeng cloud, which is arm64... not sure whether that has any impact.
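
            If the suggestion was to bounce the mgr (the volumes-module traceback above is thrown inside the mgr), a minimal sketch of that restart, using the deployment name implied by the pod list above, would be:

            kubectl -n rook-ceph rollout restart deploy rook-ceph-mgr-a
            kubectl -n rook-ceph get pod -l app=rook-ceph-mgr -w   # wait for the new mgr pod to become Ready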