• Installation & Deployment
  • driver name rbd.csi.ceph.com not found in the list of registered CSI drivers

A CSI RBD PVC fails to mount; the kubelet log shows "driver name rbd.csi.ceph.com not found in the list of registered CSI drivers".
Events:
Type Reason Age From Message

Warning FailedMount 16m (x8 over 84m) kubelet, master1 Unable to attach or mount volumes: unmounted volumes=[openldap-pvc], unattached volumes=[default-token-l47s6 openldap-pvc]: timed out waiting for the condition
Warning FailedMount 6m18s (x57 over 106m) kubelet, master1 MountVolume.MountDevice failed for volume "pvc-2954ed1f-1a80-4170-8fe5-9b731bbebee5" : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name rbd.csi.ceph.com not found in the list of registered CSI drivers
Warning FailedMount 14s (x36 over 104m) kubelet, master1 Unable to attach or mount volumes: unmounted volumes=[openldap-pvc], unattached volumes=[openldap-pvc default-token-l47s6]: timed out waiting for the condition

Logs of the csi-rbdplugin pod, per container, are shown below:
-c driver-registrar
I0913 10:31:27.227348 5129 main.go:110] Version: v1.3.0-0-g6e9fff3e
I0913 10:31:27.227457 5129 main.go:120] Attempting to open a gRPC connection with: "/csi/csi.sock"
I0913 10:31:27.227480 5129 connection.go:151] Connecting to unix:///csi/csi.sock
W0913 10:31:37.227785 5129 connection.go:170] Still connecting to unix:///csi/csi.sock
I0913 10:31:41.952231 5129 main.go:127] Calling CSI driver to discover driver name
I0913 10:31:41.952305 5129 connection.go:180] GRPC call: /csi.v1.Identity/GetPluginInfo
I0913 10:31:41.952312 5129 connection.go:181] GRPC request: {}
I0913 10:31:41.957244 5129 connection.go:183] GRPC response: {"name":"rbd.csi.ceph.com","vendor_version":"v3.1.0"}
I0913 10:31:41.957656 5129 connection.go:184] GRPC error: <nil>
I0913 10:31:41.957667 5129 main.go:137] CSI driver name: "rbd.csi.ceph.com"
I0913 10:31:41.957717 5129 node_register.go:51] Starting Registration Server at: /registration/rbd.csi.ceph.com-reg.sock
I0913 10:31:41.958074 5129 node_register.go:60] Registration Server started at: /registration/rbd.csi.ceph.com-reg.sock
I0913 10:31:42.094283 5129 main.go:77] Received GetInfo call: &InfoRequest{}
I0913 10:31:43.641635 5129 main.go:87] Received NotifyRegistrationStatus call: &RegistrationStatus{PluginRegistered:true,Error:,}

-c csi-rbdplugin
I0913 12:24:41.286682 5447 utils.go:159] ID: 255 GRPC call: /csi.v1.Identity/Probe
I0913 12:24:41.287353 5447 utils.go:160] ID: 255 GRPC request: {}
I0913 12:24:41.287776 5447 utils.go:165] ID: 255 GRPC response: {}
I0913 12:25:41.286240 5447 utils.go:159] ID: 256 GRPC call: /csi.v1.Identity/Probe
I0913 12:25:41.286767 5447 utils.go:160] ID: 256 GRPC request: {}
I0913 12:25:41.287207 5447 utils.go:165] ID: 256 GRPC response: {}
I0913 12:26:07.741872 5447 utils.go:159] ID: 257 GRPC call: /csi.v1.Node/NodeGetCapabilities
I0913 12:26:07.742595 5447 utils.go:160] ID: 257 GRPC request: {}
I0913 12:26:07.745094 5447 utils.go:165] ID: 257 GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":3}}}]}
I0913 12:26:07.745927 5447 utils.go:159] ID: 258 GRPC call: /csi.v1.Node/NodeGetVolumeStats
I0913 12:26:07.746331 5447 utils.go:160] ID: 258 GRPC request: {"volume_id":"0001-0024-42176a3e-ae26-4da6-9954-f630b9f25d9d-0000000000000000-dc99f5f5-f5ad-11ea-bdea-ce02b2f26c3c","volume_path":"/var/lib/kubelet/pods/9f15622a-b652-4f7c-bfce-795aefc68171/volumes/kubernetes.io~csi/pvc-4e3a40f5-d8a0-43fc-a7db-44e1c36561a3/mount"}
I0913 12:26:07.757068 5447 utils.go:165] ID: 258 GRPC response: {"usage":[{"available":20924592128,"total":21003583488,"unit":1,"used":62214144},{"available":1310703,"total":1310720,"unit":2,"used":17}]}
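The registrar log above shows the driver registering successfully on the node where this csi-rbdplugin pod runs, while the error in the events is raised by the kubelet on master1. So it is worth checking whether rbd.csi.ceph.com appears in master1's CSINode object, which lists the drivers registered with that node's kubelet. A diagnostic sketch (the node name is taken from the events above):

```shell
# List the CSI drivers the kubelet on master1 has registered.
# An empty result means the node plugin never registered on this node.
kubectl get csinode master1 -o jsonpath='{.spec.drivers[*].name}'
```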

kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system pod/calico-kube-controllers-76d4774d89-rs4j8 1/1 Running 0 94m
kube-system pod/calico-node-fbthf 1/1 Running 0 94m
kube-system pod/calico-node-gxlhq 1/1 Running 0 93m
kube-system pod/ceph-csi-rbd-nodeplugin-jqdpk 3/3 Running 0 91m
kube-system pod/ceph-csi-rbd-provisioner-57b44c7df5-24h49 6/6 Running 7 91m
kube-system pod/ceph-csi-rbd-provisioner-57b44c7df5-7g6p8 6/6 Running 11 91m
kube-system pod/ceph-csi-rbd-provisioner-57b44c7df5-c8cjz 6/6 Running 10 91m
kube-system pod/coredns-f85fd7f64-mdqk2 1/1 Running 0 95m
kube-system pod/coredns-f85fd7f64-vzf45 1/1 Running 0 95m
kube-system pod/kube-apiserver-master1 1/1 Running 0 96m
kube-system pod/kube-controller-manager-master1 1/1 Running 7 96m
kube-system pod/kube-proxy-69vrc 1/1 Running 0 93m
kube-system pod/kube-proxy-d4kl2 1/1 Running 0 95m
kube-system pod/kube-scheduler-master1 1/1 Running 7 96m
kube-system pod/nodelocaldns-czjl6 0/1 CrashLoopBackOff 22 93m
kube-system pod/nodelocaldns-ht8xv 1/1 Running 0 95m
kube-system pod/snapshot-controller-0 1/1 Running 0 85m
kubesphere-system pod/ks-apiserver-667697c496-b8dzj 0/1 CrashLoopBackOff 12 40m
kubesphere-system pod/ks-console-fc457b9c7-cxhhh 1/1 Running 0 83m
kubesphere-system pod/ks-controller-manager-6ff749c58d-v8bq9 0/1 CrashLoopBackOff 19 78m
kubesphere-system pod/ks-controller-manager-7698979f87-tsrdp 0/1 CrashLoopBackOff 19 78m
kubesphere-system pod/ks-installer-744f4574f6-2dnq8 1/1 Running 0 91m
kubesphere-system pod/openldap-0 0/1 ContainerCreating 0 85m
kubesphere-system pod/redis-696b58877-k89q8 0/1 ContainerCreating 0 85m
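One detail stands out in this listing: there are two nodes (two calico-node and two kube-proxy pods) but only one ceph-csi-rbd-nodeplugin pod. The nodeplugin runs as a DaemonSet, so a missing pod on master1 would explain why the kubelet there cannot find the driver. A quick way to check the DaemonSet's desired vs. ready count and where its pods actually landed (the resource name is assumed from the pod name above):

```shell
# Compare DESIRED vs. READY, then see which node(s) the nodeplugin pods run on.
kubectl -n kube-system get ds ceph-csi-rbd-nodeplugin
kubectl -n kube-system get pods -o wide | grep ceph-csi-rbd-nodeplugin
```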

kk create cluster -f config-sample.yaml
storage:
  defaultStorageClass: "csi-rbd-sc"
  localVolume:
    storageClassName: local
addons:
- name: ceph-csi-rbd
  namespace: kube-system
  sources:
    chart:
      name: ceph-csi-rbd
      repo: https://ceph.github.io/csi-charts
      values: /data/ceph-csi-config.yaml
- name: ceph-csi-rbd-sc
  sources:
    yaml:
      path:
      - /data/ceph-csi-rbd-sc.yaml

ceph-csi-config.yaml:

```yaml
csiConfig:
- clusterID: "42176a3e-ae26-4da6-9954-f630b9f25d9d"
  monitors:
  - x.x.x.x:6789
```

/data/ceph-csi-rbd-sc.yaml:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: csi-rbd-secret
  namespace: kube-system
stringData:
  userID: admin
  userKey: AQC22lhfIm0FLhAAUyhciA4qvc1FmVe2jEABCQ==
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-rbd-sc
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "true"
    storageclass.kubesphere.io/supported-access-modes: '["ReadWriteOnce","ReadOnlyMany","ReadWriteMany"]'
provisioner: rbd.csi.ceph.com
parameters:
  clusterID: "42176a3e-ae26-4da6-9954-f630b9f25d9d"
  pool: rbd
  imageFeatures: layering
  csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
  csi.storage.k8s.io/provisioner-secret-namespace: kube-system
  csi.storage.k8s.io/controller-expand-secret-name: csi-rbd-secret
  csi.storage.k8s.io/controller-expand-secret-namespace: kube-system
  csi.storage.k8s.io/node-stage-secret-name: csi-rbd-secret
  csi.storage.k8s.io/node-stage-secret-namespace: kube-system
  csi.storage.k8s.io/fstype: ext4
reclaimPolicy: Delete
allowVolumeExpansion: true
mountOptions:
- discard
```
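For completeness, a PVC that consumes this StorageClass can be sketched as follows (the claim name and requested size are illustrative, not taken from the thread):

```yaml
# Illustrative PVC bound to the csi-rbd-sc StorageClass defined above.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc-demo
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-rbd-sc
```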
  • humphery755 The problem has been solved. Add the following toleration configuration to ceph-csi-config.yaml:

    nodeplugin:
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      - key: CriticalAddonsOnly
        operator: Exists
      - effect: NoExecute
        key: node.kubernetes.io/not-ready
        operator: Exists
        tolerationSeconds: 60
      - effect: NoExecute
        key: node.kubernetes.io/unreachable
        operator: Exists
        tolerationSeconds: 60
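This matches the single-nodeplugin-pod symptom: master1 presumably carries the node-role.kubernetes.io/master:NoSchedule taint, so without these tolerations the DaemonSet never scheduled a csi-rbdplugin pod there, and the kubelet on master1 had no driver to register. A hedged sketch of how to confirm the taint before (or after) applying the fix:

```shell
# Show the taints on master1; node-role.kubernetes.io/master:NoSchedule
# is what kept the nodeplugin DaemonSet pod off this node.
kubectl describe node master1 | grep -i taints
```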

    Judging from the command-line output, nodelocaldns, ks-controller-manager, and ks-apiserver are in an abnormal state. Please post the events and logs of those pods so we can take a look.
