weiliang-ms
stoneshi-yunify
It has definitely been more than two minutes.
The version is latest, pulled within the last couple of days.
stoneshi-yunify
weiliang-ms My lab is also running the latest, kubesphere/rbd-provisioner:v2.1.1-k8s1.11, and I haven't hit the problem you describe. It's probably still a misconfiguration on the Ceph server side; KubeSphere hasn't made any changes to the rbd provisioner. Alternatively, switch to ceph-csi and try again: https://github.com/ceph/ceph-csi
stoneshi-yunify
Switching to CSI fixed it; no issues found so far. Thanks~
weiliang-ms Would you be interested in sharing your steps on the forum?
Implemented by following the original blog post.
Local environment versions:
My steps are as follows, for reference:
1. Create a Ceph pool
Run on a Ceph node to create the pool and set its quota:
ceph osd pool create kubernetes 512 512
ceph osd pool set-quota kubernetes max_objects 1000000
ceph osd pool set-quota kubernetes max_bytes 2T
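Optionally verify the pool before continuing; note that the upstream ceph-csi docs also initialize a freshly created RBD pool:
ceph osd pool get-quota kubernetes
rbd pool init kubernetes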
2. Create a new user for Kubernetes and ceph-csi
[root@node3 kubernetes]# ceph auth get-or-create client.kubernetes mon 'profile rbd' osd 'profile rbd pool=kubernetes' mgr 'profile rbd pool=kubernetes'
[client.kubernetes]
    key = AQAi4PNfwri8IxAAvtIQgCNGIMQAjMytoTXeSw==
Generate /etc/ceph/ceph.client.kubernetes.keyring:
ceph auth get client.kubernetes >> /etc/ceph/ceph.client.kubernetes.keyring
Then scp /etc/ceph/ceph.client.kubernetes.keyring to /etc/ceph/ on every k8s node, as in the loop below.
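A minimal copy loop, assuming placeholder worker hostnames node1/node2 (substitute your own):
for node in node1 node2; do
  scp /etc/ceph/ceph.client.kubernetes.keyring ${node}:/etc/ceph/
done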
3. Get the Ceph cluster ID
ceph mon dump
Output:
dumped monmap epoch 3
epoch 3
fsid 1fc9f495-498c-4fe2-b3d5-80a041bc5c49
last_changed 2020-12-21 18:53:05.535581
created 2020-12-21 18:40:09.332030
min_mon_release 14 (nautilus)
0: [v2:192.168.1.1:3300/0,v1:192.168.1.1:6789/0] mon.node5
1: [v2:192.168.1.2:3300/0,v1:192.168.1.2:6789/0] mon.node4
2: [v2:192.168.1.3:3300/0,v1:192.168.1.3:6789/0] mon.node3
The cluster ID is the fsid: 1fc9f495-498c-4fe2-b3d5-80a041bc5c49
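If only the fsid is needed, it can also be printed directly:
ceph fsid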
4. Download the following images, re-tag them, and push them to the local registry (the cephcsi image is shared by several containers, so each image only needs to be pulled once):
k8s.gcr.io/sig-storage/csi-provisioner:v2.0.4
k8s.gcr.io/sig-storage/csi-snapshotter:v3.0.2
k8s.gcr.io/sig-storage/csi-attacher:v3.0.2
k8s.gcr.io/sig-storage/csi-resizer:v1.0.1
k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.0.1
quay.io/cephcsi/cephcsi:v3.2.0
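A pull/re-tag/push sketch, assuming a hypothetical private registry at registry.local:5000 (substitute your own address):
for image in \
    k8s.gcr.io/sig-storage/csi-provisioner:v2.0.4 \
    k8s.gcr.io/sig-storage/csi-snapshotter:v3.0.2 \
    k8s.gcr.io/sig-storage/csi-attacher:v3.0.2 \
    k8s.gcr.io/sig-storage/csi-resizer:v1.0.1 \
    k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.0.1 \
    quay.io/cephcsi/cephcsi:v3.2.0; do
  docker pull ${image}
  # ${image#*/} strips the source registry, e.g. sig-storage/csi-provisioner:v2.0.4
  docker tag ${image} registry.local:5000/${image#*/}
  docker push registry.local:5000/${image#*/}
done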
5. Download the ceph-csi offline bundle. Upload ceph-csi-3.2.0.tar.gz to a k8s master node and extract it:
tar zxvf ceph-csi-3.2.0.tar.gz
cd ceph-csi-3.2.0/deploy/rbd/kubernetes
6. Create a dedicated namespace for ceph-csi:
kubectl create ns ceph-csi
7. Edit the ConfigMap: set clusterID and monitors using the output from step 3:
cat <<EOF > csi-config-map.yaml
---
apiVersion: v1
kind: ConfigMap
data:
  config.json: |-
    [
      {
        "clusterID": "1fc9f495-498c-4fe2-b3d5-80a041bc5c49",
        "monitors": [
          "192.168.1.1:6789",
          "192.168.1.2:6789",
          "192.168.1.3:6789"
        ]
      }
    ]
metadata:
  name: ceph-csi-config
EOF
Apply it:
kubectl -n ceph-csi apply -f csi-config-map.yaml
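A quick check that the ConfigMap landed in the right namespace:
kubectl -n ceph-csi get configmap ceph-csi-config -o yaml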
8. Create the ceph-csi cephx Secret; userKey is the key returned in step 2. Note the namespace must match where the StorageClass will look for the secret, i.e. ceph-csi:
cat <<EOF > csi-rbd-secret.yaml
---
apiVersion: v1
kind: Secret
metadata:
  name: kubernetes-csi-rbd-secret
  namespace: ceph-csi
stringData:
  userID: kubernetes
  userKey: AQAi4PNfwri8IxAAvtIQgCNGIMQAjMytoTXeSw==
EOF
Apply it:
kubectl -n ceph-csi apply -f csi-rbd-secret.yaml
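And likewise verify the Secret:
kubectl -n ceph-csi get secret kubernetes-csi-rbd-secret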
9. Configure RBAC for the ceph-csi plugins. Switch the namespace in the bundled manifests from default to ceph-csi, then apply the RBAC manifests:
sed -i "s/namespace: default/namespace: ceph-csi/g" $(grep -rl "namespace: default" ./)
sed -i -e "/^kind: ServiceAccount/{N;N;a\ namespace: ceph-csi
}" $(egrep -rl "^kind: ServiceAccount" ./)
kubectl apply -f csi-provisioner-rbac.yaml
kubectl apply -f csi-nodeplugin-rbac.yaml
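A quick check that the namespaced RBAC objects all ended up in ceph-csi:
kubectl -n ceph-csi get serviceaccount,role,rolebinding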
10. Create the PodSecurityPolicy objects:
kubectl create -f csi-provisioner-psp.yaml
kubectl create -f csi-nodeplugin-psp.yaml
11. Deploy the CSI sidecars. Change the images in csi-rbdplugin-provisioner.yaml and csi-rbdplugin.yaml to point at the local private registry (a scripted version follows the listing below):
[root@node3 kubernetes]# cat csi-rbdplugin.yaml |grep "image:"
image: k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.0.1
image: quay.io/cephcsi/cephcsi:v3.2.0
image: quay.io/cephcsi/cephcsi:v3.2.0
[root@node3 kubernetes]# cat csi-rbdplugin-provisioner.yaml |grep "image:"
image: k8s.gcr.io/sig-storage/csi-provisioner:v2.0.4
image: k8s.gcr.io/sig-storage/csi-snapshotter:v3.0.2
image: k8s.gcr.io/sig-storage/csi-attacher:v3.0.2
image: k8s.gcr.io/sig-storage/csi-resizer:v1.0.1
image: quay.io/cephcsi/cephcsi:v3.2.0
image: quay.io/cephcsi/cephcsi:v3.2.0
image: quay.io/cephcsi/cephcsi:v3.2.0
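The rewrite can also be scripted with sed; a sketch, again assuming the hypothetical registry.local:5000 prefix from step 4:
sed -i \
    -e "s#k8s.gcr.io/sig-storage#registry.local:5000/sig-storage#g" \
    -e "s#quay.io/cephcsi#registry.local:5000/cephcsi#g" \
    csi-rbdplugin.yaml csi-rbdplugin-provisioner.yaml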
Apply both:
kubectl -n ceph-csi apply -f csi-rbdplugin-provisioner.yaml
kubectl -n ceph-csi apply -f csi-rbdplugin.yaml
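Then wait for the provisioner Deployment and the plugin DaemonSet pods to go Running:
kubectl -n ceph-csi get pods -o wide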
12. Create the StorageClass. Generate the manifest; clusterID again comes from the step 3 output:
cat <<EOF > csi-rbd-sc.yaml
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: kubernetes-csi-rbd-sc
provisioner: rbd.csi.ceph.com
parameters:
  clusterID: 1fc9f495-498c-4fe2-b3d5-80a041bc5c49
  pool: kubernetes
  imageFeatures: layering
  csi.storage.k8s.io/provisioner-secret-name: kubernetes-csi-rbd-secret
  csi.storage.k8s.io/provisioner-secret-namespace: ceph-csi
  csi.storage.k8s.io/controller-expand-secret-name: kubernetes-csi-rbd-secret
  csi.storage.k8s.io/controller-expand-secret-namespace: ceph-csi
  csi.storage.k8s.io/node-stage-secret-name: kubernetes-csi-rbd-secret
  csi.storage.k8s.io/node-stage-secret-namespace: ceph-csi
  csi.storage.k8s.io/fstype: ext4
reclaimPolicy: Delete
allowVolumeExpansion: true
mountOptions:
  - discard
EOF
Create the StorageClass:
kubectl -n ceph-csi apply -f csi-rbd-sc.yaml
13. Check the StorageClass:
[root@node3 kubernetes]# kubectl get sc
NAME                    PROVISIONER        RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
hsa-ceph-class          ceph.com/rbd       Delete          Immediate              false                  7d23h
hsa-csi-rbd-sc          rbd.csi.ceph.com   Delete          Immediate              true                   5d20h
kubernetes-csi-rbd-sc   rbd.csi.ceph.com   Delete          Immediate              true                   18m
local (default)         openebs.io/local   Delete          WaitForFirstConsumer   false                  15d
14. Log in to the KubeSphere console and check the storage classes there.
15. Create a storage volume.
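For example, a minimal PVC sketch against the StorageClass created above (the claim name and size are placeholders):
cat <<EOF | kubectl apply -f -
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc-test
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: kubernetes-csi-rbd-sc
EOF
kubectl get pvc rbd-pvc-test
The claim should go Bound shortly, and a matching image appears in the Ceph pool (rbd ls kubernetes).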
weiliang-ms nice~ Is it not finished yet? Step 8 still looks empty to me.
Feynman
Yeah, stepped out for dinner; I'll fill it in shortly.
A question: my plugin Deployment fails to create, and describing it doesn't show any concrete error either. What might be the cause?
[root@con1 rbd]# kubectl describe deployments.apps -n ceph-csi csi-rbdplugin-provisioner
Name:                   csi-rbdplugin-provisioner
Namespace:              ceph-csi
CreationTimestamp:      Sun, 21 Nov 2021 19:25:37 +0800
Labels:                 <none>
Annotations:            deployment.kubernetes.io/revision: 3
Selector:               app=csi-rbdplugin-provisioner
Replicas:               3 desired | 0 updated | 0 total | 0 available | 4 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:           app=csi-rbdplugin-provisioner
  Annotations:      kubesphere.io/restartedAt: 2021-11-21T12:21:25.952Z
  Service Account:  rbd-csi-provisioner
  Containers:
   csi-provisioner:
    Image:      k8s.gcr.io/sig-storage/csi-provisioner:v2.2.2
    Port:       <none>
    Host Port:  <none>
    Args:
      --csi-address=$(ADDRESS)
      --v=5
      --timeout=150s
      --retry-interval-start=500ms
      --leader-election=true
      --feature-gates=Topology=false
      --default-fstype=ext4
      --extra-create-metadata=true
    Environment:
      ADDRESS:  unix:///csi/csi-provisioner.sock
    Mounts:
      /csi from socket-dir (rw)
   csi-snapshotter:
    Image:      k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1
    Port:       <none>
    Host Port:  <none>
    Args:
      --csi-address=$(ADDRESS)
      --v=5
      --timeout=150s
      --leader-election=true
    Environment:
      ADDRESS:  unix:///csi/csi-provisioner.sock
    Mounts:
      /csi from socket-dir (rw)
   csi-attacher:
    Image:      k8s.gcr.io/sig-storage/csi-attacher:v3.2.1
    Port:       <none>
    Host Port:  <none>
    Args:
      --v=5
      --csi-address=$(ADDRESS)
      --leader-election=true
      --retry-interval-start=500ms
    Environment:
      ADDRESS:  /csi/csi-provisioner.sock
    Mounts:
      /csi from socket-dir (rw)
   csi-resizer:
    Image:      k8s.gcr.io/sig-storage/csi-resizer:v1.2.0
    Port:       <none>
    Host Port:  <none>
    Args:
      --csi-address=$(ADDRESS)
      --v=5
      --timeout=150s
      --leader-election
      --retry-interval-start=500ms
      --handle-volume-inuse-error=false
    Environment:
      ADDRESS:  unix:///csi/csi-provisioner.sock
    Mounts:
      /csi from socket-dir (rw)
   csi-rbdplugin:
    Image:      quay.io/cephcsi/cephcsi:v3.4.0
    Port:       <none>
    Host Port:  <none>
    Args:
      --nodeid=$(NODE_ID)
      --type=rbd
      --controllerserver=true
      --endpoint=$(CSI_ENDPOINT)
      --v=5
      --drivername=rbd.csi.ceph.com
      --pidlimit=-1
      --rbdhardmaxclonedepth=8
      --rbdsoftmaxclonedepth=4
      --enableprofiling=false
    Environment:
      POD_IP:        (v1:status.podIP)
      NODE_ID:       (v1:spec.nodeName)
      CSI_ENDPOINT:  unix:///csi/csi-provisioner.sock
    Mounts:
      /csi from socket-dir (rw)
      /dev from host-dev (rw)
      /etc/ceph-csi-config/ from ceph-csi-config (rw)
      /lib/modules from lib-modules (ro)
      /sys from host-sys (rw)
      /tmp/csi/keys from keys-tmp-dir (rw)
   csi-rbdplugin-controller:
    Image:      quay.io/cephcsi/cephcsi:v3.4.0
    Port:       <none>
    Host Port:  <none>
    Args:
      --type=controller
      --v=5
      --drivername=rbd.csi.ceph.com
      --drivernamespace=$(DRIVER_NAMESPACE)
    Environment:
      DRIVER_NAMESPACE:  (v1:metadata.namespace)
    Mounts:
      /etc/ceph-csi-config/ from ceph-csi-config (rw)
      /tmp/csi/keys from keys-tmp-dir (rw)
   liveness-prometheus:
    Image:      quay.io/cephcsi/cephcsi:v3.4.0
    Port:       <none>
    Host Port:  <none>
    Args:
      --type=liveness
      --endpoint=$(CSI_ENDPOINT)
      --metricsport=8680
      --metricspath=/metrics
      --polltime=60s
      --timeout=3s
    Environment:
      CSI_ENDPOINT:  unix:///csi/csi-provisioner.sock
      POD_IP:        (v1:status.podIP)
    Mounts:
      /csi from socket-dir (rw)
  Volumes:
   host-dev:
    Type:          HostPath (bare host directory volume)
    Path:          /dev
    HostPathType:
   host-sys:
    Type:          HostPath (bare host directory volume)
    Path:          /sys
    HostPathType:
   lib-modules:
    Type:          HostPath (bare host directory volume)
    Path:          /lib/modules
    HostPathType:
   socket-dir:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     Memory
    SizeLimit:  <unset>
   ceph-csi-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      ceph-csi-config
    Optional:  false
   keys-tmp-dir:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     Memory
    SizeLimit:  <unset>
  Priority Class Name:  system-cluster-critical
Conditions:
  Type            Status  Reason
  ----            ------  ------
  Available       False   MinimumReplicasUnavailable
  ReplicaFailure  True    FailedCreate
  Progressing     False   ProgressDeadlineExceeded
OldReplicaSets:  csi-rbdplugin-provisioner-69dcf9f769 (0/2 replicas created), csi-rbdplugin-provisioner-6cf5fdb898 (0/1 replicas created)
NewReplicaSet:   csi-rbdplugin-provisioner-687b467d95 (0/1 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ---   ----                   -------
  Normal  ScalingReplicaSet  11m   deployment-controller  Scaled up replica set csi-rbdplugin-provisioner-6cf5fdb898 to 1
  Normal  ScalingReplicaSet  10m   deployment-controller  Scaled down replica set csi-rbdplugin-provisioner-69dcf9f769 to 2
  Normal  ScalingReplicaSet  10m   deployment-controller  Scaled up replica set csi-rbdplugin-provisioner-687b467d95 to 1
moyasu Take a look with describe pod.
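If no pods exist at all, the concrete FailedCreate error is usually recorded on the ReplicaSet rather than on a pod; a quick way to surface it:
kubectl -n ceph-csi describe rs -l app=csi-rbdplugin-provisioner
Typical causes are PodSecurityPolicy/admission rejections, a ResourceQuota, or a missing ServiceAccount.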
stoneshi-yunify The pods never got created.
Also, these k8s.gcr.io images can't be pulled here; where can they be pulled from?
moyasu They can be pulled once you're on a connection (e.g. a proxy) that can reach k8s.gcr.io.
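If only one machine has such access, the images can also be moved offline with docker save/load; a sketch for one image:
docker pull k8s.gcr.io/sig-storage/csi-provisioner:v2.2.2
docker save -o csi-provisioner.tar k8s.gcr.io/sig-storage/csi-provisioner:v2.2.2
# copy the tarball to the offline node, then:
docker load -i csi-provisioner.tar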