csz711
After installing K8s and KubeSphere, I found that 3 PVs are missing and I'd like to create them manually. Where can I get the YAML files for creating these PVs?
I tried writing them myself, but ran into all kinds of problems, so I'd like to know how to get the original files.
csz711 The number and purpose of the PVs depend on your KS cluster's nodes, components, and feature configuration. Why do you think 3 PVs are missing? Are some pods unhealthy?
stoneshi-yunify Right, there is a redis PVC but no PV, so the binding fails and redis cannot start.
csz711 Run kubectl describe pvc <redis pvc name> and take a look. Which storage plugin are you using? Check the storage plugin's logs as well. The redis PV is created dynamically through the PVC.
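For example, a minimal sketch for identifying the plugin and checking its logs (the grep pattern, namespace, and pod name below are placeholders and may differ on your cluster):

kubectl get sc                                       # the PROVISIONER column names the plugin; "(default)" marks the default StorageClass
kubectl get pods -A | grep -i provision              # locate the provisioner pod, e.g. an openebs-localpv-provisioner pod
kubectl -n <namespace> logs <provisioner-pod-name>   # inspect its logs for provisioning errors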
stoneshi-yunify
1. How can I tell which storage plugin is in use?
2. "The redis PV is created dynamically through the PVC": via the StorageClass? Is my only option to write a YAML file modeled on that? Below is the redis-pvc I found on a healthy environment:
bglab@master:~$ kubectl get pvc redis-pvc -n kubesphere-system -o yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"redis-pvc","namespace":"kubesphere-system"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"2Gi"}}}}
    pv.kubernetes.io/bind-completed: "yes"
    pv.kubernetes.io/bound-by-controller: "yes"
    volume.beta.kubernetes.io/storage-provisioner: openebs.io/local
    volume.kubernetes.io/selected-node: master
  creationTimestamp: "2020-11-12T08:32:37Z"
  finalizers:
  - kubernetes.io/pvc-protection
  name: redis-pvc
  namespace: kubesphere-system
  resourceVersion: "166574"
  selfLink: /api/v1/namespaces/kubesphere-system/persistentvolumeclaims/redis-pvc
  uid: 4dafdfe7-994a-463b-a561-aff1fb5e776f
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  storageClassName: local
  volumeMode: Filesystem
  volumeName: pvc-4dafdfe7-994a-463b-a561-aff1fb5e776f
status:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 2Gi
  phase: Bound
bglab@master:~$ kubectl describe pvc redis-pvc -n kubesphere-system
Name:          redis-pvc
Namespace:     kubesphere-system
StorageClass:  local
Status:        Bound
Volume:        pvc-4dafdfe7-994a-463b-a561-aff1fb5e776f
Labels:        <none>
Annotations:   kubectl.kubernetes.io/last-applied-configuration:
                 {"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"redis-pvc","namespace":"kubesphere-system"},"spec":...
               pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-provisioner: openebs.io/local
               volume.kubernetes.io/selected-node: master
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      2Gi
Access Modes:  RWO
VolumeMode:    Filesystem
Mounted By:    redis-6fd6c6d6f9-6c8nd
Events:        <none>
3. When the environment is set up and resources like this PVC are generated, the YAML must be read from somewhere, right? Where can we find it? If we can find it, we can simply apply that YAML file to regenerate the resource.
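For reference, one common way to recover such a file when the installer's original isn't at hand is to export the object from a healthy cluster and strip the server-populated fields (a sketch, not the installer's source file):

kubectl get pvc redis-pvc -n kubesphere-system -o yaml > redis-pvc.yaml
# Before applying this elsewhere, delete the fields the API server fills in:
# the whole status section; metadata.creationTimestamp, metadata.resourceVersion,
# metadata.selfLink, metadata.uid; spec.volumeName; and the pv.kubernetes.io/* annotations.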
Writing the YAML for the dynamically provisioned PVC is solved. Could you please take another look at the questions above?
1. YAML for dynamically generating the PVC (I still feel my hand-written YAML is shaky and worry that something incomplete will cause problems in the cluster; I'd still like to find the original files from kubekey or elsewhere):
bglab@master:~$ cat redis-pvc001.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  finalizers:
  - kubernetes.io/pvc-protection
  name: redis-pvc
  namespace: kubesphere-system
  selfLink: /api/v1/namespaces/kubesphere-system/persistentvolumeclaims/redis-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  storageClassName: local
  volumeMode: Filesystem
2. Procedure for recreating the PV/PVC
(1) Delete the old PVC:
kubectl delete pvc <pvc name> -n <namespace>
Otherwise, applying the YAML file only reports configured, and apparently does not create a PV immediately?
bglab@master:~$ kubectl apply -f prometheus-k8s-db-prometheus-k8s-1.yaml
persistentvolumeclaim/prometheus-k8s-db-prometheus-k8s-1 configured
(2) Apply the YAML file to generate the PV and PVC (note: this time the output is created):
bglab@master:~$ kubectl apply -f prometheus-k8s-db-prometheus-k8s-1.yaml
persistentvolumeclaim/prometheus-k8s-db-prometheus-k8s-1 created
(3) Check the PV, the PVC, and the pod; verify that the PV and PVC bound successfully and that the PVC is mounted into the pod (see the verification sketch below).
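For step (3), a minimal verification sketch (the namespace and pod name are placeholders). Note that the local StorageClass here uses WaitForFirstConsumer binding, so the PV is only provisioned once a pod that uses the PVC is scheduled; that is also why apply alone can report configured without a PV appearing right away:

kubectl get pv                                           # the PV should be Bound, with CLAIM <namespace>/<pvc-name>
kubectl get pvc -n <namespace>                           # the PVC should be Bound
kubectl describe pod <pod-name> -n <namespace>           # check Volumes, Mounts, and Events for mount errors
kubectl get sc local -o jsonpath='{.volumeBindingMode}'  # prints WaitForFirstConsumer on this cluster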
csz711 If you have never configured a storage plugin yourself, the default plugin is OpenEBS, and the default StorageClass is the local one in your screenshot.
What is the current status of redis-pvc on your cluster? Run describe on redis-pvc and check. Normally the status is Bound (meaning the PVC was created successfully and is bound to a PV), but that does not mean it has been mounted into a pod yet.
Next you need to find out why the PV was not created successfully; if that root cause isn't found, re-applying the YAML may fail again.
Paste the output of the following commands:
csz711 After kubectl apply -f redis-pvc001.yaml, did the PVC get created? What is its status now? The finalizers and selfLink in the PVC YAML can both be omitted.
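For example, a minimal version of the same PVC with those fields removed (a sketch assembled from the values in redis-pvc001.yaml above):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redis-pvc
  namespace: kubesphere-system
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  storageClassName: local
  volumeMode: Filesystem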
stoneshi-yunify
redis-pvc looks normal now:
bglab@master:~$ kubectl get sc -A
NAME              PROVISIONER        RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
local (default)   openebs.io/local   Delete          WaitForFirstConsumer   false                  5d20h
bglab@master:~$ kubectl -n kubesphere-system describe pvc redis-pvc
Name:          redis-pvc
Namespace:     kubesphere-system
StorageClass:  local
Status:        Bound
Volume:        pvc-3513125b-2514-4c4a-90e8-9d47b210ff66
Labels:        <none>
Annotations:   kubectl.kubernetes.io/last-applied-configuration:
                 {"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"finalizers":["kubernetes.io/pvc-protection"],"name":"redis...
               pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      2Gi
Access Modes:  RWO
VolumeMode:    Filesystem
Mounted By:    redis-6fd6c6d6f9-zdpqc
Events:
  Type     Reason                 Age   From                                                                                                 Message
  ----     ------                 ----  ----                                                                                                 -------
  Warning  ClaimLost              33m   persistentvolume-controller                                                                          Bound claim has lost reference to PersistentVolume. Data on the volume is lost!
  Normal   Provisioning           33m   openebs.io/local_openebs-localpv-provisioner-84956ddb89-n8mjs_95e8738f-208c-4178-a487-d439305dd8fa  External provisioner is provisioning volume for claim "kubesphere-system/redis-pvc"
  Normal   ProvisioningSucceeded  33m   openebs.io/local_openebs-localpv-provisioner-84956ddb89-n8mjs_95e8738f-208c-4178-a487-d439305dd8fa  Successfully provisioned volume pvc-3513125b-2514-4c4a-90e8-9d47b210ff66
bglab@master:~$ kubectl -n kubesphere-system describe pod redis-6fd6c6d6f9-zdpqc
Name:         redis-6fd6c6d6f9-zdpqc
Namespace:    kubesphere-system
Priority:     0
Node:         master/192.168.137.235
Start Time:   Mon, 16 Nov 2020 14:54:25 +0800
Labels:       app=redis
              pod-template-hash=6fd6c6d6f9
              tier=database
              version=redis-4.0
Annotations:  <none>
Status:       Running
IP:           10.233.64.17
IPs:
  IP:           10.233.64.17
Controlled By:  ReplicaSet/redis-6fd6c6d6f9
Containers:
  redis:
    Container ID:   docker://433c0939bb3f0315eedb42e5dbecf20139092ccb3b67446c62eb684b3ba65352
    Image:          redis:5.0.5-alpine
    Image ID:       docker-pullable://redis@sha256:50899ea1ceed33fa03232f3ac57578a424faa1742c1ac9c7a7bdb95cdf19b858
    Port:           6379/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Mon, 16 Nov 2020 14:54:35 +0800
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     1
      memory:  1000Mi
    Requests:
      cpu:        20m
      memory:     100Mi
    Environment:  <none>
    Mounts:
      /data from redis-pvc (rw,path="redis-data")
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-d7ngn (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  redis-pvc:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  redis-pvc
    ReadOnly:   false
  default-token-d7ngn:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-d7ngn
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     CriticalAddonsOnly
                 node-role.kubernetes.io/master:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age        From               Message
  ----    ------     ----       ----               -------
  Normal  Scheduled  <unknown>  default-scheduler  Successfully assigned kubesphere-system/redis-6fd6c6d6f9-zdpqc to master
  Normal  Pulled     2m56s      kubelet, master    Container image "redis:5.0.5-alpine" already present on machine
  Normal  Created    2m53s      kubelet, master    Created container redis
  Normal  Started    2m52s      kubelet, master    Started container redis
bglab@master:~$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                                             STORAGECLASS   REASON   AGE
pvc-05f8d4eb-1eab-4198-a337-38cf2e182028   2Gi        RWO            Delete           Bound    kubesphere-system/openldap-pvc-openldap-0                         local                   5d20h
pvc-12bb4e83-4a1d-45bb-acd8-a61d6f7e6e39   20Gi       RWO            Delete           Bound    kubesphere-monitoring-system/prometheus-k8s-db-prometheus-k8s-0   local                   95s
pvc-3513125b-2514-4c4a-90e8-9d47b210ff66   2Gi        RWO            Delete           Bound    kubesphere-system/redis-pvc                                       local                   43m
pvc-9d5973e7-b753-46c7-98c0-2fac0de1ce9b   20Gi       RWO            Delete           Bound    kubesphere-monitoring-system/prometheus-k8s-db-prometheus-k8s-1   local                   30s
bglab@master:~$ kubectl get pvc -A
NAMESPACE                      NAME                                 STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
kubesphere-monitoring-system   prometheus-k8s-db-prometheus-k8s-0   Bound    pvc-12bb4e83-4a1d-45bb-acd8-a61d6f7e6e39   20Gi       RWO            local          107s
kubesphere-monitoring-system   prometheus-k8s-db-prometheus-k8s-1   Bound    pvc-9d5973e7-b753-46c7-98c0-2fac0de1ce9b   20Gi       RWO            local          45s
kubesphere-system              openldap-pvc-openldap-0              Bound    pvc-05f8d4eb-1eab-4198-a337-38cf2e182028   2Gi        RWO            local          5d20h
kubesphere-system              redis-pvc                            Bound    pvc-3513125b-2514-4c4a-90e8-9d47b210ff66   2Gi        RWO            local          44m
csz711 Yes, redis is back to normal now.