• Installation & Deployment v2.1.1
  • KubeSphere 2.1.1 Multi-node mode: one error is reported during the ks-installer phase

I am installing a KubeSphere 2.1.1 cluster online on three CentOS 7.7 virtual machines. The installation went smoothly at first and the installation log showed no serious errors (apart from the entries marked "ignoring"), but once it reached the ks-installer stage it reported one error that I cannot identify. Could someone help me analyze it? The log is as follows:
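For reference, the log below was captured by tailing the ks-installer pod with a command along these lines (a sketch; the app=ks-install label is the one the installer Deployment normally carries, so verify it first with kubectl get pod -n kubesphere-system --show-labels):

[root@k8sphere01 ]# kubectl logs -n kubesphere-system \
    $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
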
2020-05-18T08:55:31Z INFO : Use temporary dir: /tmp/shell-operator
2020-05-18T08:55:31Z INFO : Initialize hooks manager ...
2020-05-18T08:55:31Z INFO : Search and load hooks ...
2020-05-18T08:55:31Z INFO : Load hook config from '/hooks/kubesphere/installRunner.py'
2020-05-18T08:55:31Z INFO : Initializing schedule manager ...
2020-05-18T08:55:31Z INFO : KUBE Init Kubernetes client
2020-05-18T08:55:31Z INFO : KUBE-INIT Kubernetes client is configured successfully
2020-05-18T08:55:31Z INFO : MAIN: run main loop
2020-05-18T08:55:31Z INFO : MAIN: add onStartup tasks
2020-05-18T08:55:31Z INFO : QUEUE add all HookRun@OnStartup
2020-05-18T08:55:31Z INFO : Running schedule manager ...
2020-05-18T08:55:31Z INFO : MSTOR Create new metric shell_operator_live_ticks
2020-05-18T08:55:31Z INFO : MSTOR Create new metric shell_operator_tasks_queue_length
2020-05-18T08:55:31Z INFO : GVR for kind 'ConfigMap' is /v1, Resource=configmaps
2020-05-18T08:55:31Z INFO : EVENT Kube event '166654d5-f534-4103-a3bb-0b23acad9863'
2020-05-18T08:55:31Z INFO : QUEUE add TASK_HOOK_RUN@KUBE_EVENTS kubesphere/installRunner.py
2020-05-18T08:55:34Z INFO : TASK_RUN HookRun@KUBE_EVENTS kubesphere/installRunner.py
2020-05-18T08:55:34Z INFO : Running hook 'kubesphere/installRunner.py' binding 'KUBE_EVENTS' ...
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that
the implicit localhost does not match 'all'

PLAY [localhost] ***************************************************************

TASK [download : include_tasks] ************************************************
skipping: [localhost]

TASK [download : Download items] ***********************************************
skipping: [localhost]

TASK [download : Sync container] ***********************************************
skipping: [localhost]

TASK [kubesphere-defaults : Configure defaults] ********************************
ok: [localhost] => {
"msg": "Check roles/kubesphere-defaults/defaults/main.yml"
}

TASK [preinstall : check k8s version] ******************************************
changed: [localhost]

TASK [preinstall : init k8s version] *******************************************
ok: [localhost]

TASK [preinstall : Stop if kuernetes version is nonsupport] ********************
ok: [localhost] => {
"changed": false,
"msg": "All assertions passed"
}

TASK [preinstall : check helm status] ******************************************
changed: [localhost]

TASK [preinstall : Stop if Helm is not available] ******************************
ok: [localhost] => {
"changed": false,
"msg": "All assertions passed"
}

TASK [preinstall : check storage class] ****************************************
changed: [localhost]

TASK [preinstall : Stop if StorageClass was not found] *************************
ok: [localhost] => {
"changed": false,
"msg": "All assertions passed"
}

TASK [preinstall : check default storage class] ********************************
changed: [localhost]

TASK [preinstall : Stop if defaultStorageClass was not found] ******************
skipping: [localhost]

PLAY RECAP *********************************************************************
localhost : ok=9 changed=4 unreachable=0 failed=0 skipped=4 rescued=0 ignored=0

[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that
the implicit localhost does not match 'all'

PLAY [localhost] ***************************************************************

TASK [download : include_tasks] ************************************************
skipping: [localhost]

TASK [download : Download items] ***********************************************
skipping: [localhost]

TASK [download : Sync container] ***********************************************
skipping: [localhost]

TASK [kubesphere-defaults : Configure defaults] ********************************
ok: [localhost] => {
"msg": "Check roles/kubesphere-defaults/defaults/main.yml"
}

TASK [metrics-server : Metrics-Server | Checking old installation files] *******
ok: [localhost]

TASK [metrics-server : Metrics-Server | deleting old prometheus-operator] ******
skipping: [localhost]

TASK [metrics-server : Metrics-Server | deleting old metrics-server files] *****
[DEPRECATION WARNING]: evaluating {'failed': False, u'stat': {u'exists':
False}, u'changed': False} as a bare variable, this behaviour will go away and
you might need to add |bool to the expression in the future. Also see
CONDITIONAL_BARE_VARS configuration toggle.. This feature will be removed in
version 2.12. Deprecation warnings can be disabled by setting
deprecation_warnings=False in ansible.cfg.
ok: [localhost] => (item=metrics-server)

TASK [metrics-server : Metrics-Server | Getting metrics-server installation files] ***
changed: [localhost]

TASK [metrics-server : Metrics-Server | Creating manifests] ********************
changed: [localhost] => (item={u'type': u'config', u'name': u'values', u'file': u'values.yaml'})

TASK [metrics-server : Metrics-Server | Check Metrics-Server] ******************
changed: [localhost]

TASK [metrics-server : Metrics-Server | Installing metrics-server] *************
changed: [localhost]

TASK [metrics-server : Metrics-Server | Installing metrics-server retry] *******
skipping: [localhost]

TASK [metrics-server : Metrics-Server | Waitting for v1beta1.metrics.k8s.io ready] ***
FAILED - RETRYING: Metrics-Server | Waitting for v1beta1.metrics.k8s.io ready (60 retries left).
changed: [localhost]

PLAY RECAP *********************************************************************
localhost : ok=8 changed=5 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0

[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that
the implicit localhost does not match 'all'

PLAY [localhost] ***************************************************************

TASK [download : include_tasks] ************************************************
skipping: [localhost]

TASK [download : Download items] ***********************************************
skipping: [localhost]

TASK [download : Sync container] ***********************************************
skipping: [localhost]

TASK [kubesphere-defaults : Configure defaults] ********************************
ok: [localhost] => {
"msg": "Check roles/kubesphere-defaults/defaults/main.yml"
}

TASK [common : Kubesphere | Check kube-node-lease namespace] *******************
changed: [localhost]

TASK [common : KubeSphere | Get system namespaces] *****************************
ok: [localhost]

TASK [common : set_fact] *******************************************************
ok: [localhost]

TASK [common : debug] **********************************************************
ok: [localhost] => {
"msg": [
"kubesphere-system",
"kubesphere-controls-system",
"kubesphere-monitoring-system",
"kube-node-lease",
"kubesphere-logging-system",
"openpitrix-system",
"istio-system",
"kubesphere-alerting-system",
"istio-system"
]
}

TASK [common : KubeSphere | Create kubesphere namespace] ***********************
changed: [localhost] => (item=kubesphere-system)
changed: [localhost] => (item=kubesphere-controls-system)
changed: [localhost] => (item=kubesphere-monitoring-system)
changed: [localhost] => (item=kube-node-lease)
changed: [localhost] => (item=kubesphere-logging-system)
changed: [localhost] => (item=openpitrix-system)
changed: [localhost] => (item=istio-system)
changed: [localhost] => (item=kubesphere-alerting-system)
changed: [localhost] => (item=istio-system)

TASK [common : KubeSphere | Labeling system-workspace] *************************
changed: [localhost] => (item=default)
changed: [localhost] => (item=kube-public)
changed: [localhost] => (item=kube-system)
changed: [localhost] => (item=kubesphere-system)
changed: [localhost] => (item=kubesphere-controls-system)
changed: [localhost] => (item=kubesphere-monitoring-system)
changed: [localhost] => (item=kube-node-lease)
changed: [localhost] => (item=kubesphere-logging-system)
changed: [localhost] => (item=openpitrix-system)
changed: [localhost] => (item=istio-system)
changed: [localhost] => (item=kubesphere-alerting-system)
changed: [localhost] => (item=istio-system)

TASK [common : KubeSphere | Create ImagePullSecrets] ***************************
changed: [localhost] => (item=default)
changed: [localhost] => (item=kube-public)
changed: [localhost] => (item=kube-system)
changed: [localhost] => (item=kubesphere-system)
changed: [localhost] => (item=kubesphere-controls-system)
changed: [localhost] => (item=kubesphere-monitoring-system)
changed: [localhost] => (item=kube-node-lease)
changed: [localhost] => (item=kubesphere-logging-system)
changed: [localhost] => (item=openpitrix-system)
changed: [localhost] => (item=istio-system)
changed: [localhost] => (item=kubesphere-alerting-system)
changed: [localhost] => (item=istio-system)

TASK [common : KubeSphere | Getting kubernetes master num] *********************
changed: [localhost]

TASK [common : KubeSphere | Setting master num] ********************************
ok: [localhost]

TASK [common : Kubesphere | Getting common component installation files] *******
changed: [localhost] => (item=common)
changed: [localhost] => (item=ks-crds)

TASK [common : KubeSphere | Create KubeSphere crds] ****************************
changed: [localhost]

TASK [common : Kubesphere | Checking openpitrix common component] **************
changed: [localhost]

TASK [common : include_tasks] **************************************************
skipping: [localhost] => (item={u'ks': u'mysql-pvc', u'op': u'openpitrix-db'})
skipping: [localhost] => (item={u'ks': u'etcd-pvc', u'op': u'openpitrix-etcd'})

TASK [common : Getting PersistentVolumeName (mysql)] ***************************
skipping: [localhost]

TASK [common : Getting PersistentVolumeSize (mysql)] ***************************
skipping: [localhost]

TASK [common : Setting PersistentVolumeName (mysql)] ***************************
skipping: [localhost]

TASK [common : Setting PersistentVolumeSize (mysql)] ***************************
skipping: [localhost]

TASK [common : Getting PersistentVolumeName (etcd)] ****************************
skipping: [localhost]

TASK [common : Getting PersistentVolumeSize (etcd)] ****************************
skipping: [localhost]

TASK [common : Setting PersistentVolumeName (etcd)] ****************************
skipping: [localhost]

TASK [common : Setting PersistentVolumeSize (etcd)] ****************************
skipping: [localhost]

TASK [common : Kubesphere | Check mysql PersistentVolumeClaim] *****************
fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/kubectl get pvc -n kubesphere-system mysql-pvc -o jsonpath='{.status.capacity.storage}'\n", "delta": "0:00:00.550233", "end": "2020-05-18 08:56:41.354430", "msg": "non-zero return code", "rc": 1, "start": "2020-05-18 08:56:40.804197", "stderr": "Error from server (NotFound): persistentvolumeclaims \"mysql-pvc\" not found", "stderr_lines": ["Error from server (NotFound): persistentvolumeclaims \"mysql-pvc\" not found"], "stdout": "", "stdout_lines": []}
...ignoring

TASK [common : Kubesphere | Setting mysql db pv size] **************************
skipping: [localhost]

TASK [common : Kubesphere | Check redis PersistentVolumeClaim] *****************
fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/kubectl get pvc -n kubesphere-system redis-pvc -o jsonpath='{.status.capacity.storage}'\n", "delta": "0:00:00.583233", "end": "2020-05-18 08:56:42.150315", "msg": "non-zero return code", "rc": 1, "start": "2020-05-18 08:56:41.567082", "stderr": "Error from server (NotFound): persistentvolumeclaims \"redis-pvc\" not found", "stderr_lines": ["Error from server (NotFound): persistentvolumeclaims \"redis-pvc\" not found"], "stdout": "", "stdout_lines": []}
...ignoring

TASK [common : Kubesphere | Setting redis db pv size] **************************
skipping: [localhost]

TASK [common : Kubesphere | Check minio PersistentVolumeClaim] *****************
fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/kubectl get pvc -n kubesphere-system minio -o jsonpath='{.status.capacity.storage}'\n", "delta": "0:00:00.564698", "end": "2020-05-18 08:56:42.972137", "msg": "non-zero return code", "rc": 1, "start": "2020-05-18 08:56:42.407439", "stderr": "Error from server (NotFound): persistentvolumeclaims \"minio\" not found", "stderr_lines": ["Error from server (NotFound): persistentvolumeclaims \"minio\" not found"], "stdout": "", "stdout_lines": []}
...ignoring

TASK [common : Kubesphere | Setting minio pv size] *****************************
skipping: [localhost]

TASK [common : Kubesphere | Check openldap PersistentVolumeClaim] **************
fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/kubectl get pvc -n kubesphere-system openldap-pvc-openldap-0 -o jsonpath='{.status.capacity.storage}'\n", "delta": "0:00:00.535660", "end": "2020-05-18 08:56:43.736696", "msg": "non-zero return code", "rc": 1, "start": "2020-05-18 08:56:43.201036", "stderr": "Error from server (NotFound): persistentvolumeclaims \"openldap-pvc-openldap-0\" not found", "stderr_lines": ["Error from server (NotFound): persistentvolumeclaims \"openldap-pvc-openldap-0\" not found"], "stdout": "", "stdout_lines": []}
...ignoring

TASK [common : Kubesphere | Setting openldap pv size] **************************
skipping: [localhost]

TASK [common : Kubesphere | Check etcd db PersistentVolumeClaim] ***************
fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/kubectl get pvc -n kubesphere-system etcd-pvc -o jsonpath='{.status.capacity.storage}'\n", "delta": "0:00:00.537814", "end": "2020-05-18 08:56:44.494487", "msg": "non-zero return code", "rc": 1, "start": "2020-05-18 08:56:43.956673", "stderr": "Error from server (NotFound): persistentvolumeclaims \"etcd-pvc\" not found", "stderr_lines": ["Error from server (NotFound): persistentvolumeclaims \"etcd-pvc\" not found"], "stdout": "", "stdout_lines": []}
...ignoring

TASK [common : Kubesphere | Setting etcd pv size] ******************************
skipping: [localhost]

TASK [common : Kubesphere | Check redis ha PersistentVolumeClaim] **************
fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/kubectl get pvc -n kubesphere-system data-redis-ha-server-0 -o jsonpath='{.status.capacity.storage}'\n", "delta": "0:00:00.541028", "end": "2020-05-18 08:56:45.252659", "msg": "non-zero return code", "rc": 1, "start": "2020-05-18 08:56:44.711631", "stderr": "Error from server (NotFound): persistentvolumeclaims \"data-redis-ha-server-0\" not found", "stderr_lines": ["Error from server (NotFound): persistentvolumeclaims \"data-redis-ha-server-0\" not found"], "stdout": "", "stdout_lines": []}
...ignoring

TASK [common : Kubesphere | Setting redis ha pv size] **************************
skipping: [localhost]

TASK [common : Kubesphere | Creating common component manifests] ***************
changed: [localhost] => (item={u'path': u'etcd', u'file': u'etcd.yaml'})
changed: [localhost] => (item={u'name': u'mysql', u'file': u'mysql.yaml'})
changed: [localhost] => (item={u'path': u'redis', u'file': u'redis.yaml'})

TASK [common : Kubesphere | Creating mysql sercet] *****************************
changed: [localhost]

TASK [common : Kubesphere | Deploying etcd and mysql] **************************
skipping: [localhost] => (item=etcd.yaml)
skipping: [localhost] => (item=mysql.yaml)

TASK [common : Kubesphere | Getting minio installation files] ******************
skipping: [localhost] => (item=minio-ha)

TASK [common : Kubesphere | Creating manifests] ********************************
skipping: [localhost] => (item={u'name': u'custom-values-minio', u'file': u'custom-values-minio.yaml'})

TASK [common : Kubesphere | Check minio] ***************************************
skipping: [localhost]

TASK [common : Kubesphere | Deploy minio] **************************************
skipping: [localhost]

TASK [common : debug] **********************************************************
skipping: [localhost]

TASK [common : fail] ***********************************************************
skipping: [localhost]

TASK [common : Kubesphere | create minio config directory] *********************
skipping: [localhost]

TASK [common : Kubesphere | Creating common component manifests] ***************
skipping: [localhost] => (item={u'path': u'/root/.config/rclone', u'file': u'rclone.conf'})

TASK [common : include_tasks] **************************************************
skipping: [localhost] => (item=helm)
skipping: [localhost] => (item=vmbased)

TASK [common : Kubesphere | Check ha-redis] ************************************
skipping: [localhost]

TASK [common : Kubesphere | Getting redis installation files] ******************
skipping: [localhost] => (item=redis-ha)

TASK [common : Kubesphere | Creating manifests] ********************************
skipping: [localhost] => (item={u'name': u'custom-values-redis', u'file': u'custom-values-redis.yaml'})

TASK [common : Kubesphere | Check old redis status] ****************************
skipping: [localhost]

TASK [common : Kubesphere | Delete and backup old redis svc] *******************
skipping: [localhost]

TASK [common : Kubesphere | Deploying redis] ***********************************
skipping: [localhost]

TASK [common : Kubesphere | Getting redis PodIp] *******************************
skipping: [localhost]

TASK [common : Kubesphere | Creating redis migration script] *******************
skipping: [localhost] => (item={u'path': u'/etc/kubesphere', u'file': u'redisMigrate.py'})

TASK [common : Kubesphere | Check redis-ha status] *****************************
skipping: [localhost]

TASK [common : ks-logging | Migrating redis data] ******************************
skipping: [localhost]

TASK [common : Kubesphere | Disable old redis] *********************************
skipping: [localhost]

TASK [common : Kubesphere | Deploying redis] ***********************************
skipping: [localhost] => (item=redis.yaml)

TASK [common : Kubesphere | Getting openldap installation files] ***************
skipping: [localhost] => (item=openldap-ha)

TASK [common : Kubesphere | Creating manifests] ********************************
skipping: [localhost] => (item={u'name': u'custom-values-openldap', u'file': u'custom-values-openldap.yaml'})

TASK [common : Kubesphere | Check old openldap status] *************************
skipping: [localhost]

TASK [common : KubeSphere | Shutdown ks-account] *******************************
skipping: [localhost]

TASK [common : Kubesphere | Delete and backup old openldap svc] ****************
skipping: [localhost]

TASK [common : Kubesphere | Check openldap] ************************************
skipping: [localhost]

TASK [common : Kubesphere | Deploy openldap] ***********************************
skipping: [localhost]

TASK [common : Kubesphere | Load old openldap data] ****************************
skipping: [localhost]

TASK [common : Kubesphere | Check openldap-ha status] **************************
skipping: [localhost]

TASK [common : Kubesphere | Get openldap-ha pod list] **************************
skipping: [localhost]

TASK [common : Kubesphere | Get old openldap data] *****************************
skipping: [localhost]

TASK [common : Kubesphere | Migrating openldap data] ***************************
skipping: [localhost]

TASK [common : Kubesphere | Disable old openldap] ******************************
skipping: [localhost]

TASK [common : Kubesphere | Restart openldap] **********************************
skipping: [localhost]

TASK [common : KubeSphere | Restarting ks-account] *****************************
skipping: [localhost]

TASK [common : Kubesphere | Check ha-redis] ************************************
changed: [localhost]

TASK [common : Kubesphere | Getting redis installation files] ******************
skipping: [localhost] => (item=redis-ha)

TASK [common : Kubesphere | Creating manifests] ********************************
skipping: [localhost] => (item={u'name': u'custom-values-redis', u'file': u'custom-values-redis.yaml'})

TASK [common : Kubesphere | Check old redis status] ****************************
skipping: [localhost]

TASK [common : Kubesphere | Delete and backup old redis svc] *******************
skipping: [localhost]

TASK [common : Kubesphere | Deploying redis] ***********************************
skipping: [localhost]

TASK [common : Kubesphere | Getting redis PodIp] *******************************
skipping: [localhost]

TASK [common : Kubesphere | Creating redis migration script] *******************
skipping: [localhost] => (item={u'path': u'/etc/kubesphere', u'file': u'redisMigrate.py'})

TASK [common : Kubesphere | Check redis-ha status] *****************************
skipping: [localhost]

TASK [common : ks-logging | Migrating redis data] ******************************
skipping: [localhost]

TASK [common : Kubesphere | Disable old redis] *********************************
skipping: [localhost]

TASK [common : Kubesphere | Deploying redis] ***********************************
changed: [localhost] => (item=redis.yaml)

TASK [common : Kubesphere | Getting openldap installation files] ***************
changed: [localhost] => (item=openldap-ha)

TASK [common : Kubesphere | Creating manifests] ********************************
changed: [localhost] => (item={u'name': u'custom-values-openldap', u'file': u'custom-values-openldap.yaml'})

TASK [common : Kubesphere | Check old openldap status] *************************
changed: [localhost]

TASK [common : KubeSphere | Shutdown ks-account] *******************************
skipping: [localhost]

TASK [common : Kubesphere | Delete and backup old openldap svc] ****************
skipping: [localhost]

TASK [common : Kubesphere | Check openldap] ************************************
changed: [localhost]

TASK [common : Kubesphere | Deploy openldap] ***********************************
changed: [localhost]

TASK [common : Kubesphere | Load old openldap data] ****************************
skipping: [localhost]

TASK [common : Kubesphere | Check openldap-ha status] **************************
skipping: [localhost]

TASK [common : Kubesphere | Get openldap-ha pod list] **************************
skipping: [localhost]

TASK [common : Kubesphere | Get old openldap data] *****************************
skipping: [localhost]

TASK [common : Kubesphere | Migrating openldap data] ***************************
skipping: [localhost]

TASK [common : Kubesphere | Disable old openldap] ******************************
skipping: [localhost]

TASK [common : Kubesphere | Restart openldap] **********************************
skipping: [localhost]

TASK [common : KubeSphere | Restarting ks-account] *****************************
skipping: [localhost]

TASK [common : Kubesphere | Getting minio installation files] ******************
changed: [localhost] => (item=minio-ha)

TASK [common : Kubesphere | Creating manifests] ********************************
changed: [localhost] => (item={u'name': u'custom-values-minio', u'file': u'custom-values-minio.yaml'})

TASK [common : Kubesphere | Check minio] ***************************************
changed: [localhost]

TASK [common : Kubesphere | Deploy minio] **************************************
fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/helm upgrade --install ks-minio /etc/kubesphere/minio-ha -f /etc/kubesphere/custom-values-minio.yaml --set fullnameOverride=minio --namespace kubesphere-system --wait --timeout 1800\n", "delta": "0:30:00.966692", "end": "2020-05-18 09:27:02.782463", "msg": "non-zero return code", "rc": 1, "start": "2020-05-18 08:57:01.815771", "stderr": "Error: release ks-minio failed: timed out waiting for the condition", "stderr_lines": ["Error: release ks-minio failed: timed out waiting for the condition"], "stdout": "Release \"ks-minio\" does not exist. Installing it now.", "stdout_lines": ["Release \"ks-minio\" does not exist. Installing it now."]}
...ignoring

TASK [common : debug] **********************************************************
ok: [localhost] => {
"msg": [
"1. check the storage configuration and storage server",
"2. make sure the DNS address in /etc/resolv.conf is available.",
"3. execute 'helm del --purge ks-minio && kubectl delete job -n kubesphere-system ks-minio-make-bucket-job'",
"4. Restart the installer pod in kubesphere-system namespace"
]
}

TASK [common : fail] ***********************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "It is suggested to refer to the above methods for troubleshooting problems ."}

PLAY RECAP *********************************************************************
localhost : ok=33 changed=27 unreachable=0 failed=1 skipped=75 rescued=0 ignored=7
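
Once the underlying storage problem is fixed, the recovery that the debug message above suggests can be run roughly like this (a sketch; the ks-minio release and the ks-minio-make-bucket-job name come from that message, and the app=ks-install pod label is an assumption to verify with kubectl get pod -n kubesphere-system --show-labels):

[root@k8sphere01 ]# helm del --purge ks-minio
[root@k8sphere01 ]# kubectl delete job -n kubesphere-system ks-minio-make-bucket-job
[root@k8sphere01 ]# kubectl delete pod -n kubesphere-system -l app=ks-install   # the Deployment recreates the installer pod, which re-runs the playbook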

[root@k8sphere01 ]# kubectl get pod minio-845b7bd867-dsqfs -n kubesphere-system -o yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    checksum/config: cadb75194921ca14ae01c91438d67a3cf5341519b3ae85527baf27d6a83ee494
    checksum/secrets: c0a2180ce5e11287a026c1180e8158171ea73ecd24193f5c715906c5187295e1
  creationTimestamp: "2020-05-18T08:57:02Z"
  generateName: minio-845b7bd867-
  labels:
    app: minio
    pod-template-hash: 845b7bd867
    release: ks-minio
  name: minio-845b7bd867-dsqfs
  namespace: kubesphere-system
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: minio-845b7bd867
    uid: 6744597a-e701-47e9-a4d9-534045027141
  resourceVersion: "2821"
  selfLink: /api/v1/namespaces/kubesphere-system/pods/minio-845b7bd867-dsqfs
  uid: b8acb1c8-147b-4c8e-8dc5-600c6df41fd1
spec:
  containers:
  - command:
    - /bin/sh
    - -ce
    - /usr/bin/docker-entrypoint.sh minio -C /root/.minio/ server /data
    env:
    - name: MINIO_ACCESS_KEY
      valueFrom:
        secretKeyRef:
          key: accesskey
          name: minio
    - name: MINIO_SECRET_KEY
      valueFrom:
        secretKeyRef:
          key: secretkey
          name: minio
    - name: MINIO_BROWSER
      value: "on"
    image: minio/minio:RELEASE.2019-08-07T01-59-21Z
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 3
      httpGet:
        path: /minio/health/live
        port: service
        scheme: HTTP
      initialDelaySeconds: 5
      periodSeconds: 30
      successThreshold: 1
      timeoutSeconds: 1
    name: minio
    ports:
    - containerPort: 9000
      name: service
      protocol: TCP
    readinessProbe:
      failureThreshold: 3
      httpGet:
        path: /minio/health/ready
        port: service
        scheme: HTTP
      initialDelaySeconds: 5
      periodSeconds: 15
      successThreshold: 1
      timeoutSeconds: 1
    resources:
      requests:
        cpu: 250m
        memory: 256Mi
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /data
      name: export
    - mountPath: /root/.minio/
      name: minio-config-dir
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: ks-minio-token-fpn4z
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  nodeName: k8sphere03
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: ks-minio
  serviceAccountName: ks-minio
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: export
    persistentVolumeClaim:
      claimName: minio
  - name: minio-user
    secret:
      defaultMode: 420
      secretName: minio
  - emptyDir: {}
    name: minio-config-dir
  - name: ks-minio-token-fpn4z
    secret:
      defaultMode: 420
      secretName: ks-minio-token-fpn4z
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2020-05-18T08:57:28Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2020-05-18T08:57:28Z"
    message: 'containers with unready status: [minio]'
    reason: ContainersNotReady
    status: "False"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2020-05-18T08:57:28Z"
    message: 'containers with unready status: [minio]'
    reason: ContainersNotReady
    status: "False"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2020-05-18T08:57:28Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - image: minio/minio:RELEASE.2019-08-07T01-59-21Z
    imageID: ""
    lastState: {}
    name: minio
    ready: false
    restartCount: 0
    started: false
    state:
      waiting:
        reason: ContainerCreating
  hostIP: 192.168.108.45
  phase: Pending
  qosClass: Burstable
  startTime: "2020-05-18T08:57:28Z"
[root@k8sphere01 ]#

Looking at the pod's detailed information reveals a storage problem:
[root@k8sphere01 ]# kubectl describe pods minio-845b7bd867-dsqfs -n kubesphere-system
Name: minio-845b7bd867-dsqfs
Namespace: kubesphere-system
Priority: 0
Node: k8sphere03/192.168.108.45
Start Time: Mon, 18 May 2020 16:57:28 +0800
Labels: app=minio
pod-template-hash=845b7bd867
release=ks-minio
Annotations: checksum/config: cadb75194921ca14ae01c91438d67a3cf5341519b3ae85527baf27d6a83ee494
checksum/secrets: c0a2180ce5e11287a026c1180e8158171ea73ecd24193f5c715906c5187295e1
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/minio-845b7bd867
Containers:
minio:
Container ID:
Image: minio/minio:RELEASE.2019-08-07T01-59-21Z
Image ID:
Port: 9000/TCP
Host Port: 0/TCP
Command:
/bin/sh
-ce
/usr/bin/docker-entrypoint.sh minio -C /root/.minio/ server /data
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Requests:
cpu: 250m
memory: 256Mi
Liveness: http-get http://:service/minio/health/live delay=5s timeout=1s period=30s #success=1 #failure=3
Readiness: http-get http://:service/minio/health/ready delay=5s timeout=1s period=15s #success=1 #failure=3
Environment:
MINIO_ACCESS_KEY: <set to the key 'accesskey' in secret 'minio'> Optional: false
MINIO_SECRET_KEY: <set to the key 'secretkey' in secret 'minio'> Optional: false
MINIO_BROWSER: on
Mounts:
/data from export (rw)
/root/.minio/ from minio-config-dir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from ks-minio-token-fpn4z (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
export:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: minio
ReadOnly: false
minio-user:
Type: Secret (a volume populated by a Secret)
SecretName: minio
Optional: false
minio-config-dir:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
ks-minio-token-fpn4z:
Type: Secret (a volume populated by a Secret)
SecretName: ks-minio-token-fpn4z
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ --- ---- -------
Warning FailedMount 14m (x28 over 152m) kubelet, k8sphere03 Unable to attach or mount volumes: unmounted volumes=[export], unattached volumes=[export minio-config-dir ks-minio-token-fpn4z]: timed out waiting for the condition
Warning FailedMount 3m38s (x102 over 152m) kubelet, k8sphere03 (combined from similar events): MountVolume.SetUp failed for volume "pvc-9dc107c3-e355-46c1-8bc3-d9e208c74a77" : mount failed: mount failed: exit status 1
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/b8acb1c8-147b-4c8e-8dc5-600c6df41fd1/volumes/kubernetes.io~glusterfs/pvc-9dc107c3-e355-46c1-8bc3-d9e208c74a77 --scope -- mount -t glusterfs -o auto_unmount,backup-volfile-servers=192.168.108.52:192.168.108.72:192.168.108.73,log-file=/var/lib/kubelet/plugins/kubernetes.io/glusterfs/pvc-9dc107c3-e355-46c1-8bc3-d9e208c74a77/minio-845b7bd867-dsqfs-glusterfs.log,log-level=ERROR 192.168.108.73:vol_5aae6e0563980f01b9f5a8ad9b9bfb3a /var/lib/kubelet/pods/b8acb1c8-147b-4c8e-8dc5-600c6df41fd1/volumes/kubernetes.io~glusterfs/pvc-9dc107c3-e355-46c1-8bc3-d9e208c74a77
Output: Running scope as unit run-24338.scope.
Mount failed. Please check the log file for more details.
, the following error information was pulled from the glusterfs log to help diagnose this issue:
[2020-05-18 11:28:20.072105] E [glusterfsd-mgmt.c:1925:mgmt_getspec_cbk] 0-glusterfs: failed to get the 'volume file' from server
[2020-05-18 11:28:20.072221] E [glusterfsd-mgmt.c:2061:mgmt_getspec_cbk] 0-mgmt: failed to fetch volume file (key:vol_5aae6e0563980f01b9f5a8ad9b9bfb3a)
[root@k8sphere01 ]#
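
The mount that the kubelet is attempting can be reproduced by hand on the node to rule KubeSphere out (a sketch reusing the volume ID and servers from the events above; /mnt/glustertest is just an arbitrary test directory):

[root@k8sphere03 ]# mkdir -p /mnt/glustertest
[root@k8sphere03 ]# mount -t glusterfs -o backup-volfile-servers=192.168.108.52:192.168.108.72:192.168.108.73,log-level=ERROR 192.168.108.73:vol_5aae6e0563980f01b9f5a8ad9b9bfb3a /mnt/glustertest
[root@k8sphere03 ]# umount /mnt/glustertest    # clean up once the test is done

If this manual mount fails with the same "failed to get the 'volume file'" error, the problem lies between the node and the GlusterFS volume (missing client, firewall, or a volume that is not started) rather than in the installer itself.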

qcloud: Is the GlusterFS client installed on every node?
Are you deploying KubeSphere onto an existing Kubernetes cluster?
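
As for the GlusterFS client question, on CentOS 7 it can be checked on every node, and installed if missing, roughly like this (a sketch; glusterfs-fuse is the package that provides the mount.glusterfs helper the kubelet calls, and the client version should come from the same repo/major version as the GlusterFS server):

[root@k8sphere01 ]# rpm -qa | grep glusterfs      # expect at least glusterfs, glusterfs-libs and glusterfs-fuse
[root@k8sphere01 ]# yum install -y glusterfs glusterfs-fuse
[root@k8sphere01 ]# which mount.glusterfs         # should print /sbin/mount.glusterfs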