• Installation & Deployment
  • KubeSphere minimal install: pods stuck in CrashLoopBackOff / Pending

Version info: kubelet-1.17.3, kubeadm-1.17.3, kubectl-1.17.3
I ran kubectl apply -f kubesphere-minimal.yaml, and after the installation finished I checked the pod status with kubectl get pods --all-namespaces | grep -v Running:
ks-account-596657f8c6-mqrpl Init:0/2
ks-apigateway-78bcdc8ffc-nghsb CrashLoopBackOff
openldap-0 Pending
redis-6fd6c6d6f9-jr9xq Pending

The first two attempts both got stuck right here, and after three days it is still the same; this is the third attempt. Could anyone help me figure out where the problem is? Many thanks!
Partial log output:
[root@k8s-node1 k8s]# kubectl logs -n kubesphere-system ks-apigateway-78bcdc8ffc-nghsb
[DEV NOTICE] Registered directive 'authenticate' before 'jwt'
[DEV NOTICE] Registered directive 'authentication' before 'jwt'
[DEV NOTICE] Registered directive 'swagger' before 'jwt'
2020/05/20 11:55:59 [INFO][cache:0xc0000c0410] Started certificate maintenance routine
Activating privacy features... done.
E0520 11:56:00.642841 1 redis.go:51] unable to reach redis hostdial tcp 10.96.1.247:6379: connect: connection refused
2020/05/20 11:56:00 dial tcp 10.96.1.247:6379: connect: connection refused

[root@k8s-node1 k8s]# kubectl logs -n kubesphere-system ks-account-596657f8c6-mqrpl
Error from server (BadRequest): container "ks-account" in pod "ks-account-596657f8c6-mqrpl" is waiting to start: PodInitializing
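
For reference, one way to dig into Pending and CrashLoopBackOff pods like these is to describe them and check the redis Service that ks-apigateway fails to reach (the Service name below assumes the default minimal install, where redis lives in kubesphere-system):

kubectl describe pod openldap-0 -n kubesphere-system               # the Events section shows why the pod is Pending (e.g. an unbound PVC)
kubectl describe pod redis-6fd6c6d6f9-jr9xq -n kubesphere-system   # same for the Pending redis pod
kubectl describe svc redis -n kubesphere-system                    # "connection refused" from ks-apigateway means this Service has no ready endpoints yet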

    What persistent storage are you using? Have a look at whether the cluster's PVCs are healthy.

      [root@k8s-node1 k8s]# kubectl get ns
      NAME                           STATUS   AGE
      default                        Active   22h
      kube-node-lease                Active   22h
      kube-public                    Active   22h
      kube-system                    Active   22h
      kubesphere-controls-system     Active   34m
      kubesphere-monitoring-system   Active   34m
      kubesphere-system              Active   36m
      openebs                        Active   22h

      [root@k8s-node1 k8s]# kubectl get sc
      NAME                          PROVISIONER                                                 RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
      openebs-device                openebs.io/local                                            Delete          WaitForFirstConsumer   false                  21h
      openebs-hostpath (default)    openebs.io/local                                            Delete          WaitForFirstConsumer   false                  21h
      openebs-jiva-default          openebs.io/provisioner-iscsi                                Delete          Immediate              false                  21h
      openebs-snapshot-promoter     volumesnapshot.external-storage.k8s.io/snapshot-promoter   Delete          Immediate              false                  21h

      Feynman
      [root@k8s-node1 k8s]# kubectl get nodes
      NAME        STATUS   ROLES    AGE   VERSION
      k8s-node1   Ready    master   23h   v1.17.3
      k8s-node2   Ready    <none>   22h   v1.17.3
      k8s-node3   Ready    <none>   22h   v1.17.3
      This is the cluster status info.

      Feynman
      Now there is a new problem: it has been stuck here for two or three hours. Has the installation failed, or is it still in progress? Should I keep waiting, or do something else? Thanks for any pointers!

        chuanning
        Run this to check the state of the storage volumes:

        kubectl get pvc --all-namespaces
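
        If any PVC there is stuck in Pending, describing it usually shows why the provisioner is not binding it, and it is also worth confirming the openebs pods themselves are healthy (the PVC name below is only an example):

        kubectl describe pvc openldap-pvc-openldap-0 -n kubesphere-system   # the Events explain why the claim is not bound
        kubectl get pods -n openebs                                         # the hostpath provisioner (typically openebs-localpv-provisioner) must be Running for openebs-hostpath to bind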

          Feynman
          Following your hint, here is more of the installation log after I removed the taint:

          [root@k8s-node1 k8s]# kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
          2020-05-19T13:55:16Z INFO     : shell-operator v1.0.0-beta.5
          2020-05-19T13:55:16Z INFO     : Use temporary dir: /tmp/shell-operator
          2020-05-19T13:55:16Z INFO     : Initialize hooks manager ...
          2020-05-19T13:55:16Z INFO     : Search and load hooks ...
          2020-05-19T13:55:16Z INFO     : Load hook config from '/hooks/kubesphere/installRunner.py'
          2020-05-19T13:55:16Z INFO     : HTTP SERVER Listening on 0.0.0.0:9115
          2020-05-19T13:55:17Z INFO     : Initializing schedule manager ...
          2020-05-19T13:55:17Z INFO     : KUBE Init Kubernetes client
          2020-05-19T13:55:17Z INFO     : KUBE-INIT Kubernetes client is configured successfully
          2020-05-19T13:55:17Z INFO     : MAIN: run main loop
          2020-05-19T13:55:17Z INFO     : MAIN: add onStartup tasks
          2020-05-19T13:55:17Z INFO     : Running schedule manager ...
          2020-05-19T13:55:17Z INFO     : QUEUE add all HookRun@OnStartup
          2020-05-19T13:55:17Z INFO     : MSTOR Create new metric shell_operator_live_ticks
          2020-05-19T13:55:17Z INFO     : MSTOR Create new metric shell_operator_tasks_queue_length
          2020-05-19T13:55:17Z INFO     : GVR for kind 'ConfigMap' is /v1, Resource=configmaps
          2020-05-19T13:55:17Z INFO     : EVENT Kube event '796f243d-c91e-41cc-b59e-609705d284c0'
          2020-05-19T13:55:17Z INFO     : QUEUE add TASK_HOOK_RUN@KUBE_EVENTS kubesphere/installRunner.py
          2020-05-19T13:55:20Z INFO     : TASK_RUN HookRun@KUBE_EVENTS kubesphere/installRunner.py
          2020-05-19T13:55:20Z INFO     : Running hook 'kubesphere/installRunner.py' binding 'KUBE_EVENTS' ...
          [WARNING]: No inventory was parsed, only implicit localhost is available
          [WARNING]: provided hosts list is empty, only localhost is available. Note that
          the implicit localhost does not match 'all'
          
          PLAY [localhost] ***************************************************************
          
          TASK [download : include_tasks] ************************************************
          skipping: [localhost]
          
          TASK [download : Download items] ***********************************************
          skipping: [localhost]
          
          TASK [download : Sync container] ***********************************************
          skipping: [localhost]
          
          TASK [kubesphere-defaults : Configure defaults] ********************************
          ok: [localhost] => {
              "msg": "Check roles/kubesphere-defaults/defaults/main.yml"
          }
          
          TASK [preinstall : check k8s version] ******************************************
          changed: [localhost]
          
          TASK [preinstall : init k8s version] *******************************************
          ok: [localhost]
          
          TASK [preinstall : Stop if kuernetes version is nonsupport] ********************
          ok: [localhost] => {
              "changed": false, 
              "msg": "All assertions passed"
          }
          
          TASK [preinstall : check helm status] ******************************************
          changed: [localhost]
          
          TASK [preinstall : Stop if Helm is not available] ******************************
          ok: [localhost] => {
              "changed": false, 
              "msg": "All assertions passed"
          }
          
          TASK [preinstall : check storage class] ****************************************
          changed: [localhost]
          
          TASK [preinstall : Stop if StorageClass was not found] *************************
          skipping: [localhost]
          
          TASK [preinstall : check default storage class] ********************************
          changed: [localhost]
          
          TASK [preinstall : Stop if defaultStorageClass was not found] ******************
          ok: [localhost] => {
              "changed": false, 
              "msg": "All assertions passed"
          }
          
          PLAY RECAP *********************************************************************
          localhost                  : ok=9    changed=4    unreachable=0    failed=0    skipped=4    rescued=0    ignored=0   
          
          [WARNING]: No inventory was parsed, only implicit localhost is available
          [WARNING]: provided hosts list is empty, only localhost is available. Note that
          the implicit localhost does not match 'all'
          
          PLAY [localhost] ***************************************************************
          
          TASK [download : include_tasks] ************************************************
          skipping: [localhost]
          
          TASK [download : Download items] ***********************************************
          skipping: [localhost]
          
          TASK [download : Sync container] ***********************************************
          skipping: [localhost]
          
          TASK [kubesphere-defaults : Configure defaults] ********************************
          ok: [localhost] => {
              "msg": "Check roles/kubesphere-defaults/defaults/main.yml"
          }
          
          TASK [metrics-server : Metrics-Server | Checking old installation files] *******
          skipping: [localhost]
          
          TASK [metrics-server : Metrics-Server | deleting old prometheus-operator] ******
          skipping: [localhost]
          
          TASK [metrics-server : Metrics-Server | deleting old metrics-server files] *****
          skipping: [localhost] => (item=metrics-server) 
          
          TASK [metrics-server : Metrics-Server | Getting metrics-server installation files] ***
          skipping: [localhost]
          
          TASK [metrics-server : Metrics-Server | Creating manifests] ********************
          skipping: [localhost] => (item={u'type': u'config', u'name': u'values', u'file': u'values.yaml'}) 
          
          TASK [metrics-server : Metrics-Server | Check Metrics-Server] ******************
          skipping: [localhost]
          
          TASK [metrics-server : Metrics-Server | Installing metrics-server] *************
          skipping: [localhost]
          
          TASK [metrics-server : Metrics-Server | Installing metrics-server retry] *******
          skipping: [localhost]
          
          TASK [metrics-server : Metrics-Server | Waitting for v1beta1.metrics.k8s.io ready] ***
          skipping: [localhost]
          
          PLAY RECAP *********************************************************************
          localhost                  : ok=1    changed=0    unreachable=0    failed=0    skipped=12   rescued=0    ignored=0   
          
          [WARNING]: No inventory was parsed, only implicit localhost is available
          [WARNING]: provided hosts list is empty, only localhost is available. Note that
          the implicit localhost does not match 'all'
          
          PLAY [localhost] ***************************************************************
          
          TASK [download : include_tasks] ************************************************
          skipping: [localhost]
          
          TASK [download : Download items] ***********************************************
          skipping: [localhost]
          
          TASK [download : Sync container] ***********************************************
          skipping: [localhost]
          
          TASK [kubesphere-defaults : Configure defaults] ********************************
          ok: [localhost] => {
              "msg": "Check roles/kubesphere-defaults/defaults/main.yml"
          }
          
          TASK [common : Kubesphere | Check kube-node-lease namespace] *******************
          changed: [localhost]
          
          TASK [common : KubeSphere | Get system namespaces] *****************************
          ok: [localhost]
          
          TASK [common : set_fact] *******************************************************
          ok: [localhost]
          
          TASK [common : debug] **********************************************************
          ok: [localhost] => {
              "msg": [
                  "kubesphere-system", 
                  "kubesphere-controls-system", 
                  "kubesphere-monitoring-system", 
                  "kube-node-lease"
              ]
          }
          
          TASK [common : KubeSphere | Create kubesphere namespace] ***********************
          changed: [localhost] => (item=kubesphere-system)
          changed: [localhost] => (item=kubesphere-controls-system)
          changed: [localhost] => (item=kubesphere-monitoring-system)
          changed: [localhost] => (item=kube-node-lease)
          
          TASK [common : KubeSphere | Labeling system-workspace] *************************
          changed: [localhost] => (item=default)
          changed: [localhost] => (item=kube-public)
          changed: [localhost] => (item=kube-system)
          changed: [localhost] => (item=kubesphere-system)
          changed: [localhost] => (item=kubesphere-controls-system)
          changed: [localhost] => (item=kubesphere-monitoring-system)
          changed: [localhost] => (item=kube-node-lease)
          
          TASK [common : KubeSphere | Create ImagePullSecrets] ***************************
          changed: [localhost] => (item=default)
          changed: [localhost] => (item=kube-public)
          changed: [localhost] => (item=kube-system)
          changed: [localhost] => (item=kubesphere-system)
          changed: [localhost] => (item=kubesphere-controls-system)
          changed: [localhost] => (item=kubesphere-monitoring-system)
          changed: [localhost] => (item=kube-node-lease)
          
          TASK [common : KubeSphere | Getting kubernetes master num] *********************
          changed: [localhost]
          
          TASK [common : KubeSphere | Setting master num] ********************************
          ok: [localhost]
          
          TASK [common : Kubesphere | Getting common component installation files] *******
          changed: [localhost] => (item=common)
          changed: [localhost] => (item=ks-crds)
          
          TASK [common : KubeSphere | Create KubeSphere crds] ****************************
          changed: [localhost]
          
          TASK [common : Kubesphere | Checking openpitrix common component] **************
          changed: [localhost]
          
          TASK [common : include_tasks] **************************************************
          skipping: [localhost] => (item={u'ks': u'mysql-pvc', u'op': u'openpitrix-db'}) 
          skipping: [localhost] => (item={u'ks': u'etcd-pvc', u'op': u'openpitrix-etcd'}) 
          
          TASK [common : Getting PersistentVolumeName (mysql)] ***************************
          skipping: [localhost]
          
          TASK [common : Getting PersistentVolumeSize (mysql)] ***************************
          skipping: [localhost]
          
          TASK [common : Setting PersistentVolumeName (mysql)] ***************************
          skipping: [localhost]
          
          TASK [common : Setting PersistentVolumeSize (mysql)] ***************************
          skipping: [localhost]
          
          TASK [common : Getting PersistentVolumeName (etcd)] ****************************
          skipping: [localhost]
          
          TASK [common : Getting PersistentVolumeSize (etcd)] ****************************
          skipping: [localhost]
          
          TASK [common : Setting PersistentVolumeName (etcd)] ****************************
          skipping: [localhost]
          
          TASK [common : Setting PersistentVolumeSize (etcd)] ****************************
          skipping: [localhost]
          
          TASK [common : Kubesphere | Check mysql PersistentVolumeClaim] *****************
          fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/kubectl get pvc -n kubesphere-system mysql-pvc -o jsonpath='{.status.capacity.storage}'\n", "delta": "0:00:00.857110", "end": "2020-05-19 13:56:29.717529", "msg": "non-zero return code", "rc": 1, "start": "2020-05-19 13:56:28.860419", "stderr": "Error from server (NotFound): persistentvolumeclaims \"mysql-pvc\" not found", "stderr_lines": ["Error from server (NotFound): persistentvolumeclaims \"mysql-pvc\" not found"], "stdout": "", "stdout_lines": []}
          ...ignoring
          
          TASK [common : Kubesphere | Setting mysql db pv size] **************************
          skipping: [localhost]
          
          TASK [common : Kubesphere | Check redis PersistentVolumeClaim] *****************
          fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/kubectl get pvc -n kubesphere-system redis-pvc -o jsonpath='{.status.capacity.storage}'\n", "delta": "0:00:00.750486", "end": "2020-05-19 13:56:30.917826", "msg": "non-zero return code", "rc": 1, "start": "2020-05-19 13:56:30.167340", "stderr": "Error from server (NotFound): persistentvolumeclaims \"redis-pvc\" not found", "stderr_lines": ["Error from server (NotFound): persistentvolumeclaims \"redis-pvc\" not found"], "stdout": "", "stdout_lines": []}
          ...ignoring
          
          TASK [common : Kubesphere | Setting redis db pv size] **************************
          skipping: [localhost]
          
          TASK [common : Kubesphere | Check minio PersistentVolumeClaim] *****************
          fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/kubectl get pvc -n kubesphere-system minio -o jsonpath='{.status.capacity.storage}'\n", "delta": "0:00:00.765986", "end": "2020-05-19 13:56:32.120190", "msg": "non-zero return code", "rc": 1, "start": "2020-05-19 13:56:31.354204", "stderr": "Error from server (NotFound): persistentvolumeclaims \"minio\" not found", "stderr_lines": ["Error from server (NotFound): persistentvolumeclaims \"minio\" not found"], "stdout": "", "stdout_lines": []}
          ...ignoring
          
          TASK [common : Kubesphere | Setting minio pv size] *****************************
          skipping: [localhost]
          
          TASK [common : Kubesphere | Check openldap PersistentVolumeClaim] **************
          fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/kubectl get pvc -n kubesphere-system openldap-pvc-openldap-0 -o jsonpath='{.status.capacity.storage}'\n", "delta": "0:00:00.778738", "end": "2020-05-19 13:56:33.391675", "msg": "non-zero return code", "rc": 1, "start": "2020-05-19 13:56:32.612937", "stderr": "Error from server (NotFound): persistentvolumeclaims \"openldap-pvc-openldap-0\" not found", "stderr_lines": ["Error from server (NotFound): persistentvolumeclaims \"openldap-pvc-openldap-0\" not found"], "stdout": "", "stdout_lines": []}
          ...ignoring
          
          TASK [common : Kubesphere | Setting openldap pv size] **************************
          skipping: [localhost]
          
          TASK [common : Kubesphere | Check etcd db PersistentVolumeClaim] ***************
          fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/kubectl get pvc -n kubesphere-system etcd-pvc -o jsonpath='{.status.capacity.storage}'\n", "delta": "0:00:00.741595", "end": "2020-05-19 13:56:34.555584", "msg": "non-zero return code", "rc": 1, "start": "2020-05-19 13:56:33.813989", "stderr": "Error from server (NotFound): persistentvolumeclaims \"etcd-pvc\" not found", "stderr_lines": ["Error from server (NotFound): persistentvolumeclaims \"etcd-pvc\" not found"], "stdout": "", "stdout_lines": []}
          ...ignoring
          
          TASK [common : Kubesphere | Setting etcd pv size] ******************************
          skipping: [localhost]
          
          TASK [common : Kubesphere | Check redis ha PersistentVolumeClaim] **************
          fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/kubectl get pvc -n kubesphere-system data-redis-ha-server-0 -o jsonpath='{.status.capacity.storage}'\n", "delta": "0:00:00.715106", "end": "2020-05-19 13:56:35.724809", "msg": "non-zero return code", "rc": 1, "start": "2020-05-19 13:56:35.009703", "stderr": "Error from server (NotFound): persistentvolumeclaims \"data-redis-ha-server-0\" not found", "stderr_lines": ["Error from server (NotFound): persistentvolumeclaims \"data-redis-ha-server-0\" not found"], "stdout": "", "stdout_lines": []}
          ...ignoring
          
          TASK [common : Kubesphere | Setting redis ha pv size] **************************
          skipping: [localhost]
          
          TASK [common : Kubesphere | Creating common component manifests] ***************
          changed: [localhost] => (item={u'path': u'etcd', u'file': u'etcd.yaml'})
          changed: [localhost] => (item={u'name': u'mysql', u'file': u'mysql.yaml'})
          changed: [localhost] => (item={u'path': u'redis', u'file': u'redis.yaml'})
          
          TASK [common : Kubesphere | Creating mysql sercet] *****************************
          changed: [localhost]
          
          TASK [common : Kubesphere | Deploying etcd and mysql] **************************
          skipping: [localhost] => (item=etcd.yaml) 
          skipping: [localhost] => (item=mysql.yaml) 
          
          TASK [common : Kubesphere | Getting minio installation files] ******************
          skipping: [localhost] => (item=minio-ha) 
          
          TASK [common : Kubesphere | Creating manifests] ********************************
          skipping: [localhost] => (item={u'name': u'custom-values-minio', u'file': u'custom-values-minio.yaml'}) 
          
          TASK [common : Kubesphere | Check minio] ***************************************
          skipping: [localhost]
          
          TASK [common : Kubesphere | Deploy minio] **************************************
          skipping: [localhost]
          
          TASK [common : debug] **********************************************************
          skipping: [localhost]
          
          TASK [common : fail] ***********************************************************
          skipping: [localhost]
          
          TASK [common : Kubesphere | create minio config directory] *********************
          skipping: [localhost]
          
          TASK [common : Kubesphere | Creating common component manifests] ***************
          skipping: [localhost] => (item={u'path': u'/root/.config/rclone', u'file': u'rclone.conf'}) 
          
          TASK [common : include_tasks] **************************************************
          skipping: [localhost] => (item=helm) 
          skipping: [localhost] => (item=vmbased) 
          
          TASK [common : Kubesphere | Check ha-redis] ************************************
          skipping: [localhost]
          
          TASK [common : Kubesphere | Getting redis installation files] ******************
          skipping: [localhost] => (item=redis-ha) 
          
          TASK [common : Kubesphere | Creating manifests] ********************************
          skipping: [localhost] => (item={u'name': u'custom-values-redis', u'file': u'custom-values-redis.yaml'}) 
          
          TASK [common : Kubesphere | Check old redis status] ****************************
          skipping: [localhost]
          
          TASK [common : Kubesphere | Delete and backup old redis svc] *******************
          skipping: [localhost]
          
          TASK [common : Kubesphere | Deploying redis] ***********************************
          skipping: [localhost]
          
          TASK [common : Kubesphere | Getting redis PodIp] *******************************
          skipping: [localhost]
          
          TASK [common : Kubesphere | Creating redis migration script] *******************
          skipping: [localhost] => (item={u'path': u'/etc/kubesphere', u'file': u'redisMigrate.py'}) 
          
          TASK [common : Kubesphere | Check redis-ha status] *****************************
          skipping: [localhost]
          
          TASK [common : ks-logging | Migrating redis data] ******************************
          skipping: [localhost]
          
          TASK [common : Kubesphere | Disable old redis] *********************************
          skipping: [localhost]
          
          TASK [common : Kubesphere | Deploying redis] ***********************************
          skipping: [localhost] => (item=redis.yaml) 
          
          TASK [common : Kubesphere | Getting openldap installation files] ***************
          skipping: [localhost] => (item=openldap-ha) 
          
          TASK [common : Kubesphere | Creating manifests] ********************************
          skipping: [localhost] => (item={u'name': u'custom-values-openldap', u'file': u'custom-values-openldap.yaml'}) 
          
          TASK [common : Kubesphere | Check old openldap status] *************************
          skipping: [localhost]
          
          TASK [common : KubeSphere | Shutdown ks-account] *******************************
          skipping: [localhost]
          
          TASK [common : Kubesphere | Delete and backup old openldap svc] ****************
          skipping: [localhost]
          
          TASK [common : Kubesphere | Check openldap] ************************************
          skipping: [localhost]
          
          TASK [common : Kubesphere | Deploy openldap] ***********************************
          skipping: [localhost]
          
          TASK [common : Kubesphere | Load old openldap data] ****************************
          skipping: [localhost]
          
          TASK [common : Kubesphere | Check openldap-ha status] **************************
          skipping: [localhost]
          
          TASK [common : Kubesphere | Get openldap-ha pod list] **************************
          skipping: [localhost]
          
          TASK [common : Kubesphere | Get old openldap data] *****************************
          skipping: [localhost]
          
          TASK [common : Kubesphere | Migrating openldap data] ***************************
          skipping: [localhost]
          
          TASK [common : Kubesphere | Disable old openldap] ******************************
          skipping: [localhost]
          
          TASK [common : Kubesphere | Restart openldap] **********************************
          skipping: [localhost]
          
          TASK [common : KubeSphere | Restarting ks-account] *****************************
          skipping: [localhost]
          
          TASK [common : Kubesphere | Check ha-redis] ************************************
          changed: [localhost]
          
          TASK [common : Kubesphere | Getting redis installation files] ******************
          skipping: [localhost] => (item=redis-ha) 
          
          TASK [common : Kubesphere | Creating manifests] ********************************
          skipping: [localhost] => (item={u'name': u'custom-values-redis', u'file': u'custom-values-redis.yaml'}) 
          
          TASK [common : Kubesphere | Check old redis status] ****************************
          skipping: [localhost]
          
          TASK [common : Kubesphere | Delete and backup old redis svc] *******************
          skipping: [localhost]
          
          TASK [common : Kubesphere | Deploying redis] ***********************************
          skipping: [localhost]
          
          TASK [common : Kubesphere | Getting redis PodIp] *******************************
          skipping: [localhost]
          
          TASK [common : Kubesphere | Creating redis migration script] *******************
          skipping: [localhost] => (item={u'path': u'/etc/kubesphere', u'file': u'redisMigrate.py'}) 
          
          TASK [common : Kubesphere | Check redis-ha status] *****************************
          skipping: [localhost]
          
          TASK [common : ks-logging | Migrating redis data] ******************************
          skipping: [localhost]
          
          TASK [common : Kubesphere | Disable old redis] *********************************
          skipping: [localhost]
          
          TASK [common : Kubesphere | Deploying redis] ***********************************
          changed: [localhost] => (item=redis.yaml)
          
          TASK [common : Kubesphere | Getting openldap installation files] ***************
          changed: [localhost] => (item=openldap-ha)
          
          TASK [common : Kubesphere | Creating manifests] ********************************
          changed: [localhost] => (item={u'name': u'custom-values-openldap', u'file': u'custom-values-openldap.yaml'})
          
          TASK [common : Kubesphere | Check old openldap status] *************************
          changed: [localhost]
          
          TASK [common : KubeSphere | Shutdown ks-account] *******************************
          skipping: [localhost]
          
          TASK [common : Kubesphere | Delete and backup old openldap svc] ****************
          skipping: [localhost]
          
          TASK [common : Kubesphere | Check openldap] ************************************
          changed: [localhost]
          
          TASK [common : Kubesphere | Deploy openldap] ***********************************
          changed: [localhost]
          
          TASK [common : Kubesphere | Load old openldap data] ****************************
          skipping: [localhost]
          
          TASK [common : Kubesphere | Check openldap-ha status] **************************
          skipping: [localhost]
          
          TASK [common : Kubesphere | Get openldap-ha pod list] **************************
          skipping: [localhost]
          
          TASK [common : Kubesphere | Get old openldap data] *****************************
          skipping: [localhost]
          
          TASK [common : Kubesphere | Migrating openldap data] ***************************
          skipping: [localhost]
          
          TASK [common : Kubesphere | Disable old openldap] ******************************
          skipping: [localhost]
          
          TASK [common : Kubesphere | Restart openldap] **********************************
          skipping: [localhost]
          
          TASK [common : KubeSphere | Restarting ks-account] *****************************
          skipping: [localhost]
          
          TASK [common : Kubesphere | Getting minio installation files] ******************
          skipping: [localhost] => (item=minio-ha) 
          
          TASK [common : Kubesphere | Creating manifests] ********************************
          skipping: [localhost] => (item={u'name': u'custom-values-minio', u'file': u'custom-values-minio.yaml'}) 
          
          TASK [common : Kubesphere | Check minio] ***************************************
          skipping: [localhost]
          
          TASK [common : Kubesphere | Deploy minio] **************************************
          skipping: [localhost]
          
          TASK [common : debug] **********************************************************
          skipping: [localhost]
          
          TASK [common : fail] ***********************************************************
          skipping: [localhost]
          
          TASK [common : Kubesphere | create minio config directory] *********************
          skipping: [localhost]
          
          TASK [common : Kubesphere | Creating common component manifests] ***************
          skipping: [localhost] => (item={u'path': u'/root/.config/rclone', u'file': u'rclone.conf'}) 
          
          TASK [common : include_tasks] **************************************************
          skipping: [localhost] => (item=helm) 
          skipping: [localhost] => (item=vmbased) 
          
          TASK [common : Kubesphere | Deploying common component] ************************
          skipping: [localhost] => (item=mysql.yaml) 
          
          TASK [common : Kubesphere | Deploying common component] ************************
          skipping: [localhost] => (item=etcd.yaml) 
          
          TASK [common : Setting persistentVolumeReclaimPolicy (mysql)] ******************
          skipping: [localhost]
          
          TASK [common : Setting persistentVolumeReclaimPolicy (etcd)] *******************
          skipping: [localhost]
          
          PLAY RECAP *********************************************************************
          localhost                  : ok=28   changed=23   unreachable=0    failed=0    skipped=88   rescued=0    ignored=6   
          
          [WARNING]: No inventory was parsed, only implicit localhost is available
          [WARNING]: provided hosts list is empty, only localhost is available. Note that
          the implicit localhost does not match 'all'
          
          PLAY [localhost] ***************************************************************
          
          TASK [download : include_tasks] ************************************************
          skipping: [localhost]
          
          TASK [download : Download items] ***********************************************
          skipping: [localhost]
          
          TASK [download : Sync container] ***********************************************
          skipping: [localhost]
          
          TASK [kubesphere-defaults : Configure defaults] ********************************
          ok: [localhost] => {
              "msg": "Check roles/kubesphere-defaults/defaults/main.yml"
          }
          
          TASK [ks-core/prepare : KubeSphere | Create KubeSphere dir] ********************
          ok: [localhost]
          
          TASK [ks-core/prepare : KubeSphere | Getting installation init files] **********
          changed: [localhost] => (item=workspace.yaml)
          changed: [localhost] => (item=ks-init)
          
          TASK [ks-core/prepare : KubeSphere | Init KubeSphere system] *******************
          changed: [localhost]
          
          TASK [ks-core/prepare : KubeSphere | Creating KubeSphere Secret] ***************
          changed: [localhost]
          
          TASK [ks-core/prepare : KubeSphere | Creating KubeSphere Secret] ***************
          ok: [localhost]
          
          TASK [ks-core/prepare : KubeSphere | Enable Token Script] **********************
          changed: [localhost]
          
          TASK [ks-core/prepare : KubeSphere | Getting KS Token] *************************
          changed: [localhost]
          
          TASK [ks-core/prepare : KubeSphere | Setting ks_token] *************************
          ok: [localhost]
          
          TASK [ks-core/prepare : KubeSphere | Creating manifests] ***********************
          changed: [localhost] => (item={u'type': u'init', u'name': u'ks-account-init', u'file': u'ks-account-init.yaml'})
          changed: [localhost] => (item={u'type': u'init', u'name': u'ks-apigateway-init', u'file': u'ks-apigateway-init.yaml'})
          changed: [localhost] => (item={u'type': u'values', u'name': u'custom-values-istio-init', u'file': u'custom-values-istio-init.yaml'})
          changed: [localhost] => (item={u'type': u'cm', u'name': u'kubesphere-config', u'file': u'kubesphere-config.yaml'})
          
          TASK [ks-core/prepare : KubeSphere | Init KubeSphere] **************************
          changed: [localhost] => (item=ks-account-init.yaml)
          changed: [localhost] => (item=ks-apigateway-init.yaml)
          changed: [localhost] => (item=kubesphere-config.yaml)
          
          TASK [ks-core/prepare : KubeSphere | Getting controls-system file] *************
          changed: [localhost] => (item={u'name': u'kubesphere-controls-system', u'file': u'kubesphere-controls-system.yaml'})
          
          TASK [ks-core/prepare : KubeSphere | Installing controls-system] ***************
          changed: [localhost]
          
          TASK [ks-core/prepare : KubeSphere | Create KubeSphere workspace] **************
          FAILED - RETRYING: KubeSphere | Create KubeSphere workspace (5 retries left).
          FAILED - RETRYING: KubeSphere | Create KubeSphere workspace (4 retries left).
          FAILED - RETRYING: KubeSphere | Create KubeSphere workspace (3 retries left).
          FAILED - RETRYING: KubeSphere | Create KubeSphere workspace (2 retries left).
          FAILED - RETRYING: KubeSphere | Create KubeSphere workspace (1 retries left).
          fatal: [localhost]: FAILED! => {"attempts": 5, "changed": true, "cmd": "/usr/local/bin/kubectl apply -f /etc/kubesphere/workspace.yaml", "delta": "0:00:00.930795", "end": "2020-05-19 13:57:57.647429", "msg": "non-zero return code", "rc": 1, "start": "2020-05-19 13:57:56.716634", "stderr": "error: unable to recognize \"/etc/kubesphere/workspace.yaml\": no matches for kind \"Workspace\" in version \"tenant.kubesphere.io/v1alpha1\"", "stderr_lines": ["error: unable to recognize \"/etc/kubesphere/workspace.yaml\": no matches for kind \"Workspace\" in version \"tenant.kubesphere.io/v1alpha1\""], "stdout": "", "stdout_lines": []}
          
          PLAY RECAP *********************************************************************
          localhost                  : ok=13   changed=9    unreachable=0    failed=1    skipped=3    rescued=0    ignored=0   
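
          After a failure like the one above (no matches for kind "Workspace" in version "tenant.kubesphere.io/v1alpha1"), it is worth checking whether the Workspace CRD was actually registered before the installer tried to create the workspace; assuming the standard CRD name, something like:

          kubectl get crd workspaces.tenant.kubesphere.io
          kubectl api-resources --api-group=tenant.kubesphere.io

          If the CRD is missing, the earlier "Create KubeSphere crds" step did not take effect; re-applying the ks-crds manifests and re-running the installer is a reasonable next step.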

          Feynman
          Last night I ran the installation for the fourth time. The result was the same as before: it failed at the Create KubeSphere workspace step! Could someone spare a moment to help figure out the cause? Many thanks! (The master's taint was removed before installing openebs; once openebs installed successfully, I went straight on to install kubesphere-minimal.)
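
          (The taint removal mentioned above is typically done with something like the following; the exact command used here is an assumption:)

          kubectl taint nodes k8s-node1 node-role.kubernetes.io/master:NoSchedule-   # assumed command: remove the master's NoSchedule taint so workloads can schedule there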

          Environment details:
          [root@k8s-node1 ~]# kubectl get nodes
          NAME        STATUS   ROLES    AGE    VERSION
          k8s-node1   Ready    master   5d5h   v1.17.3
          k8s-node2   Ready    <none>   5d5h   v1.17.3
          k8s-node3   Ready    <none>   5d5h   v1.17.3
          
          [root@k8s-node1 ~]# helm ls --all openebs
          NAME   	REVISION	UPDATED                 	STATUS  	CHART        	APP VERSION	NAMESPACE
          openebs	1       	Tue May 19 13:27:09 2020	DEPLOYED	openebs-1.5.0	1.5.0      	openebs
          
          [root@k8s-node1 ~]# kubectl get pvc --all-namespaces
          NAMESPACE           NAME                      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS       AGE
          kubesphere-system   openldap-pvc-openldap-0   Bound    pvc-35ee429f-0d59-4bf4-943b-851ac2cdd2de   2Gi        RWO            openebs-hostpath   9m37s
          kubesphere-system   redis-pvc                 Bound    pvc-4845c505-c5d2-4a40-8d07-eac97d831bbe   2Gi        RWO            openebs-hostpath   9m53s
          
          [root@k8s-node1 k8s]# kubectl get pod -n kubesphere-system
          NAME                            READY   STATUS    RESTARTS   AGE
          ks-installer-75b8d89dff-w5gf8   1/1     Running   0          3m
          
          KubeSphere installation log:
          [root@k8s-node1 k8s]# kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
          2020-05-19T18:44:09Z INFO     : shell-operator v1.0.0-beta.5
          2020-05-19T18:44:09Z INFO     : Use temporary dir: /tmp/shell-operator
          2020-05-19T18:44:09Z INFO     : Initialize hooks manager ...
          2020-05-19T18:44:09Z INFO     : Search and load hooks ...
          2020-05-19T18:44:09Z INFO     : Load hook config from '/hooks/kubesphere/installRunner.py'
          2020-05-19T18:44:09Z INFO     : HTTP SERVER Listening on 0.0.0.0:9115
          2020-05-19T18:44:10Z INFO     : Initializing schedule manager ...
          2020-05-19T18:44:10Z INFO     : KUBE Init Kubernetes client
          2020-05-19T18:44:10Z INFO     : KUBE-INIT Kubernetes client is configured successfully
          2020-05-19T18:44:10Z INFO     : MAIN: run main loop
          2020-05-19T18:44:10Z INFO     : MAIN: add onStartup tasks
          2020-05-19T18:44:10Z INFO     : QUEUE add all HookRun@OnStartup
          2020-05-19T18:44:10Z INFO     : Running schedule manager ...
          2020-05-19T18:44:10Z INFO     : MSTOR Create new metric shell_operator_live_ticks
          2020-05-19T18:44:10Z INFO     : MSTOR Create new metric shell_operator_tasks_queue_length
          2020-05-19T18:44:10Z INFO     : GVR for kind 'ConfigMap' is /v1, Resource=configmaps
          2020-05-19T18:44:10Z INFO     : EVENT Kube event '1db54c9c-1466-4016-9258-e82290d02496'
          2020-05-19T18:44:10Z INFO     : QUEUE add TASK_HOOK_RUN@KUBE_EVENTS kubesphere/installRunner.py
          2020-05-19T18:44:13Z INFO     : TASK_RUN HookRun@KUBE_EVENTS kubesphere/installRunner.py
          2020-05-19T18:44:13Z INFO     : Running hook 'kubesphere/installRunner.py' binding 'KUBE_EVENTS' ...
          [WARNING]: No inventory was parsed, only implicit localhost is available
          [WARNING]: provided hosts list is empty, only localhost is available. Note that
          the implicit localhost does not match 'all'
          
          PLAY [localhost] ***************************************************************
          
          TASK [download : include_tasks] ************************************************
          skipping: [localhost]
          
          TASK [download : Download items] ***********************************************
          skipping: [localhost]
          
          TASK [download : Sync container] ***********************************************
          skipping: [localhost]
          
          TASK [kubesphere-defaults : Configure defaults] ********************************
          ok: [localhost] => {
              "msg": "Check roles/kubesphere-defaults/defaults/main.yml"
          }
          
          TASK [preinstall : check k8s version] ******************************************
          changed: [localhost]
          
          TASK [preinstall : init k8s version] *******************************************
          ok: [localhost]
          
          TASK [preinstall : Stop if kuernetes version is nonsupport] ********************
          ok: [localhost] => {
              "changed": false, 
              "msg": "All assertions passed"
          }
          
          TASK [preinstall : check helm status] ******************************************
          changed: [localhost]
          
          TASK [preinstall : Stop if Helm is not available] ******************************
          ok: [localhost] => {
              "changed": false, 
              "msg": "All assertions passed"
          }
          
          TASK [preinstall : check storage class] ****************************************
          changed: [localhost]
          
          TASK [preinstall : Stop if StorageClass was not found] *************************
          skipping: [localhost]
          
          TASK [preinstall : check default storage class] ********************************
          changed: [localhost]
          
          TASK [preinstall : Stop if defaultStorageClass was not found] ******************
          ok: [localhost] => {
              "changed": false, 
              "msg": "All assertions passed"
          }
          
          PLAY RECAP *********************************************************************
          localhost                  : ok=9    changed=4    unreachable=0    failed=0    skipped=4    rescued=0    ignored=0   
          
          [WARNING]: No inventory was parsed, only implicit localhost is available
          [WARNING]: provided hosts list is empty, only localhost is available. Note that
          the implicit localhost does not match 'all'
          
          PLAY [localhost] ***************************************************************
          
          TASK [download : include_tasks] ************************************************
          skipping: [localhost]
          
          TASK [download : Download items] ***********************************************
          skipping: [localhost]
          
          TASK [download : Sync container] ***********************************************
          skipping: [localhost]
          
          TASK [kubesphere-defaults : Configure defaults] ********************************
          ok: [localhost] => {
              "msg": "Check roles/kubesphere-defaults/defaults/main.yml"
          }
          
          TASK [metrics-server : Metrics-Server | Checking old installation files] *******
          skipping: [localhost]
          
          TASK [metrics-server : Metrics-Server | deleting old prometheus-operator] ******
          skipping: [localhost]
          
          TASK [metrics-server : Metrics-Server | deleting old metrics-server files] *****
          skipping: [localhost] => (item=metrics-server) 
          
          TASK [metrics-server : Metrics-Server | Getting metrics-server installation files] ***
          skipping: [localhost]
          
          TASK [metrics-server : Metrics-Server | Creating manifests] ********************
          skipping: [localhost] => (item={u'type': u'config', u'name': u'values', u'file': u'values.yaml'}) 
          
          TASK [metrics-server : Metrics-Server | Check Metrics-Server] ******************
          skipping: [localhost]
          
          TASK [metrics-server : Metrics-Server | Installing metrics-server] *************
          skipping: [localhost]
          
          TASK [metrics-server : Metrics-Server | Installing metrics-server retry] *******
          skipping: [localhost]
          
          TASK [metrics-server : Metrics-Server | Waitting for v1beta1.metrics.k8s.io ready] ***
          skipping: [localhost]
          
          PLAY RECAP *********************************************************************
          localhost                  : ok=1    changed=0    unreachable=0    failed=0    skipped=12   rescued=0    ignored=0   
          
          [WARNING]: No inventory was parsed, only implicit localhost is available
          [WARNING]: provided hosts list is empty, only localhost is available. Note that
          the implicit localhost does not match 'all'
          
          PLAY [localhost] ***************************************************************
          
          TASK [download : include_tasks] ************************************************
          skipping: [localhost]
          
          TASK [download : Download items] ***********************************************
          skipping: [localhost]
          
          TASK [download : Sync container] ***********************************************
          skipping: [localhost]
          
          TASK [kubesphere-defaults : Configure defaults] ********************************
          ok: [localhost] => {
              "msg": "Check roles/kubesphere-defaults/defaults/main.yml"
          }
          
          TASK [common : Kubesphere | Check kube-node-lease namespace] *******************
          changed: [localhost]
          
          TASK [common : KubeSphere | Get system namespaces] *****************************
          ok: [localhost]
          
          TASK [common : set_fact] *******************************************************
          ok: [localhost]
          
          TASK [common : debug] **********************************************************
          ok: [localhost] => {
              "msg": [
                  "kubesphere-system", 
                  "kubesphere-controls-system", 
                  "kubesphere-monitoring-system", 
                  "kube-node-lease"
              ]
          }
          
          TASK [common : KubeSphere | Create kubesphere namespace] ***********************
          changed: [localhost] => (item=kubesphere-system)
          changed: [localhost] => (item=kubesphere-controls-system)
          changed: [localhost] => (item=kubesphere-monitoring-system)
          changed: [localhost] => (item=kube-node-lease)
          
          TASK [common : KubeSphere | Labeling system-workspace] *************************
          changed: [localhost] => (item=default)
          changed: [localhost] => (item=kube-public)
          changed: [localhost] => (item=kube-system)
          changed: [localhost] => (item=kubesphere-system)
          changed: [localhost] => (item=kubesphere-controls-system)
          changed: [localhost] => (item=kubesphere-monitoring-system)
          changed: [localhost] => (item=kube-node-lease)
          
          TASK [common : KubeSphere | Create ImagePullSecrets] ***************************
          changed: [localhost] => (item=default)
          changed: [localhost] => (item=kube-public)
          changed: [localhost] => (item=kube-system)
          changed: [localhost] => (item=kubesphere-system)
          changed: [localhost] => (item=kubesphere-controls-system)
          changed: [localhost] => (item=kubesphere-monitoring-system)
          changed: [localhost] => (item=kube-node-lease)
          
          TASK [common : KubeSphere | Getting kubernetes master num] *********************
          changed: [localhost]
          
          TASK [common : KubeSphere | Setting master num] ********************************
          ok: [localhost]
          
          TASK [common : Kubesphere | Getting common component installation files] *******
          changed: [localhost] => (item=common)
          changed: [localhost] => (item=ks-crds)
          
          TASK [common : KubeSphere | Create KubeSphere crds] ****************************
          changed: [localhost]
          
          TASK [common : Kubesphere | Checking openpitrix common component] **************
          changed: [localhost]
          
          TASK [common : include_tasks] **************************************************
          skipping: [localhost] => (item={u'ks': u'mysql-pvc', u'op': u'openpitrix-db'}) 
          skipping: [localhost] => (item={u'ks': u'etcd-pvc', u'op': u'openpitrix-etcd'}) 
          
          TASK [common : Getting PersistentVolumeName (mysql)] ***************************
          skipping: [localhost]
          
          TASK [common : Getting PersistentVolumeSize (mysql)] ***************************
          skipping: [localhost]
          
          TASK [common : Setting PersistentVolumeName (mysql)] ***************************
          skipping: [localhost]
          
          TASK [common : Setting PersistentVolumeSize (mysql)] ***************************
          skipping: [localhost]
          
          TASK [common : Getting PersistentVolumeName (etcd)] ****************************
          skipping: [localhost]
          
          TASK [common : Getting PersistentVolumeSize (etcd)] ****************************
          skipping: [localhost]
          
          TASK [common : Setting PersistentVolumeName (etcd)] ****************************
          skipping: [localhost]
          
          TASK [common : Setting PersistentVolumeSize (etcd)] ****************************
          skipping: [localhost]
          
          TASK [common : Kubesphere | Check mysql PersistentVolumeClaim] *****************
          fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/kubectl get pvc -n kubesphere-system mysql-pvc -o jsonpath='{.status.capacity.storage}'\n", "delta": "0:00:00.840427", "end": "2020-05-19 18:45:24.356249", "msg": "non-zero return code", "rc": 1, "start": "2020-05-19 18:45:23.515822", "stderr": "Error from server (NotFound): persistentvolumeclaims \"mysql-pvc\" not found", "stderr_lines": ["Error from server (NotFound): persistentvolumeclaims \"mysql-pvc\" not found"], "stdout": "", "stdout_lines": []}
          ...ignoring
          
          TASK [common : Kubesphere | Setting mysql db pv size] **************************
          skipping: [localhost]
          
          TASK [common : Kubesphere | Check redis PersistentVolumeClaim] *****************
          fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/kubectl get pvc -n kubesphere-system redis-pvc -o jsonpath='{.status.capacity.storage}'\n", "delta": "0:00:00.782230", "end": "2020-05-19 18:45:25.567637", "msg": "non-zero return code", "rc": 1, "start": "2020-05-19 18:45:24.785407", "stderr": "Error from server (NotFound): persistentvolumeclaims \"redis-pvc\" not found", "stderr_lines": ["Error from server (NotFound): persistentvolumeclaims \"redis-pvc\" not found"], "stdout": "", "stdout_lines": []}
          ...ignoring
          
          TASK [common : Kubesphere | Setting redis db pv size] **************************
          skipping: [localhost]
          
          TASK [common : Kubesphere | Check minio PersistentVolumeClaim] *****************
          fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/kubectl get pvc -n kubesphere-system minio -o jsonpath='{.status.capacity.storage}'\n", "delta": "0:00:00.841701", "end": "2020-05-19 18:45:26.842157", "msg": "non-zero return code", "rc": 1, "start": "2020-05-19 18:45:26.000456", "stderr": "Error from server (NotFound): persistentvolumeclaims \"minio\" not found", "stderr_lines": ["Error from server (NotFound): persistentvolumeclaims \"minio\" not found"], "stdout": "", "stdout_lines": []}
          ...ignoring
          
          TASK [common : Kubesphere | Setting minio pv size] *****************************
          skipping: [localhost]
          
          TASK [common : Kubesphere | Check openldap PersistentVolumeClaim] **************
          fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/kubectl get pvc -n kubesphere-system openldap-pvc-openldap-0 -o jsonpath='{.status.capacity.storage}'\n", "delta": "0:00:00.734275", "end": "2020-05-19 18:45:28.024918", "msg": "non-zero return code", "rc": 1, "start": "2020-05-19 18:45:27.290643", "stderr": "Error from server (NotFound): persistentvolumeclaims \"openldap-pvc-openldap-0\" not found", "stderr_lines": ["Error from server (NotFound): persistentvolumeclaims \"openldap-pvc-openldap-0\" not found"], "stdout": "", "stdout_lines": []}
          ...ignoring
          
          TASK [common : Kubesphere | Setting openldap pv size] **************************
          skipping: [localhost]
          
          TASK [common : Kubesphere | Check etcd db PersistentVolumeClaim] ***************
          fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/kubectl get pvc -n kubesphere-system etcd-pvc -o jsonpath='{.status.capacity.storage}'\n", "delta": "0:00:00.740610", "end": "2020-05-19 18:45:29.168965", "msg": "non-zero return code", "rc": 1, "start": "2020-05-19 18:45:28.428355", "stderr": "Error from server (NotFound): persistentvolumeclaims \"etcd-pvc\" not found", "stderr_lines": ["Error from server (NotFound): persistentvolumeclaims \"etcd-pvc\" not found"], "stdout": "", "stdout_lines": []}
          ...ignoring
          
          TASK [common : Kubesphere | Setting etcd pv size] ******************************
          skipping: [localhost]
          
          TASK [common : Kubesphere | Check redis ha PersistentVolumeClaim] **************
          fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/kubectl get pvc -n kubesphere-system data-redis-ha-server-0 -o jsonpath='{.status.capacity.storage}'\n", "delta": "0:00:00.715752", "end": "2020-05-19 18:45:30.313488", "msg": "non-zero return code", "rc": 1, "start": "2020-05-19 18:45:29.597736", "stderr": "Error from server (NotFound): persistentvolumeclaims \"data-redis-ha-server-0\" not found", "stderr_lines": ["Error from server (NotFound): persistentvolumeclaims \"data-redis-ha-server-0\" not found"], "stdout": "", "stdout_lines": []}
          ...ignoring
          
          TASK [common : Kubesphere | Setting redis ha pv size] **************************
          skipping: [localhost]
          
          TASK [common : Kubesphere | Creating common component manifests] ***************
          changed: [localhost] => (item={u'path': u'etcd', u'file': u'etcd.yaml'})
          changed: [localhost] => (item={u'name': u'mysql', u'file': u'mysql.yaml'})
          changed: [localhost] => (item={u'path': u'redis', u'file': u'redis.yaml'})
          
          TASK [common : Kubesphere | Creating mysql sercet] *****************************
          changed: [localhost]
          
          TASK [common : Kubesphere | Deploying etcd and mysql] **************************
          skipping: [localhost] => (item=etcd.yaml) 
          skipping: [localhost] => (item=mysql.yaml) 
          
          TASK [common : Kubesphere | Getting minio installation files] ******************
          skipping: [localhost] => (item=minio-ha) 
          
          TASK [common : Kubesphere | Creating manifests] ********************************
          skipping: [localhost] => (item={u'name': u'custom-values-minio', u'file': u'custom-values-minio.yaml'}) 
          
          TASK [common : Kubesphere | Check minio] ***************************************
          skipping: [localhost]
          
          TASK [common : Kubesphere | Deploy minio] **************************************
          skipping: [localhost]
          
          TASK [common : debug] **********************************************************
          skipping: [localhost]
          
          TASK [common : fail] ***********************************************************
          skipping: [localhost]
          
          TASK [common : Kubesphere | create minio config directory] *********************
          skipping: [localhost]
          
          TASK [common : Kubesphere | Creating common component manifests] ***************
          skipping: [localhost] => (item={u'path': u'/root/.config/rclone', u'file': u'rclone.conf'}) 
          
          TASK [common : include_tasks] **************************************************
          skipping: [localhost] => (item=helm) 
          skipping: [localhost] => (item=vmbased) 
          
          TASK [common : Kubesphere | Check ha-redis] ************************************
          skipping: [localhost]
          
          TASK [common : Kubesphere | Getting redis installation files] ******************
          skipping: [localhost] => (item=redis-ha) 
          
          TASK [common : Kubesphere | Creating manifests] ********************************
          skipping: [localhost] => (item={u'name': u'custom-values-redis', u'file': u'custom-values-redis.yaml'}) 
          
          TASK [common : Kubesphere | Check old redis status] ****************************
          skipping: [localhost]
          
          TASK [common : Kubesphere | Delete and backup old redis svc] *******************
          skipping: [localhost]
          
          TASK [common : Kubesphere | Deploying redis] ***********************************
          skipping: [localhost]
          
          TASK [common : Kubesphere | Getting redis PodIp] *******************************
          skipping: [localhost]
          
          TASK [common : Kubesphere | Creating redis migration script] *******************
          skipping: [localhost] => (item={u'path': u'/etc/kubesphere', u'file': u'redisMigrate.py'}) 
          
          TASK [common : Kubesphere | Check redis-ha status] *****************************
          skipping: [localhost]
          
          TASK [common : ks-logging | Migrating redis data] ******************************
          skipping: [localhost]
          
          TASK [common : Kubesphere | Disable old redis] *********************************
          skipping: [localhost]
          
          TASK [common : Kubesphere | Deploying redis] ***********************************
          skipping: [localhost] => (item=redis.yaml) 
          
          TASK [common : Kubesphere | Getting openldap installation files] ***************
          skipping: [localhost] => (item=openldap-ha) 
          
          TASK [common : Kubesphere | Creating manifests] ********************************
          skipping: [localhost] => (item={u'name': u'custom-values-openldap', u'file': u'custom-values-openldap.yaml'}) 
          
          TASK [common : Kubesphere | Check old openldap status] *************************
          skipping: [localhost]
          
          TASK [common : KubeSphere | Shutdown ks-account] *******************************
          skipping: [localhost]
          
          TASK [common : Kubesphere | Delete and backup old openldap svc] ****************
          skipping: [localhost]
          
          TASK [common : Kubesphere | Check openldap] ************************************
          skipping: [localhost]
          
          TASK [common : Kubesphere | Deploy openldap] ***********************************
          skipping: [localhost]
          
          TASK [common : Kubesphere | Load old openldap data] ****************************
          skipping: [localhost]
          
          TASK [common : Kubesphere | Check openldap-ha status] **************************
          skipping: [localhost]
          
          TASK [common : Kubesphere | Get openldap-ha pod list] **************************
          skipping: [localhost]
          
          TASK [common : Kubesphere | Get old openldap data] *****************************
          skipping: [localhost]
          
          TASK [common : Kubesphere | Migrating openldap data] ***************************
          skipping: [localhost]
          
          TASK [common : Kubesphere | Disable old openldap] ******************************
          skipping: [localhost]
          
          TASK [common : Kubesphere | Restart openldap] **********************************
          skipping: [localhost]
          
          TASK [common : KubeSphere | Restarting ks-account] *****************************
          skipping: [localhost]
          
          TASK [common : Kubesphere | Check ha-redis] ************************************
          changed: [localhost]
          
          TASK [common : Kubesphere | Getting redis installation files] ******************
          skipping: [localhost] => (item=redis-ha) 
          
          TASK [common : Kubesphere | Creating manifests] ********************************
          skipping: [localhost] => (item={u'name': u'custom-values-redis', u'file': u'custom-values-redis.yaml'}) 
          
          TASK [common : Kubesphere | Check old redis status] ****************************
          skipping: [localhost]
          
          TASK [common : Kubesphere | Delete and backup old redis svc] *******************
          skipping: [localhost]
          
          TASK [common : Kubesphere | Deploying redis] ***********************************
          skipping: [localhost]
          
          TASK [common : Kubesphere | Getting redis PodIp] *******************************
          skipping: [localhost]
          
          TASK [common : Kubesphere | Creating redis migration script] *******************
          skipping: [localhost] => (item={u'path': u'/etc/kubesphere', u'file': u'redisMigrate.py'}) 
          
          TASK [common : Kubesphere | Check redis-ha status] *****************************
          skipping: [localhost]
          
          TASK [common : ks-logging | Migrating redis data] ******************************
          skipping: [localhost]
          
          TASK [common : Kubesphere | Disable old redis] *********************************
          skipping: [localhost]
          
          TASK [common : Kubesphere | Deploying redis] ***********************************
          changed: [localhost] => (item=redis.yaml)
          
          TASK [common : Kubesphere | Getting openldap installation files] ***************
          changed: [localhost] => (item=openldap-ha)
          
          TASK [common : Kubesphere | Creating manifests] ********************************
          changed: [localhost] => (item={u'name': u'custom-values-openldap', u'file': u'custom-values-openldap.yaml'})
          
          TASK [common : Kubesphere | Check old openldap status] *************************
          changed: [localhost]
          
          TASK [common : KubeSphere | Shutdown ks-account] *******************************
          skipping: [localhost]
          
          TASK [common : Kubesphere | Delete and backup old openldap svc] ****************
          skipping: [localhost]
          
          TASK [common : Kubesphere | Check openldap] ************************************
          changed: [localhost]
          
          TASK [common : Kubesphere | Deploy openldap] ***********************************
          changed: [localhost]
          
          TASK [common : Kubesphere | Load old openldap data] ****************************
          skipping: [localhost]
          
          TASK [common : Kubesphere | Check openldap-ha status] **************************
          skipping: [localhost]
          
          TASK [common : Kubesphere | Get openldap-ha pod list] **************************
          skipping: [localhost]
          
          TASK [common : Kubesphere | Get old openldap data] *****************************
          skipping: [localhost]
          
          TASK [common : Kubesphere | Migrating openldap data] ***************************
          skipping: [localhost]
          
          TASK [common : Kubesphere | Disable old openldap] ******************************
          skipping: [localhost]
          
          TASK [common : Kubesphere | Restart openldap] **********************************
          skipping: [localhost]
          
          TASK [common : KubeSphere | Restarting ks-account] *****************************
          skipping: [localhost]
          
          TASK [common : Kubesphere | Getting minio installation files] ******************
          skipping: [localhost] => (item=minio-ha) 
          
          TASK [common : Kubesphere | Creating manifests] ********************************
          skipping: [localhost] => (item={u'name': u'custom-values-minio', u'file': u'custom-values-minio.yaml'}) 
          
          TASK [common : Kubesphere | Check minio] ***************************************
          skipping: [localhost]
          
          TASK [common : Kubesphere | Deploy minio] **************************************
          skipping: [localhost]
          
          TASK [common : debug] **********************************************************
          skipping: [localhost]
          
          TASK [common : fail] ***********************************************************
          skipping: [localhost]
          
          TASK [common : Kubesphere | create minio config directory] *********************
          skipping: [localhost]
          
          TASK [common : Kubesphere | Creating common component manifests] ***************
          skipping: [localhost] => (item={u'path': u'/root/.config/rclone', u'file': u'rclone.conf'}) 
          
          TASK [common : include_tasks] **************************************************
          skipping: [localhost] => (item=helm) 
          skipping: [localhost] => (item=vmbased) 
          
          TASK [common : Kubesphere | Deploying common component] ************************
          skipping: [localhost] => (item=mysql.yaml) 
          
          TASK [common : Kubesphere | Deploying common component] ************************
          skipping: [localhost] => (item=etcd.yaml) 
          
          TASK [common : Setting persistentVolumeReclaimPolicy (mysql)] ******************
          skipping: [localhost]
          
          TASK [common : Setting persistentVolumeReclaimPolicy (etcd)] *******************
          skipping: [localhost]
          
          PLAY RECAP *********************************************************************
          localhost                  : ok=28   changed=23   unreachable=0    failed=0    skipped=88   rescued=0    ignored=6   
          
          [WARNING]: No inventory was parsed, only implicit localhost is available
          [WARNING]: provided hosts list is empty, only localhost is available. Note that
          the implicit localhost does not match 'all'
          
          PLAY [localhost] ***************************************************************
          
          TASK [download : include_tasks] ************************************************
          skipping: [localhost]
          
          TASK [download : Download items] ***********************************************
          skipping: [localhost]
          
          TASK [download : Sync container] ***********************************************
          skipping: [localhost]
          
          TASK [kubesphere-defaults : Configure defaults] ********************************
          ok: [localhost] => {
              "msg": "Check roles/kubesphere-defaults/defaults/main.yml"
          }
          
          TASK [ks-core/prepare : KubeSphere | Create KubeSphere dir] ********************
          ok: [localhost]
          
          TASK [ks-core/prepare : KubeSphere | Getting installation init files] **********
          changed: [localhost] => (item=workspace.yaml)
          changed: [localhost] => (item=ks-init)
          
          TASK [ks-core/prepare : KubeSphere | Init KubeSphere system] *******************
          changed: [localhost]
          
          TASK [ks-core/prepare : KubeSphere | Creating KubeSphere Secret] ***************
          changed: [localhost]
          
          TASK [ks-core/prepare : KubeSphere | Creating KubeSphere Secret] ***************
          ok: [localhost]
          
          TASK [ks-core/prepare : KubeSphere | Enable Token Script] **********************
          changed: [localhost]
          
          TASK [ks-core/prepare : KubeSphere | Getting KS Token] *************************
          changed: [localhost]
          
          TASK [ks-core/prepare : KubeSphere | Setting ks_token] *************************
          ok: [localhost]
          
          TASK [ks-core/prepare : KubeSphere | Creating manifests] ***********************
          changed: [localhost] => (item={u'type': u'init', u'name': u'ks-account-init', u'file': u'ks-account-init.yaml'})
          changed: [localhost] => (item={u'type': u'init', u'name': u'ks-apigateway-init', u'file': u'ks-apigateway-init.yaml'})
          changed: [localhost] => (item={u'type': u'values', u'name': u'custom-values-istio-init', u'file': u'custom-values-istio-init.yaml'})
          changed: [localhost] => (item={u'type': u'cm', u'name': u'kubesphere-config', u'file': u'kubesphere-config.yaml'})
          
          TASK [ks-core/prepare : KubeSphere | Init KubeSphere] **************************
          changed: [localhost] => (item=ks-account-init.yaml)
          changed: [localhost] => (item=ks-apigateway-init.yaml)
          changed: [localhost] => (item=kubesphere-config.yaml)
          
          TASK [ks-core/prepare : KubeSphere | Getting controls-system file] *************
          changed: [localhost] => (item={u'name': u'kubesphere-controls-system', u'file': u'kubesphere-controls-system.yaml'})
          
          TASK [ks-core/prepare : KubeSphere | Installing controls-system] ***************
          changed: [localhost]
          
          TASK [ks-core/prepare : KubeSphere | Create KubeSphere workspace] **************
          FAILED - RETRYING: KubeSphere | Create KubeSphere workspace (5 retries left).
          FAILED - RETRYING: KubeSphere | Create KubeSphere workspace (4 retries left).
          FAILED - RETRYING: KubeSphere | Create KubeSphere workspace (3 retries left).
          FAILED - RETRYING: KubeSphere | Create KubeSphere workspace (2 retries left).
          FAILED - RETRYING: KubeSphere | Create KubeSphere workspace (1 retries left).
          fatal: [localhost]: FAILED! => {"attempts": 5, "changed": true, "cmd": "/usr/local/bin/kubectl apply -f /etc/kubesphere/workspace.yaml", "delta": "0:00:01.289906", "end": "2020-05-19 18:46:57.557220", "msg": "non-zero return code", "rc": 1, "start": "2020-05-19 18:46:56.267314", "stderr": "error: unable to recognize \"/etc/kubesphere/workspace.yaml\": no matches for kind \"Workspace\" in version \"tenant.kubesphere.io/v1alpha1\"", "stderr_lines": ["error: unable to recognize \"/etc/kubesphere/workspace.yaml\": no matches for kind \"Workspace\" in version \"tenant.kubesphere.io/v1alpha1\""], "stdout": "", "stdout_lines": []}
          
          PLAY RECAP *********************************************************************
          localhost                  : ok=13   changed=9    unreachable=0    failed=1    skipped=3    rescued=0    ignored=0

          error: unable to recognize "/etc/kubesphere/workspace.yaml": no matches for kind "Workspace" in version "tenant.kubesphere.io/v1alpha1"

          The corresponding CRD cannot be found.
          Run kubectl get crd and check whether workspaces.tenant.kubesphere.io exists.
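
          For a quicker check, you can also query that one CRD directly instead of scanning the whole list (a minimal check, using the CRD name ks-installer is expected to create):

          kubectl get crd workspaces.tenant.kubesphere.io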

            Try running kubectl get workspaces and see what it returns.
            If it doesn't report an error, try restarting ks-installer:
            kubectl rollout restart deploy -n kubesphere-system ks-installer
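
            If the restart goes through, you can re-attach to the new installer pod's log and watch whether the workspace task passes on the retry (this is the same log-follow command used earlier in this thread; it assumes the installer pod still carries the app=ks-install label):

            kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f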

              Cauchy
              First of all, thank you very much for your reply, and thanks to Feynman for following up on this issue from the beginning.
              After the fourth failed installation, I checked the crd and workspaces as you suggested:

              [root@k8s-node1 ~]# kubectl get crd
              NAME                                                         CREATED AT
              applications.app.k8s.io                                      2020-05-24T18:09:09Z
              blockdeviceclaims.openebs.io                                 2020-05-24T13:08:30Z
              blockdevices.openebs.io                                      2020-05-24T13:08:26Z
              castemplates.openebs.io                                      2020-05-24T14:36:41Z
              cstorbackups.openebs.io                                      2020-05-24T14:36:41Z
              cstorcompletedbackups.openebs.io                             2020-05-24T14:36:41Z
              cstorpoolinstances.openebs.io                                2020-05-24T14:36:41Z
              cstorpools.openebs.io                                        2020-05-24T14:36:41Z
              cstorrestores.openebs.io                                     2020-05-24T14:36:41Z
              cstorvolumeclaims.openebs.io                                 2020-05-24T14:36:41Z
              cstorvolumereplicas.openebs.io                               2020-05-24T14:36:41Z
              cstorvolumes.openebs.io                                      2020-05-24T14:36:41Z
              destinationrules.networking.istio.io                         2020-05-24T18:09:10Z
              disks.openebs.io                                             2020-05-24T13:08:22Z
              fluentbits.logging.kubesphere.io                             2020-05-24T18:09:10Z
              runtasks.openebs.io                                          2020-05-24T14:36:41Z
              s2ibinaries.devops.kubesphere.io                             2020-05-24T18:09:09Z
              s2ibuilders.devops.kubesphere.io                             2020-05-24T18:09:09Z
              s2ibuildertemplates.devops.kubesphere.io                     2020-05-24T18:09:10Z
              s2iruns.devops.kubesphere.io                                 2020-05-24T18:09:10Z
              servicepolicies.servicemesh.kubesphere.io                    2020-05-24T18:09:10Z
              storagepoolclaims.openebs.io                                 2020-05-24T14:36:41Z
              storagepools.openebs.io                                      2020-05-24T14:36:41Z
              strategies.servicemesh.kubesphere.io                         2020-05-24T18:09:10Z
              upgradetasks.openebs.io                                      2020-05-24T14:36:41Z
              virtualservices.networking.istio.io                          2020-05-24T18:09:10Z
              volumesnapshotdatas.volumesnapshot.external-storage.k8s.io   2020-05-24T13:08:07Z
              volumesnapshots.volumesnapshot.external-storage.k8s.io       2020-05-24T13:08:07Z
              workspaces.tenant.kubesphere.io                              2020-05-24T18:09:10Z
              
              [root@k8s-node1 ~]# kubectl get workspaces
              No resources found in default namespace.
              

              After that, I reinstalled following your suggestion; the log output is as follows:

              [root@k8s-node1 ~]# kubectl rollout restart deploy -n kubesphere-system ks-installer
              deployment.apps/ks-installer restarted
              [root@k8s-node1 ~]# kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
              2020-05-21T22:21:44Z INFO     : shell-operator v1.0.0-beta.5
              2020-05-21T22:21:44Z INFO     : Use temporary dir: /tmp/shell-operator
              2020-05-21T22:21:44Z INFO     : Initialize hooks manager ...
              2020-05-21T22:21:44Z INFO     : Search and load hooks ...
              2020-05-21T22:21:44Z INFO     : Load hook config from '/hooks/kubesphere/installRunner.py'
              2020-05-21T22:21:44Z INFO     : HTTP SERVER Listening on 0.0.0.0:9115
              2020-05-21T22:21:45Z INFO     : Initializing schedule manager ...
              2020-05-21T22:21:45Z INFO     : KUBE Init Kubernetes client
              2020-05-21T22:21:45Z INFO     : KUBE-INIT Kubernetes client is configured successfully
              2020-05-21T22:21:45Z INFO     : MAIN: run main loop
              2020-05-21T22:21:45Z INFO     : MAIN: add onStartup tasks
              2020-05-21T22:21:45Z INFO     : Running schedule manager ...
              2020-05-21T22:21:45Z INFO     : QUEUE add all HookRun@OnStartup
              2020-05-21T22:21:45Z INFO     : MSTOR Create new metric shell_operator_live_ticks
              2020-05-21T22:21:45Z INFO     : MSTOR Create new metric shell_operator_tasks_queue_length
              2020-05-21T22:21:45Z INFO     : GVR for kind 'ConfigMap' is /v1, Resource=configmaps
              2020-05-21T22:21:45Z INFO     : EVENT Kube event '4c624d86-852b-4838-9d28-377d11707af9'
              2020-05-21T22:21:45Z INFO     : QUEUE add TASK_HOOK_RUN@KUBE_EVENTS kubesphere/installRunner.py
              2020-05-21T22:21:48Z INFO     : TASK_RUN HookRun@KUBE_EVENTS kubesphere/installRunner.py
              2020-05-21T22:21:48Z INFO     : Running hook 'kubesphere/installRunner.py' binding 'KUBE_EVENTS' ...
              [WARNING]: No inventory was parsed, only implicit localhost is available
              [WARNING]: provided hosts list is empty, only localhost is available. Note that
              the implicit localhost does not match 'all'
              
              PLAY [localhost] ***************************************************************
              
              TASK [download : include_tasks] ************************************************
              skipping: [localhost]
              
              TASK [download : Download items] ***********************************************
              skipping: [localhost]
              
              TASK [download : Sync container] ***********************************************
              skipping: [localhost]
              
              TASK [kubesphere-defaults : Configure defaults] ********************************
              ok: [localhost] => {
                  "msg": "Check roles/kubesphere-defaults/defaults/main.yml"
              }
              
              TASK [preinstall : check k8s version] ******************************************
              changed: [localhost]
              
              TASK [preinstall : init k8s version] *******************************************
              ok: [localhost]
              
              TASK [preinstall : Stop if kuernetes version is nonsupport] ********************
              ok: [localhost] => {
                  "changed": false, 
                  "msg": "All assertions passed"
              }
              
              TASK [preinstall : check helm status] ******************************************
              changed: [localhost]
              
              TASK [preinstall : Stop if Helm is not available] ******************************
              ok: [localhost] => {
                  "changed": false, 
                  "msg": "All assertions passed"
              }
              
              TASK [preinstall : check storage class] ****************************************
              changed: [localhost]
              
              TASK [preinstall : Stop if StorageClass was not found] *************************
              skipping: [localhost]
              
              TASK [preinstall : check default storage class] ********************************
              changed: [localhost]
              
              TASK [preinstall : Stop if defaultStorageClass was not found] ******************
              ok: [localhost] => {
                  "changed": false, 
                  "msg": "All assertions passed"
              }
              
              PLAY RECAP *********************************************************************
              localhost                  : ok=9    changed=4    unreachable=0    failed=0    skipped=4    rescued=0    ignored=0   
              
              [WARNING]: No inventory was parsed, only implicit localhost is available
              [WARNING]: provided hosts list is empty, only localhost is available. Note that
              the implicit localhost does not match 'all'
              
              PLAY [localhost] ***************************************************************
              
              TASK [download : include_tasks] ************************************************
              skipping: [localhost]
              
              TASK [download : Download items] ***********************************************
              skipping: [localhost]
              
              TASK [download : Sync container] ***********************************************
              skipping: [localhost]
              
              TASK [kubesphere-defaults : Configure defaults] ********************************
              ok: [localhost] => {
                  "msg": "Check roles/kubesphere-defaults/defaults/main.yml"
              }
              
              TASK [metrics-server : Metrics-Server | Checking old installation files] *******
              skipping: [localhost]
              
              TASK [metrics-server : Metrics-Server | deleting old prometheus-operator] ******
              skipping: [localhost]
              
              TASK [metrics-server : Metrics-Server | deleting old metrics-server files] *****
              skipping: [localhost] => (item=metrics-server) 
              
              TASK [metrics-server : Metrics-Server | Getting metrics-server installation files] ***
              skipping: [localhost]
              
              TASK [metrics-server : Metrics-Server | Creating manifests] ********************
              skipping: [localhost] => (item={u'type': u'config', u'name': u'values', u'file': u'values.yaml'}) 
              
              TASK [metrics-server : Metrics-Server | Check Metrics-Server] ******************
              skipping: [localhost]
              
              TASK [metrics-server : Metrics-Server | Installing metrics-server] *************
              skipping: [localhost]
              
              TASK [metrics-server : Metrics-Server | Installing metrics-server retry] *******
              skipping: [localhost]
              
              TASK [metrics-server : Metrics-Server | Waitting for v1beta1.metrics.k8s.io ready] ***
              skipping: [localhost]
              
              PLAY RECAP *********************************************************************
              localhost                  : ok=1    changed=0    unreachable=0    failed=0    skipped=12   rescued=0    ignored=0   
              
              [WARNING]: No inventory was parsed, only implicit localhost is available
              [WARNING]: provided hosts list is empty, only localhost is available. Note that
              the implicit localhost does not match 'all'
              
              PLAY [localhost] ***************************************************************
              
              TASK [download : include_tasks] ************************************************
              skipping: [localhost]
              
              TASK [download : Download items] ***********************************************
              skipping: [localhost]
              
              TASK [download : Sync container] ***********************************************
              skipping: [localhost]
              
              TASK [kubesphere-defaults : Configure defaults] ********************************
              ok: [localhost] => {
                  "msg": "Check roles/kubesphere-defaults/defaults/main.yml"
              }
              
              TASK [common : Kubesphere | Check kube-node-lease namespace] *******************
              changed: [localhost]
              
              TASK [common : KubeSphere | Get system namespaces] *****************************
              ok: [localhost]
              
              TASK [common : set_fact] *******************************************************
              ok: [localhost]
              
              TASK [common : debug] **********************************************************
              ok: [localhost] => {
                  "msg": [
                      "kubesphere-system", 
                      "kubesphere-controls-system", 
                      "kubesphere-monitoring-system", 
                      "kube-node-lease"
                  ]
              }
              
              TASK [common : KubeSphere | Create kubesphere namespace] ***********************
              changed: [localhost] => (item=kubesphere-system)
              changed: [localhost] => (item=kubesphere-controls-system)
              changed: [localhost] => (item=kubesphere-monitoring-system)
              changed: [localhost] => (item=kube-node-lease)
              
              TASK [common : KubeSphere | Labeling system-workspace] *************************
              changed: [localhost] => (item=default)
              changed: [localhost] => (item=kube-public)
              changed: [localhost] => (item=kube-system)
              changed: [localhost] => (item=kubesphere-system)
              changed: [localhost] => (item=kubesphere-controls-system)
              changed: [localhost] => (item=kubesphere-monitoring-system)
              changed: [localhost] => (item=kube-node-lease)
              
              TASK [common : KubeSphere | Create ImagePullSecrets] ***************************
              changed: [localhost] => (item=default)
              changed: [localhost] => (item=kube-public)
              changed: [localhost] => (item=kube-system)
              changed: [localhost] => (item=kubesphere-system)
              changed: [localhost] => (item=kubesphere-controls-system)
              changed: [localhost] => (item=kubesphere-monitoring-system)
              changed: [localhost] => (item=kube-node-lease)
              
              TASK [common : KubeSphere | Getting kubernetes master num] *********************
              changed: [localhost]
              
              TASK [common : KubeSphere | Setting master num] ********************************
              ok: [localhost]
              
              TASK [common : Kubesphere | Getting common component installation files] *******
              changed: [localhost] => (item=common)
              changed: [localhost] => (item=ks-crds)
              
              TASK [common : KubeSphere | Create KubeSphere crds] ****************************
              changed: [localhost]
              
              TASK [common : Kubesphere | Checking openpitrix common component] **************
              changed: [localhost]
              
              TASK [common : include_tasks] **************************************************
              skipping: [localhost] => (item={u'ks': u'mysql-pvc', u'op': u'openpitrix-db'}) 
              skipping: [localhost] => (item={u'ks': u'etcd-pvc', u'op': u'openpitrix-etcd'}) 
              
              TASK [common : Getting PersistentVolumeName (mysql)] ***************************
              skipping: [localhost]
              
              TASK [common : Getting PersistentVolumeSize (mysql)] ***************************
              skipping: [localhost]
              
              TASK [common : Setting PersistentVolumeName (mysql)] ***************************
              skipping: [localhost]
              
              TASK [common : Setting PersistentVolumeSize (mysql)] ***************************
              skipping: [localhost]
              
              TASK [common : Getting PersistentVolumeName (etcd)] ****************************
              skipping: [localhost]
              
              TASK [common : Getting PersistentVolumeSize (etcd)] ****************************
              skipping: [localhost]
              
              TASK [common : Setting PersistentVolumeName (etcd)] ****************************
              skipping: [localhost]
              
              TASK [common : Setting PersistentVolumeSize (etcd)] ****************************
              skipping: [localhost]
              
              TASK [common : Kubesphere | Check mysql PersistentVolumeClaim] *****************
              fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/kubectl get pvc -n kubesphere-system mysql-pvc -o jsonpath='{.status.capacity.storage}'\n", "delta": "0:00:00.780890", "end": "2020-05-21 22:23:02.376997", "msg": "non-zero return code", "rc": 1, "start": "2020-05-21 22:23:01.596107", "stderr": "Error from server (NotFound): persistentvolumeclaims \"mysql-pvc\" not found", "stderr_lines": ["Error from server (NotFound): persistentvolumeclaims \"mysql-pvc\" not found"], "stdout": "", "stdout_lines": []}
              ...ignoring
              
              TASK [common : Kubesphere | Setting mysql db pv size] **************************
              skipping: [localhost]
              
              TASK [common : Kubesphere | Check redis PersistentVolumeClaim] *****************
              changed: [localhost]
              
              TASK [common : Kubesphere | Setting redis db pv size] **************************
              ok: [localhost]
              
              TASK [common : Kubesphere | Check minio PersistentVolumeClaim] *****************
              fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/kubectl get pvc -n kubesphere-system minio -o jsonpath='{.status.capacity.storage}'\n", "delta": "0:00:00.928649", "end": "2020-05-21 22:23:05.736725", "msg": "non-zero return code", "rc": 1, "start": "2020-05-21 22:23:04.808076", "stderr": "Error from server (NotFound): persistentvolumeclaims \"minio\" not found", "stderr_lines": ["Error from server (NotFound): persistentvolumeclaims \"minio\" not found"], "stdout": "", "stdout_lines": []}
              ...ignoring
              
              TASK [common : Kubesphere | Setting minio pv size] *****************************
              skipping: [localhost]
              
              TASK [common : Kubesphere | Check openldap PersistentVolumeClaim] **************
              changed: [localhost]
              
              TASK [common : Kubesphere | Setting openldap pv size] **************************
              ok: [localhost]
              
              TASK [common : Kubesphere | Check etcd db PersistentVolumeClaim] ***************
              fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/kubectl get pvc -n kubesphere-system etcd-pvc -o jsonpath='{.status.capacity.storage}'\n", "delta": "0:00:00.819573", "end": "2020-05-21 22:23:08.447702", "msg": "non-zero return code", "rc": 1, "start": "2020-05-21 22:23:07.628129", "stderr": "Error from server (NotFound): persistentvolumeclaims \"etcd-pvc\" not found", "stderr_lines": ["Error from server (NotFound): persistentvolumeclaims \"etcd-pvc\" not found"], "stdout": "", "stdout_lines": []}
              ...ignoring
              
              TASK [common : Kubesphere | Setting etcd pv size] ******************************
              skipping: [localhost]
              
              TASK [common : Kubesphere | Check redis ha PersistentVolumeClaim] **************
              fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/kubectl get pvc -n kubesphere-system data-redis-ha-server-0 -o jsonpath='{.status.capacity.storage}'\n", "delta": "0:00:00.774771", "end": "2020-05-21 22:23:09.777650", "msg": "non-zero return code", "rc": 1, "start": "2020-05-21 22:23:09.002879", "stderr": "Error from server (NotFound): persistentvolumeclaims \"data-redis-ha-server-0\" not found", "stderr_lines": ["Error from server (NotFound): persistentvolumeclaims \"data-redis-ha-server-0\" not found"], "stdout": "", "stdout_lines": []}
              ...ignoring
              
              TASK [common : Kubesphere | Setting redis ha pv size] **************************
              skipping: [localhost]
              
              TASK [common : Kubesphere | Creating common component manifests] ***************
              changed: [localhost] => (item={u'path': u'etcd', u'file': u'etcd.yaml'})
              changed: [localhost] => (item={u'name': u'mysql', u'file': u'mysql.yaml'})
              changed: [localhost] => (item={u'path': u'redis', u'file': u'redis.yaml'})
              
              TASK [common : Kubesphere | Creating mysql sercet] *****************************
              changed: [localhost]
              
              TASK [common : Kubesphere | Deploying etcd and mysql] **************************
              skipping: [localhost] => (item=etcd.yaml) 
              skipping: [localhost] => (item=mysql.yaml) 
              
              TASK [common : Kubesphere | Getting minio installation files] ******************
              skipping: [localhost] => (item=minio-ha) 
              
              TASK [common : Kubesphere | Creating manifests] ********************************
              skipping: [localhost] => (item={u'name': u'custom-values-minio', u'file': u'custom-values-minio.yaml'}) 
              
              TASK [common : Kubesphere | Check minio] ***************************************
              skipping: [localhost]
              
              TASK [common : Kubesphere | Deploy minio] **************************************
              skipping: [localhost]
              
              TASK [common : debug] **********************************************************
              skipping: [localhost]
              
              TASK [common : fail] ***********************************************************
              skipping: [localhost]
              
              TASK [common : Kubesphere | create minio config directory] *********************
              skipping: [localhost]
              
              TASK [common : Kubesphere | Creating common component manifests] ***************
              skipping: [localhost] => (item={u'path': u'/root/.config/rclone', u'file': u'rclone.conf'}) 
              
              TASK [common : include_tasks] **************************************************
              skipping: [localhost] => (item=helm) 
              skipping: [localhost] => (item=vmbased) 
              
              TASK [common : Kubesphere | Check ha-redis] ************************************
              skipping: [localhost]
              
              TASK [common : Kubesphere | Getting redis installation files] ******************
              skipping: [localhost] => (item=redis-ha) 
              
              TASK [common : Kubesphere | Creating manifests] ********************************
              skipping: [localhost] => (item={u'name': u'custom-values-redis', u'file': u'custom-values-redis.yaml'}) 
              
              TASK [common : Kubesphere | Check old redis status] ****************************
              skipping: [localhost]
              
              TASK [common : Kubesphere | Delete and backup old redis svc] *******************
              skipping: [localhost]
              
              TASK [common : Kubesphere | Deploying redis] ***********************************
              skipping: [localhost]
              
              TASK [common : Kubesphere | Getting redis PodIp] *******************************
              skipping: [localhost]
              
              TASK [common : Kubesphere | Creating redis migration script] *******************
              skipping: [localhost] => (item={u'path': u'/etc/kubesphere', u'file': u'redisMigrate.py'}) 
              
              TASK [common : Kubesphere | Check redis-ha status] *****************************
              skipping: [localhost]
              
              TASK [common : ks-logging | Migrating redis data] ******************************
              skipping: [localhost]
              
              TASK [common : Kubesphere | Disable old redis] *********************************
              skipping: [localhost]
              
              TASK [common : Kubesphere | Deploying redis] ***********************************
              skipping: [localhost] => (item=redis.yaml) 
              
              TASK [common : Kubesphere | Getting openldap installation files] ***************
              skipping: [localhost] => (item=openldap-ha) 
              
              TASK [common : Kubesphere | Creating manifests] ********************************
              skipping: [localhost] => (item={u'name': u'custom-values-openldap', u'file': u'custom-values-openldap.yaml'}) 
              
              TASK [common : Kubesphere | Check old openldap status] *************************
              skipping: [localhost]
              
              TASK [common : KubeSphere | Shutdown ks-account] *******************************
              skipping: [localhost]
              
              TASK [common : Kubesphere | Delete and backup old openldap svc] ****************
              skipping: [localhost]
              
              TASK [common : Kubesphere | Check openldap] ************************************
              skipping: [localhost]
              
              TASK [common : Kubesphere | Deploy openldap] ***********************************
              skipping: [localhost]
              
              TASK [common : Kubesphere | Load old openldap data] ****************************
              skipping: [localhost]
              
              TASK [common : Kubesphere | Check openldap-ha status] **************************
              skipping: [localhost]
              
              TASK [common : Kubesphere | Get openldap-ha pod list] **************************
              skipping: [localhost]
              
              TASK [common : Kubesphere | Get old openldap data] *****************************
              skipping: [localhost]
              
              TASK [common : Kubesphere | Migrating openldap data] ***************************
              skipping: [localhost]
              
              TASK [common : Kubesphere | Disable old openldap] ******************************
              skipping: [localhost]
              
              TASK [common : Kubesphere | Restart openldap] **********************************
              skipping: [localhost]
              
              TASK [common : KubeSphere | Restarting ks-account] *****************************
              skipping: [localhost]
              
              TASK [common : Kubesphere | Check ha-redis] ************************************
              changed: [localhost]
              
              TASK [common : Kubesphere | Getting redis installation files] ******************
              skipping: [localhost] => (item=redis-ha) 
              
              TASK [common : Kubesphere | Creating manifests] ********************************
              skipping: [localhost] => (item={u'name': u'custom-values-redis', u'file': u'custom-values-redis.yaml'}) 
              
              TASK [common : Kubesphere | Check old redis status] ****************************
              skipping: [localhost]
              
              TASK [common : Kubesphere | Delete and backup old redis svc] *******************
              skipping: [localhost]
              
              TASK [common : Kubesphere | Deploying redis] ***********************************
              skipping: [localhost]
              
              TASK [common : Kubesphere | Getting redis PodIp] *******************************
              skipping: [localhost]
              
              TASK [common : Kubesphere | Creating redis migration script] *******************
              skipping: [localhost] => (item={u'path': u'/etc/kubesphere', u'file': u'redisMigrate.py'}) 
              
              TASK [common : Kubesphere | Check redis-ha status] *****************************
              skipping: [localhost]
              
              TASK [common : ks-logging | Migrating redis data] ******************************
              skipping: [localhost]
              
              TASK [common : Kubesphere | Disable old redis] *********************************
              skipping: [localhost]
              
              TASK [common : Kubesphere | Deploying redis] ***********************************
              changed: [localhost] => (item=redis.yaml)
              
              TASK [common : Kubesphere | Getting openldap installation files] ***************
              changed: [localhost] => (item=openldap-ha)
              
              TASK [common : Kubesphere | Creating manifests] ********************************
              changed: [localhost] => (item={u'name': u'custom-values-openldap', u'file': u'custom-values-openldap.yaml'})
              
              TASK [common : Kubesphere | Check old openldap status] *************************
              changed: [localhost]
              
              TASK [common : KubeSphere | Shutdown ks-account] *******************************
              skipping: [localhost]
              
              TASK [common : Kubesphere | Delete and backup old openldap svc] ****************
              skipping: [localhost]
              
              TASK [common : Kubesphere | Check openldap] ************************************
              changed: [localhost]
              
              TASK [common : Kubesphere | Deploy openldap] ***********************************
              skipping: [localhost]
              
              TASK [common : Kubesphere | Load old openldap data] ****************************
              skipping: [localhost]
              
              TASK [common : Kubesphere | Check openldap-ha status] **************************
              skipping: [localhost]
              
              TASK [common : Kubesphere | Get openldap-ha pod list] **************************
              skipping: [localhost]
              
              TASK [common : Kubesphere | Get old openldap data] *****************************
              skipping: [localhost]
              
              TASK [common : Kubesphere | Migrating openldap data] ***************************
              skipping: [localhost]
              
              TASK [common : Kubesphere | Disable old openldap] ******************************
              skipping: [localhost]
              
              TASK [common : Kubesphere | Restart openldap] **********************************
              skipping: [localhost]
              
              TASK [common : KubeSphere | Restarting ks-account] *****************************
              skipping: [localhost]
              
              TASK [common : Kubesphere | Getting minio installation files] ******************
              skipping: [localhost] => (item=minio-ha) 
              
              TASK [common : Kubesphere | Creating manifests] ********************************
              skipping: [localhost] => (item={u'name': u'custom-values-minio', u'file': u'custom-values-minio.yaml'}) 
              
              TASK [common : Kubesphere | Check minio] ***************************************
              skipping: [localhost]
              
              TASK [common : Kubesphere | Deploy minio] **************************************
              skipping: [localhost]
              
              TASK [common : debug] **********************************************************
              skipping: [localhost]
              
              TASK [common : fail] ***********************************************************
              skipping: [localhost]
              
              TASK [common : Kubesphere | create minio config directory] *********************
              skipping: [localhost]
              
              TASK [common : Kubesphere | Creating common component manifests] ***************
              skipping: [localhost] => (item={u'path': u'/root/.config/rclone', u'file': u'rclone.conf'}) 
              
              TASK [common : include_tasks] **************************************************
              skipping: [localhost] => (item=helm) 
              skipping: [localhost] => (item=vmbased) 
              
              TASK [common : Kubesphere | Deploying common component] ************************
              skipping: [localhost] => (item=mysql.yaml) 
              
              TASK [common : Kubesphere | Deploying common component] ************************
              skipping: [localhost] => (item=etcd.yaml) 
              
              TASK [common : Setting persistentVolumeReclaimPolicy (mysql)] ******************
              skipping: [localhost]
              
              TASK [common : Setting persistentVolumeReclaimPolicy (etcd)] *******************
              skipping: [localhost]
              
              PLAY RECAP *********************************************************************
              localhost                  : ok=29   changed=22   unreachable=0    failed=0    skipped=87   rescued=0    ignored=4   
              
              [WARNING]: No inventory was parsed, only implicit localhost is available
              [WARNING]: provided hosts list is empty, only localhost is available. Note that
              the implicit localhost does not match 'all'
              
              PLAY [localhost] ***************************************************************
              
              TASK [download : include_tasks] ************************************************
              skipping: [localhost]
              
              TASK [download : Download items] ***********************************************
              skipping: [localhost]
              
              TASK [download : Sync container] ***********************************************
              skipping: [localhost]
              
              TASK [kubesphere-defaults : Configure defaults] ********************************
              ok: [localhost] => {
                  "msg": "Check roles/kubesphere-defaults/defaults/main.yml"
              }
              
              TASK [ks-core/prepare : KubeSphere | Create KubeSphere dir] ********************
              ok: [localhost]
              
              TASK [ks-core/prepare : KubeSphere | Getting installation init files] **********
              changed: [localhost] => (item=workspace.yaml)
              changed: [localhost] => (item=ks-init)
              
              TASK [ks-core/prepare : KubeSphere | Init KubeSphere system] *******************
              changed: [localhost]
              
              TASK [ks-core/prepare : KubeSphere | Creating KubeSphere Secret] ***************
              changed: [localhost]
              
              TASK [ks-core/prepare : KubeSphere | Creating KubeSphere Secret] ***************
              ok: [localhost]
              
              TASK [ks-core/prepare : KubeSphere | Enable Token Script] **********************
              changed: [localhost]
              
              TASK [ks-core/prepare : KubeSphere | Getting KS Token] *************************
              changed: [localhost]
              
              TASK [ks-core/prepare : KubeSphere | Setting ks_token] *************************
              ok: [localhost]
              
              TASK [ks-core/prepare : KubeSphere | Creating manifests] ***********************
              changed: [localhost] => (item={u'type': u'init', u'name': u'ks-account-init', u'file': u'ks-account-init.yaml'})
              changed: [localhost] => (item={u'type': u'init', u'name': u'ks-apigateway-init', u'file': u'ks-apigateway-init.yaml'})
              changed: [localhost] => (item={u'type': u'values', u'name': u'custom-values-istio-init', u'file': u'custom-values-istio-init.yaml'})
              changed: [localhost] => (item={u'type': u'cm', u'name': u'kubesphere-config', u'file': u'kubesphere-config.yaml'})
              
              TASK [ks-core/prepare : KubeSphere | Init KubeSphere] **************************
              changed: [localhost] => (item=ks-account-init.yaml)
              changed: [localhost] => (item=ks-apigateway-init.yaml)
              changed: [localhost] => (item=kubesphere-config.yaml)
              
              TASK [ks-core/prepare : KubeSphere | Getting controls-system file] *************
              changed: [localhost] => (item={u'name': u'kubesphere-controls-system', u'file': u'kubesphere-controls-system.yaml'})
              
              TASK [ks-core/prepare : KubeSphere | Installing controls-system] ***************
              changed: [localhost]
              
              TASK [ks-core/prepare : KubeSphere | Create KubeSphere workspace] **************
              changed: [localhost]
              
              TASK [ks-core/prepare : KubeSphere | Create KubeSphere vpa] ********************
              skipping: [localhost]
              
              TASK [ks-core/prepare : KubeSphere | Generate kubeconfig-admin] ****************
              skipping: [localhost]
              
              TASK [ks-core/prepare : Kubesphere | Checking kubesphere component] ************
              changed: [localhost]
              
              TASK [ks-core/prepare : Kubesphere | Get kubesphere component version] *********
              skipping: [localhost]
              
              TASK [ks-core/prepare : ks-upgrade | disable ks-apiserver] *********************
              fatal: [localhost]: FAILED! => {"msg": "The conditional check 'console_version.stdout and console_version.stdout != ks_version' failed. The error was: error while evaluating conditional (console_version.stdout and console_version.stdout != ks_version): 'dict object' has no attribute 'stdout'\n\nThe error appears to be in '/kubesphere/installer/roles/ks-core/prepare/tasks/ks-stop.yaml': line 1, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: ks-upgrade | disable ks-apiserver\n  ^ here\n"}
              ...ignoring
              
              TASK [ks-core/prepare : ks-upgrade | disable ks-apigateway] ********************
              fatal: [localhost]: FAILED! => {"msg": "The conditional check 'console_version.stdout and console_version.stdout != ks_version' failed. The error was: error while evaluating conditional (console_version.stdout and console_version.stdout != ks_version): 'dict object' has no attribute 'stdout'\n\nThe error appears to be in '/kubesphere/installer/roles/ks-core/prepare/tasks/ks-stop.yaml': line 6, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: ks-upgrade | disable ks-apigateway\n  ^ here\n"}
              ...ignoring
              
              TASK [ks-core/prepare : ks-upgrade | disable ks-account] ***********************
              fatal: [localhost]: FAILED! => {"msg": "The conditional check 'console_version.stdout and console_version.stdout != ks_version' failed. The error was: error while evaluating conditional (console_version.stdout and console_version.stdout != ks_version): 'dict object' has no attribute 'stdout'\n\nThe error appears to be in '/kubesphere/installer/roles/ks-core/prepare/tasks/ks-stop.yaml': line 11, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: ks-upgrade | disable ks-account\n  ^ here\n"}
              ...ignoring
              
              TASK [ks-core/prepare : ks-upgrade | disable ks-console] ***********************
              fatal: [localhost]: FAILED! => {"msg": "The conditional check 'console_version.stdout and console_version.stdout != ks_version' failed. The error was: error while evaluating conditional (console_version.stdout and console_version.stdout != ks_version): 'dict object' has no attribute 'stdout'\n\nThe error appears to be in '/kubesphere/installer/roles/ks-core/prepare/tasks/ks-stop.yaml': line 16, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: ks-upgrade | disable ks-console\n  ^ here\n"}
              ...ignoring
              
              TASK [ks-core/prepare : ks-upgrade | disable ks-controller-manager] ************
              fatal: [localhost]: FAILED! => {"msg": "The conditional check 'console_version.stdout and console_version.stdout != ks_version' failed. The error was: error while evaluating conditional (console_version.stdout and console_version.stdout != ks_version): 'dict object' has no attribute 'stdout'\n\nThe error appears to be in '/kubesphere/installer/roles/ks-core/prepare/tasks/ks-stop.yaml': line 21, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: ks-upgrade | disable ks-controller-manager\n  ^ here\n"}
              ...ignoring
              
              TASK [ks-core/prepare : ks-upgrade | restart ks-apiserver] *********************
              fatal: [localhost]: FAILED! => {"msg": "The conditional check 'console_version.stdout and console_version.stdout == ks_version' failed. The error was: error while evaluating conditional (console_version.stdout and console_version.stdout == ks_version): 'dict object' has no attribute 'stdout'\n\nThe error appears to be in '/kubesphere/installer/roles/ks-core/prepare/tasks/ks-restart.yaml': line 1, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: ks-upgrade | restart ks-apiserver\n  ^ here\n"}
              ...ignoring
              
              TASK [ks-core/prepare : ks-upgrade | restart ks-apigateway] ********************
              fatal: [localhost]: FAILED! => {"msg": "The conditional check 'console_version.stdout and console_version.stdout == ks_version' failed. The error was: error while evaluating conditional (console_version.stdout and console_version.stdout == ks_version): 'dict object' has no attribute 'stdout'\n\nThe error appears to be in '/kubesphere/installer/roles/ks-core/prepare/tasks/ks-restart.yaml': line 6, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: ks-upgrade | restart ks-apigateway\n  ^ here\n"}
              ...ignoring
              
              TASK [ks-core/prepare : ks-upgrade | restart ks-account] ***********************
              fatal: [localhost]: FAILED! => {"msg": "The conditional check 'console_version.stdout and console_version.stdout == ks_version' failed. The error was: error while evaluating conditional (console_version.stdout and console_version.stdout == ks_version): 'dict object' has no attribute 'stdout'\n\nThe error appears to be in '/kubesphere/installer/roles/ks-core/prepare/tasks/ks-restart.yaml': line 11, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: ks-upgrade | restart ks-account\n  ^ here\n"}
              ...ignoring
              
              TASK [ks-core/prepare : ks-upgrade | restart ks-console] ***********************
              fatal: [localhost]: FAILED! => {"msg": "The conditional check 'console_version.stdout and console_version.stdout == ks_version' failed. The error was: error while evaluating conditional (console_version.stdout and console_version.stdout == ks_version): 'dict object' has no attribute 'stdout'\n\nThe error appears to be in '/kubesphere/installer/roles/ks-core/prepare/tasks/ks-restart.yaml': line 16, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: ks-upgrade | restart ks-console\n  ^ here\n"}
              ...ignoring
              
              TASK [ks-core/prepare : ks-upgrade | restart ks-controller-manager] ************
              fatal: [localhost]: FAILED! => {"msg": "The conditional check 'console_version.stdout and console_version.stdout == ks_version' failed. The error was: error while evaluating conditional (console_version.stdout and console_version.stdout == ks_version): 'dict object' has no attribute 'stdout'\n\nThe error appears to be in '/kubesphere/installer/roles/ks-core/prepare/tasks/ks-restart.yaml': line 21, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: ks-upgrade | restart ks-controller-manager\n  ^ here\n"}
              ...ignoring
              
              TASK [ks-core/ks-core : KubeSphere | Getting kubernetes version] ***************
              changed: [localhost]
              
              TASK [ks-core/ks-core : KubeSphere | Setting kubernetes version] ***************
              ok: [localhost]
              
              TASK [ks-core/ks-core : KubeSphere | Getting kubernetes master num] ************
              changed: [localhost]
              
              TASK [ks-core/ks-core : KubeSphere | Setting master num] ***********************
              ok: [localhost]
              
              TASK [ks-core/ks-core : ks-console | Checking ks-console svc] ******************
              changed: [localhost]
              
              TASK [ks-core/ks-core : ks-console | Getting ks-console svc port] **************
              skipping: [localhost]
              
              TASK [ks-core/ks-core : ks-console | Setting console_port] *********************
              skipping: [localhost]
              
              TASK [ks-core/ks-core : KubeSphere | Getting Ingress installation files] *******
              changed: [localhost] => (item=ingress)
              changed: [localhost] => (item=ks-account)
              changed: [localhost] => (item=ks-apigateway)
              changed: [localhost] => (item=ks-apiserver)
              changed: [localhost] => (item=ks-console)
              changed: [localhost] => (item=ks-controller-manager)
              
              TASK [ks-core/ks-core : KubeSphere | Creating manifests] ***********************
              changed: [localhost] => (item={u'path': u'ingress', u'type': u'config', u'file': u'ingress-controller.yaml'})
              changed: [localhost] => (item={u'path': u'ks-account', u'type': u'deployment', u'file': u'ks-account.yml'})
              changed: [localhost] => (item={u'path': u'ks-apigateway', u'type': u'deploy', u'file': u'ks-apigateway.yaml'})
              changed: [localhost] => (item={u'path': u'ks-apiserver', u'type': u'deploy', u'file': u'ks-apiserver.yml'})
              changed: [localhost] => (item={u'path': u'ks-controller-manager', u'type': u'deploy', u'file': u'ks-controller-manager.yaml'})
              changed: [localhost] => (item={u'path': u'ks-console', u'type': u'config', u'file': u'ks-console-config.yml'})
              changed: [localhost] => (item={u'path': u'ks-console', u'type': u'deploy', u'file': u'ks-console-deployment.yml'})
              changed: [localhost] => (item={u'path': u'ks-console', u'type': u'svc', u'file': u'ks-console-svc.yml'})
              changed: [localhost] => (item={u'path': u'ks-console', u'type': u'deploy', u'file': u'ks-docs-deployment.yaml'})
              changed: [localhost] => (item={u'path': u'ks-console', u'type': u'config', u'file': u'sample-bookinfo-configmap.yaml'})
              
              TASK [ks-core/ks-core : KubeSphere | Delete Ingress-controller configmap] ******
              fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/kubectl delete cm -n kubesphere-system ks-router-config\n", "delta": "0:00:00.775287", "end": "2020-05-21 22:24:22.549046", "msg": "non-zero return code", "rc": 1, "start": "2020-05-21 22:24:21.773759", "stderr": "Error from server (NotFound): configmaps \"ks-router-config\" not found", "stderr_lines": ["Error from server (NotFound): configmaps \"ks-router-config\" not found"], "stdout": "", "stdout_lines": []}
              ...ignoring
              
              TASK [ks-core/ks-core : KubeSphere | Creating Ingress-controller configmap] ****
              changed: [localhost]
              
              TASK [ks-core/ks-core : KubeSphere | Check ks-account version] *****************
              changed: [localhost]
              
              TASK [ks-core/ks-core : KubeSphere | Update kubectl image] *********************
              skipping: [localhost]
              
              TASK [ks-core/ks-core : KubeSphere | Creating ks-core] *************************
              changed: [localhost] => (item={u'path': u'ks-apigateway', u'file': u'ks-apigateway.yaml'})
              changed: [localhost] => (item={u'path': u'ks-apiserver', u'file': u'ks-apiserver.yml'})
              changed: [localhost] => (item={u'path': u'ks-account', u'file': u'ks-account.yml'})
              changed: [localhost] => (item={u'path': u'ks-controller-manager', u'file': u'ks-controller-manager.yaml'})
              changed: [localhost] => (item={u'path': u'ks-console', u'file': u'ks-console-config.yml'})
              changed: [localhost] => (item={u'path': u'ks-console', u'file': u'sample-bookinfo-configmap.yaml'})
              changed: [localhost] => (item={u'path': u'ks-console', u'file': u'ks-console-deployment.yml'})
              
              TASK [ks-core/ks-core : KubeSphere | Check ks-console svc] *********************
              changed: [localhost]
              
              TASK [ks-core/ks-core : KubeSphere | Creating ks-console svc] ******************
              changed: [localhost] => (item={u'path': u'ks-console', u'file': u'ks-console-svc.yml'})
              
              TASK [ks-core/ks-core : KubeSphere | Patch ks-console svc] *********************
              skipping: [localhost]
              
              PLAY RECAP *********************************************************************
              localhost                  : ok=38   changed=22   unreachable=0    failed=0    skipped=10   rescued=0    ignored=11  
              
              Start installing monitoring
              **************************************************
              task monitoring status is failed
              total: 1     completed:1
              **************************************************
              
              
              Task 'monitoring' failed:
              ******************************************************************************************************************************************************
              {
                "counter": 74, 
                "created": "2020-05-21T22:27:01.701795", 
                "end_line": 74, 
                "event": "runner_on_failed", 
                "event_data": {
                  "event_loop": null, 
                  "host": "localhost", 
                  "ignore_errors": null, 
                  "play": "localhost", 
                  "play_pattern": "localhost", 
                  "play_uuid": "12927f56-6706-f2d9-ead1-000000000005", 
                  "playbook": "/kubesphere/playbooks/monitoring.yaml", 
                  "playbook_uuid": "4253bbba-33ba-4fad-bebf-400fd795c502", 
                  "remote_addr": "127.0.0.1", 
                  "res": {
                    "changed": true, 
                    "msg": "All items completed", 
                    "results": [
                      {
                        "_ansible_item_label": "sources", 
                        "_ansible_no_log": false, 
                        "ansible_loop_var": "item", 
                        "attempts": 5, 
                        "changed": true, 
                        "cmd": "/usr/local/bin/kubectl apply -f /etc/kubesphere/prometheus/sources", 
                        "delta": "0:00:02.055784", 
                        "end": "2020-05-21 22:26:31.342519", 
                        "failed": true, 
                        "failed_when_result": true, 
                        "invocation": {
                          "module_args": {
                            "_raw_params": "/usr/local/bin/kubectl apply -f /etc/kubesphere/prometheus/sources", 
                            "_uses_shell": true, 
                            "argv": null, 
                            "chdir": null, 
                            "creates": null, 
                            "executable": null, 
                            "removes": null, 
                            "stdin": null, 
                            "stdin_add_newline": true, 
                            "strip_empty_ends": true, 
                            "warn": true
                          }
                        }, 
                        "item": "sources", 
                        "msg": "non-zero return code", 
                        "rc": 1, 
                        "start": "2020-05-21 22:26:29.286735", 
                        "stderr": "unable to recognize \"/etc/kubesphere/prometheus/sources/kube-state-metrics-serviceMonitor.yaml\": no matches for kind \"ServiceMonitor\" in version \"monitoring.coreos.com/v1\"\nunable to recognize \"/etc/kubesphere/prometheus/sources/node-exporter-serviceMonitor.yaml\": no matches for kind \"ServiceMonitor\" in version \"monitoring.coreos.com/v1\"\nunable to recognize \"/etc/kubesphere/prometheus/sources/prometheus-prometheus.yaml\": no matches for kind \"Prometheus\" in version \"monitoring.coreos.com/v1\"\nunable to recognize \"/etc/kubesphere/prometheus/sources/prometheus-prometheusSystem.yaml\": no matches for kind \"Prometheus\" in version \"monitoring.coreos.com/v1\"\nunable to recognize \"/etc/kubesphere/prometheus/sources/prometheus-serviceMonitor.yaml\": no matches for kind \"ServiceMonitor\" in version \"monitoring.coreos.com/v1\"\nunable to recognize \"/etc/kubesphere/prometheus/sources/prometheus-serviceMonitorApiserver.yaml\": no matches for kind \"ServiceMonitor\" in version \"monitoring.coreos.com/v1\"\nunable to recognize \"/etc/kubesphere/prometheus/sources/prometheus-serviceMonitorCoreDNS.yaml\": no matches for kind \"ServiceMonitor\" in version \"monitoring.coreos.com/v1\"\nunable to recognize \"/etc/kubesphere/prometheus/sources/prometheus-serviceMonitorKubeControllerManager.yaml\": no matches for kind \"ServiceMonitor\" in version \"monitoring.coreos.com/v1\"\nunable to recognize \"/etc/kubesphere/prometheus/sources/prometheus-serviceMonitorKubeScheduler.yaml\": no matches for kind \"ServiceMonitor\" in version \"monitoring.coreos.com/v1\"\nunable to recognize \"/etc/kubesphere/prometheus/sources/prometheus-serviceMonitorKubelet.yaml\": no matches for kind \"ServiceMonitor\" in version \"monitoring.coreos.com/v1\"\nunable to recognize \"/etc/kubesphere/prometheus/sources/prometheus-serviceMonitorSystem.yaml\": no matches for kind \"ServiceMonitor\" in version \"monitoring.coreos.com/v1\"", 
                        "stderr_lines": [
                          "unable to recognize \"/etc/kubesphere/prometheus/sources/kube-state-metrics-serviceMonitor.yaml\": no matches for kind \"ServiceMonitor\" in version \"monitoring.coreos.com/v1\"", 
                          "unable to recognize \"/etc/kubesphere/prometheus/sources/node-exporter-serviceMonitor.yaml\": no matches for kind \"ServiceMonitor\" in version \"monitoring.coreos.com/v1\"", 
                          "unable to recognize \"/etc/kubesphere/prometheus/sources/prometheus-prometheus.yaml\": no matches for kind \"Prometheus\" in version \"monitoring.coreos.com/v1\"", 
                          "unable to recognize \"/etc/kubesphere/prometheus/sources/prometheus-prometheusSystem.yaml\": no matches for kind \"Prometheus\" in version \"monitoring.coreos.com/v1\"", 
                          "unable to recognize \"/etc/kubesphere/prometheus/sources/prometheus-serviceMonitor.yaml\": no matches for kind \"ServiceMonitor\" in version \"monitoring.coreos.com/v1\"", 
                          "unable to recognize \"/etc/kubesphere/prometheus/sources/prometheus-serviceMonitorApiserver.yaml\": no matches for kind \"ServiceMonitor\" in version \"monitoring.coreos.com/v1\"", 
                          "unable to recognize \"/etc/kubesphere/prometheus/sources/prometheus-serviceMonitorCoreDNS.yaml\": no matches for kind \"ServiceMonitor\" in version \"monitoring.coreos.com/v1\"", 
                          "unable to recognize \"/etc/kubesphere/prometheus/sources/prometheus-serviceMonitorKubeControllerManager.yaml\": no matches for kind \"ServiceMonitor\" in version \"monitoring.coreos.com/v1\"", 
                          "unable to recognize \"/etc/kubesphere/prometheus/sources/prometheus-serviceMonitorKubeScheduler.yaml\": no matches for kind \"ServiceMonitor\" in version \"monitoring.coreos.com/v1\"", 
                          "unable to recognize \"/etc/kubesphere/prometheus/sources/prometheus-serviceMonitorKubelet.yaml\": no matches for kind \"ServiceMonitor\" in version \"monitoring.coreos.com/v1\"", 
                          "unable to recognize \"/etc/kubesphere/prometheus/sources/prometheus-serviceMonitorSystem.yaml\": no matches for kind \"ServiceMonitor\" in version \"monitoring.coreos.com/v1\""
                        ], 
                        "stdout": "secret/additional-scrape-configs unchanged\nclusterrole.rbac.authorization.k8s.io/kubesphere-kube-state-metrics unchanged\nclusterrolebinding.rbac.authorization.k8s.io/kubesphere-kube-state-metrics unchanged\ndeployment.apps/kube-state-metrics unchanged\nrole.rbac.authorization.k8s.io/kube-state-metrics unchanged\nrolebinding.rbac.authorization.k8s.io/kube-state-metrics unchanged\nservice/kube-state-metrics unchanged\nserviceaccount/kube-state-metrics unchanged\nclusterrole.rbac.authorization.k8s.io/kubesphere-node-exporter unchanged\nclusterrolebinding.rbac.authorization.k8s.io/kubesphere-node-exporter unchanged\ndaemonset.apps/node-exporter configured\nservice/node-exporter unchanged\nserviceaccount/node-exporter unchanged\nclusterrole.rbac.authorization.k8s.io/kubesphere-prometheus-k8s unchanged\nclusterrolebinding.rbac.authorization.k8s.io/kubesphere-prometheus-k8s unchanged\nrolebinding.rbac.authorization.k8s.io/prometheus-k8s-config unchanged\nrole.rbac.authorization.k8s.io/prometheus-k8s-config unchanged\nservice/prometheus-k8s unchanged\nserviceaccount/prometheus-k8s unchanged\nservice/kube-controller-manager-headless unchanged\nservice/kube-scheduler-headless unchanged\nservice/prometheus-k8s-system unchanged", 
                        "stdout_lines": [
                          "secret/additional-scrape-configs unchanged", 
                          "clusterrole.rbac.authorization.k8s.io/kubesphere-kube-state-metrics unchanged", 
                          "clusterrolebinding.rbac.authorization.k8s.io/kubesphere-kube-state-metrics unchanged", 
                          "deployment.apps/kube-state-metrics unchanged", 
                          "role.rbac.authorization.k8s.io/kube-state-metrics unchanged", 
                          "rolebinding.rbac.authorization.k8s.io/kube-state-metrics unchanged", 
                          "service/kube-state-metrics unchanged", 
                          "serviceaccount/kube-state-metrics unchanged", 
                          "clusterrole.rbac.authorization.k8s.io/kubesphere-node-exporter unchanged", 
                          "clusterrolebinding.rbac.authorization.k8s.io/kubesphere-node-exporter unchanged", 
                          "daemonset.apps/node-exporter configured", 
                          "service/node-exporter unchanged", 
                          "serviceaccount/node-exporter unchanged", 
                          "clusterrole.rbac.authorization.k8s.io/kubesphere-prometheus-k8s unchanged", 
                          "clusterrolebinding.rbac.authorization.k8s.io/kubesphere-prometheus-k8s unchanged", 
                          "rolebinding.rbac.authorization.k8s.io/prometheus-k8s-config unchanged", 
                          "role.rbac.authorization.k8s.io/prometheus-k8s-config unchanged", 
                          "service/prometheus-k8s unchanged", 
                          "serviceaccount/prometheus-k8s unchanged", 
                          "service/kube-controller-manager-headless unchanged", 
                          "service/kube-scheduler-headless unchanged", 
                          "service/prometheus-k8s-system unchanged"
                        ]
                      }, 
                      {
                        "_ansible_item_label": "sources", 
                        "_ansible_no_log": false, 
                        "ansible_loop_var": "item", 
                        "attempts": 5, 
                        "changed": true, 
                        "cmd": "/usr/local/bin/kubectl apply -f /etc/kubesphere/prometheus/sources", 
                        "delta": "0:00:01.208305", 
                        "end": "2020-05-21 22:27:01.585331", 
                        "failed": true, 
                        "failed_when_result": true, 
                        "invocation": {
                          "module_args": {
                            "_raw_params": "/usr/local/bin/kubectl apply -f /etc/kubesphere/prometheus/sources", 
                            "_uses_shell": true, 
                            "argv": null, 
                            "chdir": null, 
                            "creates": null, 
                            "executable": null, 
                            "removes": null, 
                            "stdin": null, 
                            "stdin_add_newline": true, 
                            "strip_empty_ends": true, 
                            "warn": true
                          }
                        }, 
                        "item": "sources", 
                        "msg": "non-zero return code", 
                        "rc": 1, 
                        "start": "2020-05-21 22:27:00.377026", 
                        "stderr": "unable to recognize \"/etc/kubesphere/prometheus/sources/kube-state-metrics-serviceMonitor.yaml\": no matches for kind \"ServiceMonitor\" in version \"monitoring.coreos.com/v1\"\nunable to recognize \"/etc/kubesphere/prometheus/sources/node-exporter-serviceMonitor.yaml\": no matches for kind \"ServiceMonitor\" in version \"monitoring.coreos.com/v1\"\nunable to recognize \"/etc/kubesphere/prometheus/sources/prometheus-prometheus.yaml\": no matches for kind \"Prometheus\" in version \"monitoring.coreos.com/v1\"\nunable to recognize \"/etc/kubesphere/prometheus/sources/prometheus-prometheusSystem.yaml\": no matches for kind \"Prometheus\" in version \"monitoring.coreos.com/v1\"\nunable to recognize \"/etc/kubesphere/prometheus/sources/prometheus-serviceMonitor.yaml\": no matches for kind \"ServiceMonitor\" in version \"monitoring.coreos.com/v1\"\nunable to recognize \"/etc/kubesphere/prometheus/sources/prometheus-serviceMonitorApiserver.yaml\": no matches for kind \"ServiceMonitor\" in version \"monitoring.coreos.com/v1\"\nunable to recognize \"/etc/kubesphere/prometheus/sources/prometheus-serviceMonitorCoreDNS.yaml\": no matches for kind \"ServiceMonitor\" in version \"monitoring.coreos.com/v1\"\nunable to recognize \"/etc/kubesphere/prometheus/sources/prometheus-serviceMonitorKubeControllerManager.yaml\": no matches for kind \"ServiceMonitor\" in version \"monitoring.coreos.com/v1\"\nunable to recognize \"/etc/kubesphere/prometheus/sources/prometheus-serviceMonitorKubeScheduler.yaml\": no matches for kind \"ServiceMonitor\" in version \"monitoring.coreos.com/v1\"\nunable to recognize \"/etc/kubesphere/prometheus/sources/prometheus-serviceMonitorKubelet.yaml\": no matches for kind \"ServiceMonitor\" in version \"monitoring.coreos.com/v1\"\nunable to recognize \"/etc/kubesphere/prometheus/sources/prometheus-serviceMonitorSystem.yaml\": no matches for kind \"ServiceMonitor\" in version \"monitoring.coreos.com/v1\"", 
                        "stderr_lines": [
                          "unable to recognize \"/etc/kubesphere/prometheus/sources/kube-state-metrics-serviceMonitor.yaml\": no matches for kind \"ServiceMonitor\" in version \"monitoring.coreos.com/v1\"", 
                          "unable to recognize \"/etc/kubesphere/prometheus/sources/node-exporter-serviceMonitor.yaml\": no matches for kind \"ServiceMonitor\" in version \"monitoring.coreos.com/v1\"", 
                          "unable to recognize \"/etc/kubesphere/prometheus/sources/prometheus-prometheus.yaml\": no matches for kind \"Prometheus\" in version \"monitoring.coreos.com/v1\"", 
                          "unable to recognize \"/etc/kubesphere/prometheus/sources/prometheus-prometheusSystem.yaml\": no matches for kind \"Prometheus\" in version \"monitoring.coreos.com/v1\"", 
                          "unable to recognize \"/etc/kubesphere/prometheus/sources/prometheus-serviceMonitor.yaml\": no matches for kind \"ServiceMonitor\" in version \"monitoring.coreos.com/v1\"", 
                          "unable to recognize \"/etc/kubesphere/prometheus/sources/prometheus-serviceMonitorApiserver.yaml\": no matches for kind \"ServiceMonitor\" in version \"monitoring.coreos.com/v1\"", 
                          "unable to recognize \"/etc/kubesphere/prometheus/sources/prometheus-serviceMonitorCoreDNS.yaml\": no matches for kind \"ServiceMonitor\" in version \"monitoring.coreos.com/v1\"", 
                          "unable to recognize \"/etc/kubesphere/prometheus/sources/prometheus-serviceMonitorKubeControllerManager.yaml\": no matches for kind \"ServiceMonitor\" in version \"monitoring.coreos.com/v1\"", 
                          "unable to recognize \"/etc/kubesphere/prometheus/sources/prometheus-serviceMonitorKubeScheduler.yaml\": no matches for kind \"ServiceMonitor\" in version \"monitoring.coreos.com/v1\"", 
                          "unable to recognize \"/etc/kubesphere/prometheus/sources/prometheus-serviceMonitorKubelet.yaml\": no matches for kind \"ServiceMonitor\" in version \"monitoring.coreos.com/v1\"", 
                          "unable to recognize \"/etc/kubesphere/prometheus/sources/prometheus-serviceMonitorSystem.yaml\": no matches for kind \"ServiceMonitor\" in version \"monitoring.coreos.com/v1\""
                        ], 
                        "stdout": "secret/additional-scrape-configs unchanged\nclusterrole.rbac.authorization.k8s.io/kubesphere-kube-state-metrics unchanged\nclusterrolebinding.rbac.authorization.k8s.io/kubesphere-kube-state-metrics unchanged\ndeployment.apps/kube-state-metrics unchanged\nrole.rbac.authorization.k8s.io/kube-state-metrics unchanged\nrolebinding.rbac.authorization.k8s.io/kube-state-metrics unchanged\nservice/kube-state-metrics unchanged\nserviceaccount/kube-state-metrics unchanged\nclusterrole.rbac.authorization.k8s.io/kubesphere-node-exporter unchanged\nclusterrolebinding.rbac.authorization.k8s.io/kubesphere-node-exporter unchanged\ndaemonset.apps/node-exporter configured\nservice/node-exporter unchanged\nserviceaccount/node-exporter unchanged\nclusterrole.rbac.authorization.k8s.io/kubesphere-prometheus-k8s unchanged\nclusterrolebinding.rbac.authorization.k8s.io/kubesphere-prometheus-k8s unchanged\nrolebinding.rbac.authorization.k8s.io/prometheus-k8s-config unchanged\nrole.rbac.authorization.k8s.io/prometheus-k8s-config unchanged\nservice/prometheus-k8s unchanged\nserviceaccount/prometheus-k8s unchanged\nservice/kube-controller-manager-headless unchanged\nservice/kube-scheduler-headless unchanged\nservice/prometheus-k8s-system unchanged", 
                        "stdout_lines": [
                          "secret/additional-scrape-configs unchanged", 
                          "clusterrole.rbac.authorization.k8s.io/kubesphere-kube-state-metrics unchanged", 
                          "clusterrolebinding.rbac.authorization.k8s.io/kubesphere-kube-state-metrics unchanged", 
                          "deployment.apps/kube-state-metrics unchanged", 
                          "role.rbac.authorization.k8s.io/kube-state-metrics unchanged", 
                          "rolebinding.rbac.authorization.k8s.io/kube-state-metrics unchanged", 
                          "service/kube-state-metrics unchanged", 
                          "serviceaccount/kube-state-metrics unchanged", 
                          "clusterrole.rbac.authorization.k8s.io/kubesphere-node-exporter unchanged", 
                          "clusterrolebinding.rbac.authorization.k8s.io/kubesphere-node-exporter unchanged", 
                          "daemonset.apps/node-exporter configured", 
                          "service/node-exporter unchanged", 
                          "serviceaccount/node-exporter unchanged", 
                          "clusterrole.rbac.authorization.k8s.io/kubesphere-prometheus-k8s unchanged", 
                          "clusterrolebinding.rbac.authorization.k8s.io/kubesphere-prometheus-k8s unchanged", 
                          "rolebinding.rbac.authorization.k8s.io/prometheus-k8s-config unchanged", 
                          "role.rbac.authorization.k8s.io/prometheus-k8s-config unchanged", 
                          "service/prometheus-k8s unchanged", 
                          "serviceaccount/prometheus-k8s unchanged", 
                          "service/kube-controller-manager-headless unchanged", 
                          "service/kube-scheduler-headless unchanged", 
                          "service/prometheus-k8s-system unchanged"
                        ]
                      }
                    ]
                  }, 
                  "role": "ks-monitor", 
                  "task": "ks-monitor | Installing prometheus-operator", 
                  "task_action": "shell", 
                  "task_args": "", 
                  "task_path": "/kubesphere/installer/roles/ks-monitor/tasks/main.yaml:66", 
                  "task_uuid": "12927f56-6706-f2d9-ead1-000000000024", 
                  "uuid": "aecb0548-4e53-4783-a261-a997d5b17ed5"
                }, 
                "parent_uuid": "12927f56-6706-f2d9-ead1-000000000024", 
                "pid": 2473, 
                "runner_ident": "monitoring", 
                "start_line": 74, 
                "stdout": "", 
                "uuid": "aecb0548-4e53-4783-a261-a997d5b17ed5"
              }
              ******************************************************************************************************************************************************
              #####################################################
              ###              Welcome to KubeSphere!           ###
              #####################################################
              
              Console: http://10.0.2.5:30880
              Account: admin
              Password: P@88w0rd
              
              NOTES:
                1. After logging into the console, please check the
                   monitoring status of service components in
                   the "Cluster Status". If the service is not
                   ready, please wait patiently. You can start
                   to use when all components are ready.
                2. Please modify the default password after login.
              
              #####################################################
              

              In the log I can see Task 'monitoring' failed; does that mean the monitoring installation did not succeed? Could you help take a look at this issue? Thanks again for your support! The management console page looks like this (screenshot attached):
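              The stderr captured in that failed event, "no matches for kind \"ServiceMonitor\" in version \"monitoring.coreos.com/v1\"", usually means the prometheus-operator CustomResourceDefinitions were not registered when the installer applied the monitoring sources. A minimal check, assuming the standard prometheus-operator CRD names (prometheuses.monitoring.coreos.com and servicemonitors.monitoring.coreos.com):

                # List any CRDs from the monitoring.coreos.com API group
                kubectl get crd | grep monitoring.coreos.com

                # The monitoring manifests need at least these two; a NotFound error here
                # confirms the Prometheus/ServiceMonitor kinds are unknown to the API server
                kubectl get crd prometheuses.monitoring.coreos.com
                kubectl get crd servicemonitors.monitoring.coreos.com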

                rayzhou2017
                At the moment the Monitor component does seem to have a problem, as shown in the screenshot below:

                Related resource information:

                [root@k8s-node1 ~]# kubectl get ClusterRole|grep state-metrics
                kubesphere-kube-state-metrics                                          49m
                [root@k8s-node1 ~]# kubectl get sa -n kubesphere-monitoring-system
                NAME                  SECRETS   AGE
                default               1         2d4h
                kube-state-metrics    1         49m
                node-exporter         1         49m
                prometheus-k8s        1         49m
                prometheus-operator   1         49m
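
                The ServiceAccounts above were created, but the ServiceMonitor/Prometheus objects could not be applied, so it is worth checking whether the prometheus-operator itself came up, since it is the component that registers those CRDs. A rough sketch, assuming the operator runs as a Deployment named prometheus-operator in kubesphere-monitoring-system and that restarting the ks-installer pod re-runs the failed task:

                # Check whether the operator Deployment and its pod are actually running
                kubectl -n kubesphere-monitoring-system get deploy,pods

                # If the operator pod is up but the CRDs never appear, inspect its logs
                # (the Deployment name here is an assumption)
                kubectl -n kubesphere-monitoring-system logs deploy/prometheus-operator

                # Once the CRDs exist, restarting the ks-installer pod should re-run the
                # failed monitoring task (assumption based on the shell-operator setup)
                kubectl -n kubesphere-system delete pod -l app=ks-install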