Cauchy
First of all, thank you very much for your reply, and thanks also to Feynman for following up on this issue from the very beginning.
After the fourth failed installation, I checked the CRDs and workspaces as you suggested:
[root@k8s-node1 ~]# kubectl get crd
NAME CREATED AT
applications.app.k8s.io 2020-05-24T18:09:09Z
blockdeviceclaims.openebs.io 2020-05-24T13:08:30Z
blockdevices.openebs.io 2020-05-24T13:08:26Z
castemplates.openebs.io 2020-05-24T14:36:41Z
cstorbackups.openebs.io 2020-05-24T14:36:41Z
cstorcompletedbackups.openebs.io 2020-05-24T14:36:41Z
cstorpoolinstances.openebs.io 2020-05-24T14:36:41Z
cstorpools.openebs.io 2020-05-24T14:36:41Z
cstorrestores.openebs.io 2020-05-24T14:36:41Z
cstorvolumeclaims.openebs.io 2020-05-24T14:36:41Z
cstorvolumereplicas.openebs.io 2020-05-24T14:36:41Z
cstorvolumes.openebs.io 2020-05-24T14:36:41Z
destinationrules.networking.istio.io 2020-05-24T18:09:10Z
disks.openebs.io 2020-05-24T13:08:22Z
fluentbits.logging.kubesphere.io 2020-05-24T18:09:10Z
runtasks.openebs.io 2020-05-24T14:36:41Z
s2ibinaries.devops.kubesphere.io 2020-05-24T18:09:09Z
s2ibuilders.devops.kubesphere.io 2020-05-24T18:09:09Z
s2ibuildertemplates.devops.kubesphere.io 2020-05-24T18:09:10Z
s2iruns.devops.kubesphere.io 2020-05-24T18:09:10Z
servicepolicies.servicemesh.kubesphere.io 2020-05-24T18:09:10Z
storagepoolclaims.openebs.io 2020-05-24T14:36:41Z
storagepools.openebs.io 2020-05-24T14:36:41Z
strategies.servicemesh.kubesphere.io 2020-05-24T18:09:10Z
upgradetasks.openebs.io 2020-05-24T14:36:41Z
virtualservices.networking.istio.io 2020-05-24T18:09:10Z
volumesnapshotdatas.volumesnapshot.external-storage.k8s.io 2020-05-24T13:08:07Z
volumesnapshots.volumesnapshot.external-storage.k8s.io 2020-05-24T13:08:07Z
workspaces.tenant.kubesphere.io 2020-05-24T18:09:10Z
[root@k8s-node1 ~]# kubectl get workspaces
No resources found in default namespace.
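(A quick sanity check I could also run here, as a sketch; the jsonpath query is only illustrative:

[root@k8s-node1 ~]# kubectl get crd workspaces.tenant.kubesphere.io -o jsonpath='{.spec.scope}'

Since workspaces.tenant.kubesphere.io does appear in the CRD list above, the empty result from kubectl get workspaces should just mean that no Workspace objects have been created yet, not that the CRD is missing.)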
After that, again following your suggestion, I reinstalled. The log output is as follows:
[root@k8s-node1 ~]# kubectl rollout restart deploy -n kubesphere-system ks-installer
deployment.apps/ks-installer restarted
[root@k8s-node1 ~]# kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
2020-05-21T22:21:44Z INFO : shell-operator v1.0.0-beta.5
2020-05-21T22:21:44Z INFO : Use temporary dir: /tmp/shell-operator
2020-05-21T22:21:44Z INFO : Initialize hooks manager ...
2020-05-21T22:21:44Z INFO : Search and load hooks ...
2020-05-21T22:21:44Z INFO : Load hook config from '/hooks/kubesphere/installRunner.py'
2020-05-21T22:21:44Z INFO : HTTP SERVER Listening on 0.0.0.0:9115
2020-05-21T22:21:45Z INFO : Initializing schedule manager ...
2020-05-21T22:21:45Z INFO : KUBE Init Kubernetes client
2020-05-21T22:21:45Z INFO : KUBE-INIT Kubernetes client is configured successfully
2020-05-21T22:21:45Z INFO : MAIN: run main loop
2020-05-21T22:21:45Z INFO : MAIN: add onStartup tasks
2020-05-21T22:21:45Z INFO : Running schedule manager ...
2020-05-21T22:21:45Z INFO : QUEUE add all HookRun@OnStartup
2020-05-21T22:21:45Z INFO : MSTOR Create new metric shell_operator_live_ticks
2020-05-21T22:21:45Z INFO : MSTOR Create new metric shell_operator_tasks_queue_length
2020-05-21T22:21:45Z INFO : GVR for kind 'ConfigMap' is /v1, Resource=configmaps
2020-05-21T22:21:45Z INFO : EVENT Kube event '4c624d86-852b-4838-9d28-377d11707af9'
2020-05-21T22:21:45Z INFO : QUEUE add TASK_HOOK_RUN@KUBE_EVENTS kubesphere/installRunner.py
2020-05-21T22:21:48Z INFO : TASK_RUN HookRun@KUBE_EVENTS kubesphere/installRunner.py
2020-05-21T22:21:48Z INFO : Running hook 'kubesphere/installRunner.py' binding 'KUBE_EVENTS' ...
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that
the implicit localhost does not match 'all'
PLAY [localhost] ***************************************************************
TASK [download : include_tasks] ************************************************
skipping: [localhost]
TASK [download : Download items] ***********************************************
skipping: [localhost]
TASK [download : Sync container] ***********************************************
skipping: [localhost]
TASK [kubesphere-defaults : Configure defaults] ********************************
ok: [localhost] => {
"msg": "Check roles/kubesphere-defaults/defaults/main.yml"
}
TASK [preinstall : check k8s version] ******************************************
changed: [localhost]
TASK [preinstall : init k8s version] *******************************************
ok: [localhost]
TASK [preinstall : Stop if kuernetes version is nonsupport] ********************
ok: [localhost] => {
"changed": false,
"msg": "All assertions passed"
}
TASK [preinstall : check helm status] ******************************************
changed: [localhost]
TASK [preinstall : Stop if Helm is not available] ******************************
ok: [localhost] => {
"changed": false,
"msg": "All assertions passed"
}
TASK [preinstall : check storage class] ****************************************
changed: [localhost]
TASK [preinstall : Stop if StorageClass was not found] *************************
skipping: [localhost]
TASK [preinstall : check default storage class] ********************************
changed: [localhost]
TASK [preinstall : Stop if defaultStorageClass was not found] ******************
ok: [localhost] => {
"changed": false,
"msg": "All assertions passed"
}
PLAY RECAP *********************************************************************
localhost : ok=9 changed=4 unreachable=0 failed=0 skipped=4 rescued=0 ignored=0
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that
the implicit localhost does not match 'all'
PLAY [localhost] ***************************************************************
TASK [download : include_tasks] ************************************************
skipping: [localhost]
TASK [download : Download items] ***********************************************
skipping: [localhost]
TASK [download : Sync container] ***********************************************
skipping: [localhost]
TASK [kubesphere-defaults : Configure defaults] ********************************
ok: [localhost] => {
"msg": "Check roles/kubesphere-defaults/defaults/main.yml"
}
TASK [metrics-server : Metrics-Server | Checking old installation files] *******
skipping: [localhost]
TASK [metrics-server : Metrics-Server | deleting old prometheus-operator] ******
skipping: [localhost]
TASK [metrics-server : Metrics-Server | deleting old metrics-server files] *****
skipping: [localhost] => (item=metrics-server)
TASK [metrics-server : Metrics-Server | Getting metrics-server installation files] ***
skipping: [localhost]
TASK [metrics-server : Metrics-Server | Creating manifests] ********************
skipping: [localhost] => (item={u'type': u'config', u'name': u'values', u'file': u'values.yaml'})
TASK [metrics-server : Metrics-Server | Check Metrics-Server] ******************
skipping: [localhost]
TASK [metrics-server : Metrics-Server | Installing metrics-server] *************
skipping: [localhost]
TASK [metrics-server : Metrics-Server | Installing metrics-server retry] *******
skipping: [localhost]
TASK [metrics-server : Metrics-Server | Waitting for v1beta1.metrics.k8s.io ready] ***
skipping: [localhost]
PLAY RECAP *********************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that
the implicit localhost does not match 'all'
PLAY [localhost] ***************************************************************
TASK [download : include_tasks] ************************************************
skipping: [localhost]
TASK [download : Download items] ***********************************************
skipping: [localhost]
TASK [download : Sync container] ***********************************************
skipping: [localhost]
TASK [kubesphere-defaults : Configure defaults] ********************************
ok: [localhost] => {
"msg": "Check roles/kubesphere-defaults/defaults/main.yml"
}
TASK [common : Kubesphere | Check kube-node-lease namespace] *******************
changed: [localhost]
TASK [common : KubeSphere | Get system namespaces] *****************************
ok: [localhost]
TASK [common : set_fact] *******************************************************
ok: [localhost]
TASK [common : debug] **********************************************************
ok: [localhost] => {
"msg": [
"kubesphere-system",
"kubesphere-controls-system",
"kubesphere-monitoring-system",
"kube-node-lease"
]
}
TASK [common : KubeSphere | Create kubesphere namespace] ***********************
changed: [localhost] => (item=kubesphere-system)
changed: [localhost] => (item=kubesphere-controls-system)
changed: [localhost] => (item=kubesphere-monitoring-system)
changed: [localhost] => (item=kube-node-lease)
TASK [common : KubeSphere | Labeling system-workspace] *************************
changed: [localhost] => (item=default)
changed: [localhost] => (item=kube-public)
changed: [localhost] => (item=kube-system)
changed: [localhost] => (item=kubesphere-system)
changed: [localhost] => (item=kubesphere-controls-system)
changed: [localhost] => (item=kubesphere-monitoring-system)
changed: [localhost] => (item=kube-node-lease)
TASK [common : KubeSphere | Create ImagePullSecrets] ***************************
changed: [localhost] => (item=default)
changed: [localhost] => (item=kube-public)
changed: [localhost] => (item=kube-system)
changed: [localhost] => (item=kubesphere-system)
changed: [localhost] => (item=kubesphere-controls-system)
changed: [localhost] => (item=kubesphere-monitoring-system)
changed: [localhost] => (item=kube-node-lease)
TASK [common : KubeSphere | Getting kubernetes master num] *********************
changed: [localhost]
TASK [common : KubeSphere | Setting master num] ********************************
ok: [localhost]
TASK [common : Kubesphere | Getting common component installation files] *******
changed: [localhost] => (item=common)
changed: [localhost] => (item=ks-crds)
TASK [common : KubeSphere | Create KubeSphere crds] ****************************
changed: [localhost]
TASK [common : Kubesphere | Checking openpitrix common component] **************
changed: [localhost]
TASK [common : include_tasks] **************************************************
skipping: [localhost] => (item={u'ks': u'mysql-pvc', u'op': u'openpitrix-db'})
skipping: [localhost] => (item={u'ks': u'etcd-pvc', u'op': u'openpitrix-etcd'})
TASK [common : Getting PersistentVolumeName (mysql)] ***************************
skipping: [localhost]
TASK [common : Getting PersistentVolumeSize (mysql)] ***************************
skipping: [localhost]
TASK [common : Setting PersistentVolumeName (mysql)] ***************************
skipping: [localhost]
TASK [common : Setting PersistentVolumeSize (mysql)] ***************************
skipping: [localhost]
TASK [common : Getting PersistentVolumeName (etcd)] ****************************
skipping: [localhost]
TASK [common : Getting PersistentVolumeSize (etcd)] ****************************
skipping: [localhost]
TASK [common : Setting PersistentVolumeName (etcd)] ****************************
skipping: [localhost]
TASK [common : Setting PersistentVolumeSize (etcd)] ****************************
skipping: [localhost]
TASK [common : Kubesphere | Check mysql PersistentVolumeClaim] *****************
fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/kubectl get pvc -n kubesphere-system mysql-pvc -o jsonpath='{.status.capacity.storage}'\n", "delta": "0:00:00.780890", "end": "2020-05-21 22:23:02.376997", "msg": "non-zero return code", "rc": 1, "start": "2020-05-21 22:23:01.596107", "stderr": "Error from server (NotFound): persistentvolumeclaims \"mysql-pvc\" not found", "stderr_lines": ["Error from server (NotFound): persistentvolumeclaims \"mysql-pvc\" not found"], "stdout": "", "stdout_lines": []}
...ignoring
TASK [common : Kubesphere | Setting mysql db pv size] **************************
skipping: [localhost]
TASK [common : Kubesphere | Check redis PersistentVolumeClaim] *****************
changed: [localhost]
TASK [common : Kubesphere | Setting redis db pv size] **************************
ok: [localhost]
TASK [common : Kubesphere | Check minio PersistentVolumeClaim] *****************
fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/kubectl get pvc -n kubesphere-system minio -o jsonpath='{.status.capacity.storage}'\n", "delta": "0:00:00.928649", "end": "2020-05-21 22:23:05.736725", "msg": "non-zero return code", "rc": 1, "start": "2020-05-21 22:23:04.808076", "stderr": "Error from server (NotFound): persistentvolumeclaims \"minio\" not found", "stderr_lines": ["Error from server (NotFound): persistentvolumeclaims \"minio\" not found"], "stdout": "", "stdout_lines": []}
...ignoring
TASK [common : Kubesphere | Setting minio pv size] *****************************
skipping: [localhost]
TASK [common : Kubesphere | Check openldap PersistentVolumeClaim] **************
changed: [localhost]
TASK [common : Kubesphere | Setting openldap pv size] **************************
ok: [localhost]
TASK [common : Kubesphere | Check etcd db PersistentVolumeClaim] ***************
fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/kubectl get pvc -n kubesphere-system etcd-pvc -o jsonpath='{.status.capacity.storage}'\n", "delta": "0:00:00.819573", "end": "2020-05-21 22:23:08.447702", "msg": "non-zero return code", "rc": 1, "start": "2020-05-21 22:23:07.628129", "stderr": "Error from server (NotFound): persistentvolumeclaims \"etcd-pvc\" not found", "stderr_lines": ["Error from server (NotFound): persistentvolumeclaims \"etcd-pvc\" not found"], "stdout": "", "stdout_lines": []}
...ignoring
TASK [common : Kubesphere | Setting etcd pv size] ******************************
skipping: [localhost]
TASK [common : Kubesphere | Check redis ha PersistentVolumeClaim] **************
fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/kubectl get pvc -n kubesphere-system data-redis-ha-server-0 -o jsonpath='{.status.capacity.storage}'\n", "delta": "0:00:00.774771", "end": "2020-05-21 22:23:09.777650", "msg": "non-zero return code", "rc": 1, "start": "2020-05-21 22:23:09.002879", "stderr": "Error from server (NotFound): persistentvolumeclaims \"data-redis-ha-server-0\" not found", "stderr_lines": ["Error from server (NotFound): persistentvolumeclaims \"data-redis-ha-server-0\" not found"], "stdout": "", "stdout_lines": []}
...ignoring
TASK [common : Kubesphere | Setting redis ha pv size] **************************
skipping: [localhost]
TASK [common : Kubesphere | Creating common component manifests] ***************
changed: [localhost] => (item={u'path': u'etcd', u'file': u'etcd.yaml'})
changed: [localhost] => (item={u'name': u'mysql', u'file': u'mysql.yaml'})
changed: [localhost] => (item={u'path': u'redis', u'file': u'redis.yaml'})
TASK [common : Kubesphere | Creating mysql sercet] *****************************
changed: [localhost]
TASK [common : Kubesphere | Deploying etcd and mysql] **************************
skipping: [localhost] => (item=etcd.yaml)
skipping: [localhost] => (item=mysql.yaml)
TASK [common : Kubesphere | Getting minio installation files] ******************
skipping: [localhost] => (item=minio-ha)
TASK [common : Kubesphere | Creating manifests] ********************************
skipping: [localhost] => (item={u'name': u'custom-values-minio', u'file': u'custom-values-minio.yaml'})
TASK [common : Kubesphere | Check minio] ***************************************
skipping: [localhost]
TASK [common : Kubesphere | Deploy minio] **************************************
skipping: [localhost]
TASK [common : debug] **********************************************************
skipping: [localhost]
TASK [common : fail] ***********************************************************
skipping: [localhost]
TASK [common : Kubesphere | create minio config directory] *********************
skipping: [localhost]
TASK [common : Kubesphere | Creating common component manifests] ***************
skipping: [localhost] => (item={u'path': u'/root/.config/rclone', u'file': u'rclone.conf'})
TASK [common : include_tasks] **************************************************
skipping: [localhost] => (item=helm)
skipping: [localhost] => (item=vmbased)
TASK [common : Kubesphere | Check ha-redis] ************************************
skipping: [localhost]
TASK [common : Kubesphere | Getting redis installation files] ******************
skipping: [localhost] => (item=redis-ha)
TASK [common : Kubesphere | Creating manifests] ********************************
skipping: [localhost] => (item={u'name': u'custom-values-redis', u'file': u'custom-values-redis.yaml'})
TASK [common : Kubesphere | Check old redis status] ****************************
skipping: [localhost]
TASK [common : Kubesphere | Delete and backup old redis svc] *******************
skipping: [localhost]
TASK [common : Kubesphere | Deploying redis] ***********************************
skipping: [localhost]
TASK [common : Kubesphere | Getting redis PodIp] *******************************
skipping: [localhost]
TASK [common : Kubesphere | Creating redis migration script] *******************
skipping: [localhost] => (item={u'path': u'/etc/kubesphere', u'file': u'redisMigrate.py'})
TASK [common : Kubesphere | Check redis-ha status] *****************************
skipping: [localhost]
TASK [common : ks-logging | Migrating redis data] ******************************
skipping: [localhost]
TASK [common : Kubesphere | Disable old redis] *********************************
skipping: [localhost]
TASK [common : Kubesphere | Deploying redis] ***********************************
skipping: [localhost] => (item=redis.yaml)
TASK [common : Kubesphere | Getting openldap installation files] ***************
skipping: [localhost] => (item=openldap-ha)
TASK [common : Kubesphere | Creating manifests] ********************************
skipping: [localhost] => (item={u'name': u'custom-values-openldap', u'file': u'custom-values-openldap.yaml'})
TASK [common : Kubesphere | Check old openldap status] *************************
skipping: [localhost]
TASK [common : KubeSphere | Shutdown ks-account] *******************************
skipping: [localhost]
TASK [common : Kubesphere | Delete and backup old openldap svc] ****************
skipping: [localhost]
TASK [common : Kubesphere | Check openldap] ************************************
skipping: [localhost]
TASK [common : Kubesphere | Deploy openldap] ***********************************
skipping: [localhost]
TASK [common : Kubesphere | Load old openldap data] ****************************
skipping: [localhost]
TASK [common : Kubesphere | Check openldap-ha status] **************************
skipping: [localhost]
TASK [common : Kubesphere | Get openldap-ha pod list] **************************
skipping: [localhost]
TASK [common : Kubesphere | Get old openldap data] *****************************
skipping: [localhost]
TASK [common : Kubesphere | Migrating openldap data] ***************************
skipping: [localhost]
TASK [common : Kubesphere | Disable old openldap] ******************************
skipping: [localhost]
TASK [common : Kubesphere | Restart openldap] **********************************
skipping: [localhost]
TASK [common : KubeSphere | Restarting ks-account] *****************************
skipping: [localhost]
TASK [common : Kubesphere | Check ha-redis] ************************************
changed: [localhost]
TASK [common : Kubesphere | Getting redis installation files] ******************
skipping: [localhost] => (item=redis-ha)
TASK [common : Kubesphere | Creating manifests] ********************************
skipping: [localhost] => (item={u'name': u'custom-values-redis', u'file': u'custom-values-redis.yaml'})
TASK [common : Kubesphere | Check old redis status] ****************************
skipping: [localhost]
TASK [common : Kubesphere | Delete and backup old redis svc] *******************
skipping: [localhost]
TASK [common : Kubesphere | Deploying redis] ***********************************
skipping: [localhost]
TASK [common : Kubesphere | Getting redis PodIp] *******************************
skipping: [localhost]
TASK [common : Kubesphere | Creating redis migration script] *******************
skipping: [localhost] => (item={u'path': u'/etc/kubesphere', u'file': u'redisMigrate.py'})
TASK [common : Kubesphere | Check redis-ha status] *****************************
skipping: [localhost]
TASK [common : ks-logging | Migrating redis data] ******************************
skipping: [localhost]
TASK [common : Kubesphere | Disable old redis] *********************************
skipping: [localhost]
TASK [common : Kubesphere | Deploying redis] ***********************************
changed: [localhost] => (item=redis.yaml)
TASK [common : Kubesphere | Getting openldap installation files] ***************
changed: [localhost] => (item=openldap-ha)
TASK [common : Kubesphere | Creating manifests] ********************************
changed: [localhost] => (item={u'name': u'custom-values-openldap', u'file': u'custom-values-openldap.yaml'})
TASK [common : Kubesphere | Check old openldap status] *************************
changed: [localhost]
TASK [common : KubeSphere | Shutdown ks-account] *******************************
skipping: [localhost]
TASK [common : Kubesphere | Delete and backup old openldap svc] ****************
skipping: [localhost]
TASK [common : Kubesphere | Check openldap] ************************************
changed: [localhost]
TASK [common : Kubesphere | Deploy openldap] ***********************************
skipping: [localhost]
TASK [common : Kubesphere | Load old openldap data] ****************************
skipping: [localhost]
TASK [common : Kubesphere | Check openldap-ha status] **************************
skipping: [localhost]
TASK [common : Kubesphere | Get openldap-ha pod list] **************************
skipping: [localhost]
TASK [common : Kubesphere | Get old openldap data] *****************************
skipping: [localhost]
TASK [common : Kubesphere | Migrating openldap data] ***************************
skipping: [localhost]
TASK [common : Kubesphere | Disable old openldap] ******************************
skipping: [localhost]
TASK [common : Kubesphere | Restart openldap] **********************************
skipping: [localhost]
TASK [common : KubeSphere | Restarting ks-account] *****************************
skipping: [localhost]
TASK [common : Kubesphere | Getting minio installation files] ******************
skipping: [localhost] => (item=minio-ha)
TASK [common : Kubesphere | Creating manifests] ********************************
skipping: [localhost] => (item={u'name': u'custom-values-minio', u'file': u'custom-values-minio.yaml'})
TASK [common : Kubesphere | Check minio] ***************************************
skipping: [localhost]
TASK [common : Kubesphere | Deploy minio] **************************************
skipping: [localhost]
TASK [common : debug] **********************************************************
skipping: [localhost]
TASK [common : fail] ***********************************************************
skipping: [localhost]
TASK [common : Kubesphere | create minio config directory] *********************
skipping: [localhost]
TASK [common : Kubesphere | Creating common component manifests] ***************
skipping: [localhost] => (item={u'path': u'/root/.config/rclone', u'file': u'rclone.conf'})
TASK [common : include_tasks] **************************************************
skipping: [localhost] => (item=helm)
skipping: [localhost] => (item=vmbased)
TASK [common : Kubesphere | Deploying common component] ************************
skipping: [localhost] => (item=mysql.yaml)
TASK [common : Kubesphere | Deploying common component] ************************
skipping: [localhost] => (item=etcd.yaml)
TASK [common : Setting persistentVolumeReclaimPolicy (mysql)] ******************
skipping: [localhost]
TASK [common : Setting persistentVolumeReclaimPolicy (etcd)] *******************
skipping: [localhost]
PLAY RECAP *********************************************************************
localhost : ok=29 changed=22 unreachable=0 failed=0 skipped=87 rescued=0 ignored=4
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that
the implicit localhost does not match 'all'
PLAY [localhost] ***************************************************************
TASK [download : include_tasks] ************************************************
skipping: [localhost]
TASK [download : Download items] ***********************************************
skipping: [localhost]
TASK [download : Sync container] ***********************************************
skipping: [localhost]
TASK [kubesphere-defaults : Configure defaults] ********************************
ok: [localhost] => {
"msg": "Check roles/kubesphere-defaults/defaults/main.yml"
}
TASK [ks-core/prepare : KubeSphere | Create KubeSphere dir] ********************
ok: [localhost]
TASK [ks-core/prepare : KubeSphere | Getting installation init files] **********
changed: [localhost] => (item=workspace.yaml)
changed: [localhost] => (item=ks-init)
TASK [ks-core/prepare : KubeSphere | Init KubeSphere system] *******************
changed: [localhost]
TASK [ks-core/prepare : KubeSphere | Creating KubeSphere Secret] ***************
changed: [localhost]
TASK [ks-core/prepare : KubeSphere | Creating KubeSphere Secret] ***************
ok: [localhost]
TASK [ks-core/prepare : KubeSphere | Enable Token Script] **********************
changed: [localhost]
TASK [ks-core/prepare : KubeSphere | Getting KS Token] *************************
changed: [localhost]
TASK [ks-core/prepare : KubeSphere | Setting ks_token] *************************
ok: [localhost]
TASK [ks-core/prepare : KubeSphere | Creating manifests] ***********************
changed: [localhost] => (item={u'type': u'init', u'name': u'ks-account-init', u'file': u'ks-account-init.yaml'})
changed: [localhost] => (item={u'type': u'init', u'name': u'ks-apigateway-init', u'file': u'ks-apigateway-init.yaml'})
changed: [localhost] => (item={u'type': u'values', u'name': u'custom-values-istio-init', u'file': u'custom-values-istio-init.yaml'})
changed: [localhost] => (item={u'type': u'cm', u'name': u'kubesphere-config', u'file': u'kubesphere-config.yaml'})
TASK [ks-core/prepare : KubeSphere | Init KubeSphere] **************************
changed: [localhost] => (item=ks-account-init.yaml)
changed: [localhost] => (item=ks-apigateway-init.yaml)
changed: [localhost] => (item=kubesphere-config.yaml)
TASK [ks-core/prepare : KubeSphere | Getting controls-system file] *************
changed: [localhost] => (item={u'name': u'kubesphere-controls-system', u'file': u'kubesphere-controls-system.yaml'})
TASK [ks-core/prepare : KubeSphere | Installing controls-system] ***************
changed: [localhost]
TASK [ks-core/prepare : KubeSphere | Create KubeSphere workspace] **************
changed: [localhost]
TASK [ks-core/prepare : KubeSphere | Create KubeSphere vpa] ********************
skipping: [localhost]
TASK [ks-core/prepare : KubeSphere | Generate kubeconfig-admin] ****************
skipping: [localhost]
TASK [ks-core/prepare : Kubesphere | Checking kubesphere component] ************
changed: [localhost]
TASK [ks-core/prepare : Kubesphere | Get kubesphere component version] *********
skipping: [localhost]
TASK [ks-core/prepare : ks-upgrade | disable ks-apiserver] *********************
fatal: [localhost]: FAILED! => {"msg": "The conditional check 'console_version.stdout and console_version.stdout != ks_version' failed. The error was: error while evaluating conditional (console_version.stdout and console_version.stdout != ks_version): 'dict object' has no attribute 'stdout'\n\nThe error appears to be in '/kubesphere/installer/roles/ks-core/prepare/tasks/ks-stop.yaml': line 1, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: ks-upgrade | disable ks-apiserver\n ^ here\n"}
...ignoring
TASK [ks-core/prepare : ks-upgrade | disable ks-apigateway] ********************
fatal: [localhost]: FAILED! => {"msg": "The conditional check 'console_version.stdout and console_version.stdout != ks_version' failed. The error was: error while evaluating conditional (console_version.stdout and console_version.stdout != ks_version): 'dict object' has no attribute 'stdout'\n\nThe error appears to be in '/kubesphere/installer/roles/ks-core/prepare/tasks/ks-stop.yaml': line 6, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: ks-upgrade | disable ks-apigateway\n ^ here\n"}
...ignoring
TASK [ks-core/prepare : ks-upgrade | disable ks-account] ***********************
fatal: [localhost]: FAILED! => {"msg": "The conditional check 'console_version.stdout and console_version.stdout != ks_version' failed. The error was: error while evaluating conditional (console_version.stdout and console_version.stdout != ks_version): 'dict object' has no attribute 'stdout'\n\nThe error appears to be in '/kubesphere/installer/roles/ks-core/prepare/tasks/ks-stop.yaml': line 11, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: ks-upgrade | disable ks-account\n ^ here\n"}
...ignoring
TASK [ks-core/prepare : ks-upgrade | disable ks-console] ***********************
fatal: [localhost]: FAILED! => {"msg": "The conditional check 'console_version.stdout and console_version.stdout != ks_version' failed. The error was: error while evaluating conditional (console_version.stdout and console_version.stdout != ks_version): 'dict object' has no attribute 'stdout'\n\nThe error appears to be in '/kubesphere/installer/roles/ks-core/prepare/tasks/ks-stop.yaml': line 16, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: ks-upgrade | disable ks-console\n ^ here\n"}
...ignoring
TASK [ks-core/prepare : ks-upgrade | disable ks-controller-manager] ************
fatal: [localhost]: FAILED! => {"msg": "The conditional check 'console_version.stdout and console_version.stdout != ks_version' failed. The error was: error while evaluating conditional (console_version.stdout and console_version.stdout != ks_version): 'dict object' has no attribute 'stdout'\n\nThe error appears to be in '/kubesphere/installer/roles/ks-core/prepare/tasks/ks-stop.yaml': line 21, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: ks-upgrade | disable ks-controller-manager\n ^ here\n"}
...ignoring
TASK [ks-core/prepare : ks-upgrade | restart ks-apiserver] *********************
fatal: [localhost]: FAILED! => {"msg": "The conditional check 'console_version.stdout and console_version.stdout == ks_version' failed. The error was: error while evaluating conditional (console_version.stdout and console_version.stdout == ks_version): 'dict object' has no attribute 'stdout'\n\nThe error appears to be in '/kubesphere/installer/roles/ks-core/prepare/tasks/ks-restart.yaml': line 1, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: ks-upgrade | restart ks-apiserver\n ^ here\n"}
...ignoring
TASK [ks-core/prepare : ks-upgrade | restart ks-apigateway] ********************
fatal: [localhost]: FAILED! => {"msg": "The conditional check 'console_version.stdout and console_version.stdout == ks_version' failed. The error was: error while evaluating conditional (console_version.stdout and console_version.stdout == ks_version): 'dict object' has no attribute 'stdout'\n\nThe error appears to be in '/kubesphere/installer/roles/ks-core/prepare/tasks/ks-restart.yaml': line 6, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: ks-upgrade | restart ks-apigateway\n ^ here\n"}
...ignoring
TASK [ks-core/prepare : ks-upgrade | restart ks-account] ***********************
fatal: [localhost]: FAILED! => {"msg": "The conditional check 'console_version.stdout and console_version.stdout == ks_version' failed. The error was: error while evaluating conditional (console_version.stdout and console_version.stdout == ks_version): 'dict object' has no attribute 'stdout'\n\nThe error appears to be in '/kubesphere/installer/roles/ks-core/prepare/tasks/ks-restart.yaml': line 11, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: ks-upgrade | restart ks-account\n ^ here\n"}
...ignoring
TASK [ks-core/prepare : ks-upgrade | restart ks-console] ***********************
fatal: [localhost]: FAILED! => {"msg": "The conditional check 'console_version.stdout and console_version.stdout == ks_version' failed. The error was: error while evaluating conditional (console_version.stdout and console_version.stdout == ks_version): 'dict object' has no attribute 'stdout'\n\nThe error appears to be in '/kubesphere/installer/roles/ks-core/prepare/tasks/ks-restart.yaml': line 16, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: ks-upgrade | restart ks-console\n ^ here\n"}
...ignoring
TASK [ks-core/prepare : ks-upgrade | restart ks-controller-manager] ************
fatal: [localhost]: FAILED! => {"msg": "The conditional check 'console_version.stdout and console_version.stdout == ks_version' failed. The error was: error while evaluating conditional (console_version.stdout and console_version.stdout == ks_version): 'dict object' has no attribute 'stdout'\n\nThe error appears to be in '/kubesphere/installer/roles/ks-core/prepare/tasks/ks-restart.yaml': line 21, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: ks-upgrade | restart ks-controller-manager\n ^ here\n"}
...ignoring
TASK [ks-core/ks-core : KubeSphere | Getting kubernetes version] ***************
changed: [localhost]
TASK [ks-core/ks-core : KubeSphere | Setting kubernetes version] ***************
ok: [localhost]
TASK [ks-core/ks-core : KubeSphere | Getting kubernetes master num] ************
changed: [localhost]
TASK [ks-core/ks-core : KubeSphere | Setting master num] ***********************
ok: [localhost]
TASK [ks-core/ks-core : ks-console | Checking ks-console svc] ******************
changed: [localhost]
TASK [ks-core/ks-core : ks-console | Getting ks-console svc port] **************
skipping: [localhost]
TASK [ks-core/ks-core : ks-console | Setting console_port] *********************
skipping: [localhost]
TASK [ks-core/ks-core : KubeSphere | Getting Ingress installation files] *******
changed: [localhost] => (item=ingress)
changed: [localhost] => (item=ks-account)
changed: [localhost] => (item=ks-apigateway)
changed: [localhost] => (item=ks-apiserver)
changed: [localhost] => (item=ks-console)
changed: [localhost] => (item=ks-controller-manager)
TASK [ks-core/ks-core : KubeSphere | Creating manifests] ***********************
changed: [localhost] => (item={u'path': u'ingress', u'type': u'config', u'file': u'ingress-controller.yaml'})
changed: [localhost] => (item={u'path': u'ks-account', u'type': u'deployment', u'file': u'ks-account.yml'})
changed: [localhost] => (item={u'path': u'ks-apigateway', u'type': u'deploy', u'file': u'ks-apigateway.yaml'})
changed: [localhost] => (item={u'path': u'ks-apiserver', u'type': u'deploy', u'file': u'ks-apiserver.yml'})
changed: [localhost] => (item={u'path': u'ks-controller-manager', u'type': u'deploy', u'file': u'ks-controller-manager.yaml'})
changed: [localhost] => (item={u'path': u'ks-console', u'type': u'config', u'file': u'ks-console-config.yml'})
changed: [localhost] => (item={u'path': u'ks-console', u'type': u'deploy', u'file': u'ks-console-deployment.yml'})
changed: [localhost] => (item={u'path': u'ks-console', u'type': u'svc', u'file': u'ks-console-svc.yml'})
changed: [localhost] => (item={u'path': u'ks-console', u'type': u'deploy', u'file': u'ks-docs-deployment.yaml'})
changed: [localhost] => (item={u'path': u'ks-console', u'type': u'config', u'file': u'sample-bookinfo-configmap.yaml'})
TASK [ks-core/ks-core : KubeSphere | Delete Ingress-controller configmap] ******
fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/kubectl delete cm -n kubesphere-system ks-router-config\n", "delta": "0:00:00.775287", "end": "2020-05-21 22:24:22.549046", "msg": "non-zero return code", "rc": 1, "start": "2020-05-21 22:24:21.773759", "stderr": "Error from server (NotFound): configmaps \"ks-router-config\" not found", "stderr_lines": ["Error from server (NotFound): configmaps \"ks-router-config\" not found"], "stdout": "", "stdout_lines": []}
...ignoring
TASK [ks-core/ks-core : KubeSphere | Creating Ingress-controller configmap] ****
changed: [localhost]
TASK [ks-core/ks-core : KubeSphere | Check ks-account version] *****************
changed: [localhost]
TASK [ks-core/ks-core : KubeSphere | Update kubectl image] *********************
skipping: [localhost]
TASK [ks-core/ks-core : KubeSphere | Creating ks-core] *************************
changed: [localhost] => (item={u'path': u'ks-apigateway', u'file': u'ks-apigateway.yaml'})
changed: [localhost] => (item={u'path': u'ks-apiserver', u'file': u'ks-apiserver.yml'})
changed: [localhost] => (item={u'path': u'ks-account', u'file': u'ks-account.yml'})
changed: [localhost] => (item={u'path': u'ks-controller-manager', u'file': u'ks-controller-manager.yaml'})
changed: [localhost] => (item={u'path': u'ks-console', u'file': u'ks-console-config.yml'})
changed: [localhost] => (item={u'path': u'ks-console', u'file': u'sample-bookinfo-configmap.yaml'})
changed: [localhost] => (item={u'path': u'ks-console', u'file': u'ks-console-deployment.yml'})
TASK [ks-core/ks-core : KubeSphere | Check ks-console svc] *********************
changed: [localhost]
TASK [ks-core/ks-core : KubeSphere | Creating ks-console svc] ******************
changed: [localhost] => (item={u'path': u'ks-console', u'file': u'ks-console-svc.yml'})
TASK [ks-core/ks-core : KubeSphere | Patch ks-console svc] *********************
skipping: [localhost]
PLAY RECAP *********************************************************************
localhost : ok=38 changed=22 unreachable=0 failed=0 skipped=10 rescued=0 ignored=11
Start installing monitoring
**************************************************
task monitoring status is failed
total: 1 completed:1
**************************************************
Task 'monitoring' failed:
******************************************************************************************************************************************************
{
"counter": 74,
"created": "2020-05-21T22:27:01.701795",
"end_line": 74,
"event": "runner_on_failed",
"event_data": {
"event_loop": null,
"host": "localhost",
"ignore_errors": null,
"play": "localhost",
"play_pattern": "localhost",
"play_uuid": "12927f56-6706-f2d9-ead1-000000000005",
"playbook": "/kubesphere/playbooks/monitoring.yaml",
"playbook_uuid": "4253bbba-33ba-4fad-bebf-400fd795c502",
"remote_addr": "127.0.0.1",
"res": {
"changed": true,
"msg": "All items completed",
"results": [
{
"_ansible_item_label": "sources",
"_ansible_no_log": false,
"ansible_loop_var": "item",
"attempts": 5,
"changed": true,
"cmd": "/usr/local/bin/kubectl apply -f /etc/kubesphere/prometheus/sources",
"delta": "0:00:02.055784",
"end": "2020-05-21 22:26:31.342519",
"failed": true,
"failed_when_result": true,
"invocation": {
"module_args": {
"_raw_params": "/usr/local/bin/kubectl apply -f /etc/kubesphere/prometheus/sources",
"_uses_shell": true,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true,
"warn": true
}
},
"item": "sources",
"msg": "non-zero return code",
"rc": 1,
"start": "2020-05-21 22:26:29.286735",
"stderr": "unable to recognize \"/etc/kubesphere/prometheus/sources/kube-state-metrics-serviceMonitor.yaml\": no matches for kind \"ServiceMonitor\" in version \"monitoring.coreos.com/v1\"\nunable to recognize \"/etc/kubesphere/prometheus/sources/node-exporter-serviceMonitor.yaml\": no matches for kind \"ServiceMonitor\" in version \"monitoring.coreos.com/v1\"\nunable to recognize \"/etc/kubesphere/prometheus/sources/prometheus-prometheus.yaml\": no matches for kind \"Prometheus\" in version \"monitoring.coreos.com/v1\"\nunable to recognize \"/etc/kubesphere/prometheus/sources/prometheus-prometheusSystem.yaml\": no matches for kind \"Prometheus\" in version \"monitoring.coreos.com/v1\"\nunable to recognize \"/etc/kubesphere/prometheus/sources/prometheus-serviceMonitor.yaml\": no matches for kind \"ServiceMonitor\" in version \"monitoring.coreos.com/v1\"\nunable to recognize \"/etc/kubesphere/prometheus/sources/prometheus-serviceMonitorApiserver.yaml\": no matches for kind \"ServiceMonitor\" in version \"monitoring.coreos.com/v1\"\nunable to recognize \"/etc/kubesphere/prometheus/sources/prometheus-serviceMonitorCoreDNS.yaml\": no matches for kind \"ServiceMonitor\" in version \"monitoring.coreos.com/v1\"\nunable to recognize \"/etc/kubesphere/prometheus/sources/prometheus-serviceMonitorKubeControllerManager.yaml\": no matches for kind \"ServiceMonitor\" in version \"monitoring.coreos.com/v1\"\nunable to recognize \"/etc/kubesphere/prometheus/sources/prometheus-serviceMonitorKubeScheduler.yaml\": no matches for kind \"ServiceMonitor\" in version \"monitoring.coreos.com/v1\"\nunable to recognize \"/etc/kubesphere/prometheus/sources/prometheus-serviceMonitorKubelet.yaml\": no matches for kind \"ServiceMonitor\" in version \"monitoring.coreos.com/v1\"\nunable to recognize \"/etc/kubesphere/prometheus/sources/prometheus-serviceMonitorSystem.yaml\": no matches for kind \"ServiceMonitor\" in version \"monitoring.coreos.com/v1\"",
"stderr_lines": [
"unable to recognize \"/etc/kubesphere/prometheus/sources/kube-state-metrics-serviceMonitor.yaml\": no matches for kind \"ServiceMonitor\" in version \"monitoring.coreos.com/v1\"",
"unable to recognize \"/etc/kubesphere/prometheus/sources/node-exporter-serviceMonitor.yaml\": no matches for kind \"ServiceMonitor\" in version \"monitoring.coreos.com/v1\"",
"unable to recognize \"/etc/kubesphere/prometheus/sources/prometheus-prometheus.yaml\": no matches for kind \"Prometheus\" in version \"monitoring.coreos.com/v1\"",
"unable to recognize \"/etc/kubesphere/prometheus/sources/prometheus-prometheusSystem.yaml\": no matches for kind \"Prometheus\" in version \"monitoring.coreos.com/v1\"",
"unable to recognize \"/etc/kubesphere/prometheus/sources/prometheus-serviceMonitor.yaml\": no matches for kind \"ServiceMonitor\" in version \"monitoring.coreos.com/v1\"",
"unable to recognize \"/etc/kubesphere/prometheus/sources/prometheus-serviceMonitorApiserver.yaml\": no matches for kind \"ServiceMonitor\" in version \"monitoring.coreos.com/v1\"",
"unable to recognize \"/etc/kubesphere/prometheus/sources/prometheus-serviceMonitorCoreDNS.yaml\": no matches for kind \"ServiceMonitor\" in version \"monitoring.coreos.com/v1\"",
"unable to recognize \"/etc/kubesphere/prometheus/sources/prometheus-serviceMonitorKubeControllerManager.yaml\": no matches for kind \"ServiceMonitor\" in version \"monitoring.coreos.com/v1\"",
"unable to recognize \"/etc/kubesphere/prometheus/sources/prometheus-serviceMonitorKubeScheduler.yaml\": no matches for kind \"ServiceMonitor\" in version \"monitoring.coreos.com/v1\"",
"unable to recognize \"/etc/kubesphere/prometheus/sources/prometheus-serviceMonitorKubelet.yaml\": no matches for kind \"ServiceMonitor\" in version \"monitoring.coreos.com/v1\"",
"unable to recognize \"/etc/kubesphere/prometheus/sources/prometheus-serviceMonitorSystem.yaml\": no matches for kind \"ServiceMonitor\" in version \"monitoring.coreos.com/v1\""
],
"stdout": "secret/additional-scrape-configs unchanged\nclusterrole.rbac.authorization.k8s.io/kubesphere-kube-state-metrics unchanged\nclusterrolebinding.rbac.authorization.k8s.io/kubesphere-kube-state-metrics unchanged\ndeployment.apps/kube-state-metrics unchanged\nrole.rbac.authorization.k8s.io/kube-state-metrics unchanged\nrolebinding.rbac.authorization.k8s.io/kube-state-metrics unchanged\nservice/kube-state-metrics unchanged\nserviceaccount/kube-state-metrics unchanged\nclusterrole.rbac.authorization.k8s.io/kubesphere-node-exporter unchanged\nclusterrolebinding.rbac.authorization.k8s.io/kubesphere-node-exporter unchanged\ndaemonset.apps/node-exporter configured\nservice/node-exporter unchanged\nserviceaccount/node-exporter unchanged\nclusterrole.rbac.authorization.k8s.io/kubesphere-prometheus-k8s unchanged\nclusterrolebinding.rbac.authorization.k8s.io/kubesphere-prometheus-k8s unchanged\nrolebinding.rbac.authorization.k8s.io/prometheus-k8s-config unchanged\nrole.rbac.authorization.k8s.io/prometheus-k8s-config unchanged\nservice/prometheus-k8s unchanged\nserviceaccount/prometheus-k8s unchanged\nservice/kube-controller-manager-headless unchanged\nservice/kube-scheduler-headless unchanged\nservice/prometheus-k8s-system unchanged",
"stdout_lines": [
"secret/additional-scrape-configs unchanged",
"clusterrole.rbac.authorization.k8s.io/kubesphere-kube-state-metrics unchanged",
"clusterrolebinding.rbac.authorization.k8s.io/kubesphere-kube-state-metrics unchanged",
"deployment.apps/kube-state-metrics unchanged",
"role.rbac.authorization.k8s.io/kube-state-metrics unchanged",
"rolebinding.rbac.authorization.k8s.io/kube-state-metrics unchanged",
"service/kube-state-metrics unchanged",
"serviceaccount/kube-state-metrics unchanged",
"clusterrole.rbac.authorization.k8s.io/kubesphere-node-exporter unchanged",
"clusterrolebinding.rbac.authorization.k8s.io/kubesphere-node-exporter unchanged",
"daemonset.apps/node-exporter configured",
"service/node-exporter unchanged",
"serviceaccount/node-exporter unchanged",
"clusterrole.rbac.authorization.k8s.io/kubesphere-prometheus-k8s unchanged",
"clusterrolebinding.rbac.authorization.k8s.io/kubesphere-prometheus-k8s unchanged",
"rolebinding.rbac.authorization.k8s.io/prometheus-k8s-config unchanged",
"role.rbac.authorization.k8s.io/prometheus-k8s-config unchanged",
"service/prometheus-k8s unchanged",
"serviceaccount/prometheus-k8s unchanged",
"service/kube-controller-manager-headless unchanged",
"service/kube-scheduler-headless unchanged",
"service/prometheus-k8s-system unchanged"
]
},
{
"_ansible_item_label": "sources",
"_ansible_no_log": false,
"ansible_loop_var": "item",
"attempts": 5,
"changed": true,
"cmd": "/usr/local/bin/kubectl apply -f /etc/kubesphere/prometheus/sources",
"delta": "0:00:01.208305",
"end": "2020-05-21 22:27:01.585331",
"failed": true,
"failed_when_result": true,
"invocation": {
"module_args": {
"_raw_params": "/usr/local/bin/kubectl apply -f /etc/kubesphere/prometheus/sources",
"_uses_shell": true,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true,
"warn": true
}
},
"item": "sources",
"msg": "non-zero return code",
"rc": 1,
"start": "2020-05-21 22:27:00.377026",
"stderr": "unable to recognize \"/etc/kubesphere/prometheus/sources/kube-state-metrics-serviceMonitor.yaml\": no matches for kind \"ServiceMonitor\" in version \"monitoring.coreos.com/v1\"\nunable to recognize \"/etc/kubesphere/prometheus/sources/node-exporter-serviceMonitor.yaml\": no matches for kind \"ServiceMonitor\" in version \"monitoring.coreos.com/v1\"\nunable to recognize \"/etc/kubesphere/prometheus/sources/prometheus-prometheus.yaml\": no matches for kind \"Prometheus\" in version \"monitoring.coreos.com/v1\"\nunable to recognize \"/etc/kubesphere/prometheus/sources/prometheus-prometheusSystem.yaml\": no matches for kind \"Prometheus\" in version \"monitoring.coreos.com/v1\"\nunable to recognize \"/etc/kubesphere/prometheus/sources/prometheus-serviceMonitor.yaml\": no matches for kind \"ServiceMonitor\" in version \"monitoring.coreos.com/v1\"\nunable to recognize \"/etc/kubesphere/prometheus/sources/prometheus-serviceMonitorApiserver.yaml\": no matches for kind \"ServiceMonitor\" in version \"monitoring.coreos.com/v1\"\nunable to recognize \"/etc/kubesphere/prometheus/sources/prometheus-serviceMonitorCoreDNS.yaml\": no matches for kind \"ServiceMonitor\" in version \"monitoring.coreos.com/v1\"\nunable to recognize \"/etc/kubesphere/prometheus/sources/prometheus-serviceMonitorKubeControllerManager.yaml\": no matches for kind \"ServiceMonitor\" in version \"monitoring.coreos.com/v1\"\nunable to recognize \"/etc/kubesphere/prometheus/sources/prometheus-serviceMonitorKubeScheduler.yaml\": no matches for kind \"ServiceMonitor\" in version \"monitoring.coreos.com/v1\"\nunable to recognize \"/etc/kubesphere/prometheus/sources/prometheus-serviceMonitorKubelet.yaml\": no matches for kind \"ServiceMonitor\" in version \"monitoring.coreos.com/v1\"\nunable to recognize \"/etc/kubesphere/prometheus/sources/prometheus-serviceMonitorSystem.yaml\": no matches for kind \"ServiceMonitor\" in version \"monitoring.coreos.com/v1\"",
"stderr_lines": [
"unable to recognize \"/etc/kubesphere/prometheus/sources/kube-state-metrics-serviceMonitor.yaml\": no matches for kind \"ServiceMonitor\" in version \"monitoring.coreos.com/v1\"",
"unable to recognize \"/etc/kubesphere/prometheus/sources/node-exporter-serviceMonitor.yaml\": no matches for kind \"ServiceMonitor\" in version \"monitoring.coreos.com/v1\"",
"unable to recognize \"/etc/kubesphere/prometheus/sources/prometheus-prometheus.yaml\": no matches for kind \"Prometheus\" in version \"monitoring.coreos.com/v1\"",
"unable to recognize \"/etc/kubesphere/prometheus/sources/prometheus-prometheusSystem.yaml\": no matches for kind \"Prometheus\" in version \"monitoring.coreos.com/v1\"",
"unable to recognize \"/etc/kubesphere/prometheus/sources/prometheus-serviceMonitor.yaml\": no matches for kind \"ServiceMonitor\" in version \"monitoring.coreos.com/v1\"",
"unable to recognize \"/etc/kubesphere/prometheus/sources/prometheus-serviceMonitorApiserver.yaml\": no matches for kind \"ServiceMonitor\" in version \"monitoring.coreos.com/v1\"",
"unable to recognize \"/etc/kubesphere/prometheus/sources/prometheus-serviceMonitorCoreDNS.yaml\": no matches for kind \"ServiceMonitor\" in version \"monitoring.coreos.com/v1\"",
"unable to recognize \"/etc/kubesphere/prometheus/sources/prometheus-serviceMonitorKubeControllerManager.yaml\": no matches for kind \"ServiceMonitor\" in version \"monitoring.coreos.com/v1\"",
"unable to recognize \"/etc/kubesphere/prometheus/sources/prometheus-serviceMonitorKubeScheduler.yaml\": no matches for kind \"ServiceMonitor\" in version \"monitoring.coreos.com/v1\"",
"unable to recognize \"/etc/kubesphere/prometheus/sources/prometheus-serviceMonitorKubelet.yaml\": no matches for kind \"ServiceMonitor\" in version \"monitoring.coreos.com/v1\"",
"unable to recognize \"/etc/kubesphere/prometheus/sources/prometheus-serviceMonitorSystem.yaml\": no matches for kind \"ServiceMonitor\" in version \"monitoring.coreos.com/v1\""
],
"stdout": "secret/additional-scrape-configs unchanged\nclusterrole.rbac.authorization.k8s.io/kubesphere-kube-state-metrics unchanged\nclusterrolebinding.rbac.authorization.k8s.io/kubesphere-kube-state-metrics unchanged\ndeployment.apps/kube-state-metrics unchanged\nrole.rbac.authorization.k8s.io/kube-state-metrics unchanged\nrolebinding.rbac.authorization.k8s.io/kube-state-metrics unchanged\nservice/kube-state-metrics unchanged\nserviceaccount/kube-state-metrics unchanged\nclusterrole.rbac.authorization.k8s.io/kubesphere-node-exporter unchanged\nclusterrolebinding.rbac.authorization.k8s.io/kubesphere-node-exporter unchanged\ndaemonset.apps/node-exporter configured\nservice/node-exporter unchanged\nserviceaccount/node-exporter unchanged\nclusterrole.rbac.authorization.k8s.io/kubesphere-prometheus-k8s unchanged\nclusterrolebinding.rbac.authorization.k8s.io/kubesphere-prometheus-k8s unchanged\nrolebinding.rbac.authorization.k8s.io/prometheus-k8s-config unchanged\nrole.rbac.authorization.k8s.io/prometheus-k8s-config unchanged\nservice/prometheus-k8s unchanged\nserviceaccount/prometheus-k8s unchanged\nservice/kube-controller-manager-headless unchanged\nservice/kube-scheduler-headless unchanged\nservice/prometheus-k8s-system unchanged",
"stdout_lines": [
"secret/additional-scrape-configs unchanged",
"clusterrole.rbac.authorization.k8s.io/kubesphere-kube-state-metrics unchanged",
"clusterrolebinding.rbac.authorization.k8s.io/kubesphere-kube-state-metrics unchanged",
"deployment.apps/kube-state-metrics unchanged",
"role.rbac.authorization.k8s.io/kube-state-metrics unchanged",
"rolebinding.rbac.authorization.k8s.io/kube-state-metrics unchanged",
"service/kube-state-metrics unchanged",
"serviceaccount/kube-state-metrics unchanged",
"clusterrole.rbac.authorization.k8s.io/kubesphere-node-exporter unchanged",
"clusterrolebinding.rbac.authorization.k8s.io/kubesphere-node-exporter unchanged",
"daemonset.apps/node-exporter configured",
"service/node-exporter unchanged",
"serviceaccount/node-exporter unchanged",
"clusterrole.rbac.authorization.k8s.io/kubesphere-prometheus-k8s unchanged",
"clusterrolebinding.rbac.authorization.k8s.io/kubesphere-prometheus-k8s unchanged",
"rolebinding.rbac.authorization.k8s.io/prometheus-k8s-config unchanged",
"role.rbac.authorization.k8s.io/prometheus-k8s-config unchanged",
"service/prometheus-k8s unchanged",
"serviceaccount/prometheus-k8s unchanged",
"service/kube-controller-manager-headless unchanged",
"service/kube-scheduler-headless unchanged",
"service/prometheus-k8s-system unchanged"
]
}
]
},
"role": "ks-monitor",
"task": "ks-monitor | Installing prometheus-operator",
"task_action": "shell",
"task_args": "",
"task_path": "/kubesphere/installer/roles/ks-monitor/tasks/main.yaml:66",
"task_uuid": "12927f56-6706-f2d9-ead1-000000000024",
"uuid": "aecb0548-4e53-4783-a261-a997d5b17ed5"
},
"parent_uuid": "12927f56-6706-f2d9-ead1-000000000024",
"pid": 2473,
"runner_ident": "monitoring",
"start_line": 74,
"stdout": "",
"uuid": "aecb0548-4e53-4783-a261-a997d5b17ed5"
}
******************************************************************************************************************************************************
#####################################################
### Welcome to KubeSphere! ###
#####################################################
Console: http://10.0.2.5:30880
Account: admin
Password: P@88w0rd
NOTES:
1. After logging into the console, please check the
monitoring status of service components in
the "Cluster Status". If the service is not
ready, please wait patiently. You can start
to use when all components are ready.
2. Please modify the default password after login.
#####################################################
Seeing Task 'monitoring' failed in the log, does that mean the monitoring installation did not succeed? Could you help take a look at this issue? Thanks again for your support! The console page looks like this:
(screenshot of the KubeSphere console page)
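For what it's worth, the repeated "no matches for kind \"ServiceMonitor\" in version \"monitoring.coreos.com/v1\"" errors suggest the prometheus-operator CRDs were never registered; none of the monitoring.coreos.com CRDs appear in the kubectl get crd output at the top of this post either. A quick check I plan to run, as a sketch (assuming the same node and kubectl as above):

[root@k8s-node1 ~]# kubectl get crd | grep monitoring.coreos.com

An empty result would confirm the CRDs are missing, which matches the failure of the "ks-monitor | Installing prometheus-operator" task above. Once those CRDs exist in the cluster, restarting the installer the same way as before should let the monitoring task retry:

[root@k8s-node1 ~]# kubectl rollout restart deploy -n kubesphere-system ks-installer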