OS: physical machines, Debian 4.9.130-2 (2018-10-27) x86_64 GNU/Linux, 4C/32G

Kubernetes version: v1.17.9, multi-node (1 master / 3 workers)

KubeSphere version: v3.0.0, online installation, full k8s + KubeSphere stack.

Description: The very first installation succeeded. After tearing the cluster down (./kk delete cluster -f config.yaml) and reinstalling (./kk create cluster -f config.yaml), it has never come up again: the Kubernetes cluster itself installs successfully and works normally, but the KubeSphere deployment keeps failing...

Things tried so far: disabling the app store configuration, reinstalling helm, and deleting all Docker images; none of them helped.
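
For reference, the teardown/reinstall cycle can be wrapped with one extra cleanup step that a plain kk delete does not perform. This is only a sketch: the /var/openebs/local path is an assumption based on the default OpenEBS local-PV provisioner that KubeKey sets up (visible in the pod list below), so verify it against `kubectl get pv -o yaml` before deleting anything.

```shell
# Hypothetical helper wrapping the teardown/reinstall cycle described above.
reinstall_cluster() {
    cfg="$1"
    ./kk delete cluster -f "$cfg"
    # Leftover PV data from the previous install can break a reinstall.
    # Assumed default data path of the OpenEBS local PV provisioner --
    # run this on every node, and only after confirming the path.
    rm -rf /var/openebs/local/*
    ./kk create cluster -f "$cfg"
}
# usage: reinstall_cluster config.yaml
```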

root@master:/opt/kubekey# kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
2021-04-01T12:22:55+08:00 INFO     : shell-operator latest
2021-04-01T12:22:55+08:00 INFO     : HTTP SERVER Listening on 0.0.0.0:9115
2021-04-01T12:22:55+08:00 INFO     : Use temporary dir: /tmp/shell-operator
2021-04-01T12:22:55+08:00 INFO     : Initialize hooks manager ...
2021-04-01T12:22:55+08:00 INFO     : Search and load hooks ...
2021-04-01T12:22:55+08:00 INFO     : Load hook config from '/hooks/kubesphere/installRunner.py'
2021-04-01T12:22:57+08:00 INFO     : Load hook config from '/hooks/kubesphere/schedule.sh'
2021-04-01T12:22:57+08:00 INFO     : Initializing schedule manager ...
2021-04-01T12:22:57+08:00 INFO     : KUBE Init Kubernetes client
2021-04-01T12:22:57+08:00 INFO     : KUBE-INIT Kubernetes client is configured successfully
2021-04-01T12:22:57+08:00 INFO     : MAIN: run main loop
2021-04-01T12:22:57+08:00 INFO     : MAIN: add onStartup tasks
2021-04-01T12:22:57+08:00 INFO     : Running schedule manager ...
2021-04-01T12:22:57+08:00 INFO     : MSTOR Create new metric shell_operator_live_ticks
2021-04-01T12:22:57+08:00 INFO     : MSTOR Create new metric shell_operator_tasks_queue_length
2021-04-01T12:22:57+08:00 INFO     : QUEUE add all HookRun@OnStartup
2021-04-01T12:22:57+08:00 INFO     : GVR for kind 'ClusterConfiguration' is installer.kubesphere.io/v1alpha1, Resource=clusterconfigurations
2021-04-01T12:22:57+08:00 INFO     : EVENT Kube event 'a732a198-baf8-4661-9188-126d9b71233b'
2021-04-01T12:22:57+08:00 INFO     : QUEUE add TASK_HOOK_RUN@KUBE_EVENTS kubesphere/installRunner.py
2021-04-01T12:23:00+08:00 INFO     : TASK_RUN HookRun@KUBE_EVENTS kubesphere/installRunner.py
2021-04-01T12:23:00+08:00 INFO     : Running hook 'kubesphere/installRunner.py' binding 'KUBE_EVENTS' ...
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that
the implicit localhost does not match 'all'

PLAY [localhost] ***************************************************************

TASK [download : include_tasks] ************************************************
skipping: [localhost]

TASK [download : Download items] ***********************************************
skipping: [localhost]

TASK [download : Sync container] ***********************************************
skipping: [localhost]

TASK [kubesphere-defaults : Configure defaults] ********************************
ok: [localhost] => {
    "msg": "Check roles/kubesphere-defaults/defaults/main.yml"
}

TASK [preinstall : check k8s version] ******************************************
changed: [localhost]

TASK [preinstall : init k8s version] *******************************************
ok: [localhost]

TASK [preinstall : Stop if kubernetes version is nonsupport] *******************
ok: [localhost] => {
    "changed": false,
    "msg": "All assertions passed"
}

TASK [preinstall : check storage class] ****************************************
changed: [localhost]

TASK [preinstall : Stop if StorageClass was not found] *************************
skipping: [localhost]

TASK [preinstall : check default storage class] ********************************
changed: [localhost]

TASK [preinstall : Stop if defaultStorageClass was not found] ******************
ok: [localhost] => {
    "changed": false,
    "msg": "All assertions passed"
}

TASK [preinstall : Kubesphere | Checking kubesphere component] *****************
changed: [localhost]

TASK [preinstall : Kubesphere | Get kubesphere component version] **************
skipping: [localhost]

TASK [preinstall : Kubesphere | Get kubesphere component version] **************
skipping: [localhost] => (item=ks-openldap) 
skipping: [localhost] => (item=ks-redis) 
skipping: [localhost] => (item=ks-minio) 
skipping: [localhost] => (item=ks-openpitrix) 
skipping: [localhost] => (item=elasticsearch-logging) 
skipping: [localhost] => (item=elasticsearch-logging-curator) 
skipping: [localhost] => (item=istio) 
skipping: [localhost] => (item=istio-init) 
skipping: [localhost] => (item=jaeger-operator) 
skipping: [localhost] => (item=ks-jenkins) 
skipping: [localhost] => (item=ks-sonarqube) 
skipping: [localhost] => (item=logging-fluentbit-operator) 
skipping: [localhost] => (item=uc) 
skipping: [localhost] => (item=metrics-server) 

PLAY RECAP *********************************************************************
localhost                  : ok=8    changed=4    unreachable=0    failed=0    skipped=6    rescued=0    ignored=0   

[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that
the implicit localhost does not match 'all'

PLAY [localhost] ***************************************************************

TASK [download : include_tasks] ************************************************
skipping: [localhost]

TASK [download : Download items] ***********************************************
skipping: [localhost]

TASK [download : Sync container] ***********************************************
skipping: [localhost]

TASK [kubesphere-defaults : Configure defaults] ********************************
ok: [localhost] => {
    "msg": "Check roles/kubesphere-defaults/defaults/main.yml"
}

TASK [metrics-server : Metrics-Server | Checking old installation files] *******
ok: [localhost]

TASK [metrics-server : Metrics-Server | deleting old metrics-server] ***********
skipping: [localhost]

TASK [metrics-server : Metrics-Server | deleting old metrics-server files] *****
[DEPRECATION WARNING]: evaluating {'changed': False, 'stat': {'exists': False},
 'failed': False} as a bare variable, this behaviour will go away and you might
 need to add |bool to the expression in the future. Also see 
CONDITIONAL_BARE_VARS configuration toggle.. This feature will be removed in 
version 2.12. Deprecation warnings can be disabled by setting 
deprecation_warnings=False in ansible.cfg.
ok: [localhost] => (item=metrics-server)

TASK [metrics-server : Metrics-Server | Getting metrics-server installation files] ***
changed: [localhost]

TASK [metrics-server : Metrics-Server | Creating manifests] ********************
changed: [localhost] => (item={'name': 'values', 'file': 'values.yaml', 'type': 'config'})

TASK [metrics-server : Metrics-Server | Check Metrics-Server] ******************
fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/helm list metrics-server -n kube-system\n", "delta": "0:00:00.295294", "end": "2021-04-01 12:23:23.656448", "msg": "non-zero return code", "rc": 1, "start": "2021-04-01 12:23:23.361154", "stderr": "Error: \"helm list\" accepts no arguments\n\nUsage:  helm list [flags]", "stderr_lines": ["Error: \"helm list\" accepts no arguments", "", "Usage:  helm list [flags]"], "stdout": "", "stdout_lines": []}
...ignoring
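
The failure above is a Helm syntax change rather than a cluster problem, and the installer ignores it: Helm 3's `helm list` accepts no positional arguments, so a release has to be selected with `--filter` instead. A minimal sketch of a Helm-3-compatible check (function name is illustrative):

```shell
# List a single release by name, Helm 3 style (--filter takes a regex).
check_release() {
    helm list -n "$2" --filter "^$1\$" --short
}
# usage: check_release metrics-server kube-system
```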

TASK [metrics-server : Metrics-Server | Installing metrics-server] *************
changed: [localhost]

TASK [metrics-server : Metrics-Server | Installing metrics-server retry] *******
skipping: [localhost]

TASK [metrics-server : Metrics-Server | Waitting for v1beta1.metrics.k8s.io ready] ***
FAILED - RETRYING: Metrics-Server | Waitting for v1beta1.metrics.k8s.io ready (60 retries left).
FAILED - RETRYING: Metrics-Server | Waitting for v1beta1.metrics.k8s.io ready (59 retries left).
FAILED - RETRYING: Metrics-Server | Waitting for v1beta1.metrics.k8s.io ready (58 retries left).
changed: [localhost]

TASK [metrics-server : Metrics-Server | import metrics-server status] **********
changed: [localhost]

PLAY RECAP *********************************************************************
localhost                  : ok=9    changed=6    unreachable=0    failed=0    skipped=5    rescued=0    ignored=1   

[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that
the implicit localhost does not match 'all'

PLAY [localhost] ***************************************************************

TASK [download : include_tasks] ************************************************
skipping: [localhost]

TASK [download : Download items] ***********************************************
skipping: [localhost]

TASK [download : Sync container] ***********************************************
skipping: [localhost]

TASK [kubesphere-defaults : Configure defaults] ********************************
ok: [localhost] => {
    "msg": "Check roles/kubesphere-defaults/defaults/main.yml"
}

TASK [common : Kubesphere | Check kube-node-lease namespace] *******************
changed: [localhost]

TASK [common : KubeSphere | Get system namespaces] *****************************
ok: [localhost]

TASK [common : set_fact] *******************************************************
ok: [localhost]

TASK [common : debug] **********************************************************
ok: [localhost] => {
    "msg": [
        "kubesphere-system",
        "kubesphere-controls-system",
        "kubesphere-monitoring-system",
        "kube-node-lease",
        "kubesphere-devops-system",
        "kubesphere-alerting-system"
    ]
}

TASK [common : KubeSphere | Create kubesphere namespace] ***********************
changed: [localhost] => (item=kubesphere-system)
changed: [localhost] => (item=kubesphere-controls-system)
changed: [localhost] => (item=kubesphere-monitoring-system)
changed: [localhost] => (item=kube-node-lease)
changed: [localhost] => (item=kubesphere-devops-system)
changed: [localhost] => (item=kubesphere-alerting-system)

TASK [common : KubeSphere | Labeling system-workspace] *************************
changed: [localhost] => (item=default)
changed: [localhost] => (item=kube-public)
changed: [localhost] => (item=kube-system)
changed: [localhost] => (item=kubesphere-system)
changed: [localhost] => (item=kubesphere-controls-system)
changed: [localhost] => (item=kubesphere-monitoring-system)
changed: [localhost] => (item=kube-node-lease)
changed: [localhost] => (item=kubesphere-devops-system)
changed: [localhost] => (item=kubesphere-alerting-system)

TASK [common : KubeSphere | Create ImagePullSecrets] ***************************
changed: [localhost] => (item=default)
changed: [localhost] => (item=kube-public)
changed: [localhost] => (item=kube-system)
changed: [localhost] => (item=kubesphere-system)
changed: [localhost] => (item=kubesphere-controls-system)
changed: [localhost] => (item=kubesphere-monitoring-system)
changed: [localhost] => (item=kube-node-lease)
changed: [localhost] => (item=kubesphere-devops-system)
changed: [localhost] => (item=kubesphere-alerting-system)

TASK [common : Kubesphere | Label namespace for network policy] ****************
changed: [localhost]

TASK [common : KubeSphere | Getting kubernetes master num] *********************
changed: [localhost]

TASK [common : KubeSphere | Setting master num] ********************************
ok: [localhost]

TASK [common : Kubesphere | Getting common component installation files] *******
changed: [localhost] => (item=common)
changed: [localhost] => (item=ks-crds)

TASK [common : KubeSphere | Create KubeSphere crds] ****************************
changed: [localhost]

TASK [common : KubeSphere | Recreate KubeSphere crds] **************************
changed: [localhost]

TASK [common : KubeSphere | check k8s version] *********************************
changed: [localhost]

TASK [common : Kubesphere | Getting common component installation files] *******
changed: [localhost] => (item=snapshot-controller)

TASK [common : Kubesphere | Creating snapshot controller values] ***************
changed: [localhost] => (item={'name': 'custom-values-snapshot-controller', 'file': 'custom-values-snapshot-controller.yaml'})

TASK [common : Kubesphere | Remove old snapshot crd] ***************************
changed: [localhost]

TASK [common : Kubesphere | Deploy snapshot controller] ************************
changed: [localhost]

TASK [common : Kubesphere | Checking openpitrix common component] **************
changed: [localhost]

TASK [common : include_tasks] **************************************************
skipping: [localhost] => (item={'op': 'openpitrix-db', 'ks': 'mysql-pvc'}) 
skipping: [localhost] => (item={'op': 'openpitrix-etcd', 'ks': 'etcd-pvc'}) 

TASK [common : Getting PersistentVolumeName (mysql)] ***************************
skipping: [localhost]

TASK [common : Getting PersistentVolumeSize (mysql)] ***************************
skipping: [localhost]

TASK [common : Setting PersistentVolumeName (mysql)] ***************************
skipping: [localhost]

TASK [common : Setting PersistentVolumeSize (mysql)] ***************************
skipping: [localhost]

TASK [common : Getting PersistentVolumeName (etcd)] ****************************
skipping: [localhost]

TASK [common : Getting PersistentVolumeSize (etcd)] ****************************
skipping: [localhost]

TASK [common : Setting PersistentVolumeName (etcd)] ****************************
skipping: [localhost]

TASK [common : Setting PersistentVolumeSize (etcd)] ****************************
skipping: [localhost]

TASK [common : Kubesphere | Check mysql PersistentVolumeClaim] *****************
fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/kubectl get pvc -n kubesphere-system mysql-pvc -o jsonpath='{.status.capacity.storage}'\n", "delta": "0:00:00.093850", "end": "2021-04-01 12:24:41.492841", "msg": "non-zero return code", "rc": 1, "start": "2021-04-01 12:24:41.398991", "stderr": "Error from server (NotFound): persistentvolumeclaims \"mysql-pvc\" not found", "stderr_lines": ["Error from server (NotFound): persistentvolumeclaims \"mysql-pvc\" not found"], "stdout": "", "stdout_lines": []}
...ignoring

TASK [common : Kubesphere | Setting mysql db pv size] **************************
skipping: [localhost]

TASK [common : Kubesphere | Check redis PersistentVolumeClaim] *****************
fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/kubectl get pvc -n kubesphere-system redis-pvc -o jsonpath='{.status.capacity.storage}'\n", "delta": "0:00:00.107953", "end": "2021-04-01 12:24:41.913938", "msg": "non-zero return code", "rc": 1, "start": "2021-04-01 12:24:41.805985", "stderr": "Error from server (NotFound): persistentvolumeclaims \"redis-pvc\" not found", "stderr_lines": ["Error from server (NotFound): persistentvolumeclaims \"redis-pvc\" not found"], "stdout": "", "stdout_lines": []}
...ignoring

TASK [common : Kubesphere | Setting redis db pv size] **************************
skipping: [localhost]

TASK [common : Kubesphere | Check minio PersistentVolumeClaim] *****************
fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/kubectl get pvc -n kubesphere-system minio -o jsonpath='{.status.capacity.storage}'\n", "delta": "0:00:00.089693", "end": "2021-04-01 12:24:42.332207", "msg": "non-zero return code", "rc": 1, "start": "2021-04-01 12:24:42.242514", "stderr": "Error from server (NotFound): persistentvolumeclaims \"minio\" not found", "stderr_lines": ["Error from server (NotFound): persistentvolumeclaims \"minio\" not found"], "stdout": "", "stdout_lines": []}
...ignoring

TASK [common : Kubesphere | Setting minio pv size] *****************************
skipping: [localhost]

TASK [common : Kubesphere | Check openldap PersistentVolumeClaim] **************
fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/kubectl get pvc -n kubesphere-system openldap-pvc-openldap-0 -o jsonpath='{.status.capacity.storage}'\n", "delta": "0:00:00.097568", "end": "2021-04-01 12:24:42.760373", "msg": "non-zero return code", "rc": 1, "start": "2021-04-01 12:24:42.662805", "stderr": "Error from server (NotFound): persistentvolumeclaims \"openldap-pvc-openldap-0\" not found", "stderr_lines": ["Error from server (NotFound): persistentvolumeclaims \"openldap-pvc-openldap-0\" not found"], "stdout": "", "stdout_lines": []}
...ignoring

TASK [common : Kubesphere | Setting openldap pv size] **************************
skipping: [localhost]

TASK [common : Kubesphere | Check etcd db PersistentVolumeClaim] ***************
fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/kubectl get pvc -n kubesphere-system etcd-pvc -o jsonpath='{.status.capacity.storage}'\n", "delta": "0:00:00.102731", "end": "2021-04-01 12:24:43.167008", "msg": "non-zero return code", "rc": 1, "start": "2021-04-01 12:24:43.064277", "stderr": "Error from server (NotFound): persistentvolumeclaims \"etcd-pvc\" not found", "stderr_lines": ["Error from server (NotFound): persistentvolumeclaims \"etcd-pvc\" not found"], "stdout": "", "stdout_lines": []}
...ignoring

TASK [common : Kubesphere | Setting etcd pv size] ******************************
skipping: [localhost]

TASK [common : Kubesphere | Check redis ha PersistentVolumeClaim] **************
fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/kubectl get pvc -n kubesphere-system data-redis-ha-server-0 -o jsonpath='{.status.capacity.storage}'\n", "delta": "0:00:00.083082", "end": "2021-04-01 12:24:43.578843", "msg": "non-zero return code", "rc": 1, "start": "2021-04-01 12:24:43.495761", "stderr": "Error from server (NotFound): persistentvolumeclaims \"data-redis-ha-server-0\" not found", "stderr_lines": ["Error from server (NotFound): persistentvolumeclaims \"data-redis-ha-server-0\" not found"], "stdout": "", "stdout_lines": []}
...ignoring

TASK [common : Kubesphere | Setting redis ha pv size] **************************
skipping: [localhost]

TASK [common : Kubesphere | Check es-master PersistentVolumeClaim] *************
fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/kubectl get pvc -n kubesphere-logging-system data-elasticsearch-logging-discovery-0 -o jsonpath='{.status.capacity.storage}'\n", "delta": "0:00:00.082483", "end": "2021-04-01 12:24:43.957445", "msg": "non-zero return code", "rc": 1, "start": "2021-04-01 12:24:43.874962", "stderr": "Error from server (NotFound): namespaces \"kubesphere-logging-system\" not found", "stderr_lines": ["Error from server (NotFound): namespaces \"kubesphere-logging-system\" not found"], "stdout": "", "stdout_lines": []}
...ignoring

TASK [common : Kubesphere | Setting es master pv size] *************************
skipping: [localhost]

TASK [common : Kubesphere | Check es data PersistentVolumeClaim] ***************
fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/kubectl get pvc -n kubesphere-logging-system data-elasticsearch-logging-data-0 -o jsonpath='{.status.capacity.storage}'\n", "delta": "0:00:00.091803", "end": "2021-04-01 12:24:44.360298", "msg": "non-zero return code", "rc": 1, "start": "2021-04-01 12:24:44.268495", "stderr": "Error from server (NotFound): namespaces \"kubesphere-logging-system\" not found", "stderr_lines": ["Error from server (NotFound): namespaces \"kubesphere-logging-system\" not found"], "stdout": "", "stdout_lines": []}
...ignoring

TASK [common : Kubesphere | Setting es data pv size] ***************************
skipping: [localhost]
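
All of the ignored PVC failures above are probe tasks: on an upgrade the installer reads the capacity of existing PVCs so it can keep their sizes, and on a fresh install every probe returns NotFound and is skipped past. The probe can be reproduced by hand; a sketch using names from the log (helper name is illustrative):

```shell
# Print a PVC's bound capacity, or nothing if the PVC does not exist.
pvc_size() {
    kubectl get pvc -n "$1" "$2" -o jsonpath='{.status.capacity.storage}'
}
# usage: pvc_size kubesphere-system redis-pvc
```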

TASK [common : Kubesphere | Creating common component manifests] ***************
changed: [localhost] => (item={'path': 'etcd', 'file': 'etcd.yaml'})
changed: [localhost] => (item={'name': 'mysql', 'file': 'mysql.yaml'})
changed: [localhost] => (item={'path': 'redis', 'file': 'redis.yaml'})

TASK [common : Kubesphere | Creating mysql sercet] *****************************
changed: [localhost]

TASK [common : Kubesphere | Deploying etcd and mysql] **************************
skipping: [localhost] => (item=etcd.yaml) 
skipping: [localhost] => (item=mysql.yaml) 

TASK [common : Kubesphere | Getting minio installation files] ******************
skipping: [localhost] => (item=minio-ha) 

TASK [common : Kubesphere | Creating manifests] ********************************
skipping: [localhost] => (item={'name': 'custom-values-minio', 'file': 'custom-values-minio.yaml'}) 

TASK [common : Kubesphere | Check minio] ***************************************
skipping: [localhost]

TASK [common : Kubesphere | Deploy minio] **************************************
skipping: [localhost]

TASK [common : debug] **********************************************************
skipping: [localhost]

TASK [common : fail] ***********************************************************
skipping: [localhost]

TASK [common : Kubesphere | create minio config directory] *********************
skipping: [localhost]

TASK [common : Kubesphere | Creating common component manifests] ***************
skipping: [localhost] => (item={'path': '/root/.config/rclone', 'file': 'rclone.conf'}) 

TASK [common : include_tasks] **************************************************
skipping: [localhost] => (item=helm) 
skipping: [localhost] => (item=vmbased) 

TASK [common : Kubesphere | import minio status] *******************************
skipping: [localhost]

TASK [common : Kubesphere | Check ha-redis] ************************************
skipping: [localhost]

TASK [common : Kubesphere | Getting redis installation files] ******************
skipping: [localhost] => (item=redis-ha) 

TASK [common : Kubesphere | Creating manifests] ********************************
skipping: [localhost] => (item={'name': 'custom-values-redis', 'file': 'custom-values-redis.yaml'}) 

TASK [common : Kubesphere | Check old redis status] ****************************
skipping: [localhost]

TASK [common : Kubesphere | Delete and backup old redis svc] *******************
skipping: [localhost]

TASK [common : Kubesphere | Deploying redis] ***********************************
skipping: [localhost]

TASK [common : Kubesphere | Getting redis PodIp] *******************************
skipping: [localhost]

TASK [common : Kubesphere | Creating redis migration script] *******************
skipping: [localhost] => (item={'path': '/etc/kubesphere', 'file': 'redisMigrate.py'}) 

TASK [common : Kubesphere | Check redis-ha status] *****************************
skipping: [localhost]

TASK [common : ks-logging | Migrating redis data] ******************************
skipping: [localhost]

TASK [common : Kubesphere | Disable old redis] *********************************
skipping: [localhost]

TASK [common : Kubesphere | Deploying redis] ***********************************
skipping: [localhost] => (item=redis.yaml) 

TASK [common : Kubesphere | import redis status] *******************************
skipping: [localhost]

TASK [common : Kubesphere | Getting openldap installation files] ***************
skipping: [localhost] => (item=openldap-ha) 

TASK [common : Kubesphere | Creating manifests] ********************************
skipping: [localhost] => (item={'name': 'custom-values-openldap', 'file': 'custom-values-openldap.yaml'}) 

TASK [common : Kubesphere | Check old openldap status] *************************
skipping: [localhost]

TASK [common : KubeSphere | Shutdown ks-account] *******************************
skipping: [localhost]

TASK [common : Kubesphere | Delete and backup old openldap svc] ****************
skipping: [localhost]

TASK [common : Kubesphere | Check openldap] ************************************
skipping: [localhost]

TASK [common : Kubesphere | Deploy openldap] ***********************************
skipping: [localhost]

TASK [common : Kubesphere | Load old openldap data] ****************************
skipping: [localhost]

TASK [common : Kubesphere | Check openldap-ha status] **************************
skipping: [localhost]

TASK [common : Kubesphere | Get openldap-ha pod list] **************************
skipping: [localhost]

TASK [common : Kubesphere | Get old openldap data] *****************************
skipping: [localhost]

TASK [common : Kubesphere | Migrating openldap data] ***************************
skipping: [localhost]

TASK [common : Kubesphere | Disable old openldap] ******************************
skipping: [localhost]

TASK [common : Kubesphere | Restart openldap] **********************************
skipping: [localhost]

TASK [common : KubeSphere | Restarting ks-account] *****************************
skipping: [localhost]

TASK [common : Kubesphere | import openldap status] ****************************
skipping: [localhost]

TASK [common : Kubesphere | Check ha-redis] ************************************
fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/helm list -n kubesphere-system | grep \"ks-redis\"\n", "delta": "0:00:00.079073", "end": "2021-04-01 12:24:48.175249", "msg": "non-zero return code", "rc": 1, "start": "2021-04-01 12:24:48.096176", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
...ignoring

TASK [common : Kubesphere | Getting redis installation files] ******************
skipping: [localhost] => (item=redis-ha) 

TASK [common : Kubesphere | Creating manifests] ********************************
skipping: [localhost] => (item={'name': 'custom-values-redis', 'file': 'custom-values-redis.yaml'}) 

TASK [common : Kubesphere | Check old redis status] ****************************
skipping: [localhost]

TASK [common : Kubesphere | Delete and backup old redis svc] *******************
skipping: [localhost]

TASK [common : Kubesphere | Deploying redis] ***********************************
skipping: [localhost]

TASK [common : Kubesphere | Getting redis PodIp] *******************************
skipping: [localhost]

TASK [common : Kubesphere | Creating redis migration script] *******************
skipping: [localhost] => (item={'path': '/etc/kubesphere', 'file': 'redisMigrate.py'}) 

TASK [common : Kubesphere | Check redis-ha status] *****************************
skipping: [localhost]

TASK [common : ks-logging | Migrating redis data] ******************************
skipping: [localhost]

TASK [common : Kubesphere | Disable old redis] *********************************
skipping: [localhost]

TASK [common : Kubesphere | Deploying redis] ***********************************
changed: [localhost] => (item=redis.yaml)

TASK [common : Kubesphere | import redis status] *******************************
changed: [localhost]

TASK [common : Kubesphere | Getting openldap installation files] ***************
changed: [localhost] => (item=openldap-ha)

TASK [common : Kubesphere | Creating manifests] ********************************
changed: [localhost] => (item={'name': 'custom-values-openldap', 'file': 'custom-values-openldap.yaml'})

TASK [common : Kubesphere | Check old openldap status] *************************
changed: [localhost]

TASK [common : KubeSphere | Shutdown ks-account] *******************************
skipping: [localhost]

TASK [common : Kubesphere | Delete and backup old openldap svc] ****************
skipping: [localhost]

TASK [common : Kubesphere | Check openldap] ************************************
fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/helm list -n kubesphere-system | grep \"ks-openldap\"\n", "delta": "0:00:00.077766", "end": "2021-04-01 12:24:54.646424", "msg": "non-zero return code", "rc": 1, "start": "2021-04-01 12:24:54.568658", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
...ignoring

TASK [common : Kubesphere | Deploy openldap] ***********************************
changed: [localhost]

TASK [common : Kubesphere | Load old openldap data] ****************************
skipping: [localhost]

TASK [common : Kubesphere | Check openldap-ha status] **************************
skipping: [localhost]

TASK [common : Kubesphere | Get openldap-ha pod list] **************************
skipping: [localhost]

TASK [common : Kubesphere | Get old openldap data] *****************************
skipping: [localhost]

TASK [common : Kubesphere | Migrating openldap data] ***************************
skipping: [localhost]

TASK [common : Kubesphere | Disable old openldap] ******************************
skipping: [localhost]

TASK [common : Kubesphere | Restart openldap] **********************************
skipping: [localhost]

TASK [common : KubeSphere | Restarting ks-account] *****************************
skipping: [localhost]

TASK [common : Kubesphere | import openldap status] ****************************
changed: [localhost]

TASK [common : Kubesphere | Getting minio installation files] ******************
changed: [localhost] => (item=minio-ha)

TASK [common : Kubesphere | Creating manifests] ********************************
changed: [localhost] => (item={'name': 'custom-values-minio', 'file': 'custom-values-minio.yaml'})

TASK [common : Kubesphere | Check minio] ***************************************
fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/helm list -n kubesphere-system | grep \"ks-minio\"\n", "delta": "0:00:00.076522", "end": "2021-04-01 12:25:06.667585", "msg": "non-zero return code", "rc": 1, "start": "2021-04-01 12:25:06.591063", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
...ignoring

TASK [common : Kubesphere | Deploy minio] **************************************

root@master:/opt/kubekey# kubectl get pods --all-namespaces
NAMESPACE           NAME                                           READY   STATUS    RESTARTS   AGE
kube-system         calico-kube-controllers-59d85c5c84-pf794       1/1     Running   0          19m
kube-system         calico-node-8hmw5                              1/1     Running   0          19m
kube-system         calico-node-c596s                              0/1     Running   0          19m
kube-system         calico-node-jd65g                              0/1     Running   0          19m
kube-system         calico-node-l4qwx                              1/1     Running   0          19m
kube-system         coredns-74d59cc5c6-7cqvq                       1/1     Running   0          19m
kube-system         coredns-74d59cc5c6-9zb5f                       1/1     Running   0          19m
kube-system         kube-apiserver-master                          1/1     Running   0          19m
kube-system         kube-controller-manager-master                 1/1     Running   0          19m
kube-system         kube-proxy-4gm5p                               1/1     Running   0          19m
kube-system         kube-proxy-6952f                               1/1     Running   0          19m
kube-system         kube-proxy-jcs7z                               1/1     Running   0          19m
kube-system         kube-proxy-l8mz2                               1/1     Running   0          19m
kube-system         kube-scheduler-master                          1/1     Running   0          19m
kube-system         metrics-server-5ddd98b7f9-m8zt7                1/1     Running   0          18m
kube-system         nodelocaldns-646pz                             1/1     Running   0          19m
kube-system         nodelocaldns-6nczc                             1/1     Running   0          19m
kube-system         nodelocaldns-ff4gk                             1/1     Running   0          19m
kube-system         nodelocaldns-xvxc9                             1/1     Running   0          19m
kube-system         openebs-localpv-provisioner-84956ddb89-9mj4m   1/1     Running   0          19m
kube-system         openebs-ndm-bfmxn                              1/1     Running   0          18m
kube-system         openebs-ndm-bsnpv                              1/1     Running   0          19m
kube-system         openebs-ndm-operator-6896cbf7b8-ddfbv          1/1     Running   1          19m
kube-system         openebs-ndm-s5dz4                              1/1     Running   0          19m
kube-system         snapshot-controller-0                          1/1     Running   0          17m
kubesphere-system   ks-installer-85854b8c8-pm2h4                   1/1     Running   0          19m
kubesphere-system   minio-764b67f6fb-mf4rv                         1/1     Running   0          16m
kubesphere-system   minio-make-bucket-job-bzpkn                    1/1     Running   0          16m
kubesphere-system   openldap-0                                     1/1     Running   0          16m
kubesphere-system   redis-6fd6c6d6f9-cd4hd                         1/1     Running   0          16m
root@master:/opt/kubekey# kubectl get pvc --all-namespaces
NAMESPACE           NAME                      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
kubesphere-system   minio                     Bound    pvc-cc19da3f-802f-4b0d-866f-dcf3f9528bd2   20Gi       RWO            local          17m
kubesphere-system   openldap-pvc-openldap-0   Bound    pvc-88d033fb-a8de-4ec7-847a-cac984208e50   2Gi        RWO            local          17m
kubesphere-system   redis-pvc                 Bound    pvc-ef485260-d3f6-46a9-9c06-c71725754195   2Gi        RWO            local          17m
root@master:/opt/kubekey# kubectl get pv --all-namespaces
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                       STORAGECLASS   REASON   AGE
pvc-88d033fb-a8de-4ec7-847a-cac984208e50   2Gi        RWO            Delete           Bound    kubesphere-system/openldap-pvc-openldap-0   local                   17m
pvc-cc19da3f-802f-4b0d-866f-dcf3f9528bd2   20Gi       RWO            Delete           Bound    kubesphere-system/minio                     local                   17m
pvc-ef485260-d3f6-46a9-9c06-c71725754195   2Gi        RWO            Delete           Bound    kubesphere-system/redis-pvc                 local                   17m
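
One detail in the pod listing above deserves attention: two of the four calico-node pods never become Ready (0/1), which after a delete/reinstall cycle frequently points to stale CNI state on those nodes. A few diagnostic commands, sketched as a hypothetical helper (pod names come from the listing; the CNI paths are the standard ones, inspect before cleaning anything):

```shell
# Inspect a NotReady calico-node pod and the node's CNI state.
calico_debug() {
    pod="$1"
    kubectl describe pod -n kube-system "$pod" | tail -n 20    # recent events
    kubectl logs -n kube-system "$pod" -c calico-node --tail=50
    # Stale config from the previous install can keep calico-node NotReady;
    # run this part on the affected node itself.
    ls -l /etc/cni/net.d /opt/cni/bin
}
# usage: calico_debug calico-node-c596s
```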

The installation configuration was as follows:

apiVersion: kubekey.kubesphere.io/v1alpha1
kind: Cluster
metadata:
  name: 101-kube 
spec:
  hosts:
  - {name: master, address: 10.94.xx.xx, internalAddress: 10.94.xx.xx, user: root, password: xxx}
  - {name: node1, address: 10.94.xx.xx, internalAddress: 10.94.xx.xx, user: root, password: xxx}
  - {name: node2, address: 10.94.xx.xx, internalAddress: 10.94.xx.xx, user: root, password: xxx}
  - {name: node3, address: 10.94.xx.xx, internalAddress: 10.94.xx.xx, user: root, password: xxx}
  roleGroups:
    etcd:
    - master 
    master: 
    - master
    worker:
    - node1
    - node2
    - node3
  controlPlaneEndpoint:
    domain: lb.kubesphere.local
    address: ""
    port: "6443"
  kubernetes:
    version: v1.17.9
    imageRepo: kubesphere
    clusterName: cluster.local
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
  registry:
    registryMirrors: []
    insecureRegistries: []
  addons: []


---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.0.0
spec:
  local_registry: ""
  persistence:
    storageClass: ""
  authentication:
    jwtSecret: ""
  etcd:
    monitoring: true
    endpointIps: localhost
    port: 2379
    tlsEnable: true
  common:
    es:
      elasticsearchDataVolumeSize: 20Gi
      elasticsearchMasterVolumeSize: 4Gi
      elkPrefix: logstash
      logMaxAge: 7
    mysqlVolumeSize: 20Gi
    minioVolumeSize: 20Gi
    etcdVolumeSize: 20Gi
    openldapVolumeSize: 2Gi
    redisVolumSize: 2Gi
  console:
    enableMultiLogin: false  # enable/disable multi login
    port: 30880
  alerting:
    enabled: true
  auditing:
    enabled: false
  devops:
    enabled: true
    jenkinsMemoryLim: 2Gi
    jenkinsMemoryReq: 1500Mi
    jenkinsVolumeSize: 8Gi
    jenkinsJavaOpts_Xms: 512m
    jenkinsJavaOpts_Xmx: 512m
    jenkinsJavaOpts_MaxRAM: 2g
  events:
    enabled: false
    ruler:
      enabled: true
      replicas: 2
  logging:
    enabled: false
    logsidecarReplicas: 2
  metrics_server:
    enabled: true
  monitoring:
    prometheusMemoryRequest: 400Mi
    prometheusVolumeSize: 20Gi
  multicluster:
    clusterRole: none  # host | member | none
  networkpolicy:
    enabled: false
  notification:
    enabled: true
  openpitrix:
    enabled: false
  servicemesh:
    enabled: false
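
For completeness: once the underlying problem is fixed, the installation can be re-triggered without running kk again by deleting the ks-installer pod; its Deployment recreates it and the playbooks rerun from the top. A sketch (the fixed sleep is a crude wait, not a guarantee the new pod is up):

```shell
# Restart the installer pod and follow the fresh log.
retrigger_installer() {
    ns=kubesphere-system
    kubectl delete pod -n "$ns" -l app=ks-install
    # Give the Deployment a moment to create the replacement pod.
    sleep 10
    kubectl logs -n "$ns" \
        "$(kubectl get pod -n "$ns" -l app=ks-install -o jsonpath='{.items[0].metadata.name}')" -f
}
```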