Hello everyone. I'm installing KubeSphere 2.1.1 on a brand-new bare-metal machine in all-in-one mode; the OS is Ubuntu 16.04.6 and I followed the official documentation (https://kubesphere.io/docs/zh-CN/installation/all-in-one/). I modified the configuration file to enable all components as described at https://kubesphere.io/docs/zh-CN/installation/complete-installation, and pre-pulled the Docker images with https://raw.githubusercontent.com/kubesphere/ks-installer/master/scripts/download-image-list.sh. The installation finished and reported success, but it did not print the web console address and username/password described in the docs; it only gave a command for viewing the installation log:

kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f

Checking with netstat, there is also no process listening on port 30880.
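
For reference, the checks were roughly the following (the exact netstat flags are from memory, and the idea that 30880 should be the NodePort of the KubeSphere console Service in kubesphere-system is my assumption, not something I verified):

netstat -tlnp | grep 30880
kubectl get svc -n kubesphere-system
kubectl get pod -n kubesphere-system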

I'd like to ask those of you who use KubeSphere: how can this problem be resolved?

The log produced by the command above is as follows:

2020-06-27T15:48:28Z INFO     : shell-operator v1.0.0-beta.5
2020-06-27T15:48:28Z INFO     : HTTP SERVER Listening on 0.0.0.0:9115
2020-06-27T15:48:28Z INFO     : Use temporary dir: /tmp/shell-operator
2020-06-27T15:48:28Z INFO     : Initialize hooks manager ...
2020-06-27T15:48:28Z INFO     : Search and load hooks ...
2020-06-27T15:48:28Z INFO     : Load hook config from '/hooks/kubesphere/installRunner.py'
2020-06-27T15:48:29Z INFO     : Initializing schedule manager ...
2020-06-27T15:48:29Z INFO     : KUBE Init Kubernetes client
2020-06-27T15:48:29Z INFO     : KUBE-INIT Kubernetes client is configured successfully
2020-06-27T15:48:29Z INFO     : MAIN: run main loop
2020-06-27T15:48:29Z INFO     : MAIN: add onStartup tasks
2020-06-27T15:48:29Z INFO     : QUEUE add all HookRun@OnStartup
2020-06-27T15:48:29Z INFO     : Running schedule manager ...
2020-06-27T15:48:29Z INFO     : MSTOR Create new metric shell_operator_live_ticks
2020-06-27T15:48:29Z INFO     : MSTOR Create new metric shell_operator_tasks_queue_length
2020-06-27T15:48:29Z INFO     : GVR for kind 'ConfigMap' is /v1, Resource=configmaps
2020-06-27T15:48:29Z INFO     : EVENT Kube event '4d6862b8-4289-4140-84e8-23358b91aaf8'
2020-06-27T15:48:29Z INFO     : QUEUE add TASK_HOOK_RUN@KUBE_EVENTS kubesphere/installRunner.py
2020-06-27T15:48:32Z INFO     : TASK_RUN HookRun@KUBE_EVENTS kubesphere/installRunner.py
2020-06-27T15:48:32Z INFO     : Running hook 'kubesphere/installRunner.py' binding 'KUBE_EVENTS' ...
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that
the implicit localhost does not match 'all'



PLAY [localhost] ***************************************************************


TASK [download : include_tasks] ************************************************
skipping: [localhost]


TASK [download : Download items] ***********************************************
skipping: [localhost]


TASK [download : Sync container] ***********************************************
skipping: [localhost]


TASK [kubesphere-defaults : Configure defaults] ********************************
ok: [localhost] => {
    "msg": "Check roles/kubesphere-defaults/defaults/main.yml"
}


TASK [preinstall : check k8s version] ******************************************
changed: [localhost]


TASK [preinstall : init k8s version] *******************************************
ok: [localhost]


TASK [preinstall : Stop if kuernetes version is nonsupport] ********************
ok: [localhost] => {
    "changed": false, 
    "msg": "All assertions passed"
}


TASK [preinstall : check helm status] ******************************************
changed: [localhost]


TASK [preinstall : Stop if Helm is not available] ******************************
ok: [localhost] => {
    "changed": false, 
    "msg": "All assertions passed"
}


TASK [preinstall : check storage class] ****************************************
changed: [localhost]


TASK [preinstall : Stop if StorageClass was not found] *************************
ok: [localhost] => {
    "changed": false, 
    "msg": "All assertions passed"
}


TASK [preinstall : check default storage class] ********************************
changed: [localhost]


TASK [preinstall : Stop if defaultStorageClass was not found] ******************
skipping: [localhost]


PLAY RECAP *********************************************************************

localhost                  : ok=9    changed=4    unreachable=0    failed=0    skipped=4    rescued=0    ignored=0   


[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that
the implicit localhost does not match 'all'



PLAY [localhost] ***************************************************************


TASK [download : include_tasks] ************************************************
skipping: [localhost]


TASK [download : Download items] ***********************************************
skipping: [localhost]


TASK [download : Sync container] ***********************************************
skipping: [localhost]


TASK [kubesphere-defaults : Configure defaults] ********************************
ok: [localhost] => {
    "msg": "Check roles/kubesphere-defaults/defaults/main.yml"
}


TASK [metrics-server : Metrics-Server | Checking old installation files] *******
ok: [localhost]


TASK [metrics-server : Metrics-Server | deleting old prometheus-operator] ******
skipping: [localhost]


TASK [metrics-server : Metrics-Server | deleting old metrics-server files] *****
[DEPRECATION WARNING]: evaluating {'failed': False, u'stat': {u'exists': 
False}, u'changed': False} as a bare variable, this behaviour will go away and 
you might need to add |bool to the expression in the future. Also see 
CONDITIONAL_BARE_VARS configuration toggle.. This feature will be removed in 
version 2.12. Deprecation warnings can be disabled by setting 
deprecation_warnings=False in ansible.cfg.
ok: [localhost] => (item=metrics-server)


TASK [metrics-server : Metrics-Server | Getting metrics-server installation files] ***
changed: [localhost]


TASK [metrics-server : Metrics-Server | Creating manifests] ********************
changed: [localhost] => (item={u'type': u'config', u'name': u'values', u'file': u'values.yaml'})


TASK [metrics-server : Metrics-Server | Check Metrics-Server] ******************
changed: [localhost]


TASK [metrics-server : Metrics-Server | Installing metrics-server] *************
changed: [localhost]


TASK [metrics-server : Metrics-Server | Installing metrics-server retry] *******
skipping: [localhost]


TASK [metrics-server : Metrics-Server | Waitting for v1beta1.metrics.k8s.io ready] ***
FAILED - RETRYING: Metrics-Server | Waitting for v1beta1.metrics.k8s.io ready (60 retries left).
changed: [localhost]


PLAY RECAP *********************************************************************

localhost                  : ok=8    changed=5    unreachable=0    failed=0    skipped=5    rescued=0    ignored=0   


[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that
the implicit localhost does not match 'all'



PLAY [localhost] ***************************************************************


TASK [download : include_tasks] ************************************************
skipping: [localhost]


TASK [download : Download items] ***********************************************
skipping: [localhost]


TASK [download : Sync container] ***********************************************
skipping: [localhost]


TASK [kubesphere-defaults : Configure defaults] ********************************
ok: [localhost] => {
    "msg": "Check roles/kubesphere-defaults/defaults/main.yml"
}


TASK [common : Kubesphere | Check kube-node-lease namespace] *******************
changed: [localhost]


TASK [common : KubeSphere | Get system namespaces] *****************************
ok: [localhost]


TASK [common : set_fact] *******************************************************
ok: [localhost]


TASK [common : debug] **********************************************************
ok: [localhost] => {
    "msg": [
        "kubesphere-system", 
        "kubesphere-controls-system", 
        "kubesphere-monitoring-system", 
        "kube-node-lease", 
        "kubesphere-logging-system", 
        "openpitrix-system", 
        "kubesphere-devops-system", 
        "istio-system", 
        "kubesphere-alerting-system", 
        "istio-system"
    ]
}


TASK [common : KubeSphere | Create kubesphere namespace] ***********************
changed: [localhost] => (item=kubesphere-system)
changed: [localhost] => (item=kubesphere-controls-system)
changed: [localhost] => (item=kubesphere-monitoring-system)
changed: [localhost] => (item=kube-node-lease)
changed: [localhost] => (item=kubesphere-logging-system)
changed: [localhost] => (item=openpitrix-system)
changed: [localhost] => (item=kubesphere-devops-system)
changed: [localhost] => (item=istio-system)
changed: [localhost] => (item=kubesphere-alerting-system)
changed: [localhost] => (item=istio-system)


TASK [common : KubeSphere | Labeling system-workspace] *************************
changed: [localhost] => (item=default)
changed: [localhost] => (item=kube-public)
changed: [localhost] => (item=kube-system)
changed: [localhost] => (item=kubesphere-system)
changed: [localhost] => (item=kubesphere-controls-system)
changed: [localhost] => (item=kubesphere-monitoring-system)
changed: [localhost] => (item=kube-node-lease)
changed: [localhost] => (item=kubesphere-logging-system)
changed: [localhost] => (item=openpitrix-system)
changed: [localhost] => (item=kubesphere-devops-system)
changed: [localhost] => (item=istio-system)
changed: [localhost] => (item=kubesphere-alerting-system)
changed: [localhost] => (item=istio-system)


TASK [common : KubeSphere | Create ImagePullSecrets] ***************************
changed: [localhost] => (item=default)
changed: [localhost] => (item=kube-public)
changed: [localhost] => (item=kube-system)
changed: [localhost] => (item=kubesphere-system)
changed: [localhost] => (item=kubesphere-controls-system)
changed: [localhost] => (item=kubesphere-monitoring-system)
changed: [localhost] => (item=kube-node-lease)
changed: [localhost] => (item=kubesphere-logging-system)
changed: [localhost] => (item=openpitrix-system)
changed: [localhost] => (item=kubesphere-devops-system)
changed: [localhost] => (item=istio-system)
changed: [localhost] => (item=kubesphere-alerting-system)
changed: [localhost] => (item=istio-system)


TASK [common : KubeSphere | Getting kubernetes master num] *********************
changed: [localhost]


TASK [common : KubeSphere | Setting master num] ********************************
ok: [localhost]


TASK [common : Kubesphere | Getting common component installation files] *******
changed: [localhost] => (item=common)
changed: [localhost] => (item=ks-crds)


TASK [common : KubeSphere | Create KubeSphere crds] ****************************
changed: [localhost]


TASK [common : Kubesphere | Checking openpitrix common component] **************
changed: [localhost]


TASK [common : include_tasks] **************************************************
skipping: [localhost] => (item={u'ks': u'mysql-pvc', u'op': u'openpitrix-db'}) 
skipping: [localhost] => (item={u'ks': u'etcd-pvc', u'op': u'openpitrix-etcd'}) 


TASK [common : Getting PersistentVolumeName (mysql)] ***************************
skipping: [localhost]


TASK [common : Getting PersistentVolumeSize (mysql)] ***************************
skipping: [localhost]


TASK [common : Setting PersistentVolumeName (mysql)] ***************************
skipping: [localhost]


TASK [common : Setting PersistentVolumeSize (mysql)] ***************************
skipping: [localhost]


TASK [common : Getting PersistentVolumeName (etcd)] ****************************
skipping: [localhost]


TASK [common : Getting PersistentVolumeSize (etcd)] ****************************
skipping: [localhost]


TASK [common : Setting PersistentVolumeName (etcd)] ****************************
skipping: [localhost]


TASK [common : Setting PersistentVolumeSize (etcd)] ****************************
skipping: [localhost]


TASK [common : Kubesphere | Check mysql PersistentVolumeClaim] *****************
fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/kubectl get pvc -n kubesphere-system mysql-pvc -o jsonpath='{.status.capacity.storage}'\n", "delta": "0:00:00.670112", "end": "2020-06-27 15:49:52.123717", "msg": "non-zero return code", "rc": 1, "start": "2020-06-27 15:49:51.453605", "stderr": "Error from server (NotFound): persistentvolumeclaims \"mysql-pvc\" not found", "stderr_lines": ["Error from server (NotFound): persistentvolumeclaims \"mysql-pvc\" not found"], "stdout": "", "stdout_lines": []}

...ignoring


TASK [common : Kubesphere | Setting mysql db pv size] **************************
skipping: [localhost]


TASK [common : Kubesphere | Check redis PersistentVolumeClaim] *****************
fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/kubectl get pvc -n kubesphere-system redis-pvc -o jsonpath='{.status.capacity.storage}'\n", "delta": "0:00:00.675818", "end": "2020-06-27 15:49:53.041182", "msg": "non-zero return code", "rc": 1, "start": "2020-06-27 15:49:52.365364", "stderr": "Error from server (NotFound): persistentvolumeclaims \"redis-pvc\" not found", "stderr_lines": ["Error from server (NotFound): persistentvolumeclaims \"redis-pvc\" not found"], "stdout": "", "stdout_lines": []}

...ignoring


TASK [common : Kubesphere | Setting redis db pv size] **************************
skipping: [localhost]


TASK [common : Kubesphere | Check minio PersistentVolumeClaim] *****************
fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/kubectl get pvc -n kubesphere-system minio -o jsonpath='{.status.capacity.storage}'\n", "delta": "0:00:00.681939", "end": "2020-06-27 15:49:53.974427", "msg": "non-zero return code", "rc": 1, "start": "2020-06-27 15:49:53.292488", "stderr": "Error from server (NotFound): persistentvolumeclaims \"minio\" not found", "stderr_lines": ["Error from server (NotFound): persistentvolumeclaims \"minio\" not found"], "stdout": "", "stdout_lines": []}

...ignoring


TASK [common : Kubesphere | Setting minio pv size] *****************************
skipping: [localhost]


TASK [common : Kubesphere | Check openldap PersistentVolumeClaim] **************
fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/kubectl get pvc -n kubesphere-system openldap-pvc-openldap-0 -o jsonpath='{.status.capacity.storage}'\n", "delta": "0:00:00.677670", "end": "2020-06-27 15:49:54.899927", "msg": "non-zero return code", "rc": 1, "start": "2020-06-27 15:49:54.222257", "stderr": "Error from server (NotFound): persistentvolumeclaims \"openldap-pvc-openldap-0\" not found", "stderr_lines": ["Error from server (NotFound): persistentvolumeclaims \"openldap-pvc-openldap-0\" not found"], "stdout": "", "stdout_lines": []}

...ignoring


TASK [common : Kubesphere | Setting openldap pv size] **************************
skipping: [localhost]


TASK [common : Kubesphere | Check etcd db PersistentVolumeClaim] ***************
fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/kubectl get pvc -n kubesphere-system etcd-pvc -o jsonpath='{.status.capacity.storage}'\n", "delta": "0:00:00.673573", "end": "2020-06-27 15:49:55.819936", "msg": "non-zero return code", "rc": 1, "start": "2020-06-27 15:49:55.146363", "stderr": "Error from server (NotFound): persistentvolumeclaims \"etcd-pvc\" not found", "stderr_lines": ["Error from server (NotFound): persistentvolumeclaims \"etcd-pvc\" not found"], "stdout": "", "stdout_lines": []}

...ignoring


TASK [common : Kubesphere | Setting etcd pv size] ******************************
skipping: [localhost]


TASK [common : Kubesphere | Check redis ha PersistentVolumeClaim] **************
fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/kubectl get pvc -n kubesphere-system data-redis-ha-server-0 -o jsonpath='{.status.capacity.storage}'\n", "delta": "0:00:00.684187", "end": "2020-06-27 15:49:56.755023", "msg": "non-zero return code", "rc": 1, "start": "2020-06-27 15:49:56.070836", "stderr": "Error from server (NotFound): persistentvolumeclaims \"data-redis-ha-server-0\" not found", "stderr_lines": ["Error from server (NotFound): persistentvolumeclaims \"data-redis-ha-server-0\" not found"], "stdout": "", "stdout_lines": []}

...ignoring


TASK [common : Kubesphere | Setting redis ha pv size] **************************
skipping: [localhost]


TASK [common : Kubesphere | Creating common component manifests] ***************
changed: [localhost] => (item={u'path': u'etcd', u'file': u'etcd.yaml'})
changed: [localhost] => (item={u'name': u'mysql', u'file': u'mysql.yaml'})
changed: [localhost] => (item={u'path': u'redis', u'file': u'redis.yaml'})


TASK [common : Kubesphere | Creating mysql sercet] *****************************
changed: [localhost]


TASK [common : Kubesphere | Deploying etcd and mysql] **************************
skipping: [localhost] => (item=etcd.yaml) 
skipping: [localhost] => (item=mysql.yaml) 


TASK [common : Kubesphere | Getting minio installation files] ******************
skipping: [localhost] => (item=minio-ha) 


TASK [common : Kubesphere | Creating manifests] ********************************
skipping: [localhost] => (item={u'name': u'custom-values-minio', u'file': u'custom-values-minio.yaml'}) 


TASK [common : Kubesphere | Check minio] ***************************************
skipping: [localhost]


TASK [common : Kubesphere | Deploy minio] **************************************
skipping: [localhost]


TASK [common : debug] **********************************************************
skipping: [localhost]


TASK [common : fail] ***********************************************************
skipping: [localhost]


TASK [common : Kubesphere | create minio config directory] *********************
skipping: [localhost]


TASK [common : Kubesphere | Creating common component manifests] ***************
skipping: [localhost] => (item={u'path': u'/root/.config/rclone', u'file': u'rclone.conf'}) 


TASK [common : include_tasks] **************************************************
skipping: [localhost] => (item=helm) 
skipping: [localhost] => (item=vmbased) 


TASK [common : Kubesphere | Check ha-redis] ************************************
skipping: [localhost]


TASK [common : Kubesphere | Getting redis installation files] ******************
skipping: [localhost] => (item=redis-ha) 


TASK [common : Kubesphere | Creating manifests] ********************************
skipping: [localhost] => (item={u'name': u'custom-values-redis', u'file': u'custom-values-redis.yaml'}) 


TASK [common : Kubesphere | Check old redis status] ****************************
skipping: [localhost]


TASK [common : Kubesphere | Delete and backup old redis svc] *******************
skipping: [localhost]


TASK [common : Kubesphere | Deploying redis] ***********************************
skipping: [localhost]


TASK [common : Kubesphere | Getting redis PodIp] *******************************
skipping: [localhost]


TASK [common : Kubesphere | Creating redis migration script] *******************
skipping: [localhost] => (item={u'path': u'/etc/kubesphere', u'file': u'redisMigrate.py'}) 


TASK [common : Kubesphere | Check redis-ha status] *****************************
skipping: [localhost]


TASK [common : ks-logging | Migrating redis data] ******************************
skipping: [localhost]


TASK [common : Kubesphere | Disable old redis] *********************************
skipping: [localhost]


TASK [common : Kubesphere | Deploying redis] ***********************************
skipping: [localhost] => (item=redis.yaml) 


TASK [common : Kubesphere | Getting openldap installation files] ***************
skipping: [localhost] => (item=openldap-ha) 


TASK [common : Kubesphere | Creating manifests] ********************************
skipping: [localhost] => (item={u'name': u'custom-values-openldap', u'file': u'custom-values-openldap.yaml'}) 


TASK [common : Kubesphere | Check old openldap status] *************************
skipping: [localhost]


TASK [common : KubeSphere | Shutdown ks-account] *******************************
skipping: [localhost]


TASK [common : Kubesphere | Delete and backup old openldap svc] ****************
skipping: [localhost]


TASK [common : Kubesphere | Check openldap] ************************************
skipping: [localhost]


TASK [common : Kubesphere | Deploy openldap] ***********************************
skipping: [localhost]


TASK [common : Kubesphere | Load old openldap data] ****************************
skipping: [localhost]


TASK [common : Kubesphere | Check openldap-ha status] **************************
skipping: [localhost]


TASK [common : Kubesphere | Get openldap-ha pod list] **************************
skipping: [localhost]


TASK [common : Kubesphere | Get old openldap data] *****************************
skipping: [localhost]


TASK [common : Kubesphere | Migrating openldap data] ***************************
skipping: [localhost]


TASK [common : Kubesphere | Disable old openldap] ******************************
skipping: [localhost]


TASK [common : Kubesphere | Restart openldap] **********************************
skipping: [localhost]


TASK [common : KubeSphere | Restarting ks-account] *****************************
skipping: [localhost]


TASK [common : Kubesphere | Check ha-redis] ************************************
changed: [localhost]


TASK [common : Kubesphere | Getting redis installation files] ******************
skipping: [localhost] => (item=redis-ha) 


TASK [common : Kubesphere | Creating manifests] ********************************
skipping: [localhost] => (item={u'name': u'custom-values-redis', u'file': u'custom-values-redis.yaml'}) 


TASK [common : Kubesphere | Check old redis status] ****************************
skipping: [localhost]


TASK [common : Kubesphere | Delete and backup old redis svc] *******************
skipping: [localhost]


TASK [common : Kubesphere | Deploying redis] ***********************************
skipping: [localhost]


TASK [common : Kubesphere | Getting redis PodIp] *******************************
skipping: [localhost]


TASK [common : Kubesphere | Creating redis migration script] *******************
skipping: [localhost] => (item={u'path': u'/etc/kubesphere', u'file': u'redisMigrate.py'}) 


TASK [common : Kubesphere | Check redis-ha status] *****************************
skipping: [localhost]


TASK [common : ks-logging | Migrating redis data] ******************************
skipping: [localhost]


TASK [common : Kubesphere | Disable old redis] *********************************
skipping: [localhost]


TASK [common : Kubesphere | Deploying redis] ***********************************
changed: [localhost] => (item=redis.yaml)


TASK [common : Kubesphere | Getting openldap installation files] ***************
changed: [localhost] => (item=openldap-ha)


TASK [common : Kubesphere | Creating manifests] ********************************
changed: [localhost] => (item={u'name': u'custom-values-openldap', u'file': u'custom-values-openldap.yaml'})


TASK [common : Kubesphere | Check old openldap status] *************************
changed: [localhost]


TASK [common : KubeSphere | Shutdown ks-account] *******************************
skipping: [localhost]


TASK [common : Kubesphere | Delete and backup old openldap svc] ****************
skipping: [localhost]


TASK [common : Kubesphere | Check openldap] ************************************
changed: [localhost]


TASK [common : Kubesphere | Deploy openldap] ***********************************
changed: [localhost]


TASK [common : Kubesphere | Load old openldap data] ****************************
skipping: [localhost]


TASK [common : Kubesphere | Check openldap-ha status] **************************
skipping: [localhost]


TASK [common : Kubesphere | Get openldap-ha pod list] **************************
skipping: [localhost]


TASK [common : Kubesphere | Get old openldap data] *****************************
skipping: [localhost]


TASK [common : Kubesphere | Migrating openldap data] ***************************
skipping: [localhost]


TASK [common : Kubesphere | Disable old openldap] ******************************
skipping: [localhost]


TASK [common : Kubesphere | Restart openldap] **********************************
skipping: [localhost]


TASK [common : KubeSphere | Restarting ks-account] *****************************
skipping: [localhost]


TASK [common : Kubesphere | Getting minio installation files] ******************
changed: [localhost] => (item=minio-ha)


TASK [common : Kubesphere | Creating manifests] ********************************
changed: [localhost] => (item={u'name': u'custom-values-minio', u'file': u'custom-values-minio.yaml'})


TASK [common : Kubesphere | Check minio] ***************************************
changed: [localhost]


TASK [common : Kubesphere | Deploy minio] **************************************
  • a5467021 You can take a look at the status of all the Pods:

kubectl get pod --all-namespaces

    Feynman Thanks for the reply! The status is as follows:

    NAMESPACE           NAME                                           READY   STATUS             RESTARTS   AGE
    kube-system         calico-kube-controllers-746546ff7d-26c7f       1/1     Running            0          14h
    kube-system         calico-node-nd685                              1/1     Running            1          14h
    kube-system         coredns-7f9d8dc6c8-lnjhn                       0/1     CrashLoopBackOff   168        14h
    kube-system         dns-autoscaler-796f4ddddf-trwmb                1/1     Running            0          14h
    kube-system         kube-apiserver-ks-allinone                     1/1     Running            0          14h
    kube-system         kube-controller-manager-ks-allinone            1/1     Running            0          14h
    kube-system         kube-proxy-8gswl                               1/1     Running            0          14h
    kube-system         kube-scheduler-ks-allinone                     1/1     Running            0          14h
    kube-system         metrics-server-66444bf745-wqh26                1/1     Running            0          14h
    kube-system         nodelocaldns-fp82n                             1/1     Running            0          14h
    kube-system         openebs-localpv-provisioner-77fbd6858d-g827z   1/1     Running            0          14h
    kube-system         openebs-ndm-operator-59c75c96fc-pr75z          1/1     Running            1          14h
    kube-system         openebs-ndm-rkvjg                              1/1     Running            0          14h
    kube-system         tiller-deploy-79b566b5ff-m2ldv                 1/1     Running            0          14h
    kubesphere-system   ks-installer-7d9fb945c7-xpglx                  1/1     Running            0          14h
    kubesphere-system   minio-845b7bd867-d2fhh                         1/1     Running            0          14h
    kubesphere-system   openldap-0                                     1/1     Running            0          14h
    kubesphere-system   redis-6fd6c6d6f9-h74pk                         1/1     Running            0          14h

    Can you tell what the problem is from this?

      a5467021 CoreDNS isn't coming up. Use kubectl to check the logs and Events of the coredns-7f9d8dc6c8-lnjhn Pod.
      Also, is the firewall on the machine turned off?

        Feynman The Pod's log is as follows:

        $ kubectl logs --namespace=kube-system coredns-7f9d8dc6c8-lnjhn
        .:53
        2020-06-28T09:14:51.341Z [INFO] plugin/reload: Running configuration MD5 = b9d55fc86b311e1d1a0507440727efd2
        2020-06-28T09:14:51.341Z [INFO] CoreDNS-1.6.0
        2020-06-28T09:14:51.341Z [INFO] linux/amd64, go1.12.7, 0a218d3
        CoreDNS-1.6.0
        linux/amd64, go1.12.7, 0a218d3
        2020-06-28T09:14:51.342Z [FATAL] plugin/loop: Loop (127.0.0.1:54076 -> :53) detected for zone ".", see https://coredns.io/plugins/loop#troubleshooting. Query: "HINFO 4129614543035057629.6793774896785983640."

        The Events are as follows:

        kubectl describe pod --namespace=kube-system coredns-7f9d8dc6c8-lnjhn
        Name:                 coredns-7f9d8dc6c8-lnjhn
        Namespace:            kube-system
        Priority:             2000000000
        Priority Class Name:  system-cluster-critical
        Node:                 ks-allinone/192.168.*.*
        Start Time:           Sat, 27 Jun 2020 23:46:40 +0800
        Labels:               k8s-app=kube-dns
                              pod-template-hash=7f9d8dc6c8
        Annotations:          seccomp.security.alpha.kubernetes.io/pod: docker/default
        Status:               Running
        IP:                   10.233.99.1
        IPs:
          IP:           10.233.99.1
        Controlled By:  ReplicaSet/coredns-7f9d8dc6c8
        Containers:
          coredns:
            Container ID:  docker://41b895af3621b095ce64ea9ea97c3dbc0e02af6d154388165df3f28fba7fd1a7
            Image:         coredns/coredns:1.6.0
            Image ID:      docker-pullable://coredns/coredns@sha256:263d03f2b889a75a0b91e035c2a14d45d7c1559c53444c5f7abf3a76014b779d
            Ports:         53/UDP, 53/TCP, 9153/TCP
            Host Ports:    0/UDP, 0/TCP, 0/TCP
            Args:
              -conf
              /etc/coredns/Corefile
            State:          Waiting
              Reason:       CrashLoopBackOff
            Last State:     Terminated
              Reason:       Error
              Exit Code:    1
              Started:      Sun, 28 Jun 2020 17:14:51 +0800
              Finished:     Sun, 28 Jun 2020 17:14:51 +0800
            Ready:          False
            Restart Count:  202
            Limits:
              memory:  170Mi
            Requests:
              cpu:        100m
              memory:     70Mi
            Liveness:     http-get http://:8080/health delay=0s timeout=5s period=10s #success=1 #failure=10
            Readiness:    http-get http://:8181/ready delay=0s timeout=5s period=10s #success=1 #failure=10
            Environment:  <none>
            Mounts:
              /etc/coredns from config-volume (rw)
              /var/run/secrets/kubernetes.io/serviceaccount from coredns-token-zrjdp (ro)
        Conditions:
          Type              Status
          Initialized       True
          Ready             False
          ContainersReady   False
          PodScheduled      True
        Volumes:
          config-volume:
            Type:      ConfigMap (a volume populated by a ConfigMap)
            Name:      coredns
            Optional:  false
          coredns-token-zrjdp:
            Type:        Secret (a volume populated by a Secret)
            SecretName:  coredns-token-zrjdp
            Optional:    false
        QoS Class:       Burstable
        Node-Selectors:  beta.kubernetes.io/os=linux
        Tolerations:     CriticalAddonsOnly
                         node-role.kubernetes.io/master:NoSchedule
                         node.kubernetes.io/not-ready:NoExecute for 300s
                         node.kubernetes.io/unreachable:NoExecute for 300s
        Events:
          Type     Reason   Age                    From                  Message
          ----     ------   ----                   ----                  -------
          Warning  BackOff  112s (x4981 over 17h)  kubelet, ks-allinone  Back-off restarting failed container

        The firewall is enabled, but I checked before installing and the iptables rules were empty; after the install a large number of custom rules starting with cali- appeared. Should I try turning the firewall off?

          a5467021 Turn off the firewall and configure a domestic registry mirror; if you follow the documentation strictly, there shouldn't be any problem.
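
          For the mirror part, if it helps, the usual way is a registry-mirrors entry in /etc/docker/daemon.json followed by a Docker restart; the USTC endpoint below is only an example, use whichever mirror you prefer:

          {
            "registry-mirrors": ["https://docker.mirrors.ustc.edu.cn"]
          }

          systemctl restart docker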

            Feynman My Docker registry mirror is already set to ustc's, and I've just turned off the firewall as well, but reinstalling still hits the same problem.

            I switched to CentOS 7 and the installation completed. I guess that doesn't really count as solving the problem, though? 😂

              a5467021 It doesn't. When you have time, it's worth digging into why CoreDNS won't start on your Ubuntu machine.
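
              Judging from the FATAL plugin/loop line in your CoreDNS log, the usual cause (per the troubleshooting page it links, https://coredns.io/plugins/loop#troubleshooting) is that the node's /etc/resolv.conf points at a resolver on 127.0.0.1, so CoreDNS ends up forwarding queries back to itself. A rough way to check and work around it on that Ubuntu box, not verified on your setup:

              # does the node's resolv.conf point at a loopback resolver (dnsmasq etc.)?
              cat /etc/resolv.conf

              # the 'forward . /etc/resolv.conf' line in the Corefile is the usual culprit
              kubectl -n kube-system get configmap coredns -o yaml

              # workaround: forward to a real upstream (e.g. 8.8.8.8) instead of resolv.conf,
              # then restart the CoreDNS pods
              kubectl -n kube-system edit configmap coredns
              kubectl -n kube-system delete pod -l k8s-app=kube-dns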