It looks like the kubesphere-all-offline-v2.1.1.tar.gz offline package is missing the flannel images, so only the default Calico works?
Or is there a way to fix this? Which images does flannel need, so I can push them back manually?

2020-04-18 13:09:24,698 p=16068 u=xxxxxx | TASK [download : download_container | Download image if required ( 192.168.1.202:5000/coreos/flannel-cni:v0.3.0 )] *******************************************************************
2020-04-18 13:09:24,698 p=16068 u=xxxxxx | Saturday 18 April 2020 13:09:24 +0800 (0:00:00.115) 0:01:46.334 ********
2020-04-18 13:09:24,909 p=16068 u=xxxxxx | FAILED - RETRYING: download_container | Download image if required ( 192.168.1.202:5000/coreos/flannel-cni:v0.3.0 ) (4 retries left).
2020-04-18 13:09:24,978 p=16068 u=xxxxxx | FAILED - RETRYING: download_container | Download image if required ( 192.168.1.202:5000/coreos/flannel-cni:v0.3.0 ) (4 retries left).
2020-04-18 13:09:25,020 p=16068 u=xxxxxx | FAILED - RETRYING: download_container | Download image if required ( 192.168.1.202:5000/coreos/flannel-cni:v0.3.0 ) (4 retries left).
2020-04-18 13:09:31,195 p=16068 u=xxxxxx | FAILED - RETRYING: download_container | Download image if required ( 192.168.1.202:5000/coreos/flannel-cni:v0.3.0 ) (3 retries left).
2020-04-18 13:09:32,151 p=16068 u=xxxxxx | FAILED - RETRYING: download_container | Download image if required ( 192.168.1.202:5000/coreos/flannel-cni:v0.3.0 ) (3 retries left).
2020-04-18 13:09:33,086 p=16068 u=xxxxxx | FAILED - RETRYING: download_container | Download image if required ( 192.168.1.202:5000/coreos/flannel-cni:v0.3.0 ) (3 retries left).
2020-04-18 13:09:37,372 p=16068 u=xxxxxx | FAILED - RETRYING: download_container | Download image if required ( 192.168.1.202:5000/coreos/flannel-cni:v0.3.0 ) (2 retries left).
2020-04-18 13:09:39,327 p=16068 u=xxxxxx | FAILED - RETRYING: download_container | Download image if required ( 192.168.1.202:5000/coreos/flannel-cni:v0.3.0 ) (2 retries left).
2020-04-18 13:09:41,250 p=16068 u=xxxxxx | FAILED - RETRYING: download_container | Download image if required ( 192.168.1.202:5000/coreos/flannel-cni:v0.3.0 ) (2 retries left).
2020-04-18 13:09:43,558 p=16068 u=xxxxxx | FAILED - RETRYING: download_container | Download image if required ( 192.168.1.202:5000/coreos/flannel-cni:v0.3.0 ) (1 retries left).
2020-04-18 13:09:46,493 p=16068 u=xxxxxx | FAILED - RETRYING: download_container | Download image if required ( 192.168.1.202:5000/coreos/flannel-cni:v0.3.0 ) (1 retries left).
2020-04-18 13:09:49,409 p=16068 u=xxxxxx | FAILED - RETRYING: download_container | Download image if required ( 192.168.1.202:5000/coreos/flannel-cni:v0.3.0 ) (1 retries left).
2020-04-18 13:09:49,759 p=16068 u=xxxxxx | fatal: [k8s-m-202 -> k8s-m-202]: FAILED! => {
"attempts": 4,
"changed": true,
"cmd": [
"/usr/bin/docker",
"pull",
"192.168.1.202:5000/coreos/flannel-cni:v0.3.0"
],
"delta": "0:00:00.052470",
"end": "2020-04-18 13:09:49.745324",
"rc": 1,
"start": "2020-04-18 13:09:49.692854"
}

STDERR:

Error response from daemon: manifest for 192.168.1.202:5000/coreos/flannel-cni:v0.3.0 not found

MSG:

non-zero return code

    ks-5937 The flannel images needed are kubesphere/flannel:v0.11.0 and kubesphere/flannel-cni:v0.3.0.
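
    A minimal sketch of how the two images above could be pushed back into the offline registry, assuming they can be pulled (or loaded with docker load from a saved tarball) on a machine that can reach 192.168.1.202:5000; the coreos/ target path is inferred from the failing pull in the log, so adjust it if your installer requests a different path:

    ```bash
    # Pull (or docker load) the flannel images on a machine that has access to them
    docker pull kubesphere/flannel:v0.11.0
    docker pull kubesphere/flannel-cni:v0.3.0

    # Re-tag them to the paths the installer tries to pull from the private registry
    docker tag kubesphere/flannel:v0.11.0     192.168.1.202:5000/coreos/flannel:v0.11.0
    docker tag kubesphere/flannel-cni:v0.3.0  192.168.1.202:5000/coreos/flannel-cni:v0.3.0

    # Push them into the offline registry, then re-run the installer
    docker push 192.168.1.202:5000/coreos/flannel:v0.11.0
    docker push 192.168.1.202:5000/coreos/flannel-cni:v0.3.0
    ```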

    4 days later

    After installing, the log never showed the success screen. The log content is below; the services cannot be accessed. What could be the reason? Uninstalling and reinstalling gives the same result.

    `2020-04-23T03:29:05Z INFO : shell-operator v1.0.0-beta.5
    2020-04-23T03:29:05Z INFO : HTTP SERVER Listening on 0.0.0.0:9115
    2020-04-23T03:29:05Z INFO : Use temporary dir: /tmp/shell-operator
    2020-04-23T03:29:05Z INFO : Initialize hooks manager …
    2020-04-23T03:29:05Z INFO : Search and load hooks …
    2020-04-23T03:29:05Z INFO : Load hook config from '/hooks/kubesphere/installRunner.py'
    2020-04-23T03:29:06Z INFO : Initializing schedule manager …
    2020-04-23T03:29:06Z INFO : KUBE Init Kubernetes client
    2020-04-23T03:29:06Z INFO : KUBE-INIT Kubernetes client is configured successfully
    2020-04-23T03:29:06Z INFO : MAIN: run main loop
    2020-04-23T03:29:06Z INFO : MAIN: add onStartup tasks
    2020-04-23T03:29:06Z INFO : Running schedule manager …
    2020-04-23T03:29:06Z INFO : QUEUE add all HookRun@OnStartup
    2020-04-23T03:29:06Z INFO : MSTOR Create new metric shell_operator_live_ticks
    2020-04-23T03:29:06Z INFO : MSTOR Create new metric shell_operator_tasks_queue_length
    2020-04-23T03:29:06Z INFO : GVR for kind 'ConfigMap' is /v1, Resource=configmaps
    2020-04-23T03:29:06Z INFO : EVENT Kube event '2d28f5bb-f5a5-4134-b784-46f90a4011d1'
    2020-04-23T03:29:06Z INFO : QUEUE add TASK_HOOK_RUN@KUBE_EVENTS kubesphere/installRunner.py
    2020-04-23T03:29:09Z INFO : TASK_RUN HookRun@KUBE_EVENTS kubesphere/installRunner.py
    2020-04-23T03:29:09Z INFO : Running hook 'kubesphere/installRunner.py' binding 'KUBE_EVENTS' …
    [WARNING]: No inventory was parsed, only implicit localhost is available
    [WARNING]: provided hosts list is empty, only localhost is available. Note that
    the implicit localhost does not match 'all'

    PLAY [localhost] ***************************************************************

    TASK [download : include_tasks] ************************************************
    skipping: [localhost]

    TASK [download : Download items] ***********************************************
    skipping: [localhost]

    TASK [download : Sync container] ***********************************************
    skipping: [localhost]

    TASK [kubesphere-defaults : Configure defaults] ********************************
    ok: [localhost] => {
    "msg": "Check roles/kubesphere-defaults/defaults/main.yml"
    }

    TASK [preinstall : check k8s version] ******************************************
    changed: [localhost]

    TASK [preinstall : init k8s version] *******************************************
    ok: [localhost]

    TASK [preinstall : Stop if kuernetes version is nonsupport] ********************
    ok: [localhost] => {
    "changed": false,
    "msg": "All assertions passed"
    }

    TASK [preinstall : check helm status] ******************************************
    changed: [localhost]

    TASK [preinstall : Stop if Helm is not available] ******************************
    ok: [localhost] => {
    "changed": false,
    "msg": "All assertions passed"
    }

    TASK [preinstall : check storage class] ****************************************
    changed: [localhost]

    TASK [preinstall : Stop if StorageClass was not found] *************************
    ok: [localhost] => {
    "changed": false,
    "msg": "All assertions passed"
    }

    TASK [preinstall : check default storage class] ********************************
    changed: [localhost]

    TASK [preinstall : Stop if defaultStorageClass was not found] ******************
    skipping: [localhost]

    PLAY RECAP *********************************************************************
    localhost : ok=9 changed=4 unreachable=0 failed=0 skipped=4 rescued=0 ignored=0

    [WARNING]: No inventory was parsed, only implicit localhost is available
    [WARNING]: provided hosts list is empty, only localhost is available. Note that
    the implicit localhost does not match ‘all’

    PLAY [localhost] ***************************************************************

    TASK [download : include_tasks] ************************************************
    skipping: [localhost]

    TASK [download : Download items] ***********************************************
    skipping: [localhost]

    TASK [download : Sync container] ***********************************************
    skipping: [localhost]

    TASK [kubesphere-defaults : Configure defaults] ********************************
    ok: [localhost] => {
    "msg": "Check roles/kubesphere-defaults/defaults/main.yml"
    }

    TASK [metrics-server : Metrics-Server | Checking old installation files] *******
    ok: [localhost]

    TASK [metrics-server : Metrics-Server | deleting old prometheus-operator] ******
    skipping: [localhost]

    TASK [metrics-server : Metrics-Server | deleting old metrics-server files] *****
    [DEPRECATION WARNING]: evaluating {'failed': False, u'stat': {u'exists':
    False}, u'changed': False} as a bare variable, this behaviour will go away and
    you might need to add |bool to the expression in the future. Also see
    CONDITIONAL_BARE_VARS configuration toggle.. This feature will be removed in
    version 2.12. Deprecation warnings can be disabled by setting
    deprecation_warnings=False in ansible.cfg.
    ok: [localhost] => (item=metrics-server)

    TASK [metrics-server : Metrics-Server | Getting metrics-server installation files] ***
    changed: [localhost]

    TASK [metrics-server : Metrics-Server | Creating manifests] ********************
    changed: [localhost] => (item={u'type': u'config', u'name': u'values', u'file': u'values.yaml'})

    TASK [metrics-server : Metrics-Server | Check Metrics-Server] ******************
    changed: [localhost]

    TASK [metrics-server : Metrics-Server | Installing metrics-server] *************
    changed: [localhost]

    TASK [metrics-server : Metrics-Server | Installing metrics-server retry] *******
    skipping: [localhost]

    TASK [metrics-server : Metrics-Server | Waitting for v1beta1.metrics.k8s.io ready] ***
    FAILED - RETRYING: Metrics-Server | Waitting for v1beta1.metrics.k8s.io ready (60 retries left).
    changed: [localhost]

    PLAY RECAP *********************************************************************
    localhost : ok=8 changed=5 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0

    [WARNING]: No inventory was parsed, only implicit localhost is available
    [WARNING]: provided hosts list is empty, only localhost is available. Note that
    the implicit localhost does not match ‘all’

    PLAY [localhost] ***************************************************************

    TASK [download : include_tasks] ************************************************
    skipping: [localhost]

    TASK [download : Download items] ***********************************************
    skipping: [localhost]

    TASK [download : Sync container] ***********************************************
    skipping: [localhost]

    TASK [kubesphere-defaults : Configure defaults] ********************************
    ok: [localhost] => {
    "msg": "Check roles/kubesphere-defaults/defaults/main.yml"
    }

    TASK [common : Kubesphere | Check kube-node-lease namespace] *******************
    changed: [localhost]

    TASK [common : KubeSphere | Get system namespaces] *****************************
    ok: [localhost]

    TASK [common : set_fact] *******************************************************
    ok: [localhost]

    TASK [common : debug] **********************************************************
    ok: [localhost] => {
    "msg": [
    "kubesphere-system",
    "kubesphere-controls-system",
    "kubesphere-monitoring-system",
    "kube-node-lease",
    "kubesphere-logging-system",
    "openpitrix-system",
    "kubesphere-devops-system",
    "istio-system",
    "kubesphere-alerting-system",
    "istio-system"
    ]
    }

    TASK [common : KubeSphere | Create kubesphere namespace] ***********************
    changed: [localhost] => (item=kubesphere-system)
    changed: [localhost] => (item=kubesphere-controls-system)
    changed: [localhost] => (item=kubesphere-monitoring-system)
    changed: [localhost] => (item=kube-node-lease)
    changed: [localhost] => (item=kubesphere-logging-system)
    changed: [localhost] => (item=openpitrix-system)
    changed: [localhost] => (item=kubesphere-devops-system)
    changed: [localhost] => (item=istio-system)
    changed: [localhost] => (item=kubesphere-alerting-system)
    changed: [localhost] => (item=istio-system)
    changed: [localhost] => (item=istio-system)

    TASK [common : KubeSphere | Labeling system-workspace] *************************
    changed: [localhost] => (item=default)
    changed: [localhost] => (item=kube-public)
    changed: [localhost] => (item=kube-system)
    changed: [localhost] => (item=kubesphere-system)
    changed: [localhost] => (item=kubesphere-controls-system)
    changed: [localhost] => (item=kubesphere-monitoring-system)
    changed: [localhost] => (item=kube-node-lease)
    changed: [localhost] => (item=kubesphere-logging-system)
    changed: [localhost] => (item=openpitrix-system)
    changed: [localhost] => (item=kubesphere-devops-system)
    changed: [localhost] => (item=istio-system)
    changed: [localhost] => (item=kubesphere-alerting-system)
    changed: [localhost] => (item=istio-system)
    changed: [localhost] => (item=istio-system)

    TASK [common : KubeSphere | Create ImagePullSecrets] ***************************
    changed: [localhost] => (item=default)
    changed: [localhost] => (item=kube-public)
    changed: [localhost] => (item=kube-system)
    changed: [localhost] => (item=kubesphere-system)
    changed: [localhost] => (item=kubesphere-controls-system)
    changed: [localhost] => (item=kubesphere-monitoring-system)
    changed: [localhost] => (item=kube-node-lease)
    changed: [localhost] => (item=kubesphere-logging-system)
    changed: [localhost] => (item=openpitrix-system)
    changed: [localhost] => (item=kubesphere-devops-system)
    changed: [localhost] => (item=istio-system)
    changed: [localhost] => (item=kubesphere-alerting-system)
    changed: [localhost] => (item=istio-system)

    TASK [common : KubeSphere | Getting kubernetes master num] *********************
    changed: [localhost]

    TASK [common : KubeSphere | Setting master num] ********************************
    ok: [localhost]

    TASK [common : Kubesphere | Getting common component installation files] *******
    changed: [localhost] => (item=common)
    changed: [localhost] => (item=ks-crds)

    TASK [common : KubeSphere | Create KubeSphere crds] ****************************
    changed: [localhost]

    TASK [common : Kubesphere | Checking openpitrix common component] **************
    changed: [localhost]

    TASK [common : include_tasks] **************************************************
    skipping: [localhost] => (item={u'ks': u'mysql-pvc', u'op': u'openpitrix-db'})
    skipping: [localhost] => (item={u'ks': u'etcd-pvc', u'op': u'openpitrix-etcd'})

    TASK [common : Getting PersistentVolumeName (mysql)] ***************************
    skipping: [localhost]

    TASK [common : Getting PersistentVolumeSize (mysql)] ***************************
    skipping: [localhost]

    TASK [common : Setting PersistentVolumeName (mysql)] ***************************
    skipping: [localhost]

    TASK [common : Setting PersistentVolumeSize (mysql)] ***************************
    skipping: [localhost]

    TASK [common : Getting PersistentVolumeName (etcd)] ****************************
    skipping: [localhost]

    TASK [common : Getting PersistentVolumeSize (etcd)] ****************************
    skipping: [localhost]

    TASK [common : Setting PersistentVolumeName (etcd)] ****************************
    skipping: [localhost]

    TASK [common : Setting PersistentVolumeSize (etcd)] ****************************
    skipping: [localhost]

    TASK [common : Kubesphere | Check mysql PersistentVolumeClaim] *****************
    fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/kubectl get pvc -n kubesphere-system mysql-pvc -o jsonpath='{.status.capacity.storage}'\n", "delta": "0:00:00.525579", "end": "2020-04-23 03:30:10.804687", "msg": "non-zero return code", "rc": 1, "start": "2020-04-23 03:30:10.279108", "stderr": "Error from server (NotFound): persistentvolumeclaims \"mysql-pvc\" not found", "stderr_lines": ["Error from server (NotFound): persistentvolumeclaims \"mysql-pvc\" not found"], "stdout": "", "stdout_lines": []}
    …ignoring

    TASK [common : Kubesphere | Setting mysql db pv size] **************************
    skipping: [localhost]

    TASK [common : Kubesphere | Check redis PersistentVolumeClaim] *****************
    fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/kubectl get pvc -n kubesphere-system redis-pvc -o jsonpath='{.status.capacity.storage}'\n", "delta": "0:00:00.517019", "end": "2020-04-23 03:30:11.472357", "msg": "non-zero return code", "rc": 1, "start": "2020-04-23 03:30:10.955338", "stderr": "Error from server (NotFound): persistentvolumeclaims \"redis-pvc\" not found", "stderr_lines": ["Error from server (NotFound): persistentvolumeclaims \"redis-pvc\" not found"], "stdout": "", "stdout_lines": []}
    …ignoring

    TASK [common : Kubesphere | Setting redis db pv size] **************************
    skipping: [localhost]

    TASK [common : Kubesphere | Check minio PersistentVolumeClaim] *****************
    fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/kubectl get pvc -n kubesphere-system minio -o jsonpath='{.status.capacity.storage}'\n", "delta": "0:00:00.523614", "end": "2020-04-23 03:30:12.126804", "msg": "non-zero return code", "rc": 1, "start": "2020-04-23 03:30:11.603190", "stderr": "Error from server (NotFound): persistentvolumeclaims \"minio\" not found", "stderr_lines": ["Error from server (NotFound): persistentvolumeclaims \"minio\" not found"], "stdout": "", "stdout_lines": []}
    …ignoring

    TASK [common : Kubesphere | Setting minio pv size] *****************************
    skipping: [localhost]

    TASK [common : Kubesphere | Check openldap PersistentVolumeClaim] **************
    fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/kubectl get pvc -n kubesphere-system openldap-pvc-openldap-0 -o jsonpath='{.status.capacity.storage}'\n", "delta": "0:00:00.529545", "end": "2020-04-23 03:30:12.921917", "msg": "non-zero return code", "rc": 1, "start": "2020-04-23 03:30:12.392372", "stderr": "Error from server (NotFound): persistentvolumeclaims \"openldap-pvc-openldap-0\" not found", "stderr_lines": ["Error from server (NotFound): persistentvolumeclaims \"openldap-pvc-openldap-0\" not found"], "stdout": "", "stdout_lines": []}
    …ignoring

    TASK [common : Kubesphere | Setting openldap pv size] **************************
    skipping: [localhost]

    TASK [common : Kubesphere | Check etcd db PersistentVolumeClaim] ***************
    fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/kubectl get pvc -n kubesphere-system etcd-pvc -o jsonpath='{.status.capacity.storage}'\n", "delta": "0:00:00.517999", "end": "2020-04-23 03:30:13.570670", "msg": "non-zero return code", "rc": 1, "start": "2020-04-23 03:30:13.052671", "stderr": "Error from server (NotFound): persistentvolumeclaims \"etcd-pvc\" not found", "stderr_lines": ["Error from server (NotFound): persistentvolumeclaims \"etcd-pvc\" not found"], "stdout": "", "stdout_lines": []}
    …ignoring

    TASK [common : Kubesphere | Setting etcd pv size] ******************************
    skipping: [localhost]

    TASK [common : Kubesphere | Check redis ha PersistentVolumeClaim] **************
    fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/kubectl get pvc -n kubesphere-system data-redis-ha-server-0 -o jsonpath='{.status.capacity.storage}'\n", "delta": "0:00:00.516003", "end": "2020-04-23 03:30:14.290529", "msg": "non-zero return code", "rc": 1, "start": "2020-04-23 03:30:13.774526", "stderr": "Error from server (NotFound): persistentvolumeclaims \"data-redis-ha-server-0\" not found", "stderr_lines": ["Error from server (NotFound): persistentvolumeclaims \"data-redis-ha-server-0\" not found"], "stdout": "", "stdout_lines": []}
    …ignoring

    TASK [common : Kubesphere | Setting redis ha pv size] **************************
    skipping: [localhost]

    TASK [common : Kubesphere | Creating common component manifests] ***************
    changed: [localhost] => (item={u'path': u'etcd', u'file': u'etcd.yaml'})
    changed: [localhost] => (item={u'name': u'mysql', u'file': u'mysql.yaml'})
    changed: [localhost] => (item={u'path': u'redis', u'file': u'redis.yaml'})

    TASK [common : Kubesphere | Creating mysql sercet] *****************************
    changed: [localhost]

    TASK [common : Kubesphere | Deploying etcd and mysql] **************************
    skipping: [localhost] => (item=etcd.yaml)
    skipping: [localhost] => (item=mysql.yaml)

    TASK [common : Kubesphere | Getting minio installation files] ******************
    skipping: [localhost] => (item=minio-ha)

    TASK [common : Kubesphere | Creating manifests] ********************************
    skipping: [localhost] => (item={u'name': u'custom-values-minio', u'file': u'custom-values-minio.yaml'})

    TASK [common : Kubesphere | Check minio] ***************************************
    skipping: [localhost]

    TASK [common : Kubesphere | Deploy minio] **************************************
    skipping: [localhost]

    TASK [common : debug] **********************************************************
    skipping: [localhost]

    TASK [common : fail] ***********************************************************
    skipping: [localhost]

    TASK [common : Kubesphere | create minio config directory] *********************
    skipping: [localhost]

    TASK [common : Kubesphere | Creating common component manifests] ***************
    skipping: [localhost] => (item={u'path': u'/root/.config/rclone', u'file': u'rclone.conf'})

    TASK [common : include_tasks] **************************************************
    skipping: [localhost] => (item=helm)
    skipping: [localhost] => (item=vmbased)

    TASK [common : Kubesphere | Check ha-redis] ************************************
    skipping: [localhost]

    TASK [common : Kubesphere | Getting redis installation files] ******************
    skipping: [localhost] => (item=redis-ha)

    TASK [common : Kubesphere | Creating manifests] ********************************
    skipping: [localhost] => (item={u'name': u'custom-values-redis', u'file': u'custom-values-redis.yaml'})

    TASK [common : Kubesphere | Check old redis status] ****************************
    skipping: [localhost]

    TASK [common : Kubesphere | Delete and backup old redis svc] *******************
    skipping: [localhost]

    TASK [common : Kubesphere | Deploying redis] ***********************************
    skipping: [localhost]

    TASK [common : Kubesphere | Getting redis PodIp] *******************************
    skipping: [localhost]

    TASK [common : Kubesphere | Creating redis migration script] *******************
    skipping: [localhost] => (item={u'path': u'/etc/kubesphere', u'file': u'redisMigrate.py'})

    TASK [common : Kubesphere | Check redis-ha status] *****************************
    skipping: [localhost]

    TASK [common : ks-logging | Migrating redis data] ******************************
    skipping: [localhost]

    TASK [common : Kubesphere | Disable old redis] *********************************
    skipping: [localhost]

    TASK [common : Kubesphere | Deploying redis] ***********************************
    skipping: [localhost] => (item=redis.yaml)

    TASK [common : Kubesphere | Getting openldap installation files] ***************
    skipping: [localhost] => (item=openldap-ha)

    TASK [common : Kubesphere | Creating manifests] ********************************
    skipping: [localhost] => (item={u'name': u'custom-values-openldap', u'file': u'custom-values-openldap.yaml'})

    TASK [common : Kubesphere | Check old openldap status] *************************
    skipping: [localhost]

    TASK [common : KubeSphere | Shutdown ks-account] *******************************
    skipping: [localhost]

    TASK [common : Kubesphere | Delete and backup old openldap svc] ****************
    skipping: [localhost]

    TASK [common : Kubesphere | Check openldap] ************************************
    skipping: [localhost]

    TASK [common : Kubesphere | Deploy openldap] ***********************************
    skipping: [localhost]

    TASK [common : Kubesphere | Load old openldap data] ****************************
    skipping: [localhost]

    TASK [common : Kubesphere | Check openldap-ha status] **************************
    skipping: [localhost]

    TASK [common : Kubesphere | Get openldap-ha pod list] **************************
    skipping: [localhost]

    TASK [common : Kubesphere | Get old openldap data] *****************************
    skipping: [localhost]

    TASK [common : Kubesphere | Migrating openldap data] ***************************
    skipping: [localhost]

    TASK [common : Kubesphere | Disable old openldap] ******************************
    skipping: [localhost]

    TASK [common : Kubesphere | Restart openldap] **********************************
    skipping: [localhost]

    TASK [common : KubeSphere | Restarting ks-account] *****************************
    skipping: [localhost]

    TASK [common : Kubesphere | Check ha-redis] ************************************
    changed: [localhost]

    TASK [common : Kubesphere | Getting redis installation files] ******************
    skipping: [localhost] => (item=redis-ha)

    TASK [common : Kubesphere | Creating manifests] ********************************
    skipping: [localhost] => (item={u'name': u'custom-values-redis', u'file': u'custom-values-redis.yaml'})

    TASK [common : Kubesphere | Check old redis status] ****************************
    skipping: [localhost]

    TASK [common : Kubesphere | Delete and backup old redis svc] *******************
    skipping: [localhost]

    TASK [common : Kubesphere | Deploying redis] ***********************************
    skipping: [localhost]

    TASK [common : Kubesphere | Getting redis PodIp] *******************************
    skipping: [localhost]

    TASK [common : Kubesphere | Creating redis migration script] *******************
    skipping: [localhost] => (item={u'path': u'/etc/kubesphere', u'file': u'redisMigrate.py'})

    TASK [common : Kubesphere | Check redis-ha status] *****************************
    skipping: [localhost]

    TASK [common : ks-logging | Migrating redis data] ******************************
    skipping: [localhost]

    TASK [common : Kubesphere | Disable old redis] *********************************
    skipping: [localhost]

    TASK [common : Kubesphere | Deploying redis] ***********************************
    changed: [localhost] => (item=redis.yaml)

    TASK [common : Kubesphere | Getting openldap installation files] ***************
    changed: [localhost] => (item=openldap-ha)

    TASK [common : Kubesphere | Creating manifests] ********************************
    changed: [localhost] => (item={u'name': u'custom-values-openldap', u'file': u'custom-values-openldap.yaml'})

    TASK [common : Kubesphere | Check old openldap status] *************************
    changed: [localhost]

    TASK [common : KubeSphere | Shutdown ks-account] *******************************
    skipping: [localhost]

    TASK [common : Kubesphere | Delete and backup old openldap svc] ****************
    skipping: [localhost]

    TASK [common : Kubesphere | Check openldap] ************************************
    changed: [localhost]

    TASK [common : Kubesphere | Deploy openldap] ***********************************
    changed: [localhost]

    TASK [common : Kubesphere | Load old openldap data] ****************************
    skipping: [localhost]

    TASK [common : Kubesphere | Check openldap-ha status] **************************
    skipping: [localhost]

    TASK [common : Kubesphere | Get openldap-ha pod list] **************************
    skipping: [localhost]

    TASK [common : Kubesphere | Get old openldap data] *****************************
    skipping: [localhost]

    TASK [common : Kubesphere | Migrating openldap data] ***************************
    skipping: [localhost]

    TASK [common : Kubesphere | Disable old openldap] ******************************
    skipping: [localhost]

    TASK [common : Kubesphere | Restart openldap] **********************************
    skipping: [localhost]

    TASK [common : KubeSphere | Restarting ks-account] *****************************
    skipping: [localhost]

    TASK [common : Kubesphere | Getting minio installation files] ******************
    changed: [localhost] => (item=minio-ha)

    TASK [common : Kubesphere | Creating manifests] ********************************
    changed: [localhost] => (item={u'name': u'custom-values-minio', u'file': u'custom-values-minio.yaml'})

    TASK [common : Kubesphere | Check minio] ***************************************
    changed: [localhost]

    TASK [common : Kubesphere | Deploy minio] **************************************
    `
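
    The run above stops at the minio deployment. When the installer hangs like this it is worth checking whether the kubesphere-system workloads actually come up and whether their PersistentVolumeClaims bind; a quick sketch using plain kubectl (nothing installer-specific is assumed):

    ```bash
    # Are the kubesphere-system pods (minio, openldap, redis, ...) Running or stuck Pending?
    kubectl get pods -n kubesphere-system

    # Are their PVCs Bound? A missing default StorageClass or a failing
    # provisioner usually shows up here first.
    kubectl get pvc -n kubesphere-system
    kubectl get sc
    ```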

    After a minimal install of CentOS 7.5, followed by an AllInOne installation, I hit this error:
    `2020-04-23 21:23:09,749 p=24810 u=root | Thursday 23 April 2020 21:23:09 +0800 (0:00:00.228) 0:00:16.730 ********
    2020-04-23 21:23:09,787 p=24810 u=root | skipping: [ks-allinone]
    2020-04-23 21:23:09,861 p=24810 u=root | TASK [Create repo.d] ******************************************************************************
    2020-04-23 21:23:09,861 p=24810 u=root | Thursday 23 April 2020 21:23:09 +0800 (0:00:00.111) 0:00:16.842 ********
    2020-04-23 21:23:09,977 p=24810 u=root | fatal: [ks-allinone]: FAILED! => {
    "changed": true,
    "rc": 1
    }

    STDERR:

    mkdir: cannot create directory '/etc/yum.repos.d': File exists

    MSG:

    non-zero return code

    2020-04-23 21:23:09,978 p=24810 u=root | …ignoring
    2020-04-23 21:23:10,096 p=24810 u=root | TASK [Creat client.repo] **************************************************************************
    `
    Is this caused by a conflict with the yum I installed myself? How do I fix it?

    Finally I also see this error:
    `2020-04-23 22:21:53,336 p=4091 u=root | fatal: [ks-allinone -> ks-allinone]: FAILED! => {
    "attempts": 4,
    "changed": false,
    "dest": "/root/releases/kubeadm-v1.16.7-amd64",
    "state": "absent",
    "url": "http://192.168.31.141:5080/k8s_repo/iso/v1.16.7/kubeadm"
    }

    MSG:

    Request failed: <urlopen error [Errno 111] Connection refused>
    2020-04-23 22:21:53,338 p=4091 u=root | NO MORE HOSTS LEFT ********************************************************************************
    2020-04-23 22:21:53,339 p=4091 u=root | PLAY RECAP ****************************************************************************************
    2020-04-23 22:21:53,340 p=4091 u=root | ks-allinone : ok=122 changed=26 unreachable=0 failed=1
    2020-04-23 22:21:53,340 p=4091 u=root | localhost : ok=1 changed=0 unreachable=0 failed=0
    2020-04-23 22:21:53,341 p=4091 u=root | Thursday 23 April 2020 22:21:53 +0800 (0:00:23.580) 0:03:27.193 ********
    2020-04-23 22:21:53,341 p=4091 u=root | ===============================================================================
    2020-04-23 22:21:53,344 p=4091 u=root | download : download_file | Download item ————————————————– 23.58s
    `
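
    The [Errno 111] refusal above means nothing is answering on 192.168.31.141:5080, i.e. the local repo/file server that the offline installer sets up is not running or not reachable. A minimal check (the port and URL are taken from the log above):

    ```bash
    # On the machine that runs the installer: is anything listening on the repo port?
    ss -lntp | grep 5080

    # From the failing node: can the kubeadm file actually be fetched?
    curl -I http://192.168.31.141:5080/k8s_repo/iso/v1.16.7/kubeadm
    ```

    If the port is not listening, check (for example with docker ps) whether the containers the offline installer started to serve the repo are still up, and bring them back before retrying.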

    This is so hard. I have tried both the online and the offline install and neither works; install, uninstall, uninstall, install. The firewall is turned off as well.
    TASK [kubernetes/preinstall : Install packages requirements] ***********************************************************************************************************************************************************
    Sunday 26 April 2020 20:17:45 +0800 (0:00:00.161) 0:00:40.420 **********
    FAILED - RETRYING: Install packages requirements (4 retries left).
    ok: [master]
    FAILED - RETRYING: Install packages requirements (4 retries left).
    FAILED - RETRYING: Install packages requirements (3 retries left).
    FAILED - RETRYING: Install packages requirements (3 retries left).
    FAILED - RETRYING: Install packages requirements (2 retries left).
    FAILED - RETRYING: Install packages requirements (2 retries left).
    FAILED - RETRYING: Install packages requirements (1 retries left).
    FAILED - RETRYING: Install packages requirements (1 retries left).
    fatal: [node2]: FAILED! => {
    "attempts": 4,
    "changed": false,
    "rc": 1,
    "results": []
    }

    MSG:

    http://192.168.4.99:5080/yum_repo/iso/repodata/repomd.xml: [Errno 14] curl#7 - "Failed connect to 192.168.4.99:5080; Connection refused"
    Trying other mirror.
    [the same error and "Trying other mirror." pair repeats ten times]

    One of the configured repositories failed (centos7),
    and yum doesn't have enough cached data to continue. At this point the only
    safe thing yum can do is fail. There are a few ways to work "fix" this:

     1. Contact the upstream for the repository and get them to fix the problem.
    
     2. Reconfigure the baseurl/etc. for the repository, to point to a working
        upstream. This is most often useful if you are using a newer
        distribution release than is supported by the repository (and the
        packages for the previous distribution release still work).
    
     3. Run the command with the repository temporarily disabled
            yum --disablerepo=bash ...
    
     4. Disable the repository permanently, so yum won't use it by default. Yum
        will then just ignore the repository until you permanently enable it
        again or use --enablerepo for temporary usage:
    
            yum-config-manager --disable bash
        or
            subscription-manager repos --disable=bash
    
     5. Configure the failing repository to be skipped, if it is unavailable.
        Note that yum will try to contact the repo. when it runs most commands,
        so will have to try and fail each time (and thus. yum will be be much
        slower). If it is a very temporary problem though, this is often a nice
        compromise:
    
            yum-config-manager --save --setopt=bash.skip_if_unavailable=true

    failure: repodata/repomd.xml from bash: [Errno 256] No more mirrors to try.
    http://192.168.4.99:5080/yum_repo/iso/repodata/repomd.xml: [Errno 14] curl#7 - "Failed connect to 192.168.4.99:5080; Connection refused"
    [the same error line repeats ten times]

    fatal: [node1]: FAILED! => {
    "attempts": 4,
    "changed": false,
    "rc": 1,
    "results": []
    }

    MSG:

    http://192.168.4.99:5080/yum_repo/iso/repodata/repomd.xml: [Errno 14] curl#7 - "Failed connect to 192.168.4.99:5080; Connection refused"
    Trying other mirror.
    [the same error and "Trying other mirror." pair repeats ten times]

    One of the configured repositories failed (centos7),
    and yum doesn't have enough cached data to continue. At this point the only
    safe thing yum can do is fail. There are a few ways to work "fix" this:

     1. Contact the upstream for the repository and get them to fix the problem.
    
     2. Reconfigure the baseurl/etc. for the repository, to point to a working
        upstream. This is most often useful if you are using a newer
        distribution release than is supported by the repository (and the
        packages for the previous distribution release still work).
    
     3. Run the command with the repository temporarily disabled
            yum --disablerepo=bash ...
    
     4. Disable the repository permanently, so yum won't use it by default. Yum
        will then just ignore the repository until you permanently enable it
        again or use --enablerepo for temporary usage:
    
            yum-config-manager --disable bash
        or
            subscription-manager repos --disable=bash
    
     5. Configure the failing repository to be skipped, if it is unavailable.
        Note that yum will try to contact the repo. when it runs most commands,
        so will have to try and fail each time (and thus. yum will be be much
        slower). If it is a very temporary problem though, this is often a nice
        compromise:
    
            yum-config-manager --save --setopt=bash.skip_if_unavailable=true

    failure: repodata/repomd.xml from bash: [Errno 256] No more mirrors to try.
    http://192.168.4.99:5080/yum_repo/iso/repodata/repomd.xml: [Errno 14] curl#7 - "Failed connect to 192.168.4.99:5080; Connection refused"
    [the same error line repeats ten times]

    NO MORE HOSTS LEFT ***************************************

      wcsemail I got it installed successfully. The offline installation docs are missing two prerequisites: turn off the firewall, and install two components beforehand:
      yum install sshpass
      yum install docker-ce docker-ce-cli -y
      Please note: if it still fails after this, it is because the installation doc was not followed strictly and unnecessary extras were added.

        ussiwfm Yes, strictly following the documentation will generally succeed. It is best to turn off the firewall (if unsure how, search for how to disable the firewall on your OS, e.g. CentOS or Ubuntu). Both sshpass and docker are installed by the script itself, so there is no need to install them in advance.

        ussiwfm Thanks for the reminder. The prerequisites at the top of this thread do state that the firewall should be turned off, and the offline package installs both of those components automatically.

        Failed. I have been failing for a whole week now, failing in every possible way.
        TASK [kubernetes/preinstall : Update common_required_pkgs with ipvsadm when kube_proxy_mode is ipvs] *************************************************************************************************************************************
        Monday 27 April 2020 13:05:31 +0800 (0:00:00.114) 0:00:42.395 **********
        ok: [node1]
        ok: [master]
        ok: [node2]

        TASK [kubernetes/preinstall : Install packages requirements] *****************************************************************************************************************************************************************************
        Monday 27 April 2020 13:05:31 +0800 (0:00:00.185) 0:00:42.580 **********
        FAILED - RETRYING: Install packages requirements (4 retries left).
        FAILED - RETRYING: Install packages requirements (4 retries left).
        ok: [master]
        FAILED - RETRYING: Install packages requirements (3 retries left).
        FAILED - RETRYING: Install packages requirements (3 retries left).
        FAILED - RETRYING: Install packages requirements (2 retries left).
        FAILED - RETRYING: Install packages requirements (1 retries left).
        FAILED - RETRYING: Install packages requirements (2 retries left).
        fatal: [node2]: FAILED! => {
        "attempts": 4,
        "changed": false,
        "rc": 1,
        "results": []
        }

        MSG:

        http://192.168.4.99:5080/yum_repo/iso/repodata/repomd.xml: [Errno 14] curl#7 - "Failed connect to 192.168.4.99:5080; Connection refused"
        Trying other mirror.
        [the same error and "Trying other mirror." pair repeats ten times]

        One of the configured repositories failed (centos7),
        and yum doesn't have enough cached data to continue. At this point the only
        safe thing yum can do is fail. There are a few ways to work "fix" this:

         1. Contact the upstream for the repository and get them to fix the problem.
        
         2. Reconfigure the baseurl/etc. for the repository, to point to a working
            upstream. This is most often useful if you are using a newer
            distribution release than is supported by the repository (and the
            packages for the previous distribution release still work).
        
         3. Run the command with the repository temporarily disabled
                yum --disablerepo=bash ...
        
         4. Disable the repository permanently, so yum won't use it by default. Yum
            will then just ignore the repository until you permanently enable it
            again or use --enablerepo for temporary usage:
        
                yum-config-manager --disable bash
            or
                subscription-manager repos --disable=bash
        
         5. Configure the failing repository to be skipped, if it is unavailable.
            Note that yum will try to contact the repo. when it runs most commands,
            so will have to try and fail each time (and thus. yum will be be much
            slower). If it is a very temporary problem though, this is often a nice
            compromise:
        
                yum-config-manager --save --setopt=bash.skip_if_unavailable=true

        failure: repodata/repomd.xml from bash: [Errno 256] No more mirrors to try.
        http://192.168.4.99:5080/yum_repo/iso/repodata/repomd.xml: [Errno 14] curl#7 - "Failed connect to 192.168.4.99:5080; Connection refused"
        [the same error line repeats ten times]

        FAILED - RETRYING: Install packages requirements (1 retries left).
        fatal: [node1]: FAILED! => {
        "attempts": 4,
        "changed": false,
        "rc": 1,
        "results": []
        }

        MSG:

        http://192.168.4.99:5080/yum_repo/iso/repodata/repomd.xml: [Errno 14] curl#7 - "Failed connect to 192.168.4.99:5080; Connection refused"
        Trying other mirror.
        [the same error and "Trying other mirror." pair repeats ten times]

          wcsemail This still looks like a problem with your yum repo. Did you not follow https://kubesphere.com.cn/forum/d/929-yum ?

          I did follow it. I have tried every method; the machines have already been wiped and reinstalled three times.

          Experts, I just cannot get this working.
          TASK [download : download_container | Download image if required ( 192.168.4.99:5000/mirrorgooglecontainers/pause-amd64:3.1 )] ***********************************************************************************************************
          Tuesday 28 April 2020 11:18:13 +0800 (0:00:00.153) 0:01:50.854 *********
          FAILED - RETRYING: download_container | Download image if required ( 192.168.4.99:5000/mirrorgooglecontainers/pause-amd64:3.1 ) (4 retries left).
          FAILED - RETRYING: download_container | Download image if required ( 192.168.4.99:5000/mirrorgooglecontainers/pause-amd64:3.1 ) (4 retries left).
          FAILED - RETRYING: download_container | Download image if required ( 192.168.4.99:5000/mirrorgooglecontainers/pause-amd64:3.1 ) (4 retries left).
          FAILED - RETRYING: download_container | Download image if required ( 192.168.4.99:5000/mirrorgooglecontainers/pause-amd64:3.1 ) (3 retries left).
          FAILED - RETRYING: download_container | Download image if required ( 192.168.4.99:5000/mirrorgooglecontainers/pause-amd64:3.1 ) (3 retries left).
          FAILED - RETRYING: download_container | Download image if required ( 192.168.4.99:5000/mirrorgooglecontainers/pause-amd64:3.1 ) (2 retries left).
          FAILED - RETRYING: download_container | Download image if required ( 192.168.4.99:5000/mirrorgooglecontainers/pause-amd64:3.1 ) (3 retries left).
          FAILED - RETRYING: download_container | Download image if required ( 192.168.4.99:5000/mirrorgooglecontainers/pause-amd64:3.1 ) (2 retries left).
          FAILED - RETRYING: download_container | Download image if required ( 192.168.4.99:5000/mirrorgooglecontainers/pause-amd64:3.1 ) (1 retries left).
          FAILED - RETRYING: download_container | Download image if required ( 192.168.4.99:5000/mirrorgooglecontainers/pause-amd64:3.1 ) (1 retries left).
          fatal: [master -> master]: FAILED! => {
          "attempts": 4,
          "changed": true,
          "cmd": [
          "/usr/bin/docker",
          "pull",
          "192.168.4.99:5000/mirrorgooglecontainers/pause-amd64:3.1"
          ],
          "delta": "0:00:00.068527",
          "end": "2020-04-28 11:18:26.854654",
          "rc": 1,
          "start": "2020-04-28 11:18:26.786127"
          }

          STDERR:

          Error response from daemon: Get http://192.168.4.99:5000/v2/: dial tcp 192.168.4.99:5000: connect: connection refused

          MSG:

          non-zero return code

          FAILED - RETRYING: download_container | Download image if required ( 192.168.4.99:5000/mirrorgooglecontainers/pause-amd64:3.1 ) (2 retries left).
          fatal: [node1 -> master]: FAILED! => {
          "attempts": 4,
          "changed": true,
          "cmd": [
          "/usr/bin/docker",
          "pull",
          "192.168.4.99:5000/mirrorgooglecontainers/pause-amd64:3.1"
          ],
          "delta": "0:00:00.075878",
          "end": "2020-04-28 11:18:30.915696",
          "rc": 1,
          "start": "2020-04-28 11:18:30.839818"
          }

          STDERR:

          Error response from daemon: Get http://192.168.4.99:5000/v2/: dial tcp 192.168.4.99:5000: connect: connection refused

          MSG:

          non-zero return code

          FAILED - RETRYING: download_container | Downl
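
          The "connection refused" on 192.168.4.99:5000 means the local Docker registry that the offline installer relies on is not (or no longer) running. A minimal check from the node that is supposed to host it (the address and port come from the log above):

          ```bash
          # Is the registry container still running?
          docker ps | grep registry

          # Is anything listening on the registry port, and does the v2 API answer?
          ss -lntp | grep 5000
          curl http://192.168.4.99:5000/v2/_catalog
          ```

          If the container has exited, docker ps -a and docker logs on it usually show why; in this thread the root cause turned out to be a root filesystem that was too small (see the next post).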

            Finally solved it: with a minimal CentOS install, the default 50 GB for the / partition is definitely not enough; increasing it to 100 GB made the installation work.
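
            A quick way to check this before reinstalling (a minimal sketch; the 100 GB figure is simply what worked here):

            ```bash
            # Free space on the root filesystem and under Docker's data directory,
            # where the offline registry, pulled images and component data end up
            # (adjust the path if Docker's data-root is elsewhere).
            df -h / /var/lib/docker
            ```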