`TASK [bootstrap-os : Ensure bash_completion.d folder exists] ****************************************************************************************************************************************
Monday 07 September 2020 21:53:32 +0800 (0:00:00.087) 0:01:03.428 ******
ok: [m]
ok: [n2]
ok: [n1]

PLAY [k8s-cluster:etcd] *****************************************************************************************************************************************************************************

TASK [chrony : apt cache] ***************************************************************************************************************************************************************************
Monday 07 September 2020 21:53:33 +0800 (0:00:00.433) 0:01:03.861 ******
skipping: [n1]
skipping: [n2]
skipping: [m]

TASK [chrony : apt remove ntp] **********************************************************************************************************************************************************************
Monday 07 September 2020 21:53:33 +0800 (0:00:00.098) 0:01:03.959 ******
fatal: [m]: FAILED! => {
"changed": false
}

MSG:

Unsupported parameters for (yum) module: warn Supported parameters include: allow_downgrade, autoremove, bugfix, conf_file, disable_excludes, disable_gpg_check, disable_plugin, disablerepo, download_only, enable_plugin, enablerepo, exclude, install_repoquery, installroot, list, name, releasever, security, skip_broken, state, update_cache, update_only, use_backend, validate_certs

...ignoring
fatal: [n2]: FAILED! => {
"changed": false
}

MSG:

Unsupported parameters for (yum) module: warn Supported parameters include: allow_downgrade, autoremove, bugfix, conf_file, disable_excludes, disable_gpg_check, disable_plugin, disablerepo, download_only, enable_plugin, enablerepo, exclude, install_repoquery, installroot, list, name, releasever, security, skip_broken, state, update_cache, update_only, use_backend, validate_certs

...ignoring
fatal: [n1]: FAILED! => {
"changed": false
}

MSG:

Unsupported parameters for (yum) module: warn Supported parameters include: allow_downgrade, autoremove, bugfix, conf_file, disable_excludes, disable_gpg_check, disable_plugin, disablerepo, download_only, enable_plugin, enablerepo, exclude, install_repoquery, installroot, list, name, releasever, security, skip_broken, state, update_cache, update_only, use_backend, validate_certs

...ignoring

TASK [chrony : apt remove ntp] **********************************************************************************************************************************************************************
Monday 07 September 2020 21:53:33 +0800 (0:00:00.520) 0:01:04.480 ******
skipping: [n1]
skipping: [n2]
skipping: [m]

TASK [chrony : install chrony] **********************************************************************************************************************************************************************
Monday 07 September 2020 21:53:33 +0800 (0:00:00.091) 0:01:04.572 ******
`
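The failures on the "apt remove ntp" task are marked ...ignoring, so they do not stop the play; the error message shows the task passes a `warn` option that the yum module in this Ansible version does not accept. A minimal way to double-check this on the install node (the chrony role path is an assumption based on the kubesphere-all-v2.1.1 paths that appear later in this log):

`
# Which Ansible version is the installer bundle actually using?
ansible --version

# Locate the task that passes the unsupported "warn" option to the yum module
# (role path assumed from the bundle layout shown elsewhere in this log)
grep -rn "warn" /root/kubesphere-all-v2.1.1/k8s/roles/chrony/tasks/
`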

What is going on here? This came up while installing in Multi-Node mode.

`TASK [etcd : Install etcd launch script] ************************************************************************************************************************************************************
Monday 07 September 2020 21:57:41 +0800 (0:00:00.412) 0:05:12.625 ******
changed: [m]

TASK [etcd : Install etcd-events launch script] *****************************************************************************************************************************************************
Monday 07 September 2020 21:57:42 +0800 (0:00:00.240) 0:05:12.866 ******
skipping: [m]

TASK [etcd : include_tasks] *************************************************************************************************************************************************************************
Monday 07 September 2020 21:57:42 +0800 (0:00:00.036) 0:05:12.902 ******
included: /root/kubesphere-all-v2.1.1/k8s/roles/etcd/tasks/configure.yml for m

TASK [etcd : Configure | Check if etcd cluster is healthy] ******************************************************************************************************************************************
Monday 07 September 2020 21:57:42 +0800 (0:00:00.107) 0:05:13.010 ******
fatal: [m]: FAILED! => {
"changed": false,
"cmd": "/usr/local/bin/etcdctl --endpoints=https://192.168.0.155:2379 cluster-health | grep -q 'cluster is healthy'",
"delta": "0:00:00.011008",
"end": "2020-09-07 21:57:42.497988",
"rc": 1,
"start": "2020-09-07 21:57:42.486980"
}

STDERR:

Error: client: etcd cluster is unavailable or misconfigured; error #0: dial tcp 192.168.0.155:2379: getsockopt: connection refused

error #0: dial tcp 192.168.0.155:2379: getsockopt: connection refused

MSG:

non-zero return code

...ignoring

TASK [etcd : Configure | Check if etcd-events cluster is healthy] ***********************************************************************************************************************************
Monday 07 September 2020 21:57:42 +0800 (0:00:00.255) 0:05:13.265 ******
skipping: [m]

TASK [etcd : include_tasks] *************************************************************************************************************************************************************************
Monday 07 September 2020 21:57:42 +0800 (0:00:00.040) 0:05:13.305 ******
included: /root/kubesphere-all-v2.1.1/k8s/roles/etcd/tasks/refresh_config.yml for m

TASK [etcd : Refresh config | Create etcd config file] **********************************************************************************************************************************************
Monday 07 September 2020 21:57:42 +0800 (0:00:00.070) 0:05:13.375 ******
changed: [m]

`
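The "connection refused" on the first health check usually just means etcd is not running yet at that point in the play, which is why the task is ...ignoring and the play continues. If you want to repeat the same check by hand after the play finishes, a minimal sketch re-using the command from the log (depending on the setup you may also need to export ETCDCTL_CA_FILE, ETCDCTL_CERT_FILE and ETCDCTL_KEY_FILE first):

`
# Re-run the installer's health check manually on the master node
systemctl status etcd
/usr/local/bin/etcdctl --endpoints=https://192.168.0.155:2379 cluster-health
`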


    TAO This is also error output from installing in Multi-Node mode. How do I solve it, guys?

    `Monday 07 September 2020 22:01:35 +0800 (0:00:00.124) 0:00:02.590 ******
    ok: [m]
    FAILED - RETRYING: Installing epel-release (YUM) (5 retries left).
    FAILED - RETRYING: Installing epel-release (YUM) (5 retries left).
    FAILED - RETRYING: Installing epel-release (YUM) (4 retries left).
    FAILED - RETRYING: Installing epel-release (YUM) (4 retries left).
    FAILED - RETRYING: Installing epel-release (YUM) (3 retries left).
    FAILED - RETRYING: Installing epel-release (YUM) (3 retries left).
    FAILED - RETRYING: Installing epel-release (YUM) (2 retries left).
    FAILED - RETRYING: Installing epel-release (YUM) (2 retries left).
    FAILED - RETRYING: Installing epel-release (YUM) (1 retries left).
    FAILED - RETRYING: Installing epel-release (YUM) (1 retries left).
    fatal: [n2]: FAILED! => {
    "attempts": 5,
    "changed": false,
    "rc": 1,
    "results": [
    "Resolving Dependencies\n--> Running transaction check\n---> Package epel-release.noarch 0:7-12 will be installed\n--> Finished Dependency Resolution\n\nDependencies Resolved\n\n================================================================================\n Package Arch Version Repository Size\n================================================================================\nInstalling:\n epel-release noarch 7-12 epel 15 k\n\nTransaction Summary\n================================================================================\nInstall 1 Package\n\nTotal size: 15 k\nInstalled size: 24 k\nDownloading packages:\nRetrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7\n"
    ]
    }

    MSG:

    warning: /var/cache/yum/x86_64/7/epel/packages/epel-release-7-12.noarch.rpm: Header V3 RSA/SHA256 Signature, key ID 352c64e5: NOKEY

    GPG key retrieval failed: [Errno 14] curl#37 - "Couldn't open file /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7"

    fatal: [n1]: FAILED! => {
    "attempts": 5,
    "changed": false,
    "rc": 1,
    "results": [
    "Resolving Dependencies\n--> Running transaction check\n---> Package epel-release.noarch 0:7-12 will be installed\n--> Finished Dependency Resolution\n\nDependencies Resolved\n\n================================================================================\n Package Arch Version Repository Size\n================================================================================\nInstalling:\n epel-release noarch 7-12 epel 15 k\n\nTransaction Summary\n================================================================================\nInstall 1 Package\n\nTotal size: 15 k\nInstalled size: 24 k\nDownloading packages:\nRetrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7\n"
    ]
    }

    MSG:

    warning: /var/cache/yum/x86_64/7/epel/packages/epel-release-7-12.noarch.rpm: Header V3 RSA/SHA256 Signature, key ID 352c64e5: NOKEY

    GPG key retrieval failed: [Errno 14] curl#37 - "Couldn't open file /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7"

    TASK [prepare/nodes : KubeSphere | Installing JQ (APT)] ***************************************************************************************************************************************************************************
    Monday 07 September 2020 22:02:12 +0800 (0:00:36.255) 0:00:38.845 ******
    skipping: [m]

    TASK [prepare/nodes : KubeSphere| Installing JQ (YUM)] ****************************************************************************************************************************************************************************
    Monday 07 September 2020 22:02:12 +0800 (0:00:00.035) 0:00:38.881 ******
    ok: [m]

    `
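    The epel-release failure on n1/n2 is a GPG problem: yum downloads the package but cannot find /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 to verify its signature. One way to unblock it on each failing node, as a sketch (the key URL is the standard EPEL 7 key, not something the installer does for you):

    `
    # Import the EPEL 7 signing key so the signature check can succeed
    rpm --import https://dl.fedoraproject.org/pub/epel/RPM-GPG-KEY-EPEL-7

    # Or, if the node has no internet access, copy the key file from a machine that
    # has it into /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 and re-run the installer
    `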
    This is also error output from installing in Multi-Node mode.

    `TASK [ks-installer : kubesphere-installer is ok] **********************************************************************************************************************************************************************************
    Monday 07 September 2020 22:02:42 +0800 (0:00:00.463) 0:01:09.184 ******
    FAILED - RETRYING: kubesphere-installer is ok (30 retries left).
    changed: [m]

    PLAY RECAP ************************************************************************************************************************************************************************************************************************
    m : ok=136 changed=19 unreachable=0 failed=0
    n1 : ok=2 changed=1 unreachable=0 failed=1
    n2 : ok=2 changed=1 unreachable=0 failed=1

    Monday 07 September 2020 22:03:12 +0800 (0:00:30.294) 0:01:39.478 ******

    prepare/nodes : Installing epel-release (YUM) ------------------------------------------------------------------- 36.26s
    ks-installer : kubesphere-installer is ok ----------------------------------------------------------------------- 30.29s
    prepare/nodes : pip | Installing pip ----------------------------------------------------------------------------- 5.83s
    download : Download items ---------------------------------------------------------------------------------------- 1.27s
    download : Download items ---------------------------------------------------------------------------------------- 0.76s
    download : Sync container ---------------------------------------------------------------------------------------- 0.70s
    download : Sync container ---------------------------------------------------------------------------------------- 0.66s
    download : Sync container ---------------------------------------------------------------------------------------- 0.56s
    download : Download items ---------------------------------------------------------------------------------------- 0.56s
    download : Download items ---------------------------------------------------------------------------------------- 0.56s
    download : Sync container ---------------------------------------------------------------------------------------- 0.55s
    plugins/LocalVolume : openebs | Creating manifests --------------------------------------------------------------- 0.47s
    ks-installer : ks-installer | Creating manifests ----------------------------------------------------------------- 0.47s
    ks-installer : ks-installer | Creating KubeSphere Installer ------------------------------------------------------ 0.46s
    prepare/nodes : GlusterFS | Installing glusterfs-client (YUM) ---------------------------------------------------- 0.42s
    prepare/nodes : Copy get-pip.py ---------------------------------------------------------------------------------- 0.33s
    modify resolv.conf ----------------------------------------------------------------------------------------------- 0.33s
    plugins/LocalVolume : openebs | Creating openebs ----------------------------------------------------------------- 0.32s
    ks-installer : ks-installer | Creating system namespaces --------------------------------------------------------- 0.29s
    prepare/nodes : Ceph RBD | Installing ceph-common (YUM) ---------------------------------------------------------- 0.27s
    failed!


    please refer to https://kubesphere.io/docs/v2.1/zh-CN/faq/faq-install/


    [root@m scripts]#
    `

    This is the result after installing in Multi-Node mode. How do I fix this, guys?!

    `changed: [master]
    fatal: [node1]: FAILED! => {
    "changed": true,
    "cmd": "sudo python /tmp/pip/get-pip.py",
    "delta": "0:01:04.280038",
    "end": "2020-09-08 01:12:04.637797",
    "rc": 2,
    "start": "2020-09-08 01:11:00.357759"
    }

    STDOUT:

    Collecting pip
    Downloading pip-20.2.2-py2.py3-none-any.whl (1.5 MB)

    STDERR:

    DEPRECATION: Python 2.7 reached the end of its life on January 1st, 2020. Please upgrade your Python as Python 2.7 is no longer maintained. A future version of pip will drop support for Python 2.7. More details about Python 2 support in pip, can be found at https://pip.pypa.io/en/latest/development/release-process/#python-2-support
    ERROR: Exception:
    Traceback (most recent call last):
    File "/tmp/tmpBudflM/pip.zip/pip/_internal/cli/base_command.py", line 186, in main
    status = self.run(options, args)
    File "/tmp/tmpBudflM/pip.zip/pip/_internal/commands/install.py", line 331, in run
    resolver.resolve(requirement_set)
    File "/tmp/tmpBudflM/pip.zip/pip/_internal/legacy_resolve.py", line 177, in resolve
    discovered_reqs.extend(self._resolve_one(requirement_set, req))
    File "/tmp/tmpBudflM/pip.zip/pip/_internal/legacy_resolve.py", line 333, in _resolve_one
    abstract_dist = self._get_abstract_dist_for(req_to_install)
    File "/tmp/tmpBudflM/pip.zip/pip/_internal/legacy_resolve.py", line 282, in _get_abstract_dist_for
    abstract_dist = self.preparer.prepare_linked_requirement(req)
    File "/tmp/tmpBudflM/pip.zip/pip/_internal/operations/prepare.py", line 482, in prepare_linked_requirement
    hashes=hashes,
    File "/tmp/tmpBudflM/pip.zip/pip/_internal/operations/prepare.py", line 287, in unpack_url
    hashes=hashes,
    File "/tmp/tmpBudflM/pip.zip/pip/_internal/operations/prepare.py", line 159, in unpack_http_url
    link, downloader, temp_dir.path, hashes
    File "/tmp/tmpBudflM/pip.zip/pip/_internal/operations/prepare.py", line 303, in download_http_url
    for chunk in download.chunks:
    File "/tmp/tmpBudflM/pip.zip/pip/_internal/utils/ui.py", line 160, in iter
    for x in it:
    File "/tmp/tmpBudflM/pip.zip/pip/_internal/network/utils.py", line 39, in response_chunks
    decode_content=False,
    File "/tmp/tmpBudflM/pip.zip/pip/_vendor/urllib3/response.py", line 564, in stream
    data = self.read(amt=amt, decode_content=decode_content)
    File "/tmp/tmpBudflM/pip.zip/pip/_vendor/urllib3/response.py", line 529, in read
    raise IncompleteRead(self._fp_bytes_read, self.length_remaining)
    File "/usr/lib64/python2.7/contextlib.py", line 35, in __exit__
    self.gen.throw(type, value, traceback)
    File "/tmp/tmpBudflM/pip.zip/pip/_vendor/urllib3/response.py", line 439, in _error_catcher
    raise ReadTimeoutError(self._pool, None, "Read timed out.")
    ReadTimeoutError: HTTPSConnectionPool(host='files.pythonhosted.org', port=443): Read timed out.

    MSG:

    non-zero return code

    `
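    The node1 failure above is a plain network timeout: get-pip.py downloads pip from files.pythonhosted.org and the read times out. A hedged workaround, assuming node1 can reach a domestic PyPI mirror (the mirror URL below is only an example and is not part of the installer):

    `
    # get-pip.py forwards extra pip options, so point it at a faster index
    sudo python /tmp/pip/get-pip.py --index-url https://pypi.tuna.tsinghua.edu.cn/simple

    # Alternative: install pip from the OS repositories instead (requires EPEL)
    # yum install -y python2-pip
    `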

      `TASK [ks-installer : ks-installer | Creating KubeSphere Installer] *****************************************************************************************************************************
      Tuesday 08 September 2020 01:33:25 +0800 (0:00:00.428) 0:23:09.677 *****
      changed: [master] => (item=ks-installer-config.yaml)
      changed: [master] => (item=ks-installer-deployment.yaml)

      TASK [ks-installer : kubesphere-installer is ok] ***********************************************************************************************************************************************
      Tuesday 08 September 2020 01:33:25 +0800 (0:00:00.443) 0:23:10.120 *****
      FAILED - RETRYING: kubesphere-installer is ok (30 retries left).
      FAILED - RETRYING: kubesphere-installer is ok (29 retries left).
      FAILED - RETRYING: kubesphere-installer is ok (28 retries left).
      FAILED - RETRYING: kubesphere-installer is ok (27 retries left).
      FAILED - RETRYING: kubesphere-installer is ok (26 retries left).
      FAILED - RETRYING: kubesphere-installer is ok (25 retries left).
      FAILED - RETRYING: kubesphere-installer is ok (24 retries left).
      FAILED - RETRYING: kubesphere-installer is ok (23 retries left).
      FAILED - RETRYING: kubesphere-installer is ok (22 retries left).
      FAILED - RETRYING: kubesphere-installer is ok (21 retries left).
      FAILED - RETRYING: kubesphere-installer is ok (20 retries left).
      FAILED - RETRYING: kubesphere-installer is ok (19 retries left).
      FAILED - RETRYING: kubesphere-installer is ok (18 retries left).
      FAILED - RETRYING: kubesphere-installer is ok (17 retries left).
      FAILED - RETRYING: kubesphere-installer is ok (16 retries left).
      FAILED - RETRYING: kubesphere-installer is ok (15 retries left).
      FAILED - RETRYING: kubesphere-installer is ok (14 retries left).
      FAILED - RETRYING: kubesphere-installer is ok (13 retries left).
      FAILED - RETRYING: kubesphere-installer is ok (12 retries left).
      changed: [master]

      PLAY RECAP *************************************************************************************************************************************************************************************
      master : ok=146 changed=41 unreachable=0 failed=0
      node1 : ok=13 changed=11 unreachable=0 failed=1
      node2 : ok=125 changed=25 unreachable=0 failed=0

      Tuesday 08 September 2020 01:42:58 +0800 (0:09:32.738) 0:32:42.859 *****

      ks-installer : kubesphere-installer is ok -------------------------------------------------------------------- 572.74s
      download : container_download | Download image if required ( haproxy:2.0.4 ) --------------------------------- 269.72s
      download : container_download | Download image if required ( kubesphere/ks-apiserver:v2.1.1 ) ---------------- 199.36s
      prepare/nodes : pip | Installing pip -------------------------------------------------------------------------- 186.16s
      download : container_download | Download image if required ( kubesphere/ks-console:v2.1.1 ) ------------------ 160.56s
      download : container_download | Download image if required ( mysql:8.0.11 ) ---------------------------------- 141.94s
      download : container_download | Download image if required ( kubesphere/ks-installer:v2.1.1 ) ---------------- 125.41s
      download : container_download | Download image if required ( osixia/openldap:1.3.0 ) -------------------------- 63.75s
      download : container_download | Download image if required ( redis:5.0.5-alpine ) ----------------------------- 48.68s
      download : container_download | Download image if required ( kubesphere/ks-account:v2.1.1 ) ------------------- 35.14s
      download : container_download | Download image if required ( kubesphere/ks-apigateway:v2.1.1 ) ---------------- 27.92s
      download : container_download | Download image if required ( alpine:3.10.4 ) ---------------------------------- 24.66s
      prepare/nodes : Ceph RBD | Installing ceph-common (YUM) ------------------------------------------------------- 24.28s
      download : container_download | Download image if required ( nginx:1.14-alpine ) ------------------------------ 20.53s
      prepare/nodes : KubeSphere| Installing JQ (YUM) ---------------------------------------------------------------- 12.01s
      prepare/nodes : NFS-Client | Installing nfs-utils (YUM) --------------------------------------------------------- 6.37s
      prepare/nodes : GlusterFS | Installing glusterfs-client (YUM) --------------------------------------------------- 4.36s
      prepare/nodes : Installing epel-release (YUM) ------------------------------------------------------------------- 1.99s
      download : Download items --------------------------------------------------------------------------------------- 1.23s
      prepare/nodes : Copy get-pip.py --------------------------------------------------------------------------------- 0.91s
      failed!


      please refer to https://kubesphere.io/docs/v2.1/zh-CN/faq/faq-install/


      [root@master scripts]#

      `


      The install ran for about 3-4 hours and then ended like this. What is going on, guys?

      kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
      After checking the logs:
      `TASK [common : Setting PersistentVolumeName (etcd)] ****************************
      skipping: [localhost]

      TASK [common : Setting PersistentVolumeSize (etcd)] ****************************
      skipping: [localhost]

      TASK [common : Kubesphere | Check mysql PersistentVolumeClaim] *****************
      fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/kubectl get pvc -n kubesphere-system mysql-pvc -o jsonpath='{.status.capacity.storage}'\n", "delta": "0:00:00.269034", "end": "2020-09-07 17:43:19.182103", "msg": "non-zero return code", "rc": 1, "start": "2020-09-07 17:43:18.913069", "stderr": "Error from server (NotFound): persistentvolumeclaims \"mysql-pvc\" not found", "stderr_lines": ["Error from server (NotFound): persistentvolumeclaims \"mysql-pvc\" not found"], "stdout": "", "stdout_lines": []}
      ...ignoring

      TASK [common : Kubesphere | Setting mysql db pv size] **************************
      skipping: [localhost]

      TASK [common : Kubesphere | Check redis PersistentVolumeClaim] *****************
      fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/kubectl get pvc -n kubesphere-system redis-pvc -o jsonpath='{.status.capacity.storage}'\n", "delta": "0:00:00.264322", "end": "2020-09-07 17:43:19.580897", "msg": "non-zero return code", "rc": 1, "start": "2020-09-07 17:43:19.316575", "stderr": "Error from server (NotFound): persistentvolumeclaims \"redis-pvc\" not found", "stderr_lines": ["Error from server (NotFound): persistentvolumeclaims \"redis-pvc\" not found"], "stdout": "", "stdout_lines": []}
      ...ignoring

      TASK [common : Kubesphere | Setting redis db pv size] **************************
      skipping: [localhost]

      TASK [common : Kubesphere | Check minio PersistentVolumeClaim] *****************
      fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/kubectl get pvc -n kubesphere-system minio -o jsonpath='{.status.capacity.storage}'\n", "delta": "0:00:00.267661", "end": "2020-09-07 17:43:19.981956", "msg": "non-zero return code", "rc": 1, "start": "2020-09-07 17:43:19.714295", "stderr": "Error from server (NotFound): persistentvolumeclaims \"minio\" not found", "stderr_lines": ["Error from server (NotFound): persistentvolumeclaims \"minio\" not found"], "stdout": "", "stdout_lines": []}
      ...ignoring

      TASK [common : Kubesphere | Setting minio pv size] *****************************
      skipping: [localhost]

      TASK [common : Kubesphere | Check openldap PersistentVolumeClaim] **************
      fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/kubectl get pvc -n kubesphere-system openldap-pvc-openldap-0 -o jsonpath='{.status.capacity.storage}'\n", "delta": "0:00:00.269183", "end": "2020-09-07 17:43:20.384573", "msg": "non-zero return code", "rc": 1, "start": "2020-09-07 17:43:20.115390", "stderr": "Error from server (NotFound): persistentvolumeclaims \"openldap-pvc-openldap-0\" not found", "stderr_lines": ["Error from server (NotFound): persistentvolumeclaims \"openldap-pvc-openldap-0\" not found"], "stdout": "", "stdout_lines": []}
      ...ignoring

      TASK [common : Kubesphere | Setting openldap pv size] **************************
      skipping: [localhost]

      TASK [common : Kubesphere | Check etcd db PersistentVolumeClaim] ***************
      fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/kubectl get pvc -n kubesphere-system etcd-pvc -o jsonpath='{.status.capacity.storage}'\n", "delta": "0:00:00.266951", "end": "2020-09-07 17:43:20.786619", "msg": "non-zero return code", "rc": 1, "start": "2020-09-07 17:43:20.519668", "stderr": "Error from server (NotFound): persistentvolumeclaims \"etcd-pvc\" not found", "stderr_lines": ["Error from server (NotFound): persistentvolumeclaims \"etcd-pvc\" not found"], "stdout": "", "stdout_lines": []}
      ...ignoring

      TASK [common : Kubesphere | Setting etcd pv size] ******************************
      skipping: [localhost]

      TASK [common : Kubesphere | Check redis ha PersistentVolumeClaim] **************
      fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/kubectl get pvc -n kubesphere-system data-redis-ha-server-0 -o jsonpath='{.status.capacity.storage}'\n", "delta": "0:00:00.266343", "end": "2020-09-07 17:43:21.193700", "msg": "non-zero return code", "rc": 1, "start": "2020-09-07 17:43:20.927357", "stderr": "Error from server (NotFound): persistentvolumeclaims \"data-redis-ha-server-0\" not found", "stderr_lines": ["Error from server (NotFound): persistentvolumeclaims \"data-redis-ha-server-0\" not found"], "stdout": "", "stdout_lines": []}
      ...ignoring

      TASK [common : Kubesphere | Setting redis ha pv size] **************************
      `



      In the end, http://192.168.0.166:30880/ does not respond either.

      kubectl get pods --all-namespaces -o wide
      
      NAMESPACE                      NAME                                           READY   STATUS    RESTARTS   AGE     IP              NODE     NOMINATED NODE   READINESS GATES
      kube-system                    calico-kube-controllers-6f8f7fd457-qt6cj       1/1     Running   0          7h59m   192.168.0.166   master   <none>           <none>
      kube-system                    calico-node-lgqcb                              1/1     Running   0          7h59m   192.168.0.155   node1    <none>           <none>
      kube-system                    calico-node-qqjz8                              1/1     Running   0          7h59m   192.168.0.144   node2    <none>           <none>
      kube-system                    calico-node-tvsfh                              1/1     Running   1          7h59m   192.168.0.166   master   <none>           <none>
      kube-system                    coredns-7f9d8dc6c8-k6dkg                       1/1     Running   0          7h59m   10.233.70.1     master   <none>           <none>
      kube-system                    dns-autoscaler-796f4ddddf-2f8mf                1/1     Running   0          7h59m   10.233.70.2     master   <none>           <none>
      kube-system                    kube-apiserver-master                          1/1     Running   0          8h      192.168.0.166   master   <none>           <none>
      kube-system                    kube-controller-manager-master                 1/1     Running   1          8h      192.168.0.166   master   <none>           <none>
      kube-system                    kube-proxy-299jd                               1/1     Running   0          8h      192.168.0.144   node2    <none>           <none>
      kube-system                    kube-proxy-5qjxd                               1/1     Running   0          8h      192.168.0.166   master   <none>           <none>
      kube-system                    kube-proxy-7r9p2                               1/1     Running   0          8h      192.168.0.155   node1    <none>           <none>
      kube-system                    kube-scheduler-master                          1/1     Running   1          8h      192.168.0.166   master   <none>           <none>
      kube-system                    nodelocaldns-h577n                             1/1     Running   0          7h59m   192.168.0.166   master   <none>           <none>
      kube-system                    nodelocaldns-qnt28                             1/1     Running   0          7h59m   192.168.0.155   node1    <none>           <none>
      kube-system                    nodelocaldns-sj8tp                             1/1     Running   0          7h59m   192.168.0.144   node2    <none>           <none>
      kube-system                    openebs-localpv-provisioner-77fbd6858d-gpczv   1/1     Running   2          7h36m   10.233.90.2     node1    <none>           <none>
      kube-system                    openebs-ndm-ms2ps                              1/1     Running   0          7h36m   192.168.0.155   node1    <none>           <none>
      kube-system                    openebs-ndm-n54r5                              1/1     Running   0          7h23m   192.168.0.144   node2    <none>           <none>
      kube-system                    openebs-ndm-operator-59c75c96fc-4rhwv          1/1     Running   1          7h36m   10.233.90.3     node1    <none>           <none>
      kube-system                    tiller-deploy-79b566b5ff-8glxm                 1/1     Running   0          7h59m   10.233.90.1     node1    <none>           <none>
      kubesphere-controls-system     default-http-backend-5d464dd566-426kq          1/1     Running   0          7h25m   10.233.90.5     node1    <none>           <none>
      kubesphere-controls-system     kubectl-admin-6c664db975-fbzh8                 1/1     Running   0          7h25m   10.233.90.8     node1    <none>           <none>
      kubesphere-monitoring-system   kube-state-metrics-566cdbcb48-jn9ll            4/4     Running   0          7h25m   10.233.90.7     node1    <none>           <none>
      kubesphere-monitoring-system   node-exporter-4gxcq                            2/2     Running   0          7h25m   192.168.0.144   node2    <none>           <none>
      kubesphere-monitoring-system   node-exporter-f7b2m                            2/2     Running   0          7h25m   192.168.0.166   master   <none>           <none>
      kubesphere-monitoring-system   node-exporter-hn9g9                            2/2     Running   0          7h25m   192.168.0.155   node1    <none>           <none>
      kubesphere-monitoring-system   prometheus-k8s-0                               3/3     Running   1          7h25m   10.233.90.14    node1    <none>           <none>
      kubesphere-monitoring-system   prometheus-k8s-1                               3/3     Running   1          7h25m   10.233.90.13    node1    <none>           <none>
      kubesphere-monitoring-system   prometheus-k8s-system-0                        3/3     Running   1          7h25m   10.233.90.17    node1    <none>           <none>
      kubesphere-monitoring-system   prometheus-k8s-system-1                        3/3     Running   1          7h25m   10.233.90.18    node1    <none>           <none>
      kubesphere-monitoring-system   prometheus-operator-6b97679cfd-kxtm7           1/1     Running   0          7h25m   10.233.90.6     node1    <none>           <none>
      kubesphere-system              ks-account-596657f8c6-c97dv                    1/1     Running   0          7h25m   10.233.70.9     master   <none>           <none>
      kubesphere-system              ks-apigateway-78bcdc8ffc-9nrnn                 1/1     Running   0          7h25m   10.233.70.7     master   <none>           <none>
      kubesphere-system              ks-apiserver-5b548d7c5c-v45b2                  1/1     Running   0          7h25m   10.233.70.8     master   <none>           <none>
      kubesphere-system              ks-console-78bcf96dbf-zqq59                    1/1     Running   0          7h25m   10.233.70.11    master   <none>           <none>
      kubesphere-system              ks-controller-manager-696986f8d9-sndh2         1/1     Running   1          7h25m   10.233.70.10    master   <none>           <none>
      kubesphere-system              ks-installer-7d9fb945c7-dgxg5                  1/1     Running   0          7h36m   10.233.90.4     node1    <none>           <none>
      kubesphere-system              openldap-0                                     1/1     Running   0          7h26m   10.233.70.6     master   <none>           <none>
      kubesphere-system              redis-6fd6c6d6f9-pt5d8                         1/1     Running   0          7h26m   10.233.70.5     master   <none>           <none>

      All the pods are in Running state, but port 30880 is unreachable on every node. What on earth is going on?
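      With every pod Running, one way to narrow down why 30880 does not answer is to look at the console service and its logs. The service and label names below are assumed from the pod names in the listing above:

      `
      # Confirm the console is exposed as a NodePort on 30880
      kubectl get svc -n kubesphere-system ks-console

      # Check the console pod for errors
      kubectl logs -n kubesphere-system -l app=ks-console --tail=100
      `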

      The push refers to repository [192.168.0.166:5000/kubesphere/elasticsearch-oss]
      c573321b5d86: Pushed
      46cd2571f1c6: Pushed
      fc56d8e86bb4: Pushed
      743117a68886: Pushed
      2e5badaeb57f: Pushed
      32b15aee3e49: Pushed
      9b0e1f384d5d: Retrying in 1 second
      d69483a6face: Pushed
      received unexpected HTTP status: 500 Internal Server Error
      Error response from daemon: open /var/lib/docker/image/overlay2/.tmp-repositories.json186586334: no space left on device
      192.168.0.166:5000/k8scsi/csi-attacher:v2.0.0
      Error response from daemon: open /var/lib/docker/image/overlay2/.tmp-repositories.json999432357: no space left on device
      The push refers to repository [192.168.0.166:5000/k8scsi/csi-attacher]
      94f49fb5c15d: Retrying in 1 second
      932da5156413: Retrying in 1 second
      received unexpected HTTP status: 500 Internal Server Error
      Error response from daemon: open /var/lib/docker/image/overlay2/.tmp-repositories.json514450280: no space left on device
      192.168.0.166:5000/k8scsi/csi-node-driver-registrar:v1.2.0
      Error response from daemon: open /var/lib/docker/image/overlay2/.tmp-repositories.json340647847: no space left on device
      The push refers to repository [192.168.0.166:5000/k8scsi/csi-node-driver-registrar]
      e242ebe3c0e7: Retrying in 1 second
      932da5156413: Retrying in 1 second
      received unexpected HTTP status: 500 Internal Server Error
      Error response from daemon: open /var/lib/docker/image/overlay2/.tmp-repositories.json173247938: no space left on device
      192.168.0.166:5000/kubesphere/cloud-controller-manager:v1.4.0
      Error response from daemon: open /var/lib/docker/image/overlay2/.tmp-repositories.json383248953: no space left on device
      The push refers to repository [192.168.0.166:5000/kubesphere/cloud-controller-manager]
      7371592b8bed: Retrying in 1 second
      68b0cbfdd0ed: Retrying in 1 second
      73046094a9b8: Retrying in 1 second
      received unexpected HTTP status: 500 Internal Server Error
      Error response from daemon: open /var/lib/docker/image/overlay2/.tmp-repositories.json114389572: no space left on device
      192.168.0.166:5000/google-containers/k8s-dns-node-cache:1.15.5
      Error response from daemon: open /var/lib/docker/image/overlay2/.tmp-repositories.json034605779: no space left on device
      The push refers to repository [192.168.0.166:5000/google-containers/k8s-dns-node-cache]
      5d024027846e: Retrying in 1 second
      a95807b0aa21: Retrying in 1 second
      fe9a8b4f1dcc: Retrying in 1 second
      received unexpected HTTP status: 500 Internal Server Error
      Error response from daemon: open /var/lib/docker/image/overlay2/.tmp-repositories.json765821462: no space left on device
      192.168.0.166:5000/library/redis:5.0.5-alpine
      Error response from daemon: open /var/lib/docker/image/overlay2/.tmp-repositories.json243390077: no space left on device
      The push refers to repository [192.168.0.166:5000/library/redis]
      76ff8be8279a: Retrying in 1 second
      9559709fdf7f: Retrying in 1 second
      b499b26b07f7: Retrying in 1 second
      1ac7839ac772: Retrying in 1 second
      b34cd2e3555a: Retrying in 1 second
      03901b4a2ea8: Waiting
      received unexpected HTTP status: 500 Internal Server Error
      Error response from daemon: open /var/lib/docker/image/overlay2/.tmp-repositories.json796660152: no space left on device
      192.168.0.166:5000/kubesphere/configmap-reload:v0.3.0
      Error response from daemon: open /var/lib/docker/image/overlay2/.tmp-repositories.json859707831: no space left on device
      The push refers to repository [192.168.0.166:5000/kubesphere/configmap-reload]
      f78d3758f4e1: Retrying in 2 seconds


      It errored out again. I am about to lose it. What exactly is wrong here?

        TAO The error message already says it clearly: you are out of storage space.
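        A quick way to confirm the "no space left on device" errors above and free some room before retrying, assuming Docker's default data root (note that `docker system prune -a` removes all unused images and containers, so run it deliberately):

        `
        # See which filesystem is full on the node running the registry
        df -h
        du -sh /var/lib/docker

        # Reclaim space taken by unused images/containers, then retry the push
        docker system prune -a
        `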


          fnag_huna That is indeed the case. It was under the root directory before, and root was only allocated 50G; after extracting, the package took up roughly 60% of the root space, and it filled up completely as soon as the installer started. I then moved the package to the home directory, which has 400G, but on the next run it showed an SSH connection problem: to use the 'ssh' connection type with passwords, you must install the sshpass
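          That sshpass message comes from Ansible whenever it has to do password-based SSH. A minimal fix on the machine running the installer (assuming EPEL is reachable), or switch to key-based SSH so no password is needed at all:

          `
          # Install sshpass so Ansible can use password authentication over SSH
          yum install -y sshpass

          # Or set up key-based access to every node instead
          # ssh-keygen && ssh-copy-id root@<node-ip>
          `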

            TAO Did you uninstall the KubeSphere that was installed under root before? If it is telling you to install something, install it and try again.


              fnag_huna I have installed k8s about 10 times. Only on the very first install did I ever see the login page, but logging in threw errors. Every time after that I reinstalled the OS before reinstalling k8s. I have tried both QingCloud and building the cluster locally.

                TAO Are you installing k8s or KubeSphere? This offline installer installs k8s along with it; all it needs is a clean environment. Also, have you turned off the firewall and SELinux?
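                For reference, "turn off the firewall and SELinux" on CentOS 7 usually means running the following on every node (a sketch of the standard commands, not anything installer-specific):

                `
                # Stop and disable firewalld
                systemctl stop firewalld && systemctl disable firewalld

                # Switch SELinux to permissive now, and disable it permanently
                setenforce 0
                sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
                `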


                  fnag_huna At first I set up k8s myself and then installed KubeSphere on top of it, and then ran into some Redis and MySQL issues.
                  The errors were as follows

                  I could not find a solution online, so I switched to a Multi-Node install, which also failed; the errors above are all from online installs in Multi-Node mode.

                  The firewall was turned off every time, all confirmed; setenforce 0 was run as well, and both are disabled permanently.

                  Feynman These are the errors from a Multi-Node install of 2.1.1, and retrying two or three times gave the same result. How do I install the pip-related dependencies?

                    TAO
                    The ...ignoring ones above can be ignored.

                    TAO

                    Once the pods are all Running, do not reinstall again. If you cannot log in, first check the environment's network, firewall, security groups, and so on.
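                    A quick network-side check for the login problem, using the master IP from the pod listing above:

                    `
                    # From the machine you are browsing on: is the NodePort reachable at all?
                    curl -I http://192.168.0.166:30880/

                    # On the master: make sure no host firewall is filtering the port
                    firewall-cmd --state
                    `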
