fatal: [ks-allinone]: FAILED! => {
"attempts": 30,
"changed": true,
"cmd": "/usr/local/bin/kubectl get pod -n kubesphere-system | grep Running | grep ks-installer",
"delta": "0:00:00.085104",
"end": "2020-05-16 14:13:58.346256",
"rc": 1,
"start": "2020-05-16 14:13:58.261152"
}

MSG:

non-zero return code

PLAY RECAP *************************************************************************************************************
ks-allinone : ok=135 changed=15 unreachable=0 failed=1
I ran into all sorts of problems, uninstalled and reinstalled repeatedly, and got a different result each time.

Feynman That's it, then. I was looking into whether the online installer can use a Harbor registry on the local LAN as an accelerator, which is how I hit this problem.

liuxw The offline installer currently supports only CentOS 7.4 ~ 7.7 (64-bit). If your OS version isn't compatible, it's best to install k8s yourself first and then follow the official docs on installing KubeSphere offline on K8s.
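
For reference, the OS release can be verified up front with standard CentOS commands, nothing KubeSphere-specific:

    # The 2.x offline installer expects CentOS 7.4 - 7.7 on x86_64
    cat /etc/redhat-release
    uname -m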

    TASK [Restart docker] *************************************************************************************************************************************************************************
    Wednesday 20 May 2020 10:58:55 +0800 (0:00:00.225) 0:00:01.440 *********
    fatal: [ks-allinone]: FAILED! => {
    "changed": true,
    "cmd": "systemctl daemon-reload && service docker restart",
    "delta": "0:00:00.248195",
    "end": "2020-05-20 10:58:56.004947",
    "rc": 1,
    "start": "2020-05-20 10:58:55.756752"
    }

    STDERR:

    Redirecting to /bin/systemctl restart docker.service
    Job for docker.service failed because the control process exited with error code. See "systemctl status docker.service" and "journalctl -xe" for details.

    MSG:

    non-zero return code

    PLAY RECAP ************************************************************************************************************************************************************************************
    ks-allinone : ok=4 changed=3 unreachable=0 failed=1

    Wednesday 20 May 2020 10:58:56 +0800 (0:00:00.616) 0:00:02.057 *********
    Feynman could you please take a look at this?
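
    As the error text itself suggests, the place to look next is the Docker unit. A minimal diagnostic pass might be (the daemon.json check is just a common culprit, not a confirmed cause here):

        # Why did the unit fail to start?
        systemctl status docker.service
        # Last entries of the unit's journal
        journalctl -u docker.service --no-pager | tail -n 50
        # A malformed /etc/docker/daemon.json often breaks startup; validate it if present
        python -m json.tool /etc/docker/daemon.json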

    Installation succeeded, but after the Linux system rebooted, port 30880 didn't come up

    [root@ks-allinone ]# kubectl get pods -n kube-system
    Unable to connect to the server: net/http: TLS handshake timeout
    [root@ks-allinone ]# kubectl get pods -n kube-system
    The connection to the server 192.168.3.177:6443 was refused - did you specify the right host or port?
    [root@ks-allinone ]# kubectl get pods -n kube-system
    The connection to the server 192.168.3.177:6443 was refused - did you specify the right host or port?
    [root@ks-allinone ]# kubectl get pods -n kube-system
    The connection to the server 192.168.3.177:6443 was refused - did you specify the right host or port?

      Feynman Docker did come up. My initial suspicion is that this is related to the Linux system time.
      When the install finished, the system time read 20:00 on May 24 (the actual time was 9:00). As soon as I corrected the Linux time and rebooted, port 30880 stopped coming up.
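
      If clock skew is the suspect (TLS certificates are time-sensitive, which would also explain the handshake failures above), one way to check and correct it, assuming chronyd is in use as in a default install:

          # Inspect the current clock and NTP sync state
          timedatectl status
          # Force chrony to step the clock immediately
          chronyc makestep
          # Restart the runtime and kubelet so they pick up the corrected time
          systemctl restart docker kubelet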

      20 days later
      14 days later

      Installation succeeded. My environment had some SSH connection problems, which I solved by switching to passwordless login.
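
      For anyone hitting the same SSH issue: passwordless login for the installer is the standard ssh-keygen / ssh-copy-id pair (the hostname below is a placeholder):

          # Generate a key pair on the machine running the installer, if none exists
          ssh-keygen -t rsa -b 4096 -N "" -f ~/.ssh/id_rsa
          # Copy the public key to every node listed in the hosts inventory
          ssh-copy-id root@ks-allinone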

      The installation keeps hitting the error below and retrying endlessly. How do I fix this?

      received unexpected HTTP status: 500 Internal Server Error
      Error response from daemon: open /var/lib/docker/image/overlay2/.tmp-repositories.json818424021: no space left on device
      192.168.28.150:5000/k8scsi/csi-attacher:v2.0.0
      Error response from daemon: open /var/lib/docker/image/overlay2/.tmp-repositories.json314613296: no space left on device
      The push refers to repository [192.168.28.150:5000/k8scsi/csi-attacher]
      94f49fb5c15d: Retrying in 1 second
      932da5156413: Retrying in 1 second
      received unexpected HTTP status: 500 Internal Server Error
      Error response from daemon: open /var/lib/docker/image/overlay2/.tmp-repositories.json316807511: no space left on device
      192.168.28.150:5000/k8scsi/csi-node-driver-registrar:v1.2.0
      Error response from daemon: open /var/lib/docker/image/overlay2/.tmp-repositories.json160083146: no space left on device
      The push refers to repository [192.168.28.150:5000/k8scsi/csi-node-driver-registrar]
      e242ebe3c0e7: Retrying in 1 second
      932da5156413: Retrying in 1 second
      received unexpected HTTP status: 500 Internal Server Error
      Error response from daemon: open /var/lib/docker/image/overlay2/.tmp-repositories.json073381225: no space left on device
      192.168.28.150:5000/kubesphere/cloud-controller-manager:v1.4.0
      Error response from daemon: open /var/lib/docker/image/overlay2/.tmp-repositories.json696550324: no space left on device
      The push refers to repository [192.168.28.150:5000/kubesphere/cloud-controller-manager]
      7371592b8bed: Retrying in 1 second
      68b0cbfdd0ed: Retrying in 1 second
      73046094a9b8: Retrying in 1 second
      received unexpected HTTP status: 500 Internal Server Error
      Error response from daemon: open /var/lib/docker/image/overlay2/.tmp-repositories.json020405123: no space left on device
      192.168.28.150:5000/google-containers/k8s-dns-node-cache:1.15.5
      Error response from daemon: open /var/lib/docker/image/overlay2/.tmp-repositories.json249889030: no space left on device
      The push refers to repository [192.168.28.150:5000/google-containers/k8s-dns-node-cache]
      5d024027846e: Retrying in 1 second
      a95807b0aa21: Retrying in 1 second
      fe9a8b4f1dcc: Retrying in 1 second
      received unexpected HTTP status: 500 Internal Server Error
      Error response from daemon: open /var/lib/docker/image/overlay2/.tmp-repositories.json278284461: no space left on device
      192.168.28.150:5000/library/redis:5.0.5-alpine
      Error response from daemon: open /var/lib/docker/image/overlay2/.tmp-repositories.json442011176: no space left on device
      The push refers to repository [192.168.28.150:5000/library/redis]
      76ff8be8279a: Retrying in 1 second
      9559709fdf7f: Retrying in 1 second
      b499b26b07f7: Retrying in 1 second
      1ac7839ac772: Retrying in 1 second
      b34cd2e3555a: Retrying in 1 second
      03901b4a2ea8: Waiting
      received unexpected HTTP status: 500 Internal Server Error
      Error response from daemon: open /var/lib/docker/image/overlay2/.tmp-repositories.json718429543: no space left on device
      192.168.28.150:5000/kubesphere/configmap-reload:v0.3.0
      Error response from daemon: open /var/lib/docker/image/overlay2/.tmp-repositories.json499708314: no space left on device
      The push refers to repository [192.168.28.150:5000/kubesphere/configmap-reload]
      f78d3758f4e1: Retrying in 1 second
      0d315111b484: Retrying in 1 second
      received unexpected HTTP status: 500 Internal Server Error
      Error response from daemon: open /var/lib/docker/image/overlay2/.tmp-repositories.json578320433: no space left on device
      192.168.28.150:5000/library/haproxy:2.0.4
      Error response from daemon: open /var/lib/docker/image/overlay2/.tmp-repositories.json021125596: no space left on device
      The push refers to repository [192.168.28.150:5000/library/haproxy]
      ceb43b25ba94: Retrying in 1 second
      f7e0348535e3: Retrying in 1 second
      1c95c77433e8: Retrying in 1 second
      received unexpected HTTP status: 500 Internal Server Error
      Error response from daemon: open /var/lib/docker/image/overlay2/.tmp-repositories.json199333771: no space left on device
      192.168.28.150:5000/minio/mc:RELEASE.2019-08-07T23-14-43Z
      Error response from daemon: open /var/lib/docker/image/overlay2/.tmp-repositories.json619561326: no space left on device
      The push refers to repository [192.168.28.150:5000/minio/mc]

        zhoachenIt "no space left on device" is as clear a hint as it gets: the disk is full and needs cleaning up. You can use docker system prune; look up the exact commands and what each one deletes before running anything.
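
        A sketch of that cleanup, checks first; note that prune deletes stopped containers and unused data, so review its prompt before confirming:

            # Which filesystem is actually full?
            df -h /var/lib/docker
            # Docker's own accounting of image/container/volume usage
            docker system df
            # Reclaim space: removes stopped containers, dangling images,
            # and unused networks. Add -a to drop ALL unused images as well --
            # more space back, but more re-pulling later.
            docker system prune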

        6 days later

        Installation failed with the following error:

        Monday 06 July 2020 20:44:44 +0800 (0:00:00.110) 0:08:10.217 ***********
        fatal: [node1 -> master]: FAILED! => {
        "changed": true,
        "cmd": [
        "bash",
        "-x",
        "/usr/local/bin/etcd-scripts/make-ssl-etcd.sh",
        "-f",
        "/etc/ssl/etcd/openssl.conf",
        "-d",
        "/etc/ssl/etcd/ssl"
        ],
        "delta": "0:00:00.006139",
        "end": "2020-07-06 20:44:44.750848",
        "rc": 127,
        "start": "2020-07-06 20:44:44.744709"
        }

        STDERR:

        bash: /usr/local/bin/etcd-scripts/make-ssl-etcd.sh: No such file or directory

        MSG:

        non-zero return code

        NO MORE HOSTS LEFT *******************************************************************************************************************************************************

        PLAY RECAP ***************************************************************************************************************************************************************
        localhost : ok=1 changed=0 unreachable=0 failed=0

        master : ok=93 changed=20 unreachable=0 failed=1

        node1 : ok=290 changed=42 unreachable=0 failed=1

        node2 : ok=258 changed=40 unreachable=0 failed=0

        Monday 06 July 2020 20:44:44 +0800 (0:00:00.230) 0:08:10.447 ***********

        container-engine/docker : Docker | reload docker ------------------------------------------------------- 128.14s
        container-engine/docker : ensure docker packages are installed ----------------------------------------- 74.26s
        download : download_file | Download item --------------------------------------------------------------- 18.14s
        kubernetes/preinstall : Install packages requirements -------------------------------------------------- 14.68s
        bootstrap-os : Install libselinux python package ------------------------------------------------------- 12.28s
        download : download_file | Download item --------------------------------------------------------------- 10.71s
        download : check_pull_required | Generate a list of information about the images on a node -------------- 5.61s
        download : download_file | Download item ---------------------------------------------------------------- 4.77s
        download : download_file | Download item ---------------------------------------------------------------- 4.21s
        download : download_file | Download item ---------------------------------------------------------------- 3.95s
        chrony : start chrony server ---------------------------------------------------------------------------- 3.47s
        bootstrap-os : Fetch /etc/os-release -------------------------------------------------------------------- 3.44s
        download : download | Download files / images ----------------------------------------------------------- 3.39s
        download : download_file | Download item ---------------------------------------------------------------- 2.98s
        download : download | Sync files / images from ansible host to nodes ------------------------------------ 2.42s
        download : download | Sync files / images from ansible host to nodes ------------------------------------ 2.29s
        adduser : User | Create User Group ---------------------------------------------------------------------- 2.28s
        download : download | Download files / images ----------------------------------------------------------- 2.27s
        download : Register docker images info ------------------------------------------------------------------ 2.23s
        kubernetes/preinstall : check swap ---------------------------------------------------------------------- 2.19s
        failed!


        please refer to https://kubesphere.io/docs/v2.1/zh-CN/faq/faq-install/
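
        rc=127 with "No such file or directory" means the cert-generation script was never laid down on the master host. A first check before re-running the playbook:

            # Was the script delivered at all?
            ls -l /usr/local/bin/etcd-scripts/make-ssl-etcd.sh
            # If it is missing, re-extract the installer package and re-run the
            # install script; a previous partial uninstall can remove it.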


        9 days later

        Hello, a quick question: once KubeSphere is installed, can the local Docker registry be removed and replaced with Harbor? What are the concrete steps? Thanks.

          Offline install: the minimal installation reported success, but the ks-apigateway pod never started properly.

          6 days later

          After putting port 30880 behind a load balancer, opening the virtual IP in a browser returns: client send an http request to https server.
          Accessing the real IP works fine. Where is the problem?

          Found the cause: the nfsserver-related settings in common.yaml were not configured.

          I hit the following error upgrading from 2.2.0 to 2.2.1, using the offline package:
          FAILED - RETRYING: download_container | Download image if required ( 10.1.1.78:5000/google-containers/hyperkube:v1.16.7 ) (1 retries left).
          fatal: [node3 -> master1]: FAILED! => {
          "attempts": 4,
          "changed": true,
          "cmd": [
          "/usr/bin/docker",
          "pull",
          "10.1.1.78:5000/google-containers/hyperkube:v1.16.7"
          ],
          "delta": "0:00:00.062649",
          "end": "2020-07-24 16:09:46.198996",
          "rc": 1,
          "start": "2020-07-24 16:09:46.136347"
          }

          STDERR:

          Error response from daemon: manifest for 10.1.1.78:5000/google-containers/hyperkube:v1.16.7 not found
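
          The registry at 10.1.1.78:5000 simply has no such tag. Assuming the 2.2.1 offline bundle ships the image as a tarball (the file path below is hypothetical), it can be seeded by hand:

              # Load the image from the offline bundle into the local daemon
              docker load -i kubesphere-images/hyperkube-v1.16.7.tar   # hypothetical path
              # Retag for the private registry and push; the source tag shown is
              # an assumption -- use whatever `docker images` actually reports
              docker tag gcr.io/google-containers/hyperkube:v1.16.7 10.1.1.78:5000/google-containers/hyperkube:v1.16.7
              docker push 10.1.1.78:5000/google-containers/hyperkube:v1.16.7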

          12 days later

          Which base components are required: ansible, docker? Please note this in the documentation.

          The install ran for over an hour and then threw this error. What's misconfigured? It's a new machine with a clean environment, though I had run uninstall twice before.

          FAILED - RETRYING: kubeadm | Initialize first master (3 retries left).
          FAILED - RETRYING: kubeadm | Initialize first master (2 retries left).
          FAILED - RETRYING: kubeadm | Initialize first master (1 retries left).
          fatal: [master1]: FAILED! => {
          "attempts": 3,
          "changed": true,
          "cmd": [
          "timeout",
          "-k",
          "300s",
          "300s",
          "/usr/local/bin/kubeadm",
          "init",
          "--config=/etc/kubernetes/kubeadm-config.yaml",
          "--ignore-preflight-errors=all",
          "--skip-phases=addon/coredns",
          "--upload-certs"
          ],
          "delta": "0:05:00.037168",
          "end": "2020-08-07 14:50:01.396185",
          "failed_when_result": true,
          "rc": 124,
          "start": "2020-08-07 14:45:01.359017"
          }

          STDOUT:

          [init] Using Kubernetes version: v1.16.7
          [preflight] Running pre-flight checks
          [preflight] Pulling images required for setting up a Kubernetes cluster
          [preflight] This might take a minute or two, depending on the speed of your internet connection
          [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
          [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
          [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
          [kubelet-start] Activating the kubelet service
          [certs] Using certificateDir folder "/etc/kubernetes/ssl"
          [certs] Using existing ca certificate authority
          [certs] Using existing apiserver certificate and key on disk
          [certs] Using existing apiserver-kubelet-client certificate and key on disk
          [certs] Using existing front-proxy-ca certificate authority
          [certs] Using existing front-proxy-client certificate and key on disk
          [certs] External etcd mode: Skipping etcd/ca certificate authority generation
          [certs] External etcd mode: Skipping etcd/server certificate generation
          [certs] External etcd mode: Skipping etcd/peer certificate generation
          [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation
          [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation
          [certs] Using the existing "sa" key
          [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
          [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
          [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
          [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/controller-manager.conf"
          [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/scheduler.conf"
          [control-plane] Using manifest folder "/etc/kubernetes/manifests"
          [control-plane] Creating static Pod manifest for "kube-apiserver"
          [controlplane] Adding extra host path mount "etc-pki-tls" to "kube-apiserver"
          [controlplane] Adding extra host path mount "etc-pki-ca-trust" to "kube-apiserver"
          [control-plane] Creating static Pod manifest for "kube-controller-manager"
          [controlplane] Adding extra host path mount "etc-pki-tls" to "kube-apiserver"
          [controlplane] Adding extra host path mount "etc-pki-ca-trust" to "kube-apiserver"
          [control-plane] Creating static Pod manifest for "kube-scheduler"
          [controlplane] Adding extra host path mount "etc-pki-tls" to "kube-apiserver"
          [controlplane] Adding extra host path mount "etc-pki-ca-trust" to "kube-apiserver"
          [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 5m0s
          [kubelet-check] Initial timeout of 40s passed.

          STDERR:

          [WARNING Port-6443]: Port 6443 is in use
          [WARNING Port-10251]: Port 10251 is in use
          [WARNING Port-10252]: Port 10252 is in use
          [WARNING FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
          [WARNING FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
          [WARNING FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
          [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
          [WARNING Port-10250]: Port 10250 is in use

          MSG:

          non-zero return code

          NO MORE HOSTS LEFT *******************************************************************************************************

          PLAY RECAP ***************************************************************************************************************
          localhost : ok=1 changed=0 unreachable=0 failed=0

          master1 : ok=417 changed=76 unreachable=0 failed=1

          master2 : ok=406 changed=78 unreachable=0 failed=0

          master3 : ok=406 changed=78 unreachable=0 failed=0

          node1 : ok=330 changed=61 unreachable=0 failed=0

          Friday 07 August 2020 14:50:01 +0800 (0:20:16.326) 0:30:25.065 *********

          kubernetes/master : kubeadm | Initialize first master ------------------------------------------------ 1216.33s
          container-engine/docker : ensure docker packages are installed ----------------------------------------- 36.87s
          kubernetes/preinstall : Install packages requirements -------------------------------------------------- 26.10s
          etcd : Gen_certs | Write etcd master certs ------------------------------------------------------------- 23.58s
          container-engine/docker : Docker | reload docker ------------------------------------------------------- 15.88s
          bootstrap-os : Install libselinux python package -------------------------------------------------------- 8.91s
          etcd : Gen_certs | Gather etcd master certs ------------------------------------------------------------- 6.01s
          etcd : Configure | Check if etcd cluster is healthy ----------------------------------------------------- 4.87s
          etcd : Install | Copy etcdctl binary from docker container ---------------------------------------------- 4.78s
          download : download | Download files / images ----------------------------------------------------------- 4.45s
          download : download_file | Download item ---------------------------------------------------------------- 4.26s
          download : download_file | Download item ---------------------------------------------------------------- 4.09s
          etcd : wait for etcd up --------------------------------------------------------------------------------- 3.50s
          etcd : reload etcd -------------------------------------------------------------------------------------- 3.20s
          kubernetes/node : install | Copy kubelet binary from download dir --------------------------------------- 3.13s
          download : download | Sync files / images from ansible host to nodes ------------------------------------ 3.08s
          etcd : Gen_certs | run cert generation script ----------------------------------------------------------- 2.77s
          download : download_file | Download item ---------------------------------------------------------------- 2.62s
          chrony : start chrony server ---------------------------------------------------------------------------- 2.47s
          container-engine/docker : ensure service is started if docker packages are already present -------------- 2.45s
          failed!


          please refer to https://kubesphere.io/docs/v2.1/zh-CN/faq/faq-install/


          TASK [kubernetes/master : kubeadm | Initialize first master] *************************************************************
          Friday 07 August 2020 15:18:06 +0800 (0:00:00.461) 0:08:36.763 *********
          skipping: [master2]
          skipping: [master3]
          The error occurs while initializing master1.
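
          The pre-flight warnings above (ports 6443/10250/10251/10252 in use, static-pod manifests already present) point to leftover state from the earlier uninstalls. A common way to get master1 back to a clean slate before retrying, assuming nothing else on the host needs that state:

              # Tear down the half-initialized control plane on this node
              kubeadm reset -f
              # Clear stale CNI config and the old kubeconfig left by previous runs
              rm -rf /etc/cni/net.d ~/.kube/config
              systemctl restart docker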

          Posting the log again:
          [root@u-poc-k8s-4 conf]# journalctl -xeu kubelet
          Aug 07 16:56:31 master1 kubelet[1530]: E0807 16:56:31.697433 1530 kubelet.go:2267] node "master1" not found
          Aug 07 16:56:31 master1 kubelet[1530]: E0807 16:56:31.816333 1530 kubelet.go:2267] node "master1" not found
          Aug 07 16:56:31 master1 kubelet[1530]: E0807 16:56:31.924361 1530 kubelet.go:2267] node "master1" not found
          Aug 07 16:56:32 master1 kubelet[1530]: E0807 16:56:32.024592 1530 kubelet.go:2267] node "master1" not found
          Aug 07 16:56:32 master1 kubelet[1530]: E0807 16:56:32.131409 1530 kubelet.go:2267] node "master1" not found
          Aug 07 16:56:32 master1 kubelet[1530]: E0807 16:56:32.232014 1530 kubelet.go:2267] node "master1" not found
          Aug 07 16:56:32 master1 kubelet[1530]: I0807 16:56:32.318605 1530 kubelet_node_status.go:286] Setting node annotation to enable volume controller attach/detach
          Aug 07 16:56:32 master1 kubelet[1530]: I0807 16:56:32.319038 1530 setters.go:73] Using node IP: "10...*"
          Aug 07 16:56:32 master1 kubelet[1530]: E0807 16:56:32.333196 1530 kubelet.go:2267] node "master1" not found
          Aug 07 16:56:32 master1 kubelet[1530]: I0807 16:56:32.351645 1530 kubelet_node_status.go:472] Recording NodeHasSufficientMemory event message for node master1
          Aug 07 16:56:32 master1 kubelet[1530]: I0807 16:56:32.351691 1530 kubelet_node_status.go:472] Recording NodeHasNoDiskPressure event message for node master1
          Aug 07 16:56:32 master1 kubelet[1530]: I0807 16:56:32.351703 1530 kubelet_node_status.go:472] Recording NodeHasSufficientPID event message for node master1
          Aug 07 16:56:32 master1 kubelet[1530]: I0807 16:56:32.351745 1530 kubelet_node_status.go:72] Attempting to register node master1
          Aug 07 16:56:32 master1 kubelet[1530]: E0807 16:56:32.433454 1530 kubelet.go:2267] node "master1" not found
          Aug 07 16:56:32 master1 kubelet[1530]: E0807 16:56:32.541409 1530 kubelet.go:2267] node "master1" not found
          Aug 07 16:56:32 master1 kubelet[1530]: E0807 16:56:32.660484 1530 kubelet.go:2267] node "master1" not found
          Aug 07 16:56:32 master1 kubelet[1530]: E0807 16:56:32.769372 1530 kubelet.go:2267] node "master1" not found
          Aug 07 16:56:32 master1 kubelet[1530]: E0807 16:56:32.869728 1530 kubelet.go:2267] node "master1" not found
          Aug 07 16:56:32 master1 kubelet[1530]: W0807 16:56:32.931402 1530 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
          Aug 07 16:56:32 master1 kubelet[1530]: E0807 16:56:32.972361 1530 kubelet.go:2267] node "master1" not found
          Aug 07 16:56:33 master1 kubelet[1530]: E0807 16:56:33.080490 1530 kubelet.go:2267] node "master1" not found
          Aug 07 16:56:33 master1 kubelet[1530]: E0807 16:56:33.196307 1530 kubelet.go:2267] node "master1" not found
          Aug 07 16:56:33 master1 kubelet[1530]: E0807 16:56:33.296640 1530 kubelet.go:2267] node "master1" not found
          Aug 07 16:56:33 master1 kubelet[1530]: E0807 16:56:33.401035 1530 kubelet.go:2267] node "master1" not found
          Aug 07 16:56:33 master1 kubelet[1530]: E0807 16:56:33.510520 1530 kubelet.go:2267] node "master1" not found
          Aug 07 16:56:33 master1 kubelet[1530]: E0807 16:56:33.615747 1530 kubelet.go:2267] node "master1" not found
          Aug 07 16:56:33 master1 kubelet[1530]: E0807 16:56:33.656360 1530 eviction_manager.go:246] eviction manager: failed to get summary stats: failed to get node info: node "master1" not found
          Aug 07 16:56:33 master1 kubelet[1530]: E0807 16:56:33.718428 1530 kubelet.go:2267] node "master1" not found
          Aug 07 16:56:33 master1 kubelet[1530]: E0807 16:56:33.819299 1530 kubelet.go:2267] node "master1" not found
          Aug 07 16:56:33 master1 kubelet[1530]: E0807 16:56:33.926540 1530 kubelet.go:2267] node "master1" not found
          Aug 07 16:56:34 master1 kubelet[1530]: E0807 16:56:34.026857 1530 kubelet.go:2267] node "master1" not found
          Aug 07 16:56:34 master1 kubelet[1530]: E0807 16:56:34.127420 1530 kubelet.go:2267] node "master1" not found
          Aug 07 16:56:34 master1 kubelet[1530]: E0807 16:56:34.202858 1530 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
          Aug 07 16:56:34 master1 kubelet[1530]: E0807 16:56:34.230941 1530 kubelet.go:2267] node "master1" not found
          Aug 07 16:56:34 master1 kubelet[1530]: E0807 16:56:34.348815 1530 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.RuntimeClass: Get https://lb.kubesphere.local:6443/apis/node.k8s.io/v1beta1/runtimeclasses?limit=500&res
          Aug 07 16:56:34 master1 kubelet[1530]: E0807 16:56:34.348958 1530 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/kubelet.go:459: Failed to list v1.Node: Get https://lb.kubesphere.local:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster1&limit=500&re
          Aug 07 16:56:34 master1 kubelet[1530]: E0807 16:56:34.349037 1530 kubelet_node_status.go:94] Unable to register node "master1" with API server: Post https://lb.kubesphere.local:6443/api/v1/nodes: dial tcp 10..*200:6443: connect: no route to host
          Aug 07 16:56:34 master1 kubelet[1530]: E0807 16:56:34.349153 1530 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://lb.kubesphere.local:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dmaster1&limit=
          Aug 07 16:56:34 master1 kubelet[1530]: E0807 16:56:34.349257 1530 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/kubelet.go:450: Failed to list v1.Service: Get https://lb.kubesphere.local:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10..*
          Aug 07 16:56:34 master1 kubelet[1530]: E0807 16:56:34.349358 1530 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSIDriver: Get https://lb.kubesphere.local:6443/apis/storage.k8s.io/v1beta1/csidrivers?limit=500&resourc
          Aug 07 16:56:34 master1 kubelet[1530]: E0807 16:56:34.349446 1530 controller.go:135] failed to ensure node lease exists, will retry in 7s, error: Get https://lb.kubesphere.local:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master1?ti
          Aug 07 16:56:34 master1 kubelet[1530]: E0807 16:56:34.349504 1530 kubelet.go:2267] node "master1" not found
          Aug 07 16:56:34 master1 kubelet[1530]: E0807 16:56:34.451423 1530 kubelet.go:2267] node "master1" not found
          Aug 07 16:56:34 master1 systemd[1]: Stopping Kubernetes Kubelet Server...
          -- Subject: Unit kubelet.service has begun shutting down
          -- Defined-By: systemd
          -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
          -- Unit kubelet.service has begun shutting down.
          Aug 07 16:56:34 master1 systemd[1]: Stopped Kubernetes Kubelet Server.
          -- Subject: Unit kubelet.service has finished shutting down
          -- Defined-By: systemd
          -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
          -- Unit kubelet.service has finished shutting down.
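          The decisive entries are the ones failing to reach https://lb.kubesphere.local:6443 with "connect: no route to host": the load-balancer name either resolves to the wrong address or a firewall is dropping port 6443. Some quick checks:

              # What does the LB name resolve to on this node (usually via /etc/hosts)?
              getent hosts lb.kubesphere.local
              # Is firewalld active and possibly blocking the apiserver port?
              systemctl status firewalld
              # Can this node reach the endpoint at all?
              curl -k --connect-timeout 5 https://lb.kubesphere.local:6443/healthz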

          21 days later



          Offline installation reports this error; could someone help me figure out what's wrong?

          5 days later

          fnag_huna
          It looks like a problem with the pip source.
          If you aren't using glusterfs, you can comment out the glusterfs-related tasks in kubesphere/roles/prepare/nodes/tasks/main.yaml (see the pointer below).
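
          To find those tasks (task names vary by installer version, so this is just a pointer):

              # Locate the glusterfs-related tasks to comment out
              grep -n -i gluster kubesphere/roles/prepare/nodes/tasks/main.yaml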

          v3.0.0 has been released; you could give it a try.

            Cauchy Thanks for the reply! But after commenting them out I got the same result.
            Does the line umount: /kubeinstaller/yum_repo/iso: mountpoint not found mean that I failed to put the ISO image into yum_repo?

              fnag_huna

              That only means there was no matching mount point when umount ran.
              Is it still failing on the pip | Installing pip task? If you commented it out, that task shouldn't run at all.

                Cauchy I couldn't find the line you mentioned; my error output has been the same all along: