Which prerequisites does the base environment need (Ansible, Docker)? It would be helpful to state this in the documentation.

After installing for more than an hour I got the error below. Which part of the configuration is wrong? These are new machines with a clean environment, and I had run uninstall twice beforehand.

FAILED - RETRYING: kubeadm | Initialize first master (3 retries left).
FAILED - RETRYING: kubeadm | Initialize first master (2 retries left).
FAILED - RETRYING: kubeadm | Initialize first master (1 retries left).
fatal: [master1]: FAILED! => {
    "attempts": 3,
    "changed": true,
    "cmd": [
        "timeout",
        "-k",
        "300s",
        "300s",
        "/usr/local/bin/kubeadm",
        "init",
        "--config=/etc/kubernetes/kubeadm-config.yaml",
        "--ignore-preflight-errors=all",
        "--skip-phases=addon/coredns",
        "--upload-certs"
    ],
    "delta": "0:05:00.037168",
    "end": "2020-08-07 14:50:01.396185",
    "failed_when_result": true,
    "rc": 124,
    "start": "2020-08-07 14:45:01.359017"
}

STDOUT:

[init] Using Kubernetes version: v1.16.7
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/ssl"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] External etcd mode: Skipping etcd/ca certificate authority generation
[certs] External etcd mode: Skipping etcd/server certificate generation
[certs] External etcd mode: Skipping etcd/peer certificate generation
[certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation
[certs] External etcd mode: Skipping apiserver-etcd-client certificate generation
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/scheduler.conf"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[controlplane] Adding extra host path mount "etc-pki-tls" to "kube-apiserver"
[controlplane] Adding extra host path mount "etc-pki-ca-trust" to "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[controlplane] Adding extra host path mount "etc-pki-tls" to "kube-apiserver"
[controlplane] Adding extra host path mount "etc-pki-ca-trust" to "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[controlplane] Adding extra host path mount "etc-pki-tls" to "kube-apiserver"
[controlplane] Adding extra host path mount "etc-pki-ca-trust" to "kube-apiserver"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 5m0s
[kubelet-check] Initial timeout of 40s passed.

STDERR:

[WARNING Port-6443]: Port 6443 is in use
[WARNING Port-10251]: Port 10251 is in use
[WARNING Port-10252]: Port 10252 is in use
[WARNING FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
[WARNING FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
[WARNING FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Port-10250]: Port 10250 is in use

MSG:

non-zero return code

NO MORE HOSTS LEFT *******************************************************************************************************

PLAY RECAP ***************************************************************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=0

master1 : ok=417 changed=76 unreachable=0 failed=1

master2 : ok=406 changed=78 unreachable=0 failed=0

master3 : ok=406 changed=78 unreachable=0 failed=0

node1 : ok=330 changed=61 unreachable=0 failed=0

Friday 07 August 2020 14:50:01 +0800 (0:20:16.326) 0:30:25.065 *********

kubernetes/master : kubeadm | Initialize first master --------------------------------------------------------- 1216.33s
container-engine/docker : ensure docker packages are installed --------------------------------------------------- 36.87s
kubernetes/preinstall : Install packages requirements ------------------------------------------------------------ 26.10s
etcd : Gen_certs | Write etcd master certs ----------------------------------------------------------------------- 23.58s
container-engine/docker : Docker | reload docker ----------------------------------------------------------------- 15.88s
bootstrap-os : Install libselinux python package ------------------------------------------------------------------ 8.91s
etcd : Gen_certs | Gather etcd master certs ----------------------------------------------------------------------- 6.01s
etcd : Configure | Check if etcd cluster is healthy --------------------------------------------------------------- 4.87s
etcd : Install | Copy etcdctl binary from docker container -------------------------------------------------------- 4.78s
download : download | Download files / images --------------------------------------------------------------------- 4.45s
download : download_file | Download item -------------------------------------------------------------------------- 4.26s
download : download_file | Download item -------------------------------------------------------------------------- 4.09s
etcd : wait for etcd up ------------------------------------------------------------------------------------------- 3.50s
etcd : reload etcd ------------------------------------------------------------------------------------------------ 3.20s
kubernetes/node : install | Copy kubelet binary from download dir ------------------------------------------------- 3.13s
download : download | Sync files / images from ansible host to nodes ---------------------------------------------- 3.08s
etcd : Gen_certs | run cert generation script --------------------------------------------------------------------- 2.77s
download : download_file | Download item -------------------------------------------------------------------------- 2.62s
chrony : start chrony server -------------------------------------------------------------------------------------- 2.47s
container-engine/docker : ensure service is started if docker packages are already present ------------------------ 2.45s
failed!


please refer to https://kubesphere.io/docs/v2.1/zh-CN/faq/faq-install/


TASK [kubernetes/master : kubeadm | Initialize first master] *************************************************************
Friday 07 August 2020 15:18:06 +0800 (0:00:00.461) 0:08:36.763 *********
skipping: [master2]
skipping: [master3]
The error is reported while initializing master1.
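The preflight warnings in the output above (ports 6443, 10250, 10251 and 10252 already in use, existing manifests under /etc/kubernetes/manifests, Docker using the cgroupfs driver) suggest leftover state from the earlier installs, and rc 124 means the timeout wrapper killed kubeadm init after five minutes because the control plane never became healthy. A minimal sketch, not an official procedure, of how one might inspect and clean that state on master1 before rerunning the installer:

# see what is still listening on the control-plane ports reported as "in use"
ss -lntp | grep -E ':(6443|10250|10251|10252)\b'

# check for static pod manifests left over from a previous attempt
ls -l /etc/kubernetes/manifests/

# wipe kubeadm state on this node only if you intend to reinitialize it (destructive)
/usr/local/bin/kubeadm reset -f

# optional: switch Docker to the systemd cgroup driver as the preflight warning recommends;
# merge by hand instead if /etc/docker/daemon.json already contains other settings
cat >/etc/docker/daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker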

Here is the log again:
[root@u-poc-k8s-4 conf]# journalctl -xeu kubelet
Aug 07 16:56:31 master1 kubelet[1530]: E0807 16:56:31.697433 1530 kubelet.go:2267] node "master1" not found
Aug 07 16:56:31 master1 kubelet[1530]: E0807 16:56:31.816333 1530 kubelet.go:2267] node "master1" not found
Aug 07 16:56:31 master1 kubelet[1530]: E0807 16:56:31.924361 1530 kubelet.go:2267] node "master1" not found
Aug 07 16:56:32 master1 kubelet[1530]: E0807 16:56:32.024592 1530 kubelet.go:2267] node "master1" not found
Aug 07 16:56:32 master1 kubelet[1530]: E0807 16:56:32.131409 1530 kubelet.go:2267] node "master1" not found
Aug 07 16:56:32 master1 kubelet[1530]: E0807 16:56:32.232014 1530 kubelet.go:2267] node "master1" not found
Aug 07 16:56:32 master1 kubelet[1530]: I0807 16:56:32.318605 1530 kubelet_node_status.go:286] Setting node annotation to enable volume controller attach/detach
Aug 07 16:56:32 master1 kubelet[1530]: I0807 16:56:32.319038 1530 setters.go:73] Using node IP: "10...*"
Aug 07 16:56:32 master1 kubelet[1530]: E0807 16:56:32.333196 1530 kubelet.go:2267] node "master1" not found
Aug 07 16:56:32 master1 kubelet[1530]: I0807 16:56:32.351645 1530 kubelet_node_status.go:472] Recording NodeHasSufficientMemory event message for node master1
Aug 07 16:56:32 master1 kubelet[1530]: I0807 16:56:32.351691 1530 kubelet_node_status.go:472] Recording NodeHasNoDiskPressure event message for node master1
Aug 07 16:56:32 master1 kubelet[1530]: I0807 16:56:32.351703 1530 kubelet_node_status.go:472] Recording NodeHasSufficientPID event message for node master1
Aug 07 16:56:32 master1 kubelet[1530]: I0807 16:56:32.351745 1530 kubelet_node_status.go:72] Attempting to register node master1
Aug 07 16:56:32 master1 kubelet[1530]: E0807 16:56:32.433454 1530 kubelet.go:2267] node "master1" not found
Aug 07 16:56:32 master1 kubelet[1530]: E0807 16:56:32.541409 1530 kubelet.go:2267] node "master1" not found
Aug 07 16:56:32 master1 kubelet[1530]: E0807 16:56:32.660484 1530 kubelet.go:2267] node "master1" not found
Aug 07 16:56:32 master1 kubelet[1530]: E0807 16:56:32.769372 1530 kubelet.go:2267] node "master1" not found
Aug 07 16:56:32 master1 kubelet[1530]: E0807 16:56:32.869728 1530 kubelet.go:2267] node "master1" not found
Aug 07 16:56:32 master1 kubelet[1530]: W0807 16:56:32.931402 1530 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
Aug 07 16:56:32 master1 kubelet[1530]: E0807 16:56:32.972361 1530 kubelet.go:2267] node "master1" not found
Aug 07 16:56:33 master1 kubelet[1530]: E0807 16:56:33.080490 1530 kubelet.go:2267] node "master1" not found
Aug 07 16:56:33 master1 kubelet[1530]: E0807 16:56:33.196307 1530 kubelet.go:2267] node "master1" not found
Aug 07 16:56:33 master1 kubelet[1530]: E0807 16:56:33.296640 1530 kubelet.go:2267] node "master1" not found
Aug 07 16:56:33 master1 kubelet[1530]: E0807 16:56:33.401035 1530 kubelet.go:2267] node "master1" not found
Aug 07 16:56:33 master1 kubelet[1530]: E0807 16:56:33.510520 1530 kubelet.go:2267] node "master1" not found
Aug 07 16:56:33 master1 kubelet[1530]: E0807 16:56:33.615747 1530 kubelet.go:2267] node "master1" not found
Aug 07 16:56:33 master1 kubelet[1530]: E0807 16:56:33.656360 1530 eviction_manager.go:246] eviction manager: failed to get summary stats: failed to get node info: node "master1" not found
Aug 07 16:56:33 master1 kubelet[1530]: E0807 16:56:33.718428 1530 kubelet.go:2267] node "master1" not found
Aug 07 16:56:33 master1 kubelet[1530]: E0807 16:56:33.819299 1530 kubelet.go:2267] node "master1" not found
Aug 07 16:56:33 master1 kubelet[1530]: E0807 16:56:33.926540 1530 kubelet.go:2267] node "master1" not found
Aug 07 16:56:34 master1 kubelet[1530]: E0807 16:56:34.026857 1530 kubelet.go:2267] node "master1" not found
Aug 07 16:56:34 master1 kubelet[1530]: E0807 16:56:34.127420 1530 kubelet.go:2267] node "master1" not found
Aug 07 16:56:34 master1 kubelet[1530]: E0807 16:56:34.202858 1530 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Aug 07 16:56:34 master1 kubelet[1530]: E0807 16:56:34.230941 1530 kubelet.go:2267] node "master1" not found
Aug 07 16:56:34 master1 kubelet[1530]: E0807 16:56:34.348815 1530 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.RuntimeClass: Get https://lb.kubesphere.local:6443/apis/node.k8s.io/v1beta1/runtimeclasses?limit=500&res
Aug 07 16:56:34 master1 kubelet[1530]: E0807 16:56:34.348958 1530 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/kubelet.go:459: Failed to list v1.Node: Get https://lb.kubesphere.local:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster1&limit=500&re
Aug 07 16:56:34 master1 kubelet[1530]: E0807 16:56:34.349037 1530 kubelet_node_status.go:94] Unable to register node "master1" with API server: Post https://lb.kubesphere.local:6443/api/v1/nodes: dial tcp 10.*.*.200:6443: connect: no route to host
Aug 07 16:56:34 master1 kubelet[1530]: E0807 16:56:34.349153 1530 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://lb.kubesphere.local:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dmaster1&limit=
Aug 07 16:56:34 master1 kubelet[1530]: E0807 16:56:34.349257 1530 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/kubelet.go:450: Failed to list v1.Service: Get https://lb.kubesphere.local:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10..*
Aug 07 16:56:34 master1 kubelet[1530]: E0807 16:56:34.349358 1530 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSIDriver: Get https://lb.kubesphere.local:6443/apis/storage.k8s.io/v1beta1/csidrivers?limit=500&resourc
Aug 07 16:56:34 master1 kubelet[1530]: E0807 16:56:34.349446 1530 controller.go:135] failed to ensure node lease exists, will retry in 7s, error: Get https://lb.kubesphere.local:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master1?ti
Aug 07 16:56:34 master1 kubelet[1530]: E0807 16:56:34.349504 1530 kubelet.go:2267] node "master1" not found
Aug 07 16:56:34 master1 kubelet[1530]: E0807 16:56:34.451423 1530 kubelet.go:2267] node "master1" not found
Aug 07 16:56:34 master1 systemd[1]: Stopping Kubernetes Kubelet Server...
-- Subject: Unit kubelet.service has begun shutting down
-- Defined-By: systemd

-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel

-- Unit kubelet.service has begun shutting down.
Aug 07 16:56:34 master1 systemd[1]: Stopped Kubernetes Kubelet Server.
-- Subject: Unit kubelet.service has finished shutting down
-- Defined-By: systemd

-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel

-- Unit kubelet.service has finished shutting down.
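The repeated "connect: no route to host" errors against https://lb.kubesphere.local:6443 mean the kubelet on master1 cannot reach the load-balancer address configured for the apiserver, which by itself is enough to make kubeadm init time out. A quick sketch, assuming lb.kubesphere.local is the LB/VIP address from the installer configuration, of how one might verify reachability from master1:

# what does the LB name resolve to on this node?
getent hosts lb.kubesphere.local

# is the apiserver port reachable through the LB? (any HTTP response means routing works)
curl -k --connect-timeout 5 https://lb.kubesphere.local:6443/healthz

# compare with the apiserver listening locally on this master
curl -k --connect-timeout 5 https://127.0.0.1:6443/healthz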

21 days later



Offline installation fails with this error; could someone please help take a look at what the problem is?

5 days later

fnag_huna
It looks like an issue with the pip source.
If you are not using GlusterFS, you can comment out the GlusterFS-related tasks in kubesphere/roles/prepare/nodes/tasks/main.yaml.

v3.0.0 has been released; you could give it a try.
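Not an official step, but to locate the GlusterFS-related tasks mentioned above before commenting them out, something like this should work from the installer directory (path as quoted above):

# list the task lines that mention glusterfs so you know which blocks to comment out
grep -n -i glusterfs kubesphere/roles/prepare/nodes/tasks/main.yaml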

Cauchy Thanks for the reply! But after commenting it out I got the same result.
Does the message umount: /kubeinstaller/yum_repo/iso: mountpoint not found mean that I did not put the image file into yum_repo?

fnag_huna

That message only means there was no matching mount point when umount ran.
Is it still failing on the pip | Installing pip task? If you commented it out, that task should no longer run.
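For what it is worth, a small sketch, using only the paths quoted in this thread, to check whether anything is actually mounted or present at the repo location the umount message refers to:

# is anything mounted at the path the installer tries to umount?
mount | grep /kubeinstaller/yum_repo/iso || echo "nothing mounted at /kubeinstaller/yum_repo/iso"

# what was actually placed in the offline repo directory?
ls -lh /kubeinstaller/yum_repo/ 2>/dev/null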

Cauchy I could not find the line you mentioned; my error message has always looked like this.

fnag_huna

Then that version is probably not supported; you could try renaming Repos/centos-7.7-amd64.iso to centos-7.8-amd64.iso and see.
If that does not work, install v3.0.0 instead.
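Assuming the nodes run CentOS 7.8 and the offline installer simply looks for an ISO named after the detected OS release, the rename is a one-liner (file name as quoted above):

# rename the bundled repo ISO so it matches the detected OS version
mv Repos/centos-7.7-amd64.iso Repos/centos-7.8-amd64.iso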

Cauchy After changing it to 7.8 the installation started successfully, thank you very much! But now I am running into this situation. Is it failing to pull images? I am using a proxy and wget can reach the network, but I do not know why the images here cannot be pulled; please advise.
Looking at the address again, it seems the image is being pulled from a local registry? Then why can it not be pulled?

fnag_huna
You can run docker info to check Docker's configuration and see whether the local registry address has been added to Docker's insecure-registries.
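A sketch of that check, and of how the local registry would typically be whitelisted if it turns out to be missing (the address used here is the one quoted later in this thread; merge by hand if /etc/docker/daemon.json already has other settings):

# which registries does docker currently allow without TLS?
docker info 2>/dev/null | grep -A 3 -i 'insecure registries'

# if the local registry is missing, add it and restart docker
# (this overwrites daemon.json; merge with any existing content instead)
cat >/etc/docker/daemon.json <<'EOF'
{
  "insecure-registries": ["192.168.2.32:5000"]
}
EOF
systemctl restart docker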

Cauchy I checked, and the local registry address is there: Insecure Registries: 192.168.2.32:5000 127.0.0.0/8. What could the cause be?

fnag_huna
Run docker ps | grep registry to check whether the registry service is running, then try pushing an image to see whether it succeeds.
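A minimal version of that check, using the registry address quoted above; busybox is only an example and is assumed to already exist locally in the offline image set:

# is the registry container up, and does its API answer?
docker ps | grep registry
curl -s http://192.168.2.32:5000/v2/_catalog

# push a test image to confirm the registry accepts writes
docker tag busybox:latest 192.168.2.32:5000/busybox:test
docker push 192.168.2.32:5000/busybox:test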

Cauchy By the way, what I am installing now is the all-in-one single-node mode; when I previously tried a multi-node install, this is what happened.
My hosts.ini is configured like this:
Where did it go wrong?