Installation on Tencent Cloud CentOS 7.8 fails, not sure why

[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
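For reference, the manual cleanup that kubeadm reset leaves to the operator can be done roughly as follows (a minimal sketch based on the hints above; the ipvsadm line only applies if IPVS was in use):

    rm -rf /etc/cni/net.d
    iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
    ipvsadm --clear
    rm -f $HOME/.kube/config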
ERRO[19:19:58 CST] Failed to init kubernetes cluster: Failed to exec command: sudo -E /bin/sh -c "/usr/local/bin/kubeadm init --config=/etc/kubernetes/kubeadm-config.yaml"
W1228 19:15:28.146005 13471 utils.go:26] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
W1228 19:15:28.146106 13471 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.6
[preflight] Running pre-flight checks
[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.1. Latest validated version: 19.03
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.kubesphere.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost lb.kubesphere.local master1 master1.cluster.local master2 master2.cluster.local master3 master3.cluster.local node1 node1.cluster.local node2 node2.cluster.local node3 node3.cluster.local] and IPs [10.233.0.1 172.16.1.9 127.0.0.1 172.16.1.0 172.16.1.9 172.16.1.8 172.16.1.7 172.16.1.5 172.16.1.4 172.16.1.3 10.233.0.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] External etcd mode: Skipping etcd/ca certificate authority generation
[certs] External etcd mode: Skipping etcd/server certificate generation
[certs] External etcd mode: Skipping etcd/peer certificate generation
[certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation
[certs] External etcd mode: Skipping apiserver-etcd-client certificate generation
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
W1228 19:15:30.231227 13471 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W1228 19:15:30.237004 13471 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W1228 19:15:30.237799 13471 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

    Unfortunately, an error has occurred:
            timed out waiting for the condition

    This error is likely caused by:
            - The kubelet is not running
            - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

    If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
            - 'systemctl status kubelet'
            - 'journalctl -xeu kubelet'

    Additionally, a control plane component may have crashed or exited when started by the container runtime.
    To troubleshoot, list all containers using your preferred container runtimes CLI.

    Here is one example how you may list all Kubernetes containers running in docker:
            - 'docker ps -a | grep kube | grep -v pause'
            Once you have found the failing container, you can inspect its logs with:
            - 'docker logs CONTAINERID'

error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher: Process exited with status 1 node=172.16.1.9
WARN[19:19:58 CST] Task failed …
WARN[19:19:58 CST] error: interrupted by error
Error: Failed to init kubernetes cluster: interrupted by error
Usage:
kk create cluster [flags]

Flags:
-f, --filename string        Path to a configuration file
-h, --help                   help for cluster
--skip-pull-images           Skip pre pull images
--with-kubernetes string     Specify a supported version of kubernetes
--with-kubesphere            Deploy a specific version of kubesphere (default v3.0.0)
-y, --yes                    Skip pre-check of the installation

Global Flags:
--debug   Print detailed information (default true)

Failed to init kubernetes cluster: interrupted by error
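As an aside, the two Docker preflight warnings above (docker service not enabled, cgroupfs driver) can be cleared before retrying; a sketch, assuming Docker reads /etc/docker/daemon.json:

    systemctl enable docker.service
    cat > /etc/docker/daemon.json <<'EOF'
    {
      "exec-opts": ["native.cgroupdriver=systemd"]
    }
    EOF
    systemctl daemon-reload && systemctl restart docker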
[root@master1 ~]# ./kk create cluster -f config-sample.yaml
+---------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+
| name    | sudo | curl | openssl | ebtables | socat | ipset | conntrack | docker | nfs client | ceph client | glusterfs client | time         |
+---------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+
| node2   | y    | y    | y       | y        | y     | y     | y         | y      |            |             |                  | CST 19:22:47 |
| node1   | y    | y    | y       | y        | y     | y     | y         | y      |            |             |                  | CST 19:22:46 |
| master1 | y    | y    | y       | y        | y     | y     | y         | y      |            |             |                  | CST 19:22:46 |
| node3   | y    | y    | y       | y        | y     | y     | y         | y      |            |             |                  | CST 19:22:46 |
| master2 | y    | y    | y       | y        | y     | y     | y         | y      |            |             |                  | CST 19:22:47 |
| master3 | y    | y    | y       | y        | y     | y     | y         | y      |            |             |                  | CST 19:22:47 |
+---------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+

This is a simple check of your environment.
Before installation, you should ensure that your machines meet all requirements specified at
https://github.com/kubesphere/kubekey#requirements-and-recommendations

Continue this installation? [yes/no]: yes
INFO[19:22:49 CST] Downloading Installation Files
INFO[19:22:49 CST] Downloading kubeadm …
INFO[19:22:49 CST] Downloading kubelet …
INFO[19:22:49 CST] Downloading kubectl …
INFO[19:22:49 CST] Downloading helm …
INFO[19:22:49 CST] Downloading kubecni …
INFO[19:22:49 CST] Configurating operating system …
[master2 172.16.1.8] MSG:
net.ipv4.ip_forward = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
net.ipv4.conf.all.promote_secondaries = 1
net.ipv4.conf.default.promote_secondaries = 1
net.ipv6.neigh.default.gc_thresh3 = 4096
net.ipv4.neigh.default.gc_thresh3 = 4096
kernel.softlockup_panic = 1
kernel.sysrq = 1
net.ipv6.conf.all.disable_ipv6 = 0
net.ipv6.conf.default.disable_ipv6 = 0
net.ipv6.conf.lo.disable_ipv6 = 0
kernel.numa_balancing = 0
kernel.shmmax = 68719476736
kernel.printk = 5
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
[node3 172.16.1.3] MSG:
net.ipv4.ip_forward = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
net.ipv4.conf.all.promote_secondaries = 1
net.ipv4.conf.default.promote_secondaries = 1
net.ipv6.neigh.default.gc_thresh3 = 4096
net.ipv4.neigh.default.gc_thresh3 = 4096
kernel.softlockup_panic = 1
net.ipv6.conf.all.disable_ipv6 = 0
net.ipv6.conf.default.disable_ipv6 = 0
net.ipv6.conf.lo.disable_ipv6 = 0
kernel.numa_balancing = 0
kernel.shmmax = 68719476736
kernel.printk = 5
kernel.sysrq = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
[node1 172.16.1.5] MSG:
net.ipv4.ip_forward = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
net.ipv4.conf.all.promote_secondaries = 1
net.ipv4.conf.default.promote_secondaries = 1
net.ipv6.neigh.default.gc_thresh3 = 4096
net.ipv4.neigh.default.gc_thresh3 = 4096
kernel.softlockup_panic = 1
net.ipv6.conf.all.disable_ipv6 = 0
net.ipv6.conf.default.disable_ipv6 = 0
net.ipv6.conf.lo.disable_ipv6 = 0
kernel.numa_balancing = 0
kernel.shmmax = 68719476736
kernel.printk = 5
kernel.sysrq = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
[node2 172.16.1.4] MSG:
net.ipv4.ip_forward = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
net.ipv4.conf.all.promote_secondaries = 1
net.ipv4.conf.default.promote_secondaries = 1
net.ipv6.neigh.default.gc_thresh3 = 4096
net.ipv4.neigh.default.gc_thresh3 = 4096
kernel.softlockup_panic = 1
net.ipv6.conf.all.disable_ipv6 = 0
net.ipv6.conf.default.disable_ipv6 = 0
net.ipv6.conf.lo.disable_ipv6 = 0
kernel.numa_balancing = 0
kernel.shmmax = 68719476736
kernel.printk = 5
kernel.sysrq = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
[master1 172.16.1.9] MSG:
net.ipv4.ip_forward = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
net.ipv4.conf.all.promote_secondaries = 1
net.ipv4.conf.default.promote_secondaries = 1
net.ipv6.neigh.default.gc_thresh3 = 4096
net.ipv4.neigh.default.gc_thresh3 = 4096
kernel.softlockup_panic = 1
net.ipv6.conf.all.disable_ipv6 = 0
net.ipv6.conf.default.disable_ipv6 = 0
net.ipv6.conf.lo.disable_ipv6 = 0
kernel.numa_balancing = 0
kernel.shmmax = 68719476736
kernel.printk = 5
kernel.sysrq = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
[master3 172.16.1.7] MSG:
net.ipv4.ip_forward = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
net.ipv4.conf.all.promote_secondaries = 1
net.ipv4.conf.default.promote_secondaries = 1
net.ipv6.neigh.default.gc_thresh3 = 4096
net.ipv4.neigh.default.gc_thresh3 = 4096
kernel.softlockup_panic = 1
net.ipv6.conf.all.disable_ipv6 = 0
net.ipv6.conf.default.disable_ipv6 = 0
net.ipv6.conf.lo.disable_ipv6 = 0
kernel.numa_balancing = 0
kernel.shmmax = 68719476736
kernel.printk = 5
kernel.sysrq = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
INFO[19:22:50 CST] Installing docker …
INFO[19:22:51 CST] Start to download images on all nodes
[node3] Downloading image: kubesphere/pause:3.2
[master2] Downloading image: kubesphere/etcd:v3.3.12
[node1] Downloading image: kubesphere/pause:3.2
[master1] Downloading image: kubesphere/etcd:v3.3.12
[node2] Downloading image: kubesphere/pause:3.2
[master3] Downloading image: kubesphere/etcd:v3.3.12
[master2] Downloading image: kubesphere/pause:3.2
[node1] Downloading image: kubesphere/kube-proxy:v1.18.6
[master1] Downloading image: kubesphere/pause:3.2
[master3] Downloading image: kubesphere/pause:3.2
[node3] Downloading image: kubesphere/kube-proxy:v1.18.6
[node2] Downloading image: kubesphere/kube-proxy:v1.18.6
[master2] Downloading image: kubesphere/kube-apiserver:v1.18.6
[master1] Downloading image: kubesphere/kube-apiserver:v1.18.6
[node1] Downloading image: coredns/coredns:1.6.9
[master3] Downloading image: kubesphere/kube-apiserver:v1.18.6
[node1] Downloading image: kubesphere/k8s-dns-node-cache:1.15.12
[master3] Downloading image: kubesphere/kube-controller-manager:v1.18.6
[node1] Downloading image: calico/kube-controllers:v3.15.1
[node2] Downloading image: coredns/coredns:1.6.9
[node3] Downloading image: coredns/coredns:1.6.9
[master2] Downloading image: kubesphere/kube-controller-manager:v1.18.6
[master1] Downloading image: kubesphere/kube-controller-manager:v1.18.6
[master3] Downloading image: kubesphere/kube-scheduler:v1.18.6
[node1] Downloading image: calico/cni:v3.15.1
[master2] Downloading image: kubesphere/kube-scheduler:v1.18.6
[node3] Downloading image: kubesphere/k8s-dns-node-cache:1.15.12
[node2] Downloading image: kubesphere/k8s-dns-node-cache:1.15.12
[master1] Downloading image: kubesphere/kube-scheduler:v1.18.6
[master3] Downloading image: kubesphere/kube-proxy:v1.18.6
[node1] Downloading image: calico/node:v3.15.1
[node3] Downloading image: calico/kube-controllers:v3.15.1
[master1] Downloading image: kubesphere/kube-proxy:v1.18.6
[master2] Downloading image: kubesphere/kube-proxy:v1.18.6
[master3] Downloading image: coredns/coredns:1.6.9
[node2] Downloading image: calico/kube-controllers:v3.15.1
[node1] Downloading image: calico/pod2daemon-flexvol:v3.15.1
[master1] Downloading image: coredns/coredns:1.6.9
[master3] Downloading image: kubesphere/k8s-dns-node-cache:1.15.12
[master1] Downloading image: kubesphere/k8s-dns-node-cache:1.15.12
[master3] Downloading image: calico/kube-controllers:v3.15.1
[master3] Downloading image: calico/cni:v3.15.1
[node3] Downloading image: calico/cni:v3.15.1
[master1] Downloading image: calico/kube-controllers:v3.15.1
[master2] Downloading image: coredns/coredns:1.6.9
[node2] Downloading image: calico/cni:v3.15.1
[master3] Downloading image: calico/node:v3.15.1
[master2] Downloading image: kubesphere/k8s-dns-node-cache:1.15.12
[node2] Downloading image: calico/node:v3.15.1
[master3] Downloading image: calico/pod2daemon-flexvol:v3.15.1
[master1] Downloading image: calico/cni:v3.15.1
[node3] Downloading image: calico/node:v3.15.1
[master2] Downloading image: calico/kube-controllers:v3.15.1
[node2] Downloading image: calico/pod2daemon-flexvol:v3.15.1
[node3] Downloading image: calico/pod2daemon-flexvol:v3.15.1
[master1] Downloading image: calico/node:v3.15.1
[master2] Downloading image: calico/cni:v3.15.1
[master1] Downloading image: calico/pod2daemon-flexvol:v3.15.1
[master2] Downloading image: calico/node:v3.15.1
[master2] Downloading image: calico/pod2daemon-flexvol:v3.15.1
INFO[19:24:35 CST] Generating etcd certs
INFO[19:24:37 CST] Synchronizing etcd certs
INFO[19:24:37 CST] Creating etcd service
INFO[19:24:48 CST] Starting etcd cluster
[master1 172.16.1.9] MSG:
Configuration file already exists
Waiting for etcd to start
[master2 172.16.1.8] MSG:
Configuration file already exists
[master3 172.16.1.7] MSG:
Configuration file already exists
INFO[19:24:55 CST] Refreshing etcd configuration
INFO[19:24:56 CST] Backup etcd data regularly
INFO[19:24:56 CST] Get cluster status
[master1 172.16.1.9] MSG:
Cluster already exists.
[master1 172.16.1.9] MSG:
v1.18.6
WARN[19:39:17 CST] Task failed …
WARN[19:39:17 CST] error: Failed to upload kubeadm certs: Failed to exec command: sudo -E /bin/sh -c "/usr/local/bin/kubeadm init phase upload-certs --upload-certs"
W1228 19:36:57.074360 23527 version.go:102] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://storage.googleapis.com/kubernetes-release/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
W1228 19:36:57.074407 23527 version.go:103] falling back to the local client version: v1.18.6
W1228 19:36:57.074532 23527 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
error execution phase upload-certs: error uploading certs: error creating token: timed out waiting for the condition
To see the stack trace of this error execute with --v=5 or higher: Process exited with status 1
Error: Failed to get cluster status: Failed to upload kubeadm certs: Failed to exec command: sudo -E /bin/sh -c "/usr/local/bin/kubeadm init phase upload-certs --upload-certs"
W1228 19:36:57.074360 23527 version.go:102] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://storage.googleapis.com/kubernetes-release/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
W1228 19:36:57.074407 23527 version.go:103] falling back to the local client version: v1.18.6
W1228 19:36:57.074532 23527 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
error execution phase upload-certs: error uploading certs: error creating token: timed out waiting for the condition
To see the stack trace of this error execute with --v=5 or higher: Process exited with status 1
Usage:
kk create cluster [flags]

Flags:
-f, --filename string        Path to a configuration file
-h, --help                   help for cluster
--skip-pull-images           Skip pre pull images
--with-kubernetes string     Specify a supported version of kubernetes
--with-kubesphere            Deploy a specific version of kubesphere (default v3.0.0)
-y, --yes                    Skip pre-check of the installation

Global Flags:
--debug   Print detailed information (default true)

Failed to get cluster status: Failed to upload kubeadm certs: Failed to exec command: sudo -E /bin/sh -c "/usr/local/bin/kubeadm init phase upload-certs --upload-certs"
W1228 19:36:57.074360 23527 version.go:102] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://storage.googleapis.com/kubernetes-release/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
W1228 19:36:57.074407 23527 version.go:103] falling back to the local client version: v1.18.6
W1228 19:36:57.074532 23527 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
error execution phase upload-certs: error uploading certs: error creating token: timed out waiting for the condition
To see the stack trace of this error execute with --v=5 or higher: Process exited with status 1

    7158798 Try `KKZONE=cn ./kk create cluster -f config.yaml`, i.e. add the environment variable setting.
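    For example:

        export KKZONE=cn
        ./kk create cluster -f config.yaml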

    7158798

    I see this reported in the log:

    Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
    error execution phase upload-certs: error uploading certs: error creating token: timed out waiting for the condition

    This kubeadm step connects to the kube-apiserver, so the kube-apiserver is probably unreachable. First, check whether a firewall or security group is enabled on the machines or between them. Second, check whether you are using a Tencent Cloud LB; if so, you probably need to use a public (external) LB, and you also need to configure the security group attached to the LB.
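    A quick way to check both points from any node (a sketch; lb.kubesphere.local:6443 is the control-plane endpoint used in this setup, and any HTTP response, even a 403, means the port is reachable):

        systemctl status firewalld
        curl -vk https://lb.kubesphere.local:6443/healthz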

      Cauchy
      controlPlaneEndpoint:
      domain: lb.kubesphere.local
      address: "172.16.1.0"
      They are all on the same private network, so there shouldn't be any connectivity problem, right?
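      For reference, the corresponding block in config-sample.yaml normally looks like this (the port is assumed from the logs):

          controlPlaneEndpoint:
            domain: lb.kubesphere.local
            address: "172.16.1.0"
            port: 6443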


        The firewall is turned off. The load balancer uses internal (private network) mode and is on the same private network as the servers.

        [root@master1 ~]# sudo mkdir -p /etc/docker
        [root@master1 ~]# sudo tee /etc/docker/daemon.json <<-'EOF'
        {
        "registry-mirrors": ["https://r500ha9l.mirror.aliyuncs.com"]
        }
        EOF
        {
        "registry-mirrors": ["https://r500ha9l.mirror.aliyuncs.com"]
        }
        [root@master1 ~]# sudo systemctl daemon-reload
        [root@master1 ~]# sudo systemctl restart docker
        [root@master1 ~]# sudo service docker start
        Redirecting to /bin/systemctl start docker.service
        [root@master1 ~]# export KKZONE=cn
        [root@master1 ~]# curl -sfL https://get-kk.kubesphere.io | VERSION=v1.0.1 sh -

        Downloading kubekey v1.0.1 from https://kubernetes.pek3b.qingstor.com/kubekey/releases/download/v1.0.1/kubekey-v1.0.1-linux-amd64.tar.gz

        Kubekey v1.0.1 Download Complete!

        [root@master1 ~]# chmod +x kk
        [root@master1 ~]# ./kk create config --with-kubesphere v3.0.0 --with-kubernetes v1.18.6 -f config-sample.yaml
        [root@master1 ~]# ./kk create cluster -f config-sample.yaml
        +---------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+
        | name    | sudo | curl | openssl | ebtables | socat | ipset | conntrack | docker | nfs client | ceph client | glusterfs client | time         |
        +---------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+
        | node2   | y    | y    | y       | y        | y     | y     | y         |        |            |             |                  | CST 21:52:37 |
        | master3 | y    | y    | y       | y        | y     | y     | y         |        |            |             |                  | CST 21:52:37 |
        | node3   | y    | y    | y       | y        | y     | y     | y         |        |            |             |                  | CST 21:52:37 |
        | master1 | y    | y    | y       | y        | y     | y     | y         | y      |            |             |                  | CST 21:52:37 |
        | node1   | y    | y    | y       | y        | y     | y     | y         |        |            |             |                  | CST 21:52:37 |
        | master2 | y    | y    | y       | y        | y     | y     | y         |        |            |             |                  | CST 21:52:37 |
        +---------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+

        This is a simple check of your environment.
        Before installation, you should ensure that your machines meet all requirements specified at
        https://github.com/kubesphere/kubekey#requirements-and-recommendations

        Continue this installation? [yes/no]: yes
        INFO[21:52:43 CST] Downloading Installation Files
        INFO[21:52:43 CST] Downloading kubeadm …
        INFO[21:53:21 CST] Downloading kubelet …
        INFO[21:55:12 CST] Downloading kubectl …
        INFO[21:55:56 CST] Downloading helm …
        INFO[21:56:37 CST] Downloading kubecni …
        INFO[21:57:12 CST] Configurating operating system …
        [node3 172.16.1.3] MSG:
        net.ipv4.ip_forward = 1
        net.ipv4.conf.default.rp_filter = 1
        net.ipv4.conf.default.accept_source_route = 0
        kernel.core_uses_pid = 1
        net.ipv4.tcp_syncookies = 1
        kernel.msgmnb = 65536
        kernel.msgmax = 65536
        net.ipv4.conf.all.promote_secondaries = 1
        net.ipv4.conf.default.promote_secondaries = 1
        net.ipv6.neigh.default.gc_thresh3 = 4096
        net.ipv4.neigh.default.gc_thresh3 = 4096
        kernel.softlockup_panic = 1
        net.ipv6.conf.all.disable_ipv6 = 0
        net.ipv6.conf.default.disable_ipv6 = 0
        net.ipv6.conf.lo.disable_ipv6 = 0
        kernel.numa_balancing = 0
        kernel.shmmax = 68719476736
        kernel.printk = 5
        kernel.sysrq = 1
        vm.swappiness = 0
        net.bridge.bridge-nf-call-arptables = 1
        net.bridge.bridge-nf-call-ip6tables = 1
        net.bridge.bridge-nf-call-iptables = 1
        net.ipv4.ip_local_reserved_ports = 30000-32767
        [node2 172.16.1.4] MSG:
        net.ipv4.ip_forward = 1
        net.ipv4.conf.default.rp_filter = 1
        net.ipv4.conf.default.accept_source_route = 0
        kernel.core_uses_pid = 1
        net.ipv4.tcp_syncookies = 1
        kernel.msgmnb = 65536
        kernel.msgmax = 65536
        net.ipv4.conf.all.promote_secondaries = 1
        net.ipv4.conf.default.promote_secondaries = 1
        net.ipv6.neigh.default.gc_thresh3 = 4096
        net.ipv4.neigh.default.gc_thresh3 = 4096
        kernel.softlockup_panic = 1
        net.ipv6.conf.all.disable_ipv6 = 0
        net.ipv6.conf.default.disable_ipv6 = 0
        net.ipv6.conf.lo.disable_ipv6 = 0
        kernel.numa_balancing = 0
        kernel.shmmax = 68719476736
        kernel.printk = 5
        kernel.sysrq = 1
        vm.swappiness = 0
        net.bridge.bridge-nf-call-arptables = 1
        net.bridge.bridge-nf-call-ip6tables = 1
        net.bridge.bridge-nf-call-iptables = 1
        net.ipv4.ip_local_reserved_ports = 30000-32767
        [node1 172.16.1.5] MSG:
        net.ipv4.ip_forward = 1
        net.ipv4.conf.default.rp_filter = 1
        net.ipv4.conf.default.accept_source_route = 0
        kernel.core_uses_pid = 1
        net.ipv4.tcp_syncookies = 1
        kernel.msgmnb = 65536
        kernel.msgmax = 65536
        net.ipv4.conf.all.promote_secondaries = 1
        net.ipv4.conf.default.promote_secondaries = 1
        net.ipv6.neigh.default.gc_thresh3 = 4096
        net.ipv4.neigh.default.gc_thresh3 = 4096
        kernel.softlockup_panic = 1
        net.ipv6.conf.all.disable_ipv6 = 0
        net.ipv6.conf.default.disable_ipv6 = 0
        net.ipv6.conf.lo.disable_ipv6 = 0
        kernel.numa_balancing = 0
        kernel.shmmax = 68719476736
        kernel.printk = 5
        kernel.sysrq = 1
        vm.swappiness = 0
        net.bridge.bridge-nf-call-arptables = 1
        net.bridge.bridge-nf-call-ip6tables = 1
        net.bridge.bridge-nf-call-iptables = 1
        net.ipv4.ip_local_reserved_ports = 30000-32767
        [master2 172.16.1.8] MSG:
        net.ipv4.ip_forward = 1
        net.ipv4.conf.default.rp_filter = 1
        net.ipv4.conf.default.accept_source_route = 0
        kernel.core_uses_pid = 1
        net.ipv4.tcp_syncookies = 1
        kernel.msgmnb = 65536
        kernel.msgmax = 65536
        net.ipv4.conf.all.promote_secondaries = 1
        net.ipv4.conf.default.promote_secondaries = 1
        net.ipv6.neigh.default.gc_thresh3 = 4096
        net.ipv4.neigh.default.gc_thresh3 = 4096
        kernel.softlockup_panic = 1
        kernel.sysrq = 1
        net.ipv6.conf.all.disable_ipv6 = 0
        net.ipv6.conf.default.disable_ipv6 = 0
        net.ipv6.conf.lo.disable_ipv6 = 0
        kernel.numa_balancing = 0
        kernel.shmmax = 68719476736
        kernel.printk = 5
        vm.swappiness = 0
        net.bridge.bridge-nf-call-arptables = 1
        net.bridge.bridge-nf-call-ip6tables = 1
        net.bridge.bridge-nf-call-iptables = 1
        net.ipv4.ip_local_reserved_ports = 30000-32767
        [master1 172.16.1.9] MSG:
        net.ipv4.ip_forward = 1
        net.ipv4.conf.default.rp_filter = 1
        net.ipv4.conf.default.accept_source_route = 0
        kernel.core_uses_pid = 1
        net.ipv4.tcp_syncookies = 1
        kernel.msgmnb = 65536
        kernel.msgmax = 65536
        net.ipv4.conf.all.promote_secondaries = 1
        net.ipv4.conf.default.promote_secondaries = 1
        net.ipv6.neigh.default.gc_thresh3 = 4096
        net.ipv4.neigh.default.gc_thresh3 = 4096
        kernel.softlockup_panic = 1
        net.ipv6.conf.all.disable_ipv6 = 0
        net.ipv6.conf.default.disable_ipv6 = 0
        net.ipv6.conf.lo.disable_ipv6 = 0
        kernel.numa_balancing = 0
        kernel.shmmax = 68719476736
        kernel.printk = 5
        kernel.sysrq = 1
        vm.swappiness = 0
        net.bridge.bridge-nf-call-arptables = 1
        net.bridge.bridge-nf-call-ip6tables = 1
        net.bridge.bridge-nf-call-iptables = 1
        net.ipv4.ip_local_reserved_ports = 30000-32767
        [master3 172.16.1.7] MSG:
        net.ipv4.ip_forward = 1
        net.ipv4.conf.default.rp_filter = 1
        net.ipv4.conf.default.accept_source_route = 0
        kernel.core_uses_pid = 1
        net.ipv4.tcp_syncookies = 1
        kernel.msgmnb = 65536
        kernel.msgmax = 65536
        net.ipv4.conf.all.promote_secondaries = 1
        net.ipv4.conf.default.promote_secondaries = 1
        net.ipv6.neigh.default.gc_thresh3 = 4096
        net.ipv4.neigh.default.gc_thresh3 = 4096
        kernel.softlockup_panic = 1
        net.ipv6.conf.all.disable_ipv6 = 0
        net.ipv6.conf.default.disable_ipv6 = 0
        net.ipv6.conf.lo.disable_ipv6 = 0
        kernel.numa_balancing = 0
        kernel.shmmax = 68719476736
        kernel.printk = 5
        kernel.sysrq = 1
        vm.swappiness = 0
        net.bridge.bridge-nf-call-arptables = 1
        net.bridge.bridge-nf-call-ip6tables = 1
        net.bridge.bridge-nf-call-iptables = 1
        net.ipv4.ip_local_reserved_ports = 30000-32767
        INFO[21:57:13 CST] Installing docker …
        INFO[21:58:56 CST] Start to download images on all nodes
        [node3] Downloading image: kubesphere/pause:3.2
        [node1] Downloading image: kubesphere/pause:3.2
        [master2] Downloading image: kubesphere/etcd:v3.3.12
        [master3] Downloading image: kubesphere/etcd:v3.3.12
        [master1] Downloading image: kubesphere/etcd:v3.3.12
        [node2] Downloading image: kubesphere/pause:3.2
        [node3] Downloading image: kubesphere/kube-proxy:v1.18.6
        [node1] Downloading image: kubesphere/kube-proxy:v1.18.6
        [node2] Downloading image: kubesphere/kube-proxy:v1.18.6
        [node3] Downloading image: coredns/coredns:1.6.9
        [node2] Downloading image: coredns/coredns:1.6.9
        [node3] Downloading image: kubesphere/k8s-dns-node-cache:1.15.12
        [master2] Downloading image: kubesphere/pause:3.2
        [master1] Downloading image: kubesphere/pause:3.2
        [node2] Downloading image: kubesphere/k8s-dns-node-cache:1.15.12
        [master1] Downloading image: kubesphere/kube-apiserver:v1.18.6
        [node3] Downloading image: calico/kube-controllers:v3.15.1
        [master3] Downloading image: kubesphere/pause:3.2
        [master2] Downloading image: kubesphere/kube-apiserver:v1.18.6
        [master3] Downloading image: kubesphere/kube-apiserver:v1.18.6
        [node2] Downloading image: calico/kube-controllers:v3.15.1
        [node3] Downloading image: calico/cni:v3.15.1
        [node2] Downloading image: calico/cni:v3.15.1
        [node3] Downloading image: calico/node:v3.15.1
        [node3] Downloading image: calico/pod2daemon-flexvol:v3.15.1
        [node2] Downloading image: calico/node:v3.15.1
        [node1] Downloading image: coredns/coredns:1.6.9
        [node2] Downloading image: calico/pod2daemon-flexvol:v3.15.1
        [master1] Downloading image: kubesphere/kube-controller-manager:v1.18.6
        [node1] Downloading image: kubesphere/k8s-dns-node-cache:1.15.12
        [master2] Downloading image: kubesphere/kube-controller-manager:v1.18.6
        [master3] Downloading image: kubesphere/kube-controller-manager:v1.18.6
        [master1] Downloading image: kubesphere/kube-scheduler:v1.18.6
        [master2] Downloading image: kubesphere/kube-scheduler:v1.18.6
        [master3] Downloading image: kubesphere/kube-scheduler:v1.18.6
        [node1] Downloading image: calico/kube-controllers:v3.15.1
        [master1] Downloading image: kubesphere/kube-proxy:v1.18.6
        [master2] Downloading image: kubesphere/kube-proxy:v1.18.6
        [master3] Downloading image: kubesphere/kube-proxy:v1.18.6
        [node1] Downloading image: calico/cni:v3.15.1
        [master1] Downloading image: coredns/coredns:1.6.9
        [master2] Downloading image: coredns/coredns:1.6.9
        [master3] Downloading image: coredns/coredns:1.6.9
        [master2] Downloading image: kubesphere/k8s-dns-node-cache:1.15.12
        [master1] Downloading image: kubesphere/k8s-dns-node-cache:1.15.12
        [master3] Downloading image: kubesphere/k8s-dns-node-cache:1.15.12
        [master2] Downloading image: calico/kube-controllers:v3.15.1
        [master3] Downloading image: calico/kube-controllers:v3.15.1
        [master1] Downloading image: calico/kube-controllers:v3.15.1
        [node1] Downloading image: calico/node:v3.15.1
        [master2] Downloading image: calico/cni:v3.15.1
        [master3] Downloading image: calico/cni:v3.15.1
        [master1] Downloading image: calico/cni:v3.15.1
        [node1] Downloading image: calico/pod2daemon-flexvol:v3.15.1
        [master2] Downloading image: calico/node:v3.15.1
        [master3] Downloading image: calico/node:v3.15.1
        [master1] Downloading image: calico/node:v3.15.1
        [master2] Downloading image: calico/pod2daemon-flexvol:v3.15.1
        [master3] Downloading image: calico/pod2daemon-flexvol:v3.15.1
        [master1] Downloading image: calico/pod2daemon-flexvol:v3.15.1
        INFO[22:04:50 CST] Generating etcd certs
        INFO[22:04:51 CST] Synchronizing etcd certs
        INFO[22:04:51 CST] Creating etcd service
        [master2 172.16.1.8] MSG:
        Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /etc/systemd/system/etcd.service.
        [master3 172.16.1.7] MSG:
        Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /etc/systemd/system/etcd.service.
        [master1 172.16.1.9] MSG:
        Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /etc/systemd/system/etcd.service.
        INFO[22:04:53 CST] Starting etcd cluster
        [master1 172.16.1.9] MSG:
        Configuration file will be created
        [master2 172.16.1.8] MSG:
        Configuration file will be created
        [master3 172.16.1.7] MSG:
        Configuration file will be created
        INFO[22:04:53 CST] Refreshing etcd configuration
        Waiting for etcd to start
        Waiting for etcd to start
        Waiting for etcd to start
        INFO[22:04:58 CST] Backup etcd data regularly
        INFO[22:04:58 CST] Get cluster status
        [master1 172.16.1.9] MSG:
        Cluster will be created.
        [master2 172.16.1.8] MSG:
        Cluster will be created.
        [master3 172.16.1.7] MSG:
        Cluster will be created.
        INFO[22:04:59 CST] Installing kube binaries
        Push /root/kubekey/v1.18.6/amd64/kubeadm to 172.16.1.3:/tmp/kubekey/kubeadm Done
        Push /root/kubekey/v1.18.6/amd64/kubeadm to 172.16.1.5:/tmp/kubekey/kubeadm Done
        Push /root/kubekey/v1.18.6/amd64/kubeadm to 172.16.1.4:/tmp/kubekey/kubeadm Done
        Push /root/kubekey/v1.18.6/amd64/kubeadm to 172.16.1.8:/tmp/kubekey/kubeadm Done
        Push /root/kubekey/v1.18.6/amd64/kubeadm to 172.16.1.7:/tmp/kubekey/kubeadm Done
        Push /root/kubekey/v1.18.6/amd64/kubeadm to 172.16.1.9:/tmp/kubekey/kubeadm Done
        Push /root/kubekey/v1.18.6/amd64/kubelet to 172.16.1.9:/tmp/kubekey/kubelet Done
        Push /root/kubekey/v1.18.6/amd64/kubectl to 172.16.1.9:/tmp/kubekey/kubectl Done
        Push /root/kubekey/v1.18.6/amd64/helm to 172.16.1.9:/tmp/kubekey/helm Done
        Push /root/kubekey/v1.18.6/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 172.16.1.9:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz Done
        Push /root/kubekey/v1.18.6/amd64/kubelet to 172.16.1.8:/tmp/kubekey/kubelet Done
        Push /root/kubekey/v1.18.6/amd64/kubelet to 172.16.1.7:/tmp/kubekey/kubelet Done
        Push /root/kubekey/v1.18.6/amd64/kubelet to 172.16.1.3:/tmp/kubekey/kubelet Done
        Push /root/kubekey/v1.18.6/amd64/kubelet to 172.16.1.5:/tmp/kubekey/kubelet Done
        Push /root/kubekey/v1.18.6/amd64/kubectl to 172.16.1.8:/tmp/kubekey/kubectl Done
        Push /root/kubekey/v1.18.6/amd64/kubectl to 172.16.1.3:/tmp/kubekey/kubectl Done
        Push /root/kubekey/v1.18.6/amd64/kubectl to 172.16.1.5:/tmp/kubekey/kubectl Done
        Push /root/kubekey/v1.18.6/amd64/kubectl to 172.16.1.7:/tmp/kubekey/kubectl Done
        Push /root/kubekey/v1.18.6/amd64/kubelet to 172.16.1.4:/tmp/kubekey/kubelet Done
        Push /root/kubekey/v1.18.6/amd64/helm to 172.16.1.8:/tmp/kubekey/helm Done
        Push /root/kubekey/v1.18.6/amd64/helm to 172.16.1.5:/tmp/kubekey/helm Done
        Push /root/kubekey/v1.18.6/amd64/helm to 172.16.1.3:/tmp/kubekey/helm Done
        Push /root/kubekey/v1.18.6/amd64/kubectl to 172.16.1.4:/tmp/kubekey/kubectl Done
        Push /root/kubekey/v1.18.6/amd64/helm to 172.16.1.7:/tmp/kubekey/helm Done
        Push /root/kubekey/v1.18.6/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 172.16.1.3:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz Done
        Push /root/kubekey/v1.18.6/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 172.16.1.8:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz Done
        Push /root/kubekey/v1.18.6/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 172.16.1.5:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz Done
        Push /root/kubekey/v1.18.6/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 172.16.1.7:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz Done
        Push /root/kubekey/v1.18.6/amd64/helm to 172.16.1.4:/tmp/kubekey/helm Done
        Push /root/kubekey/v1.18.6/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 172.16.1.4:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz Done
        INFO[22:05:06 CST] Initializing kubernetes cluster
        [master1 172.16.1.9] MSG:
        [reset] Reading configuration from the cluster…
        [reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
        W1228 22:10:00.313504 18428 reset.go:99] [reset] Unable to fetch the kubeadm-config ConfigMap from cluster: failed to get config map: Get https://lb.kubesphere.local:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config?timeout=10s: context deadline exceeded
        [preflight] Running pre-flight checks
        W1228 22:10:00.313595 18428 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
        [reset] No etcd config found. Assuming external etcd
        [reset] Please, manually reset etcd to prevent further issues
        [reset] Stopping the kubelet service
        [reset] Unmounting mounted directories in “/var/lib/kubelet”
        [reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
        [reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
        [reset] Deleting contents of stateful directories: [/var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]

        The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

        The reset process does not reset or clean up iptables rules or IPVS tables.
        If you wish to reset iptables, you must do so manually by using the "iptables" command.

        If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
        to reset your system's IPVS tables.

        The reset process does not clean your kubeconfig files and you must remove them manually.
        Please, check the contents of the $HOME/.kube/config file.
        [master1 172.16.1.9] MSG:
        [reset] Reading configuration from the cluster…
        [reset] FYI: You can look at this config file with ‘kubectl -n kube-system get cm kubeadm-config -oyaml’
        W1228 22:15:11.814047 21333 reset.go:99] [reset] Unable to fetch the kubeadm-config ConfigMap from cluster: failed to get config map: Get https://lb.kubesphere.local:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config?timeout=10s: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
        [preflight] Running pre-flight checks
        W1228 22:15:11.814152 21333 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
        [reset] No etcd config found. Assuming external etcd
        [reset] Please, manually reset etcd to prevent further issues
        [reset] Stopping the kubelet service
        [reset] Unmounting mounted directories in “/var/lib/kubelet”
        [reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
        [reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
        [reset] Deleting contents of stateful directories: [/var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]

        The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

        The reset process does not reset or clean up iptables rules or IPVS tables.
        If you wish to reset iptables, you must do so manually by using the "iptables" command.

        If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
        to reset your system's IPVS tables.

        The reset process does not clean your kubeconfig files and you must remove them manually.
        Please, check the contents of the $HOME/.kube/config file.
        ERRO[22:19:43 CST] Failed to init kubernetes cluster: Failed to exec command: sudo -E /bin/sh -c "/usr/local/bin/kubeadm init --config=/etc/kubernetes/kubeadm-config.yaml"
        W1228 22:15:13.231443 21732 utils.go:26] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
        W1228 22:15:13.231545 21732 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
        [init] Using Kubernetes version: v1.18.6
        [preflight] Running pre-flight checks
        [WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.1. Latest validated version: 19.03
        [preflight] Pulling images required for setting up a Kubernetes cluster
        [preflight] This might take a minute or two, depending on the speed of your internet connection
        [preflight] You can also perform this action in beforehand using ‘kubeadm config images pull’
        [kubelet-start] Writing kubelet environment file with flags to file “/var/lib/kubelet/kubeadm-flags.env”
        [kubelet-start] Writing kubelet configuration to file “/var/lib/kubelet/config.yaml”
        [kubelet-start] Starting the kubelet
        [certs] Using certificateDir folder “/etc/kubernetes/pki”
        [certs] Generating “ca” certificate and key
        [certs] Generating “apiserver” certificate and key
        [certs] apiserver serving cert is signed for DNS names [master1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.kubesphere.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost lb.kubesphere.local master1 master1.cluster.local master2 master2.cluster.local master3 master3.cluster.local node1 node1.cluster.local node2 node2.cluster.local node3 node3.cluster.local] and IPs [10.233.0.1 172.16.1.9 127.0.0.1 172.16.1.0 172.16.1.9 172.16.1.8 172.16.1.7 172.16.1.5 172.16.1.4 172.16.1.3 10.233.0.1]
        [certs] Generating “apiserver-kubelet-client” certificate and key
        [certs] Generating “front-proxy-ca” certificate and key
        [certs] Generating “front-proxy-client” certificate and key
        [certs] External etcd mode: Skipping etcd/ca certificate authority generation
        [certs] External etcd mode: Skipping etcd/server certificate generation
        [certs] External etcd mode: Skipping etcd/peer certificate generation
        [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation
        [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation
        [certs] Generating “sa” key and public key
        [kubeconfig] Using kubeconfig folder “/etc/kubernetes”
        [kubeconfig] Writing “admin.conf” kubeconfig file
        [kubeconfig] Writing “kubelet.conf” kubeconfig file
        [kubeconfig] Writing “controller-manager.conf” kubeconfig file
        [kubeconfig] Writing “scheduler.conf” kubeconfig file
        [control-plane] Using manifest folder “/etc/kubernetes/manifests”
        [control-plane] Creating static Pod manifest for “kube-apiserver”
        W1228 22:15:15.596719 21732 manifests.go:225] the default kube-apiserver authorization-mode is “Node,RBAC”; using “Node,RBAC”
        [control-plane] Creating static Pod manifest for “kube-controller-manager”
        W1228 22:15:15.602721 21732 manifests.go:225] the default kube-apiserver authorization-mode is “Node,RBAC”; using “Node,RBAC”
        [control-plane] Creating static Pod manifest for “kube-scheduler”
        W1228 22:15:15.603480 21732 manifests.go:225] the default kube-apiserver authorization-mode is “Node,RBAC”; using “Node,RBAC”
        [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory “/etc/kubernetes/manifests”. This can take up to 4m0s
        [kubelet-check] Initial timeout of 40s passed.

            Unfortunately, an error has occurred:
                    timed out waiting for the condition
        
            This error is likely caused by:
                    - The kubelet is not running
                    - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
        
            If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
                    - 'systemctl status kubelet'
                    - 'journalctl -xeu kubelet'
        
            Additionally, a control plane component may have crashed or exited when started by the container runtime.
            To troubleshoot, list all containers using your preferred container runtimes CLI.
        
            Here is one example how you may list all Kubernetes containers running in docker:
                    - 'docker ps -a | grep kube | grep -v pause'
                    Once you have found the failing container, you can inspect its logs with:
                    - 'docker logs CONTAINERID'

        error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
        To see the stack trace of this error execute with --v=5 or higher: Process exited with status 1 node=172.16.1.9
        WARN[22:19:43 CST] Task failed …
        WARN[22:19:43 CST] error: interrupted by error
        Error: Failed to init kubernetes cluster: interrupted by error
        Usage:
        kk create cluster [flags]

        Flags:
        -f, --filename string        Path to a configuration file
        -h, --help                   help for cluster
        --skip-pull-images           Skip pre pull images
        --with-kubernetes string     Specify a supported version of kubernetes
        --with-kubesphere            Deploy a specific version of kubesphere (default v3.0.0)
        -y, --yes                    Skip pre-check of the installation

        Global Flags:
        --debug   Print detailed information (default true)

        Failed to init kubernetes cluster: interrupted by error

        I don't know the reason. I tried many times and it only succeeded once.


          7158798

          W1228 22:15:11.814047 21333 reset.go:99] [reset] Unable to fetch the kubeadm-config ConfigMap from cluster: failed to get config map: Get https://lb.kubesphere.local:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config?timeout=10s: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

          The apiserver cannot be reached. Check the firewall, and check whether the firewall on the LB is turned off.
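          To quickly rule out a host firewall (assuming firewalld on CentOS 7.8; the security groups still have to be checked in the Tencent Cloud console):

              systemctl status firewalld
              firewall-cmd --state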

          The 6 replies under the topic "安装失败" (Installation failed) have been merged into this thread.

          Jeff It is the LB, subnet mask 172.16.0.0/16, and I enabled connectivity between the LB and the servers.

          Enable default pass-through
          When enabled, traffic between the CLB and the CVMs is allowed by default, and traffic from the CLB only has to pass the security group checks on the CLB. When disabled, traffic from the CLB must pass the security group checks on both the CLB and the CVM. When the CLB is not bound to any security group, its listening ports are open to all IPs by default.

          I have already clicked Enable.

          Jeff

          [root@master2 ~]# curl -k https://172.16.1.0:6443
          {
          "kind": "Status",
          "apiVersion": "v1",
          "metadata": {

          },
          "status": "Failure",
          "message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
          "reason": "Forbidden",
          "details": {

          },
          "code": 403
          }[root@master2 ~]#

          It is reachable.

            7158798
            Reference: https://cloud.tencent.com/document/product/214/5411#11

            Notes on the private network loopback issue
            An internal (private network) CLB does not support the same private IP acting as both client and backend server; in that case the Client IP and Server IP seen by the CLB are identical, and the access fails.
            If your client also needs to act as a backend server, bind at least two backend servers. The CLB has a loopback-avoidance policy: when Client A accesses the load balancer, the load balancer automatically schedules the request to a backend server other than Client A.

            In this situation the kubelet on master1 may be unable to reach the kube-apiserver through the LB, while master2 can still reach the apiserver on master1.
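            That can be verified by running the same request from both machines (a sketch; the 403 response shown earlier counts as reachable, a hang or timeout does not):

                # on master1 (itself a backend of the CLB): expected to time out if the loopback limitation applies
                curl -k --max-time 5 https://172.16.1.0:6443/
                # on master2: expected to return the 403 Status JSON shown earlier
                curl -k --max-time 5 https://172.16.1.0:6443/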

              RolandMa1986
              I think there is a problem with that explanation. I just created a CLB on 192.168.0.0/16 and then tried to bind servers, and found that servers not on the same private network cannot be bound. The CLB must be on the same private network as the cloud servers, otherwise it simply cannot be used.

                7158798 That does not conflict with what the Tencent documentation says. The documentation says a CVM in the 172.16.0.0/16 segment cannot loop back to itself through the CLB; it does not say the CLB can use a different subnet. In other words, if a service on a CVM needs to reach itself through the CLB, that is not supported.

                  RolandMa1986

                  Notes on the private network loopback issue
                  An internal CLB does not support the same private IP acting as both client and backend server; in that case the Client IP and Server IP seen by the CLB are identical, and the access fails.
                  If your client also needs to act as a backend server, bind at least two backend servers. The CLB has a loopback-avoidance policy: when Client A accesses the load balancer, the load balancer automatically schedules the request to a backend server other than Client A.

                  "If your client also needs to act as a backend server, bind at least two backend servers" - I bound three.

                    7158798 That statement only applies to ordinary servers; you have to consider the specifics of Kubernetes.
                    In your case, you ran the installation on master1. Assuming master1 is the first server, kubeadm and kubectl on it cannot talk to the kube-apiserver through the LB.
                    There are other cases too: for example, after the first master is installed, the network plugin has to be installed, but at that point the kubelet cannot talk to the API, which may prevent the network plugin from installing successfully.
                    So if you run into the LB loopback restriction, there is basically no way to guarantee a successful installation, and no way to achieve high availability.
                    Azure has a similar issue, but Azure offers a premium LB tier that solves it.
                    For Tencent Cloud CLB, the current workaround is to use a public (internet-facing) CLB and use the public IP as the lb address.
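                    With a public CLB, the only change in config-sample.yaml would be the endpoint address (the IP below is a placeholder, not a value from this thread):

                        controlPlaneEndpoint:
                          domain: lb.kubesphere.local
                          address: "<public-CLB-IP>"
                          port: 6443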