Installed version: v2.1.1
OS version: CentOS 7.7
Installation method: offline deployment
Deployment environment: Alibaba Cloud ECS
Deployment architecture: high-availability install, 3 masters + 1 node (more nodes to be added later)
Below is the error output from the installation script:
TASK [kubernetes/master : kubeadm | Initialize first master] *******************************************************************************************************************************************************
Friday 04 September 2020 08:44:18 +0800 (0:00:00.169) 0:04:33.713 ******
skipping: [master2]
skipping: [master3]
FAILED - RETRYING: kubeadm | Initialize first master (3 retries left).
FAILED - RETRYING: kubeadm | Initialize first master (2 retries left).
FAILED - RETRYING: kubeadm | Initialize first master (1 retries left).
fatal: [master1]: FAILED! => {
"attempts": 3,
"changed": true,
"cmd": [
"timeout",
"-k",
"300s",
"300s",
"/usr/local/bin/kubeadm",
"init",
"--config=/etc/kubernetes/kubeadm-config.yaml",
"--ignore-preflight-errors=all",
"--skip-phases=addon/coredns",
"--upload-certs"
],
"delta": "0:05:00.032045",
"end": "2020-09-04 09:04:33.829405",
"failed_when_result": true,
"rc": 124,
"start": "2020-09-04 08:59:33.797360"
}
STDOUT:
[init] Using Kubernetes version: v1.16.7
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/ssl"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] External etcd mode: Skipping etcd/ca certificate authority generation
[certs] External etcd mode: Skipping etcd/server certificate generation
[certs] External etcd mode: Skipping etcd/peer certificate generation
[certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation
[certs] External etcd mode: Skipping apiserver-etcd-client certificate generation
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/scheduler.conf"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[controlplane] Adding extra host path mount "etc-pki-tls" to "kube-apiserver"
[controlplane] Adding extra host path mount "etc-pki-ca-trust" to "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[controlplane] Adding extra host path mount "etc-pki-tls" to "kube-apiserver"
[controlplane] Adding extra host path mount "etc-pki-ca-trust" to "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[controlplane] Adding extra host path mount "etc-pki-tls" to "kube-apiserver"
[controlplane] Adding extra host path mount "etc-pki-ca-trust" to "kube-apiserver"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 5m0s
[kubelet-check] Initial timeout of 40s passed.
STDERR:
[WARNING Port-6443]: Port 6443 is in use
[WARNING Port-10251]: Port 10251 is in use
[WARNING Port-10252]: Port 10252 is in use
[WARNING FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
[WARNING FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
[WARNING FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Port-10250]: Port 10250 is in use
MSG:
non-zero return code
NO MORE HOSTS LEFT *************************************************************************************************************************************************************************************************
PLAY RECAP *********************************************************************************************************************************************************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=0
master1 : ok=420 changed=49 unreachable=0 failed=1
master2 : ok=405 changed=45 unreachable=0 failed=0
master3 : ok=405 changed=45 unreachable=0 failed=0
node1 : ok=329 changed=29 unreachable=0 failed=0
Friday 04 September 2020 09:04:33 +0800 (0:20:15.523) 0:24:49.236 ******
===============================================================================
kubernetes/master : kubeadm | Initialize first master ---------------------------------------------------------------------------------------------------------------------------------------------------- 1215.52s
etcd : reload etcd ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 10.54s
kubernetes/preinstall : Install packages requirements ------------------------------------------------------------------------------------------------------------------------------------------------------- 9.10s
etcd : Gen_certs | Write etcd master certs ------------------------------------------------------------------------------------------------------------------------------------------------------------------ 7.08s
etcd : Configure | Check if etcd cluster is healthy --------------------------------------------------------------------------------------------------------------------------------------------------------- 6.47s
etcd : wait for etcd up ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 5.40s
download : download | Download files / images --------------------------------------------------------------------------------------------------------------------------------------------------------------- 2.58s
container-engine/docker : Docker | reload docker ------------------------------------------------------------------------------------------------------------------------------------------------------------ 2.43s
etcd : Gen_certs | Gather etcd master certs ----------------------------------------------------------------------------------------------------------------------------------------------------------------- 2.42s
download : download | Sync files / images from ansible host to nodes ---------------------------------------------------------------------------------------------------------------------------------------- 1.75s
etcd : Backup etcd v3 data ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 1.61s
download : download | Download files / images --------------------------------------------------------------------------------------------------------------------------------------------------------------- 1.42s
download : download | Sync files / images from ansible host to nodes ---------------------------------------------------------------------------------------------------------------------------------------- 1.37s
etcd : Refresh Time Fact ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ 1.27s
download : download | Download files / images --------------------------------------------------------------------------------------------------------------------------------------------------------------- 1.26s
download : download | Sync files / images from ansible host to nodes ---------------------------------------------------------------------------------------------------------------------------------------- 1.26s
download : download | Sync files / images from ansible host to nodes ---------------------------------------------------------------------------------------------------------------------------------------- 1.25s
download : download | Download files / images --------------------------------------------------------------------------------------------------------------------------------------------------------------- 1.25s
download : download | Download files / images --------------------------------------------------------------------------------------------------------------------------------------------------------------- 1.25s
download : download | Sync files / images from ansible host to nodes ---------------------------------------------------------------------------------------------------------------------------------------- 1.23s
failed!
**********************************
please refer to https://kubesphere.io/docs/v2.1/zh-CN/faq/faq-install/
**********************************
After searching the forum, every hint points to an SLB misconfiguration. Below is a screenshot of the SLB settings:

![SLB configuration screenshot](https://kubesphere.com.cn/forum/assets/files/2020-09-04/1599181052-908137-6443.png)
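If it helps with diagnosis, this is the check I intend to run from master1 to confirm whether the apiserver is reachable through the SLB while kubeadm is waiting (just a sketch; <SLB_IP> is a placeholder for the SLB address, and I am assuming the SLB listener is TCP 6443 forwarding to the masters' 6443):

# Run on master1. <SLB_IP> is a placeholder for the SLB address.
# Direct check against the local apiserver, bypassing the SLB:
curl -k https://192.168.0.78:6443/healthz
# Same check through the SLB listener; if this one hangs or times out while
# the direct check answers, traffic is not coming back through the SLB
# (a plain TCP connect test such as `nc -vz <SLB_IP> 6443` tells the same story):
curl -k https://<SLB_IP>:6443/healthz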

Below is the hosts.ini configuration:
[all]
master1 ansible_connection=local ip=192.168.0.78
master2 ansible_host=192.168.0.79 ip=192.168.0.79 ansible_ssh_pass=123456
master3 ansible_host=192.168.0.80 ip=192.168.0.80 ansible_ssh_pass=123456
node1 ansible_host=192.168.0.81 ip=192.168.0.81 ansible_ssh_pass=123456
[local-registry]
master1
[kube-master]
master1
master2
master3
[kube-node]
node1
[etcd]
master1
master2
master3
[k8s-cluster:children]
kube-node
kube-master
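
For reference, my understanding is that an HA install also needs the external load balancer declared in the installer's conf/common.yaml, roughly in this shape (a sketch rather than my actual file; the key names follow the kubespray-style variables I believe the v2.1.1 installer uses, and <SLB_IP> is again a placeholder that has to match the SLB configuration):

## External LB settings in conf/common.yaml (sketch, placeholders in <>)
apiserver_loadbalancer_domain_name: "lb.kubesphere.local"
loadbalancer_apiserver:
  address: <SLB_IP>        # SLB address fronting the three masters
  port: 6443               # must match the SLB listener port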
Could anyone help me figure out where the problem is?