Preface
KubeKey has published an alpha of v1.2.0, which adds support for deploying highly available Kubernetes clusters. It is not a stable release yet, but KubeSphere fans could not wait to give it a try.
KubeKey provides local load balancing by running an haproxy pod on every worker node, an approach similar to that of deployment tools such as kubespray and sealer.

About KubeKey
KubeKey is a Kubernetes cluster deployment tool developed in Go by the KubeSphere team. With KubeKey you can install Kubernetes and KubeSphere separately or together, easily, efficiently, and flexibly.
There are three ways to use KubeKey:
- Install Kubernetes only
- Install Kubernetes and KubeSphere with a single command
- Install Kubernetes first, then deploy KubeSphere on top of it with ks-installer (see the example below)

Project repository: https://github.com/kubesphere/kubekey
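For the third case, KubeKey installs Kubernetes and you then deploy KubeSphere with ks-installer. A minimal sketch, assuming KubeSphere v3.1.0 and the manifest URLs published with the ks-installer release (verify them against the release you actually use):
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.1.0/kubesphere-installer.yaml
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.1.0/cluster-configuration.yaml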
Install KubeKey
KubeKey releases: https://github.com/kubesphere/kubekey/releases
Download the pre-release kubekey-v1.2.0-alpha.2:
wget https://github.com/kubesphere/kubekey/releases/download/v1.2.0-alpha.2/kubekey-v1.2.0-alpha.2-linux-amd64.tar.gz
tar -zxvf kubekey-v1.2.0-alpha.2-linux-amd64.tar.gz
mv kk /usr/local/bin/
Check the KubeKey version:
kk version
Install the dependency packages required and recommended by KubeSphere on all nodes:
yum install -y socat conntrack ebtables ipset
Disable SELinux and firewalld on all nodes:
setenforce 0 && sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
systemctl disable --now firewalld
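Optionally verify that both are now off (an extra check, not part of the original steps):
getenforce                      # should print Permissive (Disabled after a reboot)
systemctl is-active firewalld   # should print inactive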
Synchronize time on all nodes:
yum install -y chrony
systemctl enable --now chronyd
timedatectl set-timezone Asia/Shanghai
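To confirm the clock is being synchronized (optional):
chronyc sources      # lists the NTP servers chrony is using
timedatectl status   # shows the timezone and whether the system clock is synchronized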
There is no need to configure hostnames in advance; KubeKey corrects each node's hostname automatically.
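If you prefer to set them yourself anyway, one hostnamectl call per node is enough; the names must then match the node names used in the KubeKey configuration below:
hostnamectl set-hostname k8s-master1   # repeat on each node with its own name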
Deploy an all-in-one single node
Deploy a single-node Kubernetes cluster:
kk create cluster
Deploy Kubernetes and KubeSphere together; the Kubernetes and KubeSphere versions can be specified explicitly:
kk create cluster --with-kubernetes v1.20.4 --with-kubesphere v3.1.0
Deploy a multi-node cluster
Prepare six CentOS 7.8 nodes with 2 vCPUs and 4 GB of RAM each to deploy a highly available Kubernetes v1.20.6 cluster with 3 masters and 3 workers. Run all of the following steps on the first node.
Create an example configuration file in the current directory:
kk create config
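kk create config also accepts flags to pre-fill the versions and choose the output file, for example (flag names as documented in the KubeKey README):
kk create config --with-kubernetes v1.20.6 -f config-sample.yaml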
Adjust config-sample.yaml for your environment. The example below deploys 3 master nodes and 3 worker nodes, building a Kubernetes cluster only, without KubeSphere:
cat > config-sample.yaml <<EOF
apiVersion: kubekey.kubesphere.io/v1alpha1
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: k8s-master1, address: 192.168.93.60, internalAddress: 192.168.93.60, user: root, password: 123456}
  - {name: k8s-master2, address: 192.168.93.61, internalAddress: 192.168.93.61, user: root, password: 123456}
  - {name: k8s-master3, address: 192.168.93.62, internalAddress: 192.168.93.62, user: root, password: 123456}
  - {name: k8s-node1, address: 192.168.93.63, internalAddress: 192.168.93.63, user: root, password: 123456}
  - {name: k8s-node2, address: 192.168.93.64, internalAddress: 192.168.93.64, user: root, password: 123456}
  - {name: k8s-node3, address: 192.168.93.65, internalAddress: 192.168.93.65, user: root, password: 123456}
  roleGroups:
    etcd:
    - k8s-master[1:3]
    master:
    - k8s-master[1:3]
    worker:
    - k8s-node[1:3]
  controlPlaneEndpoint:
    ## Internal loadbalancer for apiservers
    internalLoadbalancer: haproxy
    domain: lb.kubesphere.local
    address: ""       # The IP address of your load balancer.
    port: 6443
  kubernetes:
    version: v1.20.6
    imageRepo: kubesphere
    clusterName: cluster.local
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
  registry:
    registryMirrors: []
    insecureRegistries: []
  addons: []
EOF
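The hosts above use password authentication only to keep the example short. KubeKey also supports key-based SSH; according to its documentation a host entry can carry a privateKeyPath instead of a password, roughly like this (adjust the path to your own key):
- {name: k8s-master1, address: 192.168.93.60, internalAddress: 192.168.93.60, user: root, privateKeyPath: "~/.ssh/id_rsa"}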
Create the cluster from the configuration file. KKZONE=cn tells KubeKey to download binaries and pull images from mirrors in mainland China (registry.cn-beijing.aliyuncs.com), and tee keeps a copy of the full log:
export KKZONE=cn
kk create cluster -f config-sample.yaml | tee kk.log
When the deployment finishes, check the node status; the 3 masters and 3 workers are all Ready:
[root@k8s-master1 ~]# kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k8s-master1 Ready control-plane,master 4m58s v1.20.6 192.168.93.60 <none> CentOS Linux 7 (Core) 3.10.0-1127.el7.x86_64 docker://20.10.7
k8s-master2 Ready control-plane,master 3m58s v1.20.6 192.168.93.61 <none> CentOS Linux 7 (Core) 3.10.0-1127.el7.x86_64 docker://20.10.7
k8s-master3 Ready control-plane,master 3m58s v1.20.6 192.168.93.62 <none> CentOS Linux 7 (Core) 3.10.0-1127.el7.x86_64 docker://20.10.7
k8s-node1 Ready worker 4m13s v1.20.6 192.168.93.63 <none> CentOS Linux 7 (Core) 3.10.0-1127.el7.x86_64 docker://20.10.7
k8s-node2 Ready worker 3m59s v1.20.6 192.168.93.64 <none> CentOS Linux 7 (Core) 3.10.0-1127.el7.x86_64 docker://20.10.7
k8s-node3 Ready worker 3m59s v1.20.6 192.168.93.65 <none> CentOS Linux 7 (Core) 3.10.0-1127.el7.x86_64 docker://20.10.7
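The kubectl above runs on k8s-master1, where KubeKey leaves the admin kubeconfig. To manage the cluster from another machine you can copy that kubeconfig over (standard kubeadm practice, nothing KubeKey-specific); if its server field points at lb.kubesphere.local, add a hosts entry for that name on the workstation or edit the field:
scp root@192.168.93.60:/etc/kubernetes/admin.conf ./kubeconfig
export KUBECONFIG=$PWD/kubeconfig
kubectl get nodes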
Check all pods: each of the 3 worker nodes runs one haproxy pod, which load balances the node components' traffic to the kube-apiservers on the masters:
[root@k8s-master1 ~]# kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-8545b68dd4-rbshc 1/1 Running 2 3m48s
kube-system calico-node-5k7b5 1/1 Running 1 3m48s
kube-system calico-node-6cv8z 1/1 Running 1 3m48s
kube-system calico-node-8rbjs 1/1 Running 0 3m48s
kube-system calico-node-d6wkc 1/1 Running 0 3m48s
kube-system calico-node-q8qp8 1/1 Running 0 3m48s
kube-system calico-node-rvqpj 1/1 Running 0 3m48s
kube-system coredns-7f87749d6c-66wqb 1/1 Running 0 4m58s
kube-system coredns-7f87749d6c-htqww 1/1 Running 0 4m58s
kube-system haproxy-k8s-node1 1/1 Running 0 4m3s
kube-system haproxy-k8s-node2 1/1 Running 0 4m3s
kube-system haproxy-k8s-node3 1/1 Running 0 2m47s
kube-system kube-apiserver-k8s-master1 1/1 Running 0 5m13s
kube-system kube-apiserver-k8s-master2 1/1 Running 0 4m10s
kube-system kube-apiserver-k8s-master3 1/1 Running 0 4m16s
kube-system kube-controller-manager-k8s-master1 1/1 Running 0 5m13s
kube-system kube-controller-manager-k8s-master2 1/1 Running 0 4m10s
kube-system kube-controller-manager-k8s-master3 1/1 Running 0 4m16s
kube-system kube-proxy-2t5l6 1/1 Running 0 3m55s
kube-system kube-proxy-b8q6g 1/1 Running 0 3m56s
kube-system kube-proxy-dsz5g 1/1 Running 0 3m55s
kube-system kube-proxy-g2gxz 1/1 Running 0 3m55s
kube-system kube-proxy-p6gb7 1/1 Running 0 3m57s
kube-system kube-proxy-q44jp 1/1 Running 0 3m56s
kube-system kube-scheduler-k8s-master1 1/1 Running 0 5m13s
kube-system kube-scheduler-k8s-master2 1/1 Running 0 4m10s
kube-system kube-scheduler-k8s-master3 1/1 Running 0 4m16s
kube-system nodelocaldns-l958t 1/1 Running 0 4m19s
kube-system nodelocaldns-n7vkn 1/1 Running 0 4m18s
kube-system nodelocaldns-q6wjc 1/1 Running 0 4m33s
kube-system nodelocaldns-sfmcc 1/1 Running 0 4m58s
kube-system nodelocaldns-tvdbh 1/1 Running 0 4m18s
kube-system nodelocaldns-vg5t7 1/1 Running 0 4m19s
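To see how a worker reaches the control plane through the local balancer, inspecting the kubelet kubeconfig and the hosts file is enough (read-only checks; the exact wiring may vary between KubeKey versions):
grep server /etc/kubernetes/kubelet.conf   # expected to point at lb.kubesphere.local:6443
grep lb.kubesphere.local /etc/hosts        # on workers this name should map to 127.0.0.1, i.e. the local haproxy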
On the 3 worker nodes, haproxy is deployed as a static pod running in hostNetwork mode:
[root@k8s-node1 ~]# cat /etc/kubernetes/manifests/haproxy.yaml
apiVersion: v1
kind: Pod
metadata:
  name: haproxy
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
    k8s-app: kube-haproxy
  annotations:
    cfg-checksum: "4fa7a0eadadb692da91d941237ce4f0c"
spec:
  hostNetwork: true
  dnsPolicy: ClusterFirstWithHostNet
  nodeSelector:
    kubernetes.io/os: linux
  priorityClassName: system-node-critical
  containers:
  - name: haproxy
    image: registry.cn-beijing.aliyuncs.com/kubesphereio/haproxy:2.3
    imagePullPolicy: Always
    resources:
      requests:
        cpu: 25m
        memory: 32M
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8081
    readinessProbe:
      httpGet:
        path: /healthz
        port: 8081
    volumeMounts:
    - mountPath: /usr/local/etc/haproxy/
      name: etc-haproxy
      readOnly: true
  volumes:
  - name: etc-haproxy
    hostPath:
      path: /etc/kubekey/haproxy
Inspect the haproxy configuration: on a worker node, connections to 127.0.0.1:6443 are load balanced across port 6443 of the 3 master nodes:
[root@k8s-node1 ~]# cat /etc/kubekey/haproxy/haproxy.cfg
global
    maxconn 4000
    log 127.0.0.1 local0

defaults
    mode http
    log global
    option httplog
    option dontlognull
    option http-server-close
    option redispatch
    retries 5
    timeout http-request 5m
    timeout queue 5m
    timeout connect 30s
    timeout client 30s
    timeout server 15m
    timeout http-keep-alive 30s
    timeout check 30s
    maxconn 4000

frontend healthz
    bind *:8081
    mode http
    monitor-uri /healthz

frontend kube_api_frontend
    bind 127.0.0.1:6443
    mode tcp
    option tcplog
    default_backend kube_api_backend

backend kube_api_backend
    mode tcp
    balance leastconn
    default-server inter 15s downinter 15s rise 2 fall 2 slowstart 60s maxconn 1000 maxqueue 256 weight 100
    option httpchk GET /healthz
    http-check expect status 200
    server k8s-master1 192.168.93.60:6443 check check-ssl verify none
    server k8s-master2 192.168.93.61:6443 check check-ssl verify none
    server k8s-master3 192.168.93.62:6443 check check-ssl verify none
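A quick check from any worker that the balancer is healthy and actually forwards to an apiserver (illustrative; /healthz is served to anonymous clients on a default kubeadm setup):
curl http://127.0.0.1:8081/healthz      # answered by haproxy itself (monitor-uri)
curl -k https://127.0.0.1:6443/healthz  # proxied to one of the three kube-apiservers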
The whole deployment runs very smoothly. The complete log is reproduced below; it helps to see what the command actually does at each step:
[root@k8s-master1 ~]# cat kk.log
+-------------+------+------+---------+----------+-------+-------+-----------+---------+------------+-------------+------------------+--------------+
| name | sudo | curl | openssl | ebtables | socat | ipset | conntrack | docker | nfs client | ceph client | glusterfs client | time |
+-------------+------+------+---------+----------+-------+-------+-----------+---------+------------+-------------+------------------+--------------+
| k8s-master2 | y | y | y | y | y | y | y | 20.10.7 | | | | CST 12:50:29 |
| k8s-node1 | y | y | y | y | y | y | y | 20.10.7 | | | | CST 12:50:29 |
| k8s-node2 | y | y | y | y | y | y | y | 20.10.7 | | | | CST 12:50:29 |
| k8s-node3 | y | y | y | y | y | y | y | 20.10.7 | | | | CST 12:50:29 |
| k8s-master3 | y | y | y | y | y | y | y | 20.10.7 | | | | CST 12:50:29 |
| k8s-master1 | y | y | y | y | y | y | y | 20.10.7 | | | | CST 12:50:29 |
+-------------+------+------+---------+----------+-------+-------+-----------+---------+------------+-------------+------------------+--------------+
This is a simple check of your environment.
Before installation, you should ensure that your machines meet all requirements specified at
https://github.com/kubesphere/kubekey#requirements-and-recommendations
Continue this installation? [yes/no]: [k8s-node1 192.168.93.63] MSG:
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
vm.max_map_count = 262144
vm.swappiness = 1
fs.inotify.max_user_instances = 524288
[k8s-node3 192.168.93.65] MSG:
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
vm.max_map_count = 262144
vm.swappiness = 1
fs.inotify.max_user_instances = 524288
[k8s-master2 192.168.93.61] MSG:
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
vm.max_map_count = 262144
vm.swappiness = 1
fs.inotify.max_user_instances = 524288
[k8s-node2 192.168.93.64] MSG:
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
vm.max_map_count = 262144
vm.swappiness = 1
fs.inotify.max_user_instances = 524288
[k8s-master3 192.168.93.62] MSG:
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
vm.max_map_count = 262144
vm.swappiness = 1
fs.inotify.max_user_instances = 524288
[k8s-master1 192.168.93.60] MSG:
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
vm.max_map_count = 262144
vm.swappiness = 1
fs.inotify.max_user_instances = 524288
[k8s-node3] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.2
[k8s-master3] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/etcd:v3.4.13
[k8s-master1] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/etcd:v3.4.13
[k8s-master2] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/etcd:v3.4.13
[k8s-node1] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.2
[k8s-node2] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.2
[k8s-master3] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.2
[k8s-master1] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.2
[k8s-master2] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.2
[k8s-node1] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.20.6
[k8s-node2] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.20.6
[k8s-master3] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.20.6
[k8s-master2] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.20.6
[k8s-master1] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.20.6
[k8s-node2] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.6.9
[k8s-node1] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.6.9
[k8s-node2] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache:1.15.12
[k8s-master3] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.20.6
[k8s-master2] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.20.6
[k8s-node1] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache:1.15.12
[k8s-master1] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.20.6
[k8s-node2] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers:v3.16.3
[k8s-master2] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.20.6
[k8s-master1] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.20.6
[k8s-node1] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers:v3.16.3
[k8s-node2] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.16.3
[k8s-master2] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.20.6
[k8s-node1] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.16.3
[k8s-master1] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.20.6
[k8s-node2] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.16.3
[k8s-master1] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.6.9
[k8s-master2] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.6.9
[k8s-node1] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.16.3
[k8s-node2] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.16.3
[k8s-master1] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache:1.15.12
[k8s-master2] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache:1.15.12
[k8s-node3] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.20.6
[k8s-node1] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.16.3
[k8s-node2] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/haproxy:2.3
[k8s-master1] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers:v3.16.3
[k8s-master2] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers:v3.16.3
[k8s-node3] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.6.9
[k8s-node1] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/haproxy:2.3
[k8s-master1] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.16.3
[k8s-master2] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.16.3
[k8s-node3] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache:1.15.12
[k8s-master2] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.16.3
[k8s-master1] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.16.3
[k8s-node3] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers:v3.16.3
[k8s-master2] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.16.3
[k8s-master1] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.16.3
[k8s-node3] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.16.3
[k8s-node3] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.16.3
[k8s-node3] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.16.3
[k8s-node3] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/haproxy:2.3
[k8s-master3] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.20.6
[k8s-master3] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.20.6
[k8s-master3] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.6.9
[k8s-master3] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache:1.15.12
[k8s-master3] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers:v3.16.3
[k8s-master3] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.16.3
[k8s-master3] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.16.3
[k8s-master3] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.16.3
[k8s-master1 192.168.93.60] MSG:
Configuration file will be created
[k8s-master2 192.168.93.61] MSG:
Configuration file will be created
[k8s-master3 192.168.93.62] MSG:
Configuration file will be created
[k8s-master2 192.168.93.61] MSG:
etcd will be installed
[k8s-master1 192.168.93.60] MSG:
etcd will be installed
[k8s-master3 192.168.93.62] MSG:
etcd will be installed
[k8s-master3 192.168.93.62] MSG:
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /etc/systemd/system/etcd.service.
[k8s-master2 192.168.93.61] MSG:
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /etc/systemd/system/etcd.service.
Waiting for etcd to start
[k8s-master1 192.168.93.60] MSG:
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /etc/systemd/system/etcd.service.
Waiting for etcd to start
Waiting for etcd to start
[k8s-master1 192.168.93.60] MSG:
Cluster will be created.
[k8s-master2 192.168.93.61] MSG:
Cluster will be created.
[k8s-master3 192.168.93.62] MSG:
Cluster will be created.
Push /root/kubekey/v1.20.6/amd64/kubeadm to 192.168.93.60:/tmp/kubekey/kubeadm Done
Push /root/kubekey/v1.20.6/amd64/kubeadm to 192.168.93.63:/tmp/kubekey/kubeadm Done
Push /root/kubekey/v1.20.6/amd64/kubeadm to 192.168.93.65:/tmp/kubekey/kubeadm Done
Push /root/kubekey/v1.20.6/amd64/kubeadm to 192.168.93.64:/tmp/kubekey/kubeadm Done
Push /root/kubekey/v1.20.6/amd64/kubeadm to 192.168.93.61:/tmp/kubekey/kubeadm Done
Push /root/kubekey/v1.20.6/amd64/kubeadm to 192.168.93.62:/tmp/kubekey/kubeadm Done
Push /root/kubekey/v1.20.6/amd64/kubelet to 192.168.93.65:/tmp/kubekey/kubelet Done
Push /root/kubekey/v1.20.6/amd64/kubelet to 192.168.93.64:/tmp/kubekey/kubelet Done
Push /root/kubekey/v1.20.6/amd64/kubelet to 192.168.93.63:/tmp/kubekey/kubelet Done
Push /root/kubekey/v1.20.6/amd64/kubelet to 192.168.93.61:/tmp/kubekey/kubelet Done
Push /root/kubekey/v1.20.6/amd64/kubelet to 192.168.93.60:/tmp/kubekey/kubelet Done
Push /root/kubekey/v1.20.6/amd64/kubectl to 192.168.93.65:/tmp/kubekey/kubectl Done
Push /root/kubekey/v1.20.6/amd64/kubectl to 192.168.93.63:/tmp/kubekey/kubectl Done
Push /root/kubekey/v1.20.6/amd64/kubectl to 192.168.93.64:/tmp/kubekey/kubectl Done
Push /root/kubekey/v1.20.6/amd64/kubectl to 192.168.93.61:/tmp/kubekey/kubectl Done
Push /root/kubekey/v1.20.6/amd64/kubectl to 192.168.93.60:/tmp/kubekey/kubectl Done
Push /root/kubekey/v1.20.6/amd64/helm to 192.168.93.63:/tmp/kubekey/helm Done
Push /root/kubekey/v1.20.6/amd64/helm to 192.168.93.64:/tmp/kubekey/helm Done
Push /root/kubekey/v1.20.6/amd64/helm to 192.168.93.65:/tmp/kubekey/helm Done
Push /root/kubekey/v1.20.6/amd64/helm to 192.168.93.61:/tmp/kubekey/helm Done
Push /root/kubekey/v1.20.6/amd64/helm to 192.168.93.60:/tmp/kubekey/helm Done
Push /root/kubekey/v1.20.6/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 192.168.93.65:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz Done
Push /root/kubekey/v1.20.6/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 192.168.93.64:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz Done
Push /root/kubekey/v1.20.6/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 192.168.93.63:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz Done
Push /root/kubekey/v1.20.6/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 192.168.93.60:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz Done
Push /root/kubekey/v1.20.6/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 192.168.93.61:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz Done
Push /root/kubekey/v1.20.6/amd64/kubelet to 192.168.93.62:/tmp/kubekey/kubelet Done
Push /root/kubekey/v1.20.6/amd64/kubectl to 192.168.93.62:/tmp/kubekey/kubectl Done
Push /root/kubekey/v1.20.6/amd64/helm to 192.168.93.62:/tmp/kubekey/helm Done
Push /root/kubekey/v1.20.6/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 192.168.93.62:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz Done
[k8s-master1 192.168.93.60] MSG:
W0729 12:53:36.913976 9108 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
[init] Using Kubernetes version: v1.20.6
[preflight] Running pre-flight checks
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.7. Latest validated version: 19.03
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master1 k8s-master1.cluster.local k8s-master2 k8s-master2.cluster.local k8s-master3 k8s-master3.cluster.local k8s-node1 k8s-node1.cluster.local k8s-node2 k8s-node2.cluster.local k8s-node3 k8s-node3.cluster.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.kubesphere.local localhost] and IPs [10.233.0.1 192.168.93.60 127.0.0.1 192.168.93.61 192.168.93.62 192.168.93.63 192.168.93.64 192.168.93.65]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] External etcd mode: Skipping etcd/ca certificate authority generation
[certs] External etcd mode: Skipping etcd/server certificate generation
[certs] External etcd mode: Skipping etcd/peer certificate generation
[certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation
[certs] External etcd mode: Skipping apiserver-etcd-client certificate generation
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 51.506830 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master1 as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
[mark-control-plane] Marking the node k8s-master1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: d1u2p8.7dwt14rghk6zwcc5
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:
kubeadm join lb.kubesphere.local:6443 --token d1u2p8.7dwt14rghk6zwcc5 \
--discovery-token-ca-cert-hash sha256:61bc947886bc3596c37f0d8595e6fc1cc4bffa63a2cff55ad6d18301dad915f5 \
--control-plane
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join lb.kubesphere.local:6443 --token d1u2p8.7dwt14rghk6zwcc5 \
--discovery-token-ca-cert-hash sha256:61bc947886bc3596c37f0d8595e6fc1cc4bffa63a2cff55ad6d18301dad915f5
[k8s-master1 192.168.93.60] MSG:
service "kube-dns" deleted
[k8s-master1 192.168.93.60] MSG:
service/coredns created
[k8s-master1 192.168.93.60] MSG:
serviceaccount/nodelocaldns created
daemonset.apps/nodelocaldns created
[k8s-master1 192.168.93.60] MSG:
configmap/nodelocaldns created
[k8s-master1 192.168.93.60] MSG:
I0729 12:55:03.680214 10836 version.go:254] remote version is much newer: v1.21.3; falling back to: stable-1.20
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
cdc67cacb3460d32a92d94548b67e4a7b5db5d29dad6304ebf0b2442dce02785
[k8s-master1 192.168.93.60] MSG:
secret/kubeadm-certs patched
[k8s-master1 192.168.93.60] MSG:
secret/kubeadm-certs patched
[k8s-master1 192.168.93.60] MSG:
secret/kubeadm-certs patched
[k8s-master1 192.168.93.60] MSG:
kubeadm join lb.kubesphere.local:6443 --token h0hkd2.igsowb70b0h6kjs3 --discovery-token-ca-cert-hash sha256:61bc947886bc3596c37f0d8595e6fc1cc4bffa63a2cff55ad6d18301dad915f5
[k8s-master1 192.168.93.60] MSG:
k8s-master1 v1.20.6 [map[address:192.168.93.60 type:InternalIP] map[address:k8s-master1 type:Hostname]]
[k8s-node1 192.168.93.63] MSG:
[preflight] Running pre-flight checks
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.7. Latest validated version: 19.03
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W0729 12:55:13.818109 7628 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
[k8s-node1 192.168.93.63] MSG:
node/k8s-node1 labeled
[k8s-node3 192.168.93.65] MSG:
[preflight] Running pre-flight checks
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.7. Latest validated version: 19.03
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W0729 12:55:14.324789 7807 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
[k8s-node2 192.168.93.64] MSG:
[preflight] Running pre-flight checks
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.7. Latest validated version: 19.03
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W0729 12:55:14.405503 7639 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
[k8s-node3 192.168.93.65] MSG:
node/k8s-node3 labeled
[k8s-node2 192.168.93.64] MSG:
node/k8s-node2 labeled
[k8s-master2 192.168.93.61] MSG:
[preflight] Running pre-flight checks
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.7. Latest validated version: 19.03
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W0729 12:55:13.793985 8825 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master1 k8s-master1.cluster.local k8s-master2 k8s-master2.cluster.local k8s-master3 k8s-master3.cluster.local k8s-node1 k8s-node1.cluster.local k8s-node2 k8s-node2.cluster.local k8s-node3 k8s-node3.cluster.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.kubesphere.local localhost] and IPs [10.233.0.1 192.168.93.61 127.0.0.1 192.168.93.60 192.168.93.62 192.168.93.63 192.168.93.64 192.168.93.65]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Skipping etcd check in external mode
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[control-plane-join] using external etcd - no local stacked instance added
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[mark-control-plane] Marking the node k8s-master2 as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
[mark-control-plane] Marking the node k8s-master2 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
This node has joined the cluster and a new control plane instance was created:
* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
To start administering your cluster from this node, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Run 'kubectl get nodes' to see this node join the cluster.
[k8s-master3 192.168.93.62] MSG:
[preflight] Running pre-flight checks
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.7. Latest validated version: 19.03
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W0729 12:55:26.526823 8746 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master1 k8s-master1.cluster.local k8s-master2 k8s-master2.cluster.local k8s-master3 k8s-master3.cluster.local k8s-node1 k8s-node1.cluster.local k8s-node2 k8s-node2.cluster.local k8s-node3 k8s-node3.cluster.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.kubesphere.local localhost] and IPs [10.233.0.1 192.168.93.62 127.0.0.1 192.168.93.60 192.168.93.61 192.168.93.63 192.168.93.64 192.168.93.65]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Skipping etcd check in external mode
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[control-plane-join] using external etcd - no local stacked instance added
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[mark-control-plane] Marking the node k8s-master3 as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
[mark-control-plane] Marking the node k8s-master3 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
This node has joined the cluster and a new control plane instance was created:
* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
To start administering your cluster from this node, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Run 'kubectl get nodes' to see this node join the cluster.
[k8s-node3] generate haproxy manifest.
[k8s-node1] generate haproxy manifest.
[k8s-node2] generate haproxy manifest.
[k8s-node3 192.168.93.65] MSG:
kubelet.conf is exists.
[k8s-master1 192.168.93.60] MSG:
kubelet.conf is exists.
[k8s-master2 192.168.93.61] MSG:
kubelet.conf is exists.
[k8s-node1 192.168.93.63] MSG:
kubelet.conf is exists.
[k8s-node2 192.168.93.64] MSG:
kubelet.conf is exists.
[k8s-master3 192.168.93.62] MSG:
kubelet.conf is exists.
[k8s-master1 192.168.93.60] MSG:
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created