The ks-installer installation fails. The failure log is as follows:
```
TASK [common : debug] **********************************************************
ok: [localhost] => {
    "msg": [
        "1. check the storage configuration and storage server",
        "2. make sure the DNS address in /etc/resolv.conf is available",
        "3. execute 'kubectl logs -n kubesphere-system -l job-name=minio-make-bucket-job' to watch logs",
        "4. execute 'helm -n kubesphere-system uninstall ks-minio && kubectl -n kubesphere-system delete job minio-make-bucket-job'",
        "5. Restart the installer pod in kubesphere-system namespace"
    ]
}

TASK [common : fail] ***********************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "It is suggested to refer to the above methods for troubleshooting problems ."}

PLAY RECAP *********************************************************************
localhost                  : ok=47   changed=40   unreachable=0    failed=1    skipped=74   rescued=0    ignored=0
```
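For reference, the five hints in the debug task above map to roughly the commands below. Hint 5 does not name an exact command; deleting the ks-installer pod so that its Deployment recreates it is one assumed way to "restart the installer pod", and the `app=ks-install` label is an assumption based on the default ks-installer deployment.

```bash
# Hint 3: watch the logs of the minio bucket job (assumes the job still exists)
kubectl logs -n kubesphere-system -l job-name=minio-make-bucket-job

# Hint 4: remove the failed minio release and job so they can be recreated
helm -n kubesphere-system uninstall ks-minio
kubectl -n kubesphere-system delete job minio-make-bucket-job

# Hint 5 (assumption): restart the installer by deleting its pod; the Deployment recreates it
kubectl -n kubesphere-system delete pod -l app=ks-install
```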
The full log of the installation run is as follows:
```
[root@master1 KubeKey]# ./kk create cluster -f config-sample.yaml
+---------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+
| name    | sudo | curl | openssl | ebtables | socat | ipset | conntrack | docker | nfs client | ceph client | glusterfs client | time         |
+---------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+
| worker2 | y    | y    | y       | y        | y     | y     | y         |        | y          |             |                  | CST 10:15:59 |
| master3 | y    | y    | y       | y        | y     | y     | y         |        | y          |             |                  | CST 10:15:59 |
| worker1 | y    | y    | y       | y        | y     | y     | y         |        | y          |             |                  | CST 10:15:59 |
| worker3 | y    | y    | y       | y        | y     | y     | y         |        | y          |             |                  | CST 10:15:58 |
| master2 | y    | y    | y       | y        | y     | y     | y         |        | y          |             |                  | CST 10:15:58 |
| master1 | y    | y    | y       | y        | y     | y     | y         |        | y          |             |                  | CST 10:15:59 |
+---------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+
This is a simple check of your environment.
Before installation, you should ensure that your machines meet all requirements specified at
https://github.com/kubesphere/kubekey#requirements-and-recommendations
Continue this installation? [yes/no]: yes
INFO[10:22:26 CST] Downloading Installation Files
INFO[10:22:26 CST] Downloading kubeadm …
INFO[10:23:02 CST] Downloading kubelet …
INFO[10:24:49 CST] Downloading kubectl …
INFO[10:25:26 CST] Downloading helm …
INFO[10:26:03 CST] Downloading kubecni …
INFO[10:26:37 CST] Configuring operating system …
[worker3 172.18.30.159] MSG:
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
vm.max_map_count = 262144
vm.swappiness = 1
fs.inotify.max_user_instances = 524288
no crontab for root
[master2 172.18.30.155] MSG:
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
vm.max_map_count = 262144
vm.swappiness = 1
fs.inotify.max_user_instances = 524288
no crontab for root
[worker1 172.18.30.157] MSG:
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
vm.max_map_count = 262144
vm.swappiness = 1
fs.inotify.max_user_instances = 524288
no crontab for root
[master3 172.18.30.156] MSG:
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
vm.max_map_count = 262144
vm.swappiness = 1
fs.inotify.max_user_instances = 524288
no crontab for root
[worker2 172.18.30.158] MSG:
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
vm.max_map_count = 262144
vm.swappiness = 1
fs.inotify.max_user_instances = 524288
no crontab for root
[master1 172.18.30.154] MSG:
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
vm.max_map_count = 262144
vm.swappiness = 1
fs.inotify.max_user_instances = 524288
no crontab for root
INFO[10:26:40 CST] Installing docker …
INFO[10:28:48 CST] Start to download images on all nodes
[worker3] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.2
[master2] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/etcd:v3.4.13
[master1] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/etcd:v3.4.13
[worker1] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.2
[master3] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/etcd:v3.4.13
[worker2] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.2
[worker3] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.20.4
[worker1] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.20.4
[worker2] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.20.4
[master1] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.2
[master2] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.2
[master3] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.2
[master1] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.20.4
[master2] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.20.4
[master3] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.20.4
[worker3] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.6.9
[worker1] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.6.9
[worker2] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.6.9
[worker3] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache:1.15.12
[worker1] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache:1.15.12
[master2] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.20.4
[master1] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.20.4
[master3] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.20.4
[worker2] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache:1.15.12
[worker3] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers:v3.16.3
[master1] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.20.4
[worker1] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers:v3.16.3
[master3] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.20.4
[worker2] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers:v3.16.3
[master2] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.20.4
[worker3] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.16.3
[master1] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.20.4
[master3] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.20.4
[worker1] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.16.3
[worker2] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.16.3
[master2] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.20.4
[worker3] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.16.3
[master3] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.6.9
[master1] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.6.9
[worker1] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.16.3
[master3] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache:1.15.12
[master1] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache:1.15.12
[master2] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.6.9
[worker2] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.16.3
[master2] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache:1.15.12
[master3] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers:v3.16.3
[worker3] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.16.3
[master1] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers:v3.16.3
[worker1] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.16.3
[master2] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers:v3.16.3
[master3] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.16.3
[master1] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.16.3
[worker2] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.16.3
[master2] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.16.3
[master3] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.16.3
[master1] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.16.3
[master2] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.16.3
[master3] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.16.3
[master1] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.16.3
[master2] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.16.3
INFO[10:29:58 CST] Generating etcd certs
INFO[10:30:04 CST] Synchronizing etcd certs
INFO[10:30:04 CST] Creating etcd service
[master3 172.18.30.156] MSG:
etcd will be installed
[master2 172.18.30.155] MSG:
etcd will be installed
[master1 172.18.30.154] MSG:
etcd will be installed
INFO[10:30:09 CST] Starting etcd cluster
[master1 172.18.30.154] MSG:
Configuration file will be created
[master2 172.18.30.155] MSG:
Configuration file will be created
[master3 172.18.30.156] MSG:
Configuration file will be created
INFO[10:30:11 CST] Refreshing etcd configuration
[master3 172.18.30.156] MSG:
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /etc/systemd/system/etcd.service.
[master1 172.18.30.154] MSG:
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /etc/systemd/system/etcd.service.
Waiting for etcd to start
[master2 172.18.30.155] MSG:
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /etc/systemd/system/etcd.service.
Waiting for etcd to start
Waiting for etcd to start
INFO[10:30:17 CST] Backup etcd data regularly
INFO[10:30:25 CST] Get cluster status
[master1 172.18.30.154] MSG:
Cluster will be created.
INFO[10:30:25 CST] Installing kube binaries
Push /opt/kubesphere_installing/KubeKey/kubekey/v1.20.4/amd64/kubeadm to 172.18.30.159:/tmp/kubekey/kubeadm Done
Push /opt/kubesphere_installing/KubeKey/kubekey/v1.20.4/amd64/kubeadm to 172.18.30.155:/tmp/kubekey/kubeadm Done
Push /opt/kubesphere_installing/KubeKey/kubekey/v1.20.4/amd64/kubeadm to 172.18.30.154:/tmp/kubekey/kubeadm Done
Push /opt/kubesphere_installing/KubeKey/kubekey/v1.20.4/amd64/kubeadm to 172.18.30.157:/tmp/kubekey/kubeadm Done
Push /opt/kubesphere_installing/KubeKey/kubekey/v1.20.4/amd64/kubeadm to 172.18.30.156:/tmp/kubekey/kubeadm Done
Push /opt/kubesphere_installing/KubeKey/kubekey/v1.20.4/amd64/kubeadm to 172.18.30.158:/tmp/kubekey/kubeadm Done
Push /opt/kubesphere_installing/KubeKey/kubekey/v1.20.4/amd64/kubelet to 172.18.30.159:/tmp/kubekey/kubelet Done
Push /opt/kubesphere_installing/KubeKey/kubekey/v1.20.4/amd64/kubelet to 172.18.30.155:/tmp/kubekey/kubelet Done
Push /opt/kubesphere_installing/KubeKey/kubekey/v1.20.4/amd64/kubelet to 172.18.30.157:/tmp/kubekey/kubelet Done
Push /opt/kubesphere_installing/KubeKey/kubekey/v1.20.4/amd64/kubelet to 172.18.30.158:/tmp/kubekey/kubelet Done
Push /opt/kubesphere_installing/KubeKey/kubekey/v1.20.4/amd64/kubelet to 172.18.30.156:/tmp/kubekey/kubelet Done
Push /opt/kubesphere_installing/KubeKey/kubekey/v1.20.4/amd64/kubelet to 172.18.30.154:/tmp/kubekey/kubelet Done
Push /opt/kubesphere_installing/KubeKey/kubekey/v1.20.4/amd64/kubectl to 172.18.30.155:/tmp/kubekey/kubectl Done
Push /opt/kubesphere_installing/KubeKey/kubekey/v1.20.4/amd64/kubectl to 172.18.30.158:/tmp/kubekey/kubectl Done
Push /opt/kubesphere_installing/KubeKey/kubekey/v1.20.4/amd64/kubectl to 172.18.30.157:/tmp/kubekey/kubectl Done
Push /opt/kubesphere_installing/KubeKey/kubekey/v1.20.4/amd64/kubectl to 172.18.30.159:/tmp/kubekey/kubectl Done
Push /opt/kubesphere_installing/KubeKey/kubekey/v1.20.4/amd64/kubectl to 172.18.30.154:/tmp/kubekey/kubectl Done
Push /opt/kubesphere_installing/KubeKey/kubekey/v1.20.4/amd64/helm to 172.18.30.155:/tmp/kubekey/helm Done
Push /opt/kubesphere_installing/KubeKey/kubekey/v1.20.4/amd64/kubectl to 172.18.30.156:/tmp/kubekey/kubectl Done
Push /opt/kubesphere_installing/KubeKey/kubekey/v1.20.4/amd64/helm to 172.18.30.157:/tmp/kubekey/helm Done
Push /opt/kubesphere_installing/KubeKey/kubekey/v1.20.4/amd64/helm to 172.18.30.159:/tmp/kubekey/helm Done
Push /opt/kubesphere_installing/KubeKey/kubekey/v1.20.4/amd64/helm to 172.18.30.158:/tmp/kubekey/helm Done
Push /opt/kubesphere_installing/KubeKey/kubekey/v1.20.4/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 172.18.30.155:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz Done
Push /opt/kubesphere_installing/KubeKey/kubekey/v1.20.4/amd64/helm to 172.18.30.156:/tmp/kubekey/helm Done
Push /opt/kubesphere_installing/KubeKey/kubekey/v1.20.4/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 172.18.30.158:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz Done
Push /opt/kubesphere_installing/KubeKey/kubekey/v1.20.4/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 172.18.30.159:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz Done
Push /opt/kubesphere_installing/KubeKey/kubekey/v1.20.4/amd64/helm to 172.18.30.154:/tmp/kubekey/helm Done
Push /opt/kubesphere_installing/KubeKey/kubekey/v1.20.4/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 172.18.30.157:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz Done
Push /opt/kubesphere_installing/KubeKey/kubekey/v1.20.4/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 172.18.30.156:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz Done
Push /opt/kubesphere_installing/KubeKey/kubekey/v1.20.4/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 172.18.30.154:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz Done
INFO[10:30:33 CST] Initializing kubernetes cluster
[master1 172.18.30.154] MSG:
W0604 10:30:33.958580 4585 utils.go:69] The recommended value for “clusterDNS” in “KubeletConfiguration” is: [10.233.0.10]; the provided value is: [169.254.25.10]
[init] Using Kubernetes version: v1.20.4
[preflight] Running pre-flight checks
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.7. Latest validated version: 19.03
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using ‘kubeadm config images pull’
[certs] Using certificateDir folder “/etc/kubernetes/pki”
[certs] Generating “ca” certificate and key
[certs] Generating “apiserver” certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.kubesphere.local localhost master1 master1.cluster.local master2 master2.cluster.local master3 master3.cluster.local worker1 worker1.cluster.local worker2 worker2.cluster.local worker3 worker3.cluster.local] and IPs [10.233.0.1 172.18.30.154 127.0.0.1 172.18.30.150 172.18.30.155 172.18.30.156 172.18.30.157 172.18.30.158 172.18.30.159]
[certs] Generating “apiserver-kubelet-client” certificate and key
[certs] Generating “front-proxy-ca” certificate and key
[certs] Generating “front-proxy-client” certificate and key
[certs] External etcd mode: Skipping etcd/ca certificate authority generation
[certs] External etcd mode: Skipping etcd/server certificate generation
[certs] External etcd mode: Skipping etcd/peer certificate generation
[certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation
[certs] External etcd mode: Skipping apiserver-etcd-client certificate generation
[certs] Generating “sa” key and public key
[kubeconfig] Using kubeconfig folder “/etc/kubernetes”
[kubeconfig] Writing “admin.conf” kubeconfig file
[kubeconfig] Writing “kubelet.conf” kubeconfig file
[kubeconfig] Writing “controller-manager.conf” kubeconfig file
[kubeconfig] Writing “scheduler.conf” kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file “/var/lib/kubelet/kubeadm-flags.env”
[kubelet-start] Writing kubelet configuration to file “/var/lib/kubelet/config.yaml”
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder “/etc/kubernetes/manifests”
[control-plane] Creating static Pod manifest for “kube-apiserver”
[control-plane] Creating static Pod manifest for “kube-controller-manager”
[control-plane] Creating static Pod manifest for “kube-scheduler”
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory “/etc/kubernetes/manifests”. This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 63.028200 seconds
[upload-config] Storing the configuration used in ConfigMap “kubeadm-config” in the “kube-system” Namespace
[kubelet] Creating a ConfigMap “kubelet-config-1.20” in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master1 as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
[mark-control-plane] Marking the node master1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: exb0fe.n7u1gkkxzjm3jiad
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the “cluster-info” ConfigMap in the “kube-public” namespace
[kubelet-finalize] Updating “/etc/kubernetes/kubelet.conf” to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run “kubectl apply -f [podnetwork].yaml” with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:
kubeadm join lb.kubesphere.local:6443 --token exb0fe.n7u1gkkxzjm3jiad \
    --discovery-token-ca-cert-hash sha256:88ab53a1dcea09265d2368f8854d5b119ddd82cb957c80d72efd4bf7de780739 \
    --control-plane
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join lb.kubesphere.local:6443 --token exb0fe.n7u1gkkxzjm3jiad \
    --discovery-token-ca-cert-hash sha256:88ab53a1dcea09265d2368f8854d5b119ddd82cb957c80d72efd4bf7de780739
[master1 172.18.30.154] MSG:
service “kube-dns” deleted
[master1 172.18.30.154] MSG:
service/coredns created
[master1 172.18.30.154] MSG:
serviceaccount/nodelocaldns created
daemonset.apps/nodelocaldns created
[master1 172.18.30.154] MSG:
configmap/nodelocaldns created
[master1 172.18.30.154] MSG:
I0604 10:32:08.521307 6743 version.go:254] remote version is much newer: v1.21.1; falling back to: stable-1.20
[upload-certs] Storing the certificates in Secret “kubeadm-certs” in the “kube-system” Namespace
[upload-certs] Using certificate key:
7a02efbdfa75bdd804b534e97c09ef75b4b75c8c09d641ae6fd58ef28b4d6a4b
[master1 172.18.30.154] MSG:
secret/kubeadm-certs patched
[master1 172.18.30.154] MSG:
secret/kubeadm-certs patched
[master1 172.18.30.154] MSG:
secret/kubeadm-certs patched
[master1 172.18.30.154] MSG:
kubeadm join lb.kubesphere.local:6443 --token lvp0z9.h8971kgoqwq4zxds --discovery-token-ca-cert-hash sha256:88ab53a1dcea09265d2368f8854d5b119ddd82cb957c80d72efd4bf7de780739
[master1 172.18.30.154] MSG:
master1 v1.20.4 [map[address:172.18.30.154 type:InternalIP] map[address:master1 type:Hostname]]
INFO[10:32:10 CST] Joining nodes to cluster
[worker1 172.18.30.157] MSG:
[preflight] Running pre-flight checks
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.7. Latest validated version: 19.03
[preflight] Reading configuration from the cluster…
[preflight] FYI: You can look at this config file with ‘kubectl -n kube-system get cm kubeadm-config -o yaml’
W0604 10:32:12.250891 3209 utils.go:69] The recommended value for “clusterDNS” in “KubeletConfiguration” is: [10.233.0.10]; the provided value is: [169.254.25.10]
[kubelet-start] Writing kubelet configuration to file “/var/lib/kubelet/config.yaml”
[kubelet-start] Writing kubelet environment file with flags to file “/var/lib/kubelet/kubeadm-flags.env”
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap…
This node has joined the cluster:
- Certificate signing request was sent to apiserver and a response was received.
- The Kubelet was informed of the new secure connection details.
Run ‘kubectl get nodes’ on the control-plane to see this node join the cluster.
[worker1 172.18.30.157] MSG:
node/worker1 labeled
[worker3 172.18.30.159] MSG:
[preflight] Running pre-flight checks
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.7. Latest validated version: 19.03
[preflight] Reading configuration from the cluster…
[preflight] FYI: You can look at this config file with ‘kubectl -n kube-system get cm kubeadm-config -o yaml’
W0604 10:32:11.954473 3204 utils.go:69] The recommended value for “clusterDNS” in “KubeletConfiguration” is: [10.233.0.10]; the provided value is: [169.254.25.10]
[kubelet-start] Writing kubelet configuration to file “/var/lib/kubelet/config.yaml”
[kubelet-start] Writing kubelet environment file with flags to file “/var/lib/kubelet/kubeadm-flags.env”
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap…
This node has joined the cluster:
- Certificate signing request was sent to apiserver and a response was received.
- The Kubelet was informed of the new secure connection details.
Run ‘kubectl get nodes’ on the control-plane to see this node join the cluster.
[worker2 172.18.30.158] MSG:
[preflight] Running pre-flight checks
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.7. Latest validated version: 19.03
[preflight] Reading configuration from the cluster…
[preflight] FYI: You can look at this config file with ‘kubectl -n kube-system get cm kubeadm-config -o yaml’
W0604 10:32:12.474253 3200 utils.go:69] The recommended value for “clusterDNS” in “KubeletConfiguration” is: [10.233.0.10]; the provided value is: [169.254.25.10]
[kubelet-start] Writing kubelet configuration to file “/var/lib/kubelet/config.yaml”
[kubelet-start] Writing kubelet environment file with flags to file “/var/lib/kubelet/kubeadm-flags.env”
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap…
This node has joined the cluster:
- Certificate signing request was sent to apiserver and a response was received.
- The Kubelet was informed of the new secure connection details.
Run ‘kubectl get nodes’ on the control-plane to see this node join the cluster.
[worker3 172.18.30.159] MSG:
node/worker3 labeled
[worker2 172.18.30.158] MSG:
node/worker2 labeled
[master2 172.18.30.155] MSG:
[preflight] Running pre-flight checks
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.7. Latest validated version: 19.03
[preflight] Reading configuration from the cluster…
[preflight] FYI: You can look at this config file with ‘kubectl -n kube-system get cm kubeadm-config -o yaml’
W0604 10:32:12.190792 4423 utils.go:69] The recommended value for “clusterDNS” in “KubeletConfiguration” is: [10.233.0.10]; the provided value is: [169.254.25.10]
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using ‘kubeadm config images pull’
[download-certs] Downloading the certificates in Secret “kubeadm-certs” in the “kube-system” Namespace
[certs] Using certificateDir folder “/etc/kubernetes/pki”
[certs] Generating “front-proxy-client” certificate and key
[certs] Generating “apiserver” certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.kubesphere.local localhost master1 master1.cluster.local master2 master2.cluster.local master3 master3.cluster.local worker1 worker1.cluster.local worker2 worker2.cluster.local worker3 worker3.cluster.local] and IPs [10.233.0.1 172.18.30.155 127.0.0.1 172.18.30.150 172.18.30.154 172.18.30.156 172.18.30.157 172.18.30.158 172.18.30.159]
[certs] Generating “apiserver-kubelet-client” certificate and key
[certs] Valid certificates and keys now exist in “/etc/kubernetes/pki”
[certs] Using the existing “sa” key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder “/etc/kubernetes”
[kubeconfig] Writing “admin.conf” kubeconfig file
[kubeconfig] Writing “controller-manager.conf” kubeconfig file
[kubeconfig] Writing “scheduler.conf” kubeconfig file
[control-plane] Using manifest folder “/etc/kubernetes/manifests”
[control-plane] Creating static Pod manifest for “kube-apiserver”
[control-plane] Creating static Pod manifest for “kube-controller-manager”
[control-plane] Creating static Pod manifest for “kube-scheduler”
[check-etcd] Skipping etcd check in external mode
[kubelet-start] Writing kubelet configuration to file “/var/lib/kubelet/config.yaml”
[kubelet-start] Writing kubelet environment file with flags to file “/var/lib/kubelet/kubeadm-flags.env”
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap…
[control-plane-join] using external etcd - no local stacked instance added
[upload-config] Storing the configuration used in ConfigMap “kubeadm-config” in the “kube-system” Namespace
[mark-control-plane] Marking the node master2 as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
[mark-control-plane] Marking the node master2 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
This node has joined the cluster and a new control plane instance was created:
- Certificate signing request was sent to apiserver and approval was received.
- The Kubelet was informed of the new secure connection details.
- Control plane (master) label and taint were applied to the new node.
- The Kubernetes control plane instances scaled up.
To start administering your cluster from this node, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Run ‘kubectl get nodes’ to see this node join the cluster.
[master3 172.18.30.156] MSG:
[preflight] Running pre-flight checks
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.7. Latest validated version: 19.03
[preflight] Reading configuration from the cluster…
[preflight] FYI: You can look at this config file with ‘kubectl -n kube-system get cm kubeadm-config -o yaml’
W0604 10:32:12.074217 4417 utils.go:69] The recommended value for “clusterDNS” in “KubeletConfiguration” is: [10.233.0.10]; the provided value is: [169.254.25.10]
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using ‘kubeadm config images pull’
[download-certs] Downloading the certificates in Secret “kubeadm-certs” in the “kube-system” Namespace
[certs] Using certificateDir folder “/etc/kubernetes/pki”
[certs] Generating “apiserver-kubelet-client” certificate and key
[certs] Generating “apiserver” certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.kubesphere.local localhost master1 master1.cluster.local master2 master2.cluster.local master3 master3.cluster.local worker1 worker1.cluster.local worker2 worker2.cluster.local worker3 worker3.cluster.local] and IPs [10.233.0.1 172.18.30.156 127.0.0.1 172.18.30.150 172.18.30.154 172.18.30.155 172.18.30.157 172.18.30.158 172.18.30.159]
[certs] Generating “front-proxy-client” certificate and key
[certs] Valid certificates and keys now exist in “/etc/kubernetes/pki”
[certs] Using the existing “sa” key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder “/etc/kubernetes”
[kubeconfig] Writing “admin.conf” kubeconfig file
[kubeconfig] Writing “controller-manager.conf” kubeconfig file
[kubeconfig] Writing “scheduler.conf” kubeconfig file
[control-plane] Using manifest folder “/etc/kubernetes/manifests”
[control-plane] Creating static Pod manifest for “kube-apiserver”
[control-plane] Creating static Pod manifest for “kube-controller-manager”
[control-plane] Creating static Pod manifest for “kube-scheduler”
[check-etcd] Skipping etcd check in external mode
[kubelet-start] Writing kubelet configuration to file “/var/lib/kubelet/config.yaml”
[kubelet-start] Writing kubelet environment file with flags to file “/var/lib/kubelet/kubeadm-flags.env”
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap…
[control-plane-join] using external etcd - no local stacked instance added
[upload-config] Storing the configuration used in ConfigMap “kubeadm-config” in the “kube-system” Namespace
[mark-control-plane] Marking the node master3 as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
[mark-control-plane] Marking the node master3 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
This node has joined the cluster and a new control plane instance was created:
- Certificate signing request was sent to apiserver and approval was received.
- The Kubelet was informed of the new secure connection details.
- Control plane (master) label and taint were applied to the new node.
- The Kubernetes control plane instances scaled up.
To start administering your cluster from this node, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Run ‘kubectl get nodes’ to see this node join the cluster.
INFO[10:32:24 CST] Deploying network plugin …
[master1 172.18.30.154] MSG:
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
INFO[10:32:26 CST] Installing addon [1-1]: nfs-client
Release “nfs-client” does not exist. Installing it now.
NAME: nfs-client
LAST DEPLOYED: Fri Jun 4 10:32:28 2021
NAMESPACE: kube-system
STATUS: deployed
REVISION: 1
INFO[10:32:30 CST] Deploying KubeSphere …
v3.1.0
[master1 172.18.30.154] MSG:
namespace/kubesphere-system created
namespace/kubesphere-monitoring-system created
[master1 172.18.30.154] MSG:
secret/kube-etcd-client-certs created
[master1 172.18.30.154] MSG:
namespace/kubesphere-system unchanged
serviceaccount/ks-installer unchanged
Warning: apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
customresourcedefinition.apiextensions.k8s.io/clusterconfigurations.installer.kubesphere.io unchanged
clusterrole.rbac.authorization.k8s.io/ks-installer unchanged
clusterrolebinding.rbac.authorization.k8s.io/ks-installer unchanged
deployment.apps/ks-installer unchanged
clusterconfiguration.installer.kubesphere.io/ks-installer created
WARN[11:04:05 CST] Task failed …
WARN[11:04:05 CST] error: KubeSphere startup timeout.
Error: Failed to deploy kubesphere: KubeSphere startup timeout.
Usage:
  kk create cluster [flags]

Flags:
      --download-cmd string     The user defined command to download the necessary binary files. The first param '%s' is output path, the second param '%s', is the URL (default "curl -L -o %s %s")
  -f, --filename string         Path to a configuration file
  -h, --help                    help for cluster
      --skip-pull-images        Skip pre pull images
      --with-kubernetes string  Specify a supported version of kubernetes (default "v1.19.8")
      --with-kubesphere         Deploy a specific version of kubesphere (default v3.1.0)
      --with-local-storage      Deploy a local PV provisioner
  -y, --yes                     Skip pre-check of the installation

Global Flags:
      --debug        Print detailed information (default true)
      --in-cluster   Running inside the cluster

Failed to deploy kubesphere: KubeSphere startup timeout.
You have mail in /var/spool/mail/root
[root@master1 KubeKey]#
```
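Since kk itself only reports "KubeSphere startup timeout", the actual cause usually has to be read from the ks-installer pod logs and from the state of the storage-related workloads (hint 1 in the first log points at storage, and this cluster uses the nfs-client addon). A minimal sketch of what could be checked, assuming the default `app=ks-install` label on the installer pod:

```bash
# Follow the ks-installer logs to see which task is stuck (label assumed: app=ks-install)
kubectl logs -n kubesphere-system \
  $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f

# Check whether minio and the other KubeSphere workloads are Pending or crashing
kubectl get pods -n kubesphere-system
kubectl get pods -n kubesphere-monitoring-system

# If minio is Pending, its PVC is the usual suspect: verify that the nfs-client
# storage class actually provisions volumes
kubectl get pvc -n kubesphere-system
kubectl get storageclass
```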