The installation reported success, but checking the pod status shows that the calico-node pods keep restarting (CrashLoopBackOff). I have been stuck on this for several days.
[root@node1 ~]# kubectl get pods -n kube-system -o wide
NAME                                       READY   STATUS             RESTARTS   AGE   IP               NODE    NOMINATED NODE   READINESS GATES
calico-kube-controllers-677cbc8557-zdsst   1/1     Running            5          9d    10.233.90.18     node1   <none>           <none>
calico-node-6947s                          0/1     CrashLoopBackOff   109        9d    192.168.56.110   node3   <none>           <none>
calico-node-pkm5b                          1/1     Running            5          9d    192.168.56.108   node1   <none>           <none>
calico-node-xljh2                          0/1     CrashLoopBackOff   109        9d    192.168.56.109   node2   <none>           <none>
coredns-79878cb9c9-g9cfk                   1/1     Running            5          9d    10.233.90.16     node1   <none>           <none>
coredns-79878cb9c9-hvpc8                   1/1     Running            5          9d    10.233.90.17     node1   <none>           <none>
kube-apiserver-node1                       1/1     Running            5          9d    192.168.56.108   node1   <none>           <none>
kube-controller-manager-node1              1/1     Running            6          9d    192.168.56.108   node1   <none>           <none>
kube-proxy-2m2n8                           1/1     Running            10         9d    192.168.56.108   node1   <none>           <none>
kube-proxy-7nft6                           1/1     Running            10         9d    192.168.56.109   node2   <none>           <none>
kube-proxy-j8vs8                           1/1     Running            1          9d    192.168.56.110   node3   <none>           <none>
kube-scheduler-node1                       1/1     Running            6          9d    192.168.56.108   node1   <none>           <none>
nodelocaldns-jsq9w                         1/1     Running            5          9d    192.168.56.108   node1   <none>           <none>
nodelocaldns-pmqlq                         1/1     Running            4          9d    192.168.56.110   node3   <none>           <none>
nodelocaldns-zxkjb                         1/1     Running            5          9d    192.168.56.109   node2   <none>           <none>
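Besides the logs quoted below, the reason for a crash-looping pod can usually also be read from its events and from the previous container's output; for example (a generic check, using one of the pod names above):

kubectl describe pod calico-node-6947s -n kube-system        # look at Events and Last State
kubectl logs calico-node-6947s -n kube-system --previous     # output of the last failed container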
Checking the logs of the two failing calico-node pods shows that they cannot reach https://10.233.0.1:443/api/v1/nodes/foo; the datastore connection check keeps timing out (the same timeout can be reproduced from the nodes themselves, as sketched after the log excerpts below).
[root@node1 ~]# kubectl logs calico-node-6947s -n kube-system
2020-10-08 14:16:26.014 [INFO][8] startup/startup.go 299: Early log level set to info
2020-10-08 14:16:26.015 [INFO][8] startup/startup.go 315: Using NODENAME environment for node name
2020-10-08 14:16:26.015 [INFO][8] startup/startup.go 327: Determined node name: node3
2020-10-08 14:16:26.017 [INFO][8] startup/startup.go 359: Checking datastore connection
2020-10-08 14:16:56.018 [INFO][8] startup/startup.go 374: Hit error connecting to datastore - retry error=Get https://10.233.0.1:443/api/v1/nodes/foo: dial tcp 10.233.0.1:443: i/o timeout
2020-10-08 14:17:27.021 [INFO][8] startup/startup.go 374: Hit error connecting to datastore - retry error=Get https://10.233.0.1:443/api/v1/nodes/foo: dial tcp 10.233.0.1:443: i/o timeout
[root@node1 ~]# kubectl logs calico-node-xljh2 -n kube-system
2020-10-12 01:08:37.085 [INFO][8] startup/startup.go 299: Early log level set to info
2020-10-12 01:08:37.086 [INFO][8] startup/startup.go 315: Using NODENAME environment for node name
2020-10-12 01:08:37.086 [INFO][8] startup/startup.go 327: Determined node name: node2
2020-10-12 01:08:37.154 [INFO][8] startup/startup.go 359: Checking datastore connection
2020-10-12 01:09:07.155 [INFO][8] startup/startup.go 374: Hit error connecting to datastore - retry error=Get https://10.233.0.1:443/api/v1/nodes/foo: dial tcp 10.233.0.1:443: i/o timeout
2020-10-12 01:09:38.158 [INFO][8] startup/startup.go 374: Hit error connecting to datastore - retry error=Get https://10.233.0.1:443/api/v1/nodes/foo: dial tcp 10.233.0.1:443: i/o timeout
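For what it is worth, the same timeout can be reproduced directly on the affected nodes, which helps separate a service-routing (kube-proxy) problem from a Calico-specific one. A rough check, using the addresses from this cluster:

# From node2 or node3: reach the API server through the service VIP
# (this is what calico-node does, and what times out here)
curl -k --connect-timeout 5 https://10.233.0.1:443/version

# Compare with reaching the API server on the master node directly (expected to respond)
curl -k --connect-timeout 5 https://192.168.56.108:6443/version

# If kube-proxy runs in iptables mode, check that NAT rules exist for the service VIP;
# with ipvs mode, use `ipvsadm -Ln` instead
iptables -t nat -L KUBE-SERVICES -n | grep 10.233.0.1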
I have confirmed that firewalld and SELinux are disabled on all nodes.
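(Roughly how that was verified on each node:)

systemctl is-active firewalld   # expected: inactive
getenforce                      # expected: Disabled or Permissive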
Attachments:
config-sample.yaml configuration file
[root@localhost kubesphere-all-v3.0.0-offline-linux-amd64]# cat config-sample.yaml
apiVersion: kubekey.kubesphere.io/v1alpha1
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: node1, address: 192.168.56.108, internalAddress: 192.168.56.108, user: root, password: kkroot}
  - {name: node2, address: 192.168.56.109, internalAddress: 192.168.56.109, user: root, password: kkroot}
  - {name: node3, address: 192.168.56.110, internalAddress: 192.168.56.110, user: root, password: kkroot}
  roleGroups:
    etcd:
    - node1
    master:
    - node1
    worker:
    - node1
    - node2
    - node3
  controlPlaneEndpoint:
    domain: lb.kubesphere.local
    address: ""
    port: "6443"
  kubernetes:
    version: v1.17.9
    imageRepo: kubesphere
    clusterName: cluster.local
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
  registry:
    registryMirrors: []
    insecureRegistries: []
    privateRegistry: dockerhub.kubekey.local
  addons: []
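For reference, the 10.233.0.1 address in the calico-node errors is the first IP of kubeServiceCIDR (10.233.0.0/18), i.e. the ClusterIP of the default kubernetes Service. It can be cross-checked with:

kubectl get svc kubernetes -n default -o wide    # ClusterIP should be 10.233.0.1
kubectl get endpoints kubernetes -n default      # should point at the real apiserver address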
Installation log from ./kk create cluster
[root@node1 kubesphere-all-v3.0.0-offline-linux-amd64]# ./kk create cluster -f config-sample.yaml
+-------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+
| name | sudo | curl | openssl | ebtables | socat | ipset | conntrack | docker | nfs client | ceph client | glusterfs client | time |
+-------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+
| node3 | y | y | y | y | y | y | y | y | y | y | y | EDT 10:53:47 |
| node1 | y | y | y | y | y | y | y | y | y | y | y | EDT 10:53:47 |
| node2 | y | y | y | y | y | y | y | y | y | y | y | EDT 10:53:47 |
+-------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+
This is a simple check of your environment.
Before installation, you should ensure that your machines meet all requirements specified at
https://github.com/kubesphere/kubekey#requirements-and-recommendations
Continue this installation? [yes/no]: yes
INFO[10:53:49 EDT] Downloading Installation Files
INFO[10:53:49 EDT] Downloading kubeadm ...
INFO[10:53:49 EDT] Downloading kubelet ...
INFO[10:53:50 EDT] Downloading kubectl ...
INFO[10:53:50 EDT] Downloading kubecni ...
INFO[10:53:50 EDT] Downloading helm ...
INFO[10:53:51 EDT] Configurating operating system ...
[node2 192.168.56.109] MSG:
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
[node1 192.168.56.108] MSG:
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
[node3 192.168.56.110] MSG:
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
INFO[10:53:54 EDT] Installing docker ...
INFO[10:53:55 EDT] Start to download images on all nodes
[node1] Downloading image: dockerhub.kubekey.local/kubesphere/etcd:v3.3.12
[node3] Downloading image: dockerhub.kubekey.local/kubesphere/pause:3.1
[node2] Downloading image: dockerhub.kubekey.local/kubesphere/pause:3.1
[node1] Downloading image: dockerhub.kubekey.local/kubesphere/pause:3.1
[node3] Downloading image: dockerhub.kubekey.local/coredns/coredns:1.6.9
[node2] Downloading image: dockerhub.kubekey.local/coredns/coredns:1.6.9
[node1] Downloading image: dockerhub.kubekey.local/kubesphere/kube-apiserver:v1.17.9
[node3] Downloading image: dockerhub.kubekey.local/kubesphere/k8s-dns-node-cache:1.15.12
[node2] Downloading image: dockerhub.kubekey.local/kubesphere/k8s-dns-node-cache:1.15.12
[node3] Downloading image: dockerhub.kubekey.local/calico/kube-controllers:v3.15.1
[node1] Downloading image: dockerhub.kubekey.local/kubesphere/kube-controller-manager:v1.17.9
[node2] Downloading image: dockerhub.kubekey.local/calico/kube-controllers:v3.15.1
[node2] Downloading image: dockerhub.kubekey.local/calico/cni:v3.15.1
[node1] Downloading image: dockerhub.kubekey.local/kubesphere/kube-scheduler:v1.17.9
[node3] Downloading image: dockerhub.kubekey.local/calico/cni:v3.15.1
[node2] Downloading image: dockerhub.kubekey.local/calico/node:v3.15.1
[node1] Downloading image: dockerhub.kubekey.local/kubesphere/kube-proxy:v1.17.9
[node3] Downloading image: dockerhub.kubekey.local/calico/node:v3.15.1
[node2] Downloading image: dockerhub.kubekey.local/calico/pod2daemon-flexvol:v3.15.1
[node3] Downloading image: dockerhub.kubekey.local/calico/pod2daemon-flexvol:v3.15.1
[node1] Downloading image: dockerhub.kubekey.local/coredns/coredns:1.6.9
[node1] Downloading image: dockerhub.kubekey.local/kubesphere/k8s-dns-node-cache:1.15.12
[node1] Downloading image: dockerhub.kubekey.local/calico/kube-controllers:v3.15.1
[node1] Downloading image: dockerhub.kubekey.local/calico/cni:v3.15.1
[node1] Downloading image: dockerhub.kubekey.local/calico/node:v3.15.1
[node1] Downloading image: dockerhub.kubekey.local/calico/pod2daemon-flexvol:v3.15.1
INFO[10:53:59 EDT] Generating etcd certs
INFO[10:54:01 EDT] Synchronizing etcd certs
INFO[10:54:01 EDT] Creating etcd service
INFO[10:54:05 EDT] Starting etcd cluster
[node1 192.168.56.108] MSG:
Configuration file already exists
Waiting for etcd to start
INFO[10:54:13 EDT] Refreshing etcd configuration
INFO[10:54:13 EDT] Backup etcd data regularly
INFO[10:54:14 EDT] Get cluster status
[node1 192.168.56.108] MSG:
Cluster will be created.
INFO[10:54:14 EDT] Installing kube binaries
Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/kubeadm to 192.168.56.108:/tmp/kubekey/kubeadm Done
Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/kubeadm to 192.168.56.110:/tmp/kubekey/kubeadm Done
Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/kubeadm to 192.168.56.109:/tmp/kubekey/kubeadm Done
Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/kubelet to 192.168.56.108:/tmp/kubekey/kubelet Done
Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/kubectl to 192.168.56.108:/tmp/kubekey/kubectl Done
Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/helm to 192.168.56.108:/tmp/kubekey/helm Done
Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/kubelet to 192.168.56.110:/tmp/kubekey/kubelet Done
Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 192.168.56.108:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz Done
Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/kubelet to 192.168.56.109:/tmp/kubekey/kubelet Done
Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/kubectl to 192.168.56.110:/tmp/kubekey/kubectl Done
Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/kubectl to 192.168.56.109:/tmp/kubekey/kubectl Done
Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/helm to 192.168.56.110:/tmp/kubekey/helm Done
Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/helm to 192.168.56.109:/tmp/kubekey/helm Done
Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 192.168.56.109:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz Done
Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 192.168.56.110:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz Done
INFO[10:54:32 EDT] Initializing kubernetes cluster
[node1 192.168.56.108] MSG:
W1002 10:54:33.546978 7304 defaults.go:186] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
W1002 10:54:33.547575 7304 validation.go:28] Cannot validate kube-proxy config - no validator is available
W1002 10:54:33.547601 7304 validation.go:28] Cannot validate kubelet config - no validator is available
[init] Using Kubernetes version: v1.17.9
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [node1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.kubesphere.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost lb.kubesphere.local node1 node1.cluster.local node2 node2.cluster.local node3 node3.cluster.local] and IPs [10.233.0.1 10.0.2.15 127.0.0.1 192.168.56.108 192.168.56.109 192.168.56.110 10.233.0.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] External etcd mode: Skipping etcd/ca certificate authority generation
[certs] External etcd mode: Skipping etcd/server certificate generation
[certs] External etcd mode: Skipping etcd/peer certificate generation
[certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation
[certs] External etcd mode: Skipping apiserver-etcd-client certificate generation
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[controlplane] Adding extra host path mount "host-time" to "kube-controller-manager"
W1002 10:54:39.078002 7304 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[controlplane] Adding extra host path mount "host-time" to "kube-controller-manager"
W1002 10:54:39.089428 7304 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[controlplane] Adding extra host path mount "host-time" to "kube-controller-manager"
W1002 10:54:39.091411 7304 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 26.007113 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.17" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node node1 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node node1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: rajfez.t9320hox3sddbowz
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:
kubeadm join lb.kubesphere.local:6443 --token rajfez.t9320hox3sddbowz \
--discovery-token-ca-cert-hash sha256:99f5f95e912acb458719c9cbaa6d4acb5d36ca0e38dccb00c56d69c2f0ef7fa2 \
--control-plane
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join lb.kubesphere.local:6443 --token rajfez.t9320hox3sddbowz \
--discovery-token-ca-cert-hash sha256:99f5f95e912acb458719c9cbaa6d4acb5d36ca0e38dccb00c56d69c2f0ef7fa2
[node1 192.168.56.108] MSG:
node/node1 untainted
[node1 192.168.56.108] MSG:
node/node1 labeled
[node1 192.168.56.108] MSG:
service "kube-dns" deleted
[node1 192.168.56.108] MSG:
service/coredns created
[node1 192.168.56.108] MSG:
serviceaccount/nodelocaldns created
daemonset.apps/nodelocaldns created
[node1 192.168.56.108] MSG:
configmap/nodelocaldns created
[node1 192.168.56.108] MSG:
I1002 10:55:34.720063 9901 version.go:251] remote version is much newer: v1.19.2; falling back to: stable-1.17
W1002 10:55:36.884062 9901 validation.go:28] Cannot validate kube-proxy config - no validator is available
W1002 10:55:36.884090 9901 validation.go:28] Cannot validate kubelet config - no validator is available
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
a9a0daeedbefb4b9a014f4b258b9916403f7136bea20d28ec03aa926c41fcb3e
[node1 192.168.56.108] MSG:
secret/kubeadm-certs patched
[node1 192.168.56.108] MSG:
secret/kubeadm-certs patched
[node1 192.168.56.108] MSG:
secret/kubeadm-certs patched
[node1 192.168.56.108] MSG:
W1002 10:55:37.738867 10303 validation.go:28] Cannot validate kube-proxy config - no validator is available
W1002 10:55:37.738964 10303 validation.go:28] Cannot validate kubelet config - no validator is available
kubeadm join lb.kubesphere.local:6443 --token 025byf.2t2mvldlr9wm1ycx --discovery-token-ca-cert-hash sha256:99f5f95e912acb458719c9cbaa6d4acb5d36ca0e38dccb00c56d69c2f0ef7fa2
[node1 192.168.56.108] MSG:
NAME    STATUS     ROLES           AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION          CONTAINER-RUNTIME
node1   NotReady   master,worker   34s   v1.17.9   192.168.56.108   <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://19.3.4
INFO[10:55:38 EDT] Deploying network plugin ...
[node1 192.168.56.108] MSG:
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
INFO[10:55:40 EDT] Joining nodes to cluster
[node3 192.168.56.110] MSG:
W1002 10:55:41.544472 12557 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
W1002 10:55:43.067290 12557 defaults.go:186] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
[node2 192.168.56.109] MSG:
W1002 10:55:41.963749 8533 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
W1002 10:55:43.520053 8533 defaults.go:186] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
[node3 192.168.56.110] MSG:
node/node3 labeled
[node2 192.168.56.109] MSG:
node/node2 labeled
INFO[10:55:54 EDT] Congradulations! Installation is successful.