Installing in a multi-node environment. Could any experts take a look and help me figure out how to fix this?
Such a headache, the 3.0 installation keeps erroring out.
rayzhou2017
Is the K8s you installed 1.19.0?
rayzhou2017 [root@kube-master1 soft]# kubectl version
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.9", GitCommit:"4fb7ed12476d57b8437ada90b4f93b17ffaeed99", GitTreeState:"clean", BuildDate:"2020-07-15T16:18:16Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
Cauchy
mkdir -p /root/.kube && mkdir -p $HOME/.kube && cp -f /etc/kubernetes/admin.conf /root/.kube/config && cp -f /etc/kubernetes/admin.conf $HOME/.kube/config && chown $(id -u):$(id -g) $HOME/.kube/config
Try running this.
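If the copy succeeds, kubectl should stop falling back to the default localhost:8080 endpoint. A quick sanity check (just a sketch; the path assumes the default kubeadm layout):

# One-off check that bypasses $HOME/.kube/config entirely
kubectl --kubeconfig=/etc/kubernetes/admin.conf get nodes
# Confirm the active config now points at the real API server, not localhost:8080
kubectl config view --minify | grep server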
Cauchy It now reports a different error:
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
W0910 09:49:36.942295 13570 cleanupnode.go:99] [reset] Failed to evaluate the "/var/lib/kubelet" directory. Skipping its unmount and cleanup: lstat /var/lib/kubelet: no such file or directory
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/dockershim /var/run/kubernetes /var/lib/cni]
The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.
The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
[master3 172.20.248.106] MSG:
[preflight] Running pre-flight checks
W0910 09:49:35.990748 14762 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
W0910 09:49:35.996435 14762 cleanupnode.go:99] [reset] Failed to evaluate the "/var/lib/kubelet" directory. Skipping its unmount and cleanup: lstat /var/lib/kubelet: no such file or directory
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/dockershim /var/run/kubernetes /var/lib/cni]
The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.
The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
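As the reset output itself says, kubeadm reset does not clean up everything. A cleanup sketch for each node, following the manual steps the log asks for (destructive, so review before running):

# Remove leftover CNI configuration
rm -rf /etc/cni/net.d
# Flush iptables rules, including the nat table
iptables -F && iptables -t nat -F && iptables -X
# Clear IPVS tables if the cluster used IPVS
ipvsadm --clear
# Remove stale kubeconfig files
rm -f $HOME/.kube/config /root/.kube/config
# For the external-etcd warning above: wipe the etcd data directory on the
# etcd hosts as well (commonly /var/lib/etcd, but verify your own setup)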
It still failed to install; it now reports:
ERRO[15:09:03 CST] Failed to add worker to cluster: Failed to exec command: sudo -E /bin/sh -c "/usr/local/bin/kubeadm join lb.kubesphere.local:6443 --token 8guu01.nkfg1ex4w58mzxe5 --discovery-token-ca-cert-hash sha256:7e33a5507fb2419a8b39d9093096ff03bfa0596308974e7f96e198a44388eb3d"
W0914 15:04:03.536994 2979 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
error execution phase preflight: couldn't validate the identity of the API Server: abort connecting to API servers after timeout of 5m0s
To see the stack trace of this error execute with --v=5 or higher: Process exited with status 1 node=172.20.248.109
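That preflight failure means the worker at 172.20.248.109 never reached the control-plane endpoint within the 5-minute window. Some connectivity checks worth running from that node (a sketch, using the endpoint from the join command above):

# Does the load-balancer name resolve here? (often an /etc/hosts entry)
getent hosts lb.kubesphere.local
# Is the API server port reachable at all?
curl -k https://lb.kubesphere.local:6443/version

The cgroupfs warning is separate from the failure, but switching Docker to the systemd cgroup driver (e.g. "exec-opts": ["native.cgroupdriver=systemd"] in /etc/docker/daemon.json) is the fix the linked guide recommends.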
I seem to have the same problem as you. Why doesn't the installer automatically copy /etc/kubernetes/admin.conf to $HOME/.kube/config during installation? So strange.
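Likely because kubeadm only writes /etc/kubernetes/admin.conf on control-plane nodes; a worker joined with kubeadm join never gets that file, so there is nothing for the installer to copy. On a control-plane node, kubeadm init itself prints the manual steps, which match the fix above:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config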