When creating a deployment issue, please follow the template below. The more information you provide, the easier it is to get a timely answer. Administrators reserve the right to close issues that do not follow the template.
Make sure the post is clearly formatted and readable, and format code with markdown code block syntax.
If you spend only one minute creating your issue, you cannot expect others to spend half an hour answering it.
Operating system information
e.g. virtual machine/bare metal, CentOS 7.5/Ubuntu 18.04, 4C/8G
Virtual machine, CentOS 7, 2C/8G
Kubernetes version information
Paste the output of `kubectl version` below:
v1.20.4
Container runtime
Paste the output of `docker version` / `crictl version` / `nerdctl version` below:
19.03.8
KubeSphere version information
e.g. v2.1.1/v3.0.0. Offline or online installation? Installed on an existing Kubernetes cluster, or with kk?
v3.1.1, online installation, installed with kk
What is the problem
In the original architecture, k8s-master1 and k8s-master2 provided high availability: stopping either k8s-master1 or k8s-master2 did not affect the service.

```
k8s-master1   Ready   control-plane,master   9d   v1.20.4
k8s-master2   Ready   control-plane,master   9d   v1.20.4
k8s-node1     Ready   worker                 9d   v1.20.4
k8s-node2     Ready   worker                 9d   v1.20.4
```
In the current architecture, the etcd cluster consists of three nodes: k8s-master1, k8s-master2, and k8s-node1.
```
[root@k8s-master2 ~]# kubectl get node
NAME          STATUS   ROLES                  AGE   VERSION
k8s-master1   Ready    control-plane,master   9d    v1.20.4
k8s-master2   Ready    control-plane,master   9d    v1.20.4
k8s-node1     Ready    worker                 9d    v1.20.4
k8s-node2     Ready    worker                 9d    v1.20.4
k8s-node3     Ready    worker                 16h   v1.20.4
```
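Since the etcd members are k8s-master1, k8s-master2, and k8s-node1, the etcd cluster health can also be checked by hand on one of those nodes. A minimal sketch, reusing the certificate paths and endpoints that appear in the kk error further down (they may differ on other installations):

```
# Run on an etcd node, e.g. k8s-master1; paths and endpoints are taken
# from the kk error output below and may not match other setups.
export ETCDCTL_API=2
export ETCDCTL_CERT_FILE='/etc/ssl/etcd/ssl/admin-k8s-master1.pem'
export ETCDCTL_KEY_FILE='/etc/ssl/etcd/ssl/admin-k8s-master1-key.pem'
export ETCDCTL_CA_FILE='/etc/ssl/etcd/ssl/ca.pem'
/usr/local/bin/etcdctl \
  --endpoints=https://192.168.1.237:2379,https://192.168.1.231:2379,https://192.168.1.235:2379 \
  cluster-health
```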
Now I want to add a new node, k8s-master3.
The sample.yaml file is as follows:
```
[root@k8s-master2 ~]# cat sample.yaml
apiVersion: kubekey.kubesphere.io/v1alpha1
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  # You should complete the ssh information of the hosts
  - {name: k8s-master1, address: 192.168.1.237, internalAddress: 192.168.1.237}
  - {name: k8s-master2, address: 192.168.1.231, internalAddress: 192.168.1.231}
  - {name: k8s-master3, address: 192.168.1.236, internalAddress: 192.168.1.236}
  - {name: k8s-node1, address: 192.168.1.235, internalAddress: 192.168.1.235}
  - {name: k8s-node2, address: 192.168.1.251, internalAddress: 192.168.1.251}
  - {name: k8s-node3, address: 192.168.1.232, internalAddress: 192.168.1.232}
  roleGroups:
    etcd:
    - k8s-master1
    - k8s-master2
    - k8s-node1
    master:
    - k8s-master1
    - k8s-master2
    - k8s-master3
    worker:
    - k8s-node1
    - k8s-node2
    - k8s-node3
  controlPlaneEndpoint:
    # If loadbalancer was used, 'address' should be set to loadbalancer's ip.
    domain: lb.kubesphere.local
    address: "192.168.1.239"
    port: 16443
  kubernetes:
    version: v1.20.4
    clusterName: cluster.local
    proxyMode: ipvs
    masqueradeAll: false
    maxPods: 110
    nodeCidrMaskSize: 24
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
  registry:
    privateRegistry: ""
```
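Since kk connects to every host in this file over SSH, it may also be worth confirming basic reachability before re-running it. A quick sketch, assuming root SSH access as used elsewhere in this post (adjust the user and auth to whatever the completed host entries actually specify):

```
# Hypothetical reachability check for the hosts listed in sample.yaml.
for ip in 192.168.1.237 192.168.1.231 192.168.1.236 \
          192.168.1.235 192.168.1.251 192.168.1.232; do
  ssh -o ConnectTimeout=5 root@"$ip" hostname || echo "cannot reach $ip"
done
```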
What is the error log? Screenshots are preferred.
The error is as follows:

```
[root@k8s-master2 ~]# ./kk add nodes -f sample.yaml
Failed to start etcd cluster: Failed to start etcd cluster: Failed to exec command: sudo -E /bin/sh -c "export ETCDCTL_API=2;export ETCDCTL_CERT_FILE='/etc/ssl/etcd/ssl/admin-k8s-master1.pem';export ETCDCTL_KEY_FILE='/etc/ssl/etcd/ssl/admin-k8s-master1-key.pem';export ETCDCTL_CA_FILE='/etc/ssl/etcd/ssl/ca.pem';/usr/local/bin/etcdctl --endpoints=https://192.168.1.237:2379,https://192.168.1.231:2379,https://192.168.1.235:2379 cluster-health | grep -q 'cluster is healthy'"
: Process exited with status 1
```
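For debugging, two hedged starting points: re-run the same health check by hand without the `| grep -q ...`, so the actual etcdctl output is visible (see the sketch after the node list above), and inspect the etcd service on each member. A sketch, assuming kk set etcd up as a systemd unit named `etcd` (worth verifying on the nodes):

```
# On each etcd node (k8s-master1, k8s-master2, k8s-node1), check the
# service state and its recent logs.
systemctl status etcd
journalctl -u etcd --since "10 minutes ago" --no-pager
```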