I ran into a problem while installing KubeSphere in high-availability mode with KubeKey; the HA layer is implemented with keepalived and haproxy. My servers are laid out as follows (a keepalived sketch follows the list):
k8s-lb1      192.168.2.3   keepalived, haproxy
k8s-lb2      192.168.2.4   keepalived, haproxy
k8s-master1  192.168.2.5   master, etcd
k8s-master2  192.168.2.6   master, etcd
k8s-master3  192.168.2.7   master, etcd
k8s-node1    192.168.2.8   node
k8s-node2    192.168.2.9   node
vip          192.168.2.10
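For context, the keepalived side on k8s-lb1 is roughly along these lines (the interface name, priorities and auth password here are illustrative, not copied from my actual file; k8s-lb2 runs as BACKUP with a lower priority):

vrrp_instance VI_1 {
    state MASTER                 # BACKUP on k8s-lb2
    interface eth0               # illustrative: whichever NIC carries the 192.168.2.0/24 network
    virtual_router_id 51
    priority 100                 # e.g. 90 on k8s-lb2
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass changeme       # illustrative
    }
    virtual_ipaddress {
        192.168.2.10
    }
}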
keepalived and haproxy are already configured and the VIP is working (a quick check is shown right after the haproxy config below). The listener set up in haproxy.cfg is:
frontend kube-apiserver
    bind *:6443
    mode http
    default_backend kube-apiserver

backend kube-apiserver
    mode http
    balance roundrobin
    default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256
    server k8s-titan1 192.168.2.5:6443 check
    server k8s-titan2 192.168.2.6:6443 check
    server k8s-titan3 192.168.2.7:6443 check
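By "the VIP is working" I mean checks along these lines pass (commands are illustrative):

# on whichever lb node currently holds MASTER, the VIP is bound to the NIC
ip addr show | grep 192.168.2.10

# from any other machine, the VIP answers and haproxy is listening on 6443
nc -zv 192.168.2.10 6443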
Now, installing KubeSphere with KubeKey, the lb-related part of the config file looks like this (a fuller excerpt showing where it sits in config-sample.yaml follows the block):
controlPlaneEndpoint:
  domain: lb.kubesphere.local
  address: "192.168.2.10"
  port: 6443
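For completeness, that block sits inside spec of config-sample.yaml next to the host list, roughly like this (user/password and the kubernetes/network sections are omitted; host names and IPs match the list above; the lb nodes themselves are not managed by KubeKey, so they are not in hosts):

spec:
  hosts:
  - {name: k8s-master1, address: 192.168.2.5, internalAddress: 192.168.2.5, user: root, password: ...}
  - {name: k8s-master2, address: 192.168.2.6, internalAddress: 192.168.2.6, user: root, password: ...}
  - {name: k8s-master3, address: 192.168.2.7, internalAddress: 192.168.2.7, user: root, password: ...}
  - {name: k8s-node1, address: 192.168.2.8, internalAddress: 192.168.2.8, user: root, password: ...}
  - {name: k8s-node2, address: 192.168.2.9, internalAddress: 192.168.2.9, user: root, password: ...}
  roleGroups:
    etcd:
    - k8s-master1
    - k8s-master2
    - k8s-master3
    master:
    - k8s-master1
    - k8s-master2
    - k8s-master3
    worker:
    - k8s-node1
    - k8s-node2
  controlPlaneEndpoint:
    domain: lb.kubesphere.local
    address: "192.168.2.10"
    port: 6443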
Starting the installation:
./kk create cluster --with-kubernetes v1.20.6 --with-kubesphere -f config-sample.yaml
It hits this error:
INFO[14:35:33 CST] Initializing kubernetes cluster
[reset] Reading configuration from the cluster…
[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W1110 14:39:42.589801 28398 reset.go:99] [reset] Unable to fetch the kubeadm-config ConfigMap from cluster: failed to get config map: Get "https://lb.kubesphere.local:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config?timeout=10s": http: server gave HTTP response to HTTPS client
[preflight] Running pre-flight checks
W1110 14:39:42.591091 28398 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]
What I have tried so far is adding one entry to /etc/docker/daemon.json:
"insecure-registries": ["lb.kubesphere.local:6443"]
then restarting docker and reinstalling KubeSphere, but it did not help; the same error as above comes up.
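Concretely, after that change /etc/docker/daemon.json looks roughly like this (any keys that were already there, such as registry mirrors, are not shown), followed by the restart:

{
  "insecure-registries": ["lb.kubesphere.local:6443"]
}

systemctl daemon-reload
systemctl restart docker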
Has anyone run into this problem before? How can it be solved?