`INFO[11:55:20 CST] Initializing kubernetes cluster
[k8s-dev-master01 192.168.1.10] MSG:
[preflight] Running pre-flight checks
W0525 11:55:30.929049 1840 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
W0525 11:55:30.936773 1840 cleanupnode.go:99] [reset] Failed to evaluate the "/var/lib/kubelet" directory. Skipping its unmount and cleanup: lstat /var/lib/kubelet: no such file or directory
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/dockershim /var/run/kubernetes /var/lib/cni]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
[k8s-dev-master01 192.168.1.10] MSG:
[preflight] Running pre-flight checks
W0525 11:55:31.967990 2011 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
W0525 11:55:31.974805 2011 cleanupnode.go:99] [reset] Failed to evaluate the "/var/lib/kubelet" directory. Skipping its unmount and cleanup: lstat /var/lib/kubelet: no such file or directory
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/dockershim /var/run/kubernetes /var/lib/cni]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
ERRO[11:55:32 CST] Failed to init kubernetes cluster: Failed to exec command: sudo env PATH=$PATH /bin/sh -c "/usr/local/bin/kubeadm init --config=/etc/kubernetes/kubeadm-config.yaml --ignore-preflight-errors=FileExisting-crictl"
W0525 11:55:32.159665 2040 utils.go:26] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
W0525 11:55:32.159904 2040 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.8
[preflight] Running pre-flight checks
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.6. Latest validated version: 19.03
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR Port-6443]: Port 6443 is in use
[preflight] If you know what you are doing, you can make a check non-fatal with --ignore-preflight-errors=...
To see the stack trace of this error execute with --v=5 or higher: Process exited with status 1 node=192.168.1.10
WARN[11:55:32 CST] Task failed …
WARN[11:55:32 CST] error: interrupted by error
Error: Failed to init kubernetes cluster: interrupted by error
Usage:
kk create cluster [flags]

Flags:
--download-cmd string The user defined command to download the necessary binary files. The first param '%s' is output path, the second param '%s', is the URL (default "curl -L -o %s %s")
-f, --filename string Path to a configuration file
-h, --help help for cluster
--skip-pull-images Skip pre pull images
--with-kubernetes string Specify a supported version of kubernetes (default "v1.19.8")
--with-kubesphere Deploy a specific version of kubesphere (default v3.1.0)
--with-local-storage Deploy a local PV provisioner
-y, --yes Skip pre-check of the installation

Global Flags:
--debug Print detailed information (default true)
--in-cluster Running inside the cluster

Failed to init kubernetes cluster: interrupted by error
`
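The fatal line in the preflight output is `[ERROR Port-6443]: Port 6443 is in use`: kubeadm wants to bind the kube-apiserver to 6443, but another process already owns that port on this node. A quick way to see who holds it (standard iproute2/lsof tooling, run on the failing master) would be something like:

```bash
# Show the process listening on the apiserver's default port 6443.
ss -lntp | grep ':6443'
# or, equivalently:
lsof -i :6443 -sTCP:LISTEN
```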
I'm getting this error while installing an HA cluster, could someone take a look? (conntrack is installed, and keepalived and HAProxy are both installed as well.)

It looks like a DNS problem.
network:
  plugin: calico
  calico:
    ipipMode: Always
    vxlanMode: Never
    vethMTU: 1440
  kubePodsCIDR: 10.233.64.0/18
  kubeServiceCIDR: 10.233.0.0/18

kubePodsCIDR: 10.233.64.0/18
kubeServiceCIDR: 10.233.0.0/18
How should these two values be configured?
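As a rule of thumb, `kubePodsCIDR` and `kubeServiceCIDR` just need to be two private ranges that do not overlap with each other or with any network the nodes can already reach (the node network here is 192.168.1.0/24, so the 10.233.0.0/16 defaults are fine unless that range is routed elsewhere in your environment). A rough way to sanity-check this from a master node (standard ip/kubectl commands; the kubeadm config path is the one shown in the error log above):

```bash
# Node addresses: make sure none of them fall inside 10.233.0.0/16.
ip -4 addr show | grep inet

# The subnets that were actually passed to kubeadm.
grep -iE 'podSubnet|serviceSubnet' /etc/kubernetes/kubeadm-config.yaml

# Once the cluster is up, the pod CIDR slice assigned to each node.
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'
```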

    Port 6443 is HAProxy's port. Let me kill it first and try again.

    zhu733756 The problem is solved now; it really was the port being occupied. To save machines, HAProxy sits on the same nodes as the masters here, and once I changed HAProxy's port the install went through. Then I ran into another problem: the cluster is now installed successfully, but the console cannot be accessed.
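    For anyone hitting the same conflict: when HAProxy is co-located with a master, its frontend must not bind the apiserver's 6443. A minimal check after moving HAProxy to a different port (8443 below is only an example, not necessarily the port used here) might be:

    ```bash
    # Expect two separate listeners: kube-apiserver on 6443, haproxy on its new port.
    ss -lntp | grep -E ':6443|:8443'
    ```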
    `
    #####################################################

    Welcome to KubeSphere!

    #####################################################

    Console: http://192.168.1.10:30880
    Account: admin
    Password: P@88w0rd

    NOTES:

    1. After logging into the console, please check the
      monitoring status of service components in
      the "Cluster Management". If any service is not
      ready, please wait patiently until all components
      are ready.
    2. Please modify the default password after login.

    #####################################################
    https://kubesphere.io 2021-05-25 15:20:27
    #####################################################

    [root@k8s-dev-master01 ~]# kubectl get pod,svc -A -o wide|grep console
    kubesphere-system pod/ks-console-786b9846d4-b2c44 1/1 Running 0 26m 10.55.116.4 k8s-dev-master03 <none> <none>
    kubesphere-system pod/ks-console-786b9846d4-lcq2r 1/1 Running 0 26m 10.55.89.5 k8s-dev-master02 <none> <none>
    kubesphere-system pod/ks-console-786b9846d4-m94k8 1/1 Running 0 26m 10.55.124.3 k8s-dev-master01 <none> <none>
    kubesphere-system service/ks-console NodePort 10.55.44.203 <none> 80:30880/TCP 26m app=ks-console,tier=frontend,version=v3.0.0
    [root@k8s-dev-master01 ~]# curl http://192.168.1.10:30880
    C
    [root@k8s-dev-master01 ~]# curl http://10.55.44.203
    Redirecting to <a href="/login">/login</a>.[root@k8s-dev-master01 ~]#
    [root@k8s-dev-master01 ~]# kubectl describe service/ks-console -n kubesphere-system
    Name: ks-console
    Namespace: kubesphere-system
    Labels: app=ks-console
    tier=frontend
    version=v3.0.0
    Annotations: Selector: app=ks-console,tier=frontend,version=v3.0.0
    Type: NodePort
    IP: 10.55.44.203
    Port: nginx 80/TCP
    TargetPort: 8000/TCP
    NodePort: nginx 30880/TCP
    Endpoints: 10.55.116.4:8000,10.55.124.3:8000,10.55.89.5:8000
    Session Affinity: None
    External Traffic Policy: Cluster
    Events: <none>
    [root@k8s-dev-master01 ~]# curl http://10.55.116.4:8000
    Redirecting to <a href="/login">/login</a>.[root@k8s-dev-master01 ~]#
    [root@k8s-dev-master01 ~]# getenforce
    Permissive
    [root@k8s-dev-master01 ~]# systemctl status firewalld
    ● firewalld.service - firewalld - dynamic firewall daemon
    Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
    Active: inactive (dead)
    Docs: man:firewalld(1)

    May 24 20:17:33 k8s-dev-master01 systemd[1]: Starting firewalld - dynamic firewall daemon...
    May 24 20:17:34 k8s-dev-master01 systemd[1]: Started firewalld - dynamic firewall daemon.
    May 24 21:09:08 k8s-dev-master01 systemd[1]: Stopping firewalld - dynamic firewall daemon...
    May 24 21:09:09 k8s-dev-master01 systemd[1]: Stopped firewalld - dynamic firewall daemon.
    [root@k8s-dev-master01 ~]# iptables --flush&iptables -tnat --flush & iptables -P FORWARD ACCEPT
    [1] 26362
    [2] 26363
    Another app is currently holding the xtables lock. Perhaps you want to use the -w option?
    Another app is currently holding the xtables lock. Perhaps you want to use the -w option?
    [2]+ Exit 4 iptables -tnat --flush
    [1]+ Done iptables --flush
    [root@k8s-dev-master01 ~]# curl http://192.168.1.10:30880
    C
    [root@k8s-dev-master01 ~]# netstat -nlp|grep 30880
    tcp 17 0 0.0.0.0:30880 0.0.0.0:* LISTEN 20699/kube-proxy
    [root@k8s-dev-master01 ~]# iptables -P FORWARD ACCEPT
    [root@k8s-dev-master01 ~]# curl http://192.168.1.10:30880
    C
    [root@k8s-dev-master01 ~]# kubectl get nodes
    NAME STATUS ROLES AGE VERSION
    k8s-dev-master01 Ready master 43m v1.18.8
    k8s-dev-master02 Ready master 42m v1.18.8
    k8s-dev-master03 Ready master 42m v1.18.8
    k8s-dev-node01 Ready worker 42m v1.18.8
    [root@k8s-dev-master01 ~]#`
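    Given the output above (the ClusterIP and pod IPs answer, only the NodePort on this node hangs), the usual suspect is kube-proxy's NodePort programming on master01 rather than ks-console itself. A rough checklist, assuming kube-proxy was deployed in its common ipvs mode (adjust if yours runs iptables mode):

    ```bash
    # Which proxy mode is configured?
    kubectl -n kube-system get cm kube-proxy -o yaml | grep -m1 'mode:'

    # ipvs mode: is there a virtual server for this node on 30880?
    ipvsadm -ln | grep -A3 30880

    # iptables mode: are the NodePort NAT rules present?
    iptables-save -t nat | grep 30880

    # kube-proxy on master01 (kube-proxy-5xcbd is the pod shown later in this thread)
    # often logs the failing rule directly.
    kubectl -n kube-system logs kube-proxy-5xcbd --tail=50
    ```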

      coffee Can you curl all three of 10.55.116.4:8000, 10.55.124.3:8000 and 10.55.89.5:8000? The request may be getting load-balanced to a console pod that is unreachable. Check that the kube-proxy on each master is working correctly. If that looks normal, check whether DNS resolution has errors and try restarting, or find the console pod that cannot be reached and look at its logs for errors.

        zhu733756
        [root@k8s-dev-master01 ~]# curl 10.55.116.4:8000
        Redirecting to <a href="/login">/login</a>.
        [root@k8s-dev-master01 ~]# curl 10.55.89.5:8000
        Redirecting to <a href="/login">/login</a>.
        [root@k8s-dev-master01 ~]# curl 10.55.124.3:8000
        Redirecting to <a href="/login">/login</a>.

        [root@k8s-dev-master03 ~]# kubectl get pod -A -o wide|grep proxy
        kube-system kube-proxy-5xcbd 1/1 Running 0 78m 192.168.1.10 k8s-dev-master01 <none> <none>
        kube-system kube-proxy-6bnwr 1/1 Running 0 78m 192.168.1.13 k8s-dev-node01 <none> <none>
        kube-system kube-proxy-dmsdb 1/1 Running 0 78m 192.168.1.11 k8s-dev-master02 <none> <none>
        kube-system kube-proxy-wrdk4 1/1 Running 0 78m 192.168.1.12 k8s-dev-master03 <none> <none>
        kubesphere-system redis-ha-haproxy-5c6559d588-9kbpq 1/1 Running 0 73m 10.55.116.1 k8s-dev-master03 <none> <none>
        kubesphere-system redis-ha-haproxy-5c6559d588-tsxx4 1/1 Running 0 73m 10.55.124.1 k8s-dev-master01 <none> <none>
        kubesphere-system redis-ha-haproxy-5c6559d588-tzg2g 1/1 Running 1 73m 10.55.89.2 k8s-dev-master02 <none> <none>

        The services all look normal, so could it be a network problem? These machines are KVM virtual machines.

        zhu733756
        Embarrassing: of the three master nodes, only master01 cannot be reached.
        [root@k8s-dev-master03 ~]# curl 192.168.1.12:30880
        Redirecting to <a href="/login">/login</a>.
        [root@k8s-dev-master03 ~]# curl 192.168.1.11:30880
        Redirecting to <a href="/login">/login</a>.

        @coffee Good that you have pinpointed the problem. Check whether coredns is reporting any errors, run lsof -i:30880 on each node, and verify that the kube-proxy forwarding rules are correct.
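        Spelled out, those checks could look roughly like this (the pod label and service name are the standard kubeadm/KubeSphere ones; run the lsof line on every node):

        ```bash
        # Any errors from coredns?
        kubectl -n kube-system logs -l k8s-app=kube-dns --tail=50

        # Is anything bound to the NodePort on this node?
        lsof -i :30880

        # The rules kube-proxy programmed for the console service (iptables mode).
        iptables-save -t nat | grep -i ks-console
        ```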

          3 years later

          zhu733756 KSP doesn't actually listen on 30880, does it? Checking with netstat, the server isn't listening on port 30880, yet the web console on 30880 is still accessible from the browser just fine.
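          That is expected with newer kube-proxy versions: NodePort traffic is handled entirely by iptables/IPVS rules, and recent kube-proxy releases no longer hold a placeholder listening socket for each NodePort (older versions did, which is why the netstat output earlier in this thread still showed kube-proxy listening on 30880). To confirm the port is actually served, check the rules instead of the socket list, for example:

          ```bash
          iptables-save -t nat | grep 30880   # iptables mode
          ipvsadm -ln | grep 30880            # ipvs mode
          ```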
