INFO[12:16:28 CST] Generating etcd certs
INFO[12:16:32 CST] Synchronizing etcd certs
INFO[12:16:32 CST] Creating etcd service
[k8s-dev2 192.168.8.181] MSG:
etcd will be installed
[k8s-dev3 192.168.8.182] MSG:
etcd will be installed
[k8s-dev1 192.168.8.180] MSG:
etcd will be installed
INFO[12:16:35 CST] Starting etcd cluster
[k8s-dev1 192.168.8.180] MSG:
Configuration file will be created
[k8s-dev2 192.168.8.181] MSG:
Configuration file will be created
[k8s-dev3 192.168.8.182] MSG:
Configuration file will be created
INFO[12:16:36 CST] Refreshing etcd configuration
Waiting for etcd to start
Waiting for etcd to start
Waiting for etcd to start
INFO[12:16:43 CST] Backup etcd data regularly
INFO[12:16:49 CST] Get cluster status
[k8s-dev1 192.168.8.180] MSG:
Cluster will be created.
INFO[12:16:49 CST] Installing kube binaries
Push /root/kubekey/v1.20.4/amd64/kubeadm to 192.168.8.180:/tmp/kubekey/kubeadm Done
Push /root/kubekey/v1.20.4/amd64/kubeadm to 192.168.8.182:/tmp/kubekey/kubeadm Done
Push /root/kubekey/v1.20.4/amd64/kubeadm to 192.168.8.181:/tmp/kubekey/kubeadm Done
Push /root/kubekey/v1.20.4/amd64/kubelet to 192.168.8.180:/tmp/kubekey/kubelet Done
Push /root/kubekey/v1.20.4/amd64/kubectl to 192.168.8.180:/tmp/kubekey/kubectl Done
Push /root/kubekey/v1.20.4/amd64/kubelet to 192.168.8.181:/tmp/kubekey/kubelet Done
Push /root/kubekey/v1.20.4/amd64/kubelet to 192.168.8.182:/tmp/kubekey/kubelet Done
Push /root/kubekey/v1.20.4/amd64/helm to 192.168.8.180:/tmp/kubekey/helm Done
Push /root/kubekey/v1.20.4/amd64/kubectl to 192.168.8.181:/tmp/kubekey/kubectl Done
Push /root/kubekey/v1.20.4/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 192.168.8.180:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz Done
Push /root/kubekey/v1.20.4/amd64/kubectl to 192.168.8.182:/tmp/kubekey/kubectl Done
Push /root/kubekey/v1.20.4/amd64/helm to 192.168.8.181:/tmp/kubekey/helm Done
Push /root/kubekey/v1.20.4/amd64/helm to 192.168.8.182:/tmp/kubekey/helm Done
Push /root/kubekey/v1.20.4/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 192.168.8.181:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz Done
Push /root/kubekey/v1.20.4/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 192.168.8.182:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz Done
INFO[12:16:56 CST] Initializing kubernetes cluster
[k8s-dev1 192.168.8.180] MSG:
[preflight] Running pre-flight checks
W0722 12:16:59.288730 15721 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
W0722 12:16:59.294171 15721 cleanupnode.go:99] [reset] Failed to evaluate the "/var/lib/kubelet" directory. Skipping its unmount and cleanup: lstat /var/lib/kubelet: no such file or directory
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/dockershim /var/run/kubernetes /var/lib/cni]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
[k8s-dev1 192.168.8.180] MSG:
[preflight] Running pre-flight checks
W0722 12:17:00.319840 16062 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
W0722 12:17:00.324187 16062 cleanupnode.go:99] [reset] Failed to evaluate the "/var/lib/kubelet" directory. Skipping its unmount and cleanup: lstat /var/lib/kubelet: no such file or directory
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/dockershim /var/run/kubernetes /var/lib/cni]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
ERRO[12:17:01 CST] Failed to init kubernetes cluster: Failed to exec command: sudo env PATH=$PATH /bin/sh -c "/usr/local/bin/kubeadm init --config=/etc/kubernetes/kubeadm-config.yaml --ignore-preflight-errors=FileExisting-crictl"
W0722 12:17:00.483486 16116 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
[init] Using Kubernetes version: v1.20.4
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR Port-6443]: Port 6443 is in use
[preflight] If you know what you are doing, you can make a check non-fatal with --ignore-preflight-errors=...
To see the stack trace of this error execute with --v=5 or higher: Process exited with status 1 node=192.168.8.180
WARN[12:17:01 CST] Task failed ...
WARN[12:17:01 CST] error: interrupted by error
Error: Failed to init kubernetes cluster: interrupted by error
Usage:
kk create cluster [flags]

Flags:
--download-cmd string The user defined command to download the necessary binary files. The first param '%s' is output path, the second param '%s', is the URL (default "curl -L -o %s %s")
-f, --filename string Path to a configuration file
-h, --help help for cluster
--skip-pull-images Skip pre pull images
--with-kubernetes string Specify a supported version of kubernetes (default "v1.19.8")
--with-kubesphere Deploy a specific version of kubesphere (default v3.1.0)
--with-local-storage Deploy a local PV provisioner
-y, --yes Skip pre-check of the installation

Global Flags:
--debug Print detailed information (default true)
--in-cluster Running inside the cluster

Failed to init kubernetes cluster: interrupted by error
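The decisive message in the output above is "[ERROR Port-6443]: Port 6443 is in use" on 192.168.8.180. Before re-running kk it is worth checking what is already listening on that port on the master node. A minimal diagnostic sketch, assuming ss and lsof are installed on the host (neither command appears in the original thread):

# Run on the failing master (192.168.8.180): show the process bound to 6443
ss -lntp | grep ':6443'
# or, equivalently
lsof -i :6443 -sTCP:LISTEN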

The configuration file is:
apiVersion: kubekey.kubesphere.io/v1alpha1
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: k8s-dev1, address: 192.168.8.180, internalAddress: 192.168.8.180, privateKeyPath: "~/.ssh/id_rsa"}
  - {name: k8s-dev2, address: 192.168.8.181, internalAddress: 192.168.8.181, privateKeyPath: "~/.ssh/id_rsa"}
  - {name: k8s-dev3, address: 192.168.8.182, internalAddress: 192.168.8.182, privateKeyPath: "~/.ssh/id_rsa"}
  roleGroups:
    etcd:
    - k8s-dev1
    - k8s-dev2
    - k8s-dev3
    master:
    - k8s-dev1
    - k8s-dev2
    - k8s-dev3
    worker:
    - k8s-dev1
    - k8s-dev2
    - k8s-dev3
  controlPlaneEndpoint:
    domain: lb.kubesphere.local
    address: "192.168.8.179"
    port: 6443
  kubernetes:
    version: v1.20.4
    imageRepo: kubesphere
    clusterName: cluster.local
    masqueradeAll: false  # masqueradeAll tells kube-proxy to SNAT everything if using the pure iptables proxy mode. [Default: false]
    maxPods: 200  # maxPods is the number of pods that can run on this Kubelet. [Default: 110]
    nodeCidrMaskSize: 24  # internal network node size allocation. This is the size allocated to each node on your network. [Default: 24]
    proxyMode: ipvs  # mode specifies which proxy mode to use. [Default: ipvs]
  network:
    plugin: calico
    calico:
      ipipMode: Always  # IPIP Mode to use for the IPv4 POOL created at start up. If set to a value other than Never, vxlanMode should be set to "Never". [Always | CrossSubnet | Never] [Default: Always]
      vxlanMode: Never  # VXLAN Mode to use for the IPv4 POOL created at start up. If set to a value other than Never, ipipMode should be set to "Never". [Always | CrossSubnet | Never] [Default: Never]
      vethMTU: 1440  # The maximum transmission unit (MTU) setting determines the largest packet size that can be transmitted through your network. [Default: 1440]
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
  registry:
    registryMirrors: ["https://*.mirror.aliyuncs.com"]  # input your registryMirrors
    insecureRegistries: []
    privateRegistry: ""
  addons:
  - name: nfs-client
    namespace: kube-system
    sources:
      chart:
        name: nfs-client-provisioner
        repo: https://charts.kubesphere.io/main
        valuesFile: /root/nfs-client.yaml  # Use the path of your own NFS-client configuration file.

---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.1.1
spec:
  persistence:
    storageClass: ""
  authentication:
    jwtSecret: ""
  zone: ""
  local_registry: ""
  etcd:
    monitoring: false
    endpointIps: ""
    port: 2379
    tlsEnable: true
  common:
    redis:
      enabled: false
    redisVolumSize: 2Gi
    openldap:
      enabled: false
    openldapVolumeSize: 2Gi
    minioVolumeSize: 20Gi
    monitoring:
      endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090
    es:
      elasticsearchMasterVolumeSize: 40Gi
      elasticsearchDataVolumeSize: 200Gi
      logMaxAge: 60
      elkPrefix: logstash
      basicAuth:
        enabled: false
        username: ""
        password: ""
      externalElasticsearchUrl: ""
      externalElasticsearchPort: ""
  console:
    enableMultiLogin: true
    port: 30880
  alerting:
    enabled: false
    # thanosruler:
    #   replicas: 1
    #   resources: {}
  auditing:
    enabled: false
  devops:
    enabled: false
    jenkinsMemoryLim: 2Gi
    jenkinsMemoryReq: 1500Mi
    jenkinsVolumeSize: 8Gi
    jenkinsJavaOpts_Xms: 512m
    jenkinsJavaOpts_Xmx: 512m
    jenkinsJavaOpts_MaxRAM: 2g
  events:
    enabled: false
    ruler:
      enabled: true
      replicas: 2
  logging:
    enabled: false
    logsidecar:
      enabled: true
      replicas: 2
  metrics_server:
    enabled: false
  monitoring:
    storageClass: ""
    prometheusMemoryRequest: 400Mi
    prometheusVolumeSize: 20Gi
  multicluster:
    clusterRole: none
  network:
    networkpolicy:
      enabled: false
    ippool:
      type: none
    topology:
      type: none
  openpitrix:
    store:
      enabled: false
  servicemesh:
    enabled: false
  kubeedge:
    enabled: false
    cloudCore:
      nodeSelector: {"node-role.kubernetes.io/worker": ""}
      tolerations: []
      cloudhubPort: "10000"
      cloudhubQuicPort: "10001"
      cloudhubHttpsPort: "10002"
      cloudstreamPort: "10003"
      tunnelPort: "10004"
      cloudHub:
        advertiseAddress:
        - ""
        nodeLimit: "100"
      service:
        cloudhubNodePort: "30000"
        cloudhubQuicNodePort: "30001"
        cloudhubHttpsNodePort: "30002"
        cloudstreamNodePort: "30003"
        tunnelNodePort: "30004"
    edgeWatcher:
      nodeSelector: {"node-role.kubernetes.io/worker": ""}
      tolerations: []
      edgeWatcherAgent:
        nodeSelector: {"node-role.kubernetes.io/worker": ""}
        tolerations: []
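For reference, this is the file passed to kk via -f (see the usage output above); the failing run would have been started with something along these lines (the file name config-sample.yaml is only a placeholder for whatever path was actually used):

./kk create cluster -f config-sample.yaml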

The etcd cluster itself is deployed and healthy. Could someone please take a look at what is causing this? I also tried commenting things out like this:

master:
- k8s-dev1
# - k8s-dev2
# - k8s-dev3
# address: "192.168.8.179"

and still got the same error.

Found the problem: the HA (load balancer) deployed onto the master node servers was occupying port 6443.

One more question about HA deployment: if there are only 3 servers in total, used simultaneously as the HA load balancer, K8s masters, and K8s nodes, is that achievable?
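For what it's worth, one commonly used way to co-locate the load balancer with the masters on the same three machines is to let the LB listen on a port other than 6443, so it does not collide with the local kube-apiserver, and point controlPlaneEndpoint at that port. A sketch of the relevant fragment of the kk config under that assumption (8443 is an arbitrary example port, not something from this thread; the LB at 192.168.8.179 would then forward 8443 to port 6443 on the three masters):

  controlPlaneEndpoint:
    domain: lb.kubesphere.local
    address: "192.168.8.179"
    port: 8443   # load balancer port; kube-apiserver itself keeps listening on 6443 on each master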
