When creating a deployment issue, please follow the template below; the more information you provide, the easier it is to get a timely answer. Admins may close issues that do not follow the template.
Make sure the post is clearly formatted and readable, and format code with markdown code block syntax.
If you only spend one minute writing the question, you cannot expect someone else to spend half an hour answering it.

Operating system information
e.g., VM Ubuntu 18.04, 4C/8G

The VMs are created with Vagrant, and the proxy is configured with the `vagrant-proxyconf` plugin:

```ruby
if Vagrant.has_plugin?("vagrant-proxyconf")
  config.proxy.http     = "http://192.168.0.100:7890/"
  config.proxy.https    = "http://192.168.0.100:7890/"
  config.proxy.no_proxy = "localhost,127.0.0.1,127.0.1.1,.aliyun.com,10.233.64.0/18,10.233.0.0/18,192.168.12.201,192.168.12.202,192.168.12.203"
end
```
  • The three VM IP addresses are 192.168.12.201 / 192.168.12.202 / 192.168.12.203; 192.168.0.100:7890 is the address and proxy port of my Windows host (see the quick check sketched just below).
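A minimal sketch of how the proxy settings can be double-checked inside a guest, assuming `node201` is also the Vagrant machine name (these commands are standard, but were not part of the original setup notes):

```bash
# run from the directory containing the Vagrantfile on the Windows host
vagrant ssh node201 -c 'env | grep -i proxy'
```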

Kubernetes version information
The output of `kubectl version` is pasted below:

```
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.5", GitCommit:"aea7bbadd2fc0cd689de94a54e5b7b758869d691", GitTreeState:"clean", BuildDate:"2021-09-15T21:10:45Z", GoVersion:"go1.16.8", Compiler:"gc", Platform:"linux/amd64"}
```

Container runtime
The output of `docker version` / `crictl version` / `nerdctl version` is pasted below:

crictl and nerdctl are not installed.

```
Client:
 Version:           20.10.8
 API version:       1.41
 Go version:        go1.16.6
 Git commit:        3967b7d
 Built:             Fri Jul 30 19:50:40 2021
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true
```
  • Docker proxy configuration

`cat /etc/systemd/system/docker.service.d/proxy.conf`:

```ini
[Service]
Environment="HTTP_PROXY=http://192.168.0.100:7890/"
Environment="HTTPS_PROXY=http://192.168.0.100:7890/"
Environment="NO_PROXY=localhost,127.0.0.1,127.0.1.1,.aliyun.com,10.233.64.0/18,10.233.0.0/18,192.168.12.201,192.168.12.202,192.168.12.203,192.168.12.204,192.168.12.205,192.168.12.206,192.168.12.207"
```
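After editing this drop-in, Docker has to re-read it before the proxy takes effect; a minimal sketch of the usual systemd steps plus a check that the daemon actually sees the proxy (standard commands, not taken from the post):

```bash
sudo systemctl daemon-reload
sudo systemctl restart docker
# docker info reports "HTTP Proxy" / "HTTPS Proxy" / "No Proxy" lines when the drop-in is active
docker info 2>/dev/null | grep -i proxy
```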

KubeSphere version information
e.g., v3.0.0; online installation; installed with kk.

The default versions generated by `./kk create config` were used, and the cluster was created with `./kk create cluster -f config-sample.yaml`.
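Roughly the sequence that was run (only the two `kk` commands come from the description above; the comments are just a summary of the steps):

```bash
./kk create config                          # writes config-sample.yaml with default versions
# config-sample.yaml was then edited to list the three hosts, as shown below
./kk create cluster -f config-sample.yaml   # starts cluster creation
```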

```yaml
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: node201, address: 192.168.12.201, internalAddress: 192.168.12.201, user: vagrant, privateKeyPath: "~/.ssh/id_rsa"}
  - {name: node202, address: 192.168.12.202, internalAddress: 192.168.12.202, user: vagrant, privateKeyPath: "~/.ssh/id_rsa"}
  - {name: node203, address: 192.168.12.203, internalAddress: 192.168.12.203, user: vagrant, privateKeyPath: "~/.ssh/id_rsa"}
  roleGroups:
    etcd:
    - node201
    control-plane:
    - node201
    worker:
    - node202
    - node203
  controlPlaneEndpoint:
    ## Internal loadbalancer for apiservers
    # internalLoadbalancer: haproxy

    domain: lb.kubesphere.local
    address: ""
    port: 6443
  kubernetes:
    version: v1.21.5
    clusterName: cluster.local
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
    ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
    multusCNI:
      enabled: false
  registry:
    plainHTTP: false
    privateRegistry: ""
    namespaceOverride: ""
    registryMirrors: []
    insecureRegistries: []
  addons: []
```

What is the problem

`./kk create cluster` fails at the "Init cluster using kubeadm" step: the kubelet check times out and cluster initialization aborts. Full output below:

```
02:07:09 UTC [InitKubernetesModule] Generate kubeadm config
02:07:10 UTC success: [node201]
02:07:10 UTC [InitKubernetesModule] Init cluster using kubeadm
02:11:33 UTC stdout: [node201]
W0515 02:07:10.205632   16322 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
[init] Using Kubernetes version: v1.21.5
[preflight] Running pre-flight checks
        [WARNING FileExisting-socat]: socat not found in system path
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.kubesphere.local localhost node201 node201.cluster.local node202 node202.cluster.local node203 node203.cluster.local] and IPs [10.233.0.1 192.168.12.201 127.0.0.1 192.168.12.202 192.168.12.203]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] External etcd mode: Skipping etcd/ca certificate authority generation
[certs] External etcd mode: Skipping etcd/server certificate generation
[certs] External etcd mode: Skipping etcd/peer certificate generation
[certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation
[certs] External etcd mode: Skipping apiserver-etcd-client certificate generation
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

        Unfortunately, an error has occurred:
                timed out waiting for the condition

        This error is likely caused by:
                - The kubelet is not running
                - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

        If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
                - 'systemctl status kubelet'
                - 'journalctl -xeu kubelet'

        Additionally, a control plane component may have crashed or exited when started by the container runtime.
        To troubleshoot, list all containers using your preferred container runtimes CLI.

        Here is one example how you may list all Kubernetes containers running in docker:
                - 'docker ps -a | grep kube | grep -v pause'
                Once you have found the failing container, you can inspect its logs with:
                - 'docker logs CONTAINERID'

error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
```

seedscoder
The error output already shows the troubleshooting commands quite clearly; you can start from the commands suggested in the error message to narrow the problem down.
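For reference, a hedged set of checks along the lines the error message suggests (assuming Docker is the container runtime on node201; none of this output appears in the original post):

```bash
# Is the kubelet running, and if not, why did it stop?
systemctl status kubelet
journalctl -xeu kubelet --no-pager | tail -n 100

# Did a control-plane container start and then crash?
docker ps -a | grep kube | grep -v pause
# docker logs <container-id>    # <container-id> is a placeholder for a failing container found above

# The preflight warning above notes that socat is missing; it can be installed with:
sudo apt-get install -y socat conntrack

# A cgroup-driver mismatch between Docker and the kubelet is another common cause of
# "the kubelet is unhealthy", so it is worth ruling out:
docker info 2>/dev/null | grep -i cgroup
```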