########## Architecture ##########
aliyun slb ------- m1
          |
          |------- m2
          |
          |------- m3
########## config-sample.yaml ##########

apiVersion: kubekey.kubesphere.io/v1alpha1
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: ykj-prod-ks-master1, address: 172.16.3.221, internalAddress: 172.16.3.221, user: root, password: 777}
  - {name: ykj-prod-ks-master2, address: 172.16.6.133, internalAddress: 172.16.6.133, user: root, password: 777}
  - {name: ykj-prod-ks-master3, address: 172.16.1.3, internalAddress: 172.16.1.3, user: root, password: 777}
  - {name: ykj-prod-ks-node8, address: 172.16.1.14, internalAddress: 172.16.1.14, user: root, password: 777}
  roleGroups:
    etcd:
    - ykj-prod-ks-master1
    - ykj-prod-ks-master2
    - ykj-prod-ks-master3
    master: 
    - ykj-prod-ks-master1
    - ykj-prod-ks-master2
    - ykj-prod-ks-master3
    worker:
    - ykj-prod-ks-node8
  controlPlaneEndpoint:
    domain: lb.kubesphere.local
    address: "172.16.1.11"
    port: "6443"
  kubernetes:
    version: v1.18.6
    imageRepo: kubesphere
    clusterName: cluster.local
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
  registry:
    registryMirrors: []
    insecureRegistries: []
  addons: []


---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.0.0
spec:
  local_registry: ""
  persistence:
    storageClass: ""
  authentication:
    jwtSecret: ""
  etcd:
    monitoring: true
    endpointIps: 172.16.3.221,172.16.6.133,172.16.1.3
    port: 2379
    tlsEnable: true
  common:
    es:
      elasticsearchDataVolumeSize: 20Gi
      elasticsearchMasterVolumeSize: 4Gi
      elkPrefix: logstash
      logMaxAge: 7
    mysqlVolumeSize: 20Gi
    minioVolumeSize: 20Gi
    etcdVolumeSize: 20Gi
    openldapVolumeSize: 2Gi
    redisVolumSize: 2Gi
  console:
    enableMultiLogin: false  # enable/disable multi login
    port: 30880
  alerting:
    enabled: true
  auditing:
    enabled: true
  devops:
    enabled: false
    jenkinsMemoryLim: 2Gi
    jenkinsMemoryReq: 1500Mi
    jenkinsVolumeSize: 8Gi
    jenkinsJavaOpts_Xms: 512m
    jenkinsJavaOpts_Xmx: 512m
    jenkinsJavaOpts_MaxRAM: 2g
  events:
    enabled: true
    ruler:
      enabled: true
      replicas: 2
  logging:
    enabled: true
    logsidecarReplicas: 2
  metrics_server:
    enabled: true
  monitoring:
    prometheusMemoryRequest: 400Mi
    prometheusVolumeSize: 20Gi
  multicluster:
    clusterRole: none  # host | member | none
  networkpolicy:
    enabled: false
  notification:
    enabled: true
  openpitrix:
    enabled: false
  servicemesh:
    enabled: false

########## Error messages ##########

[ykj-prod-ks-master1 172.16.3.221] MSG:
[reset] Reading configuration from the cluster...
[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
W1228 17:19:08.637496    9674 reset.go:99] [reset] Unable to fetch the kubeadm-config ConfigMap from cluster: failed to get config map: an error on the server ("") has prevented the request from succeeding (get configmaps kubeadm-config)
[preflight] Running pre-flight checks
W1228 17:19:08.637621    9674 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
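
For reference, the manual cleanup that this reset output asks for would look roughly like the following (a sketch assembled from the messages above; run with care on each affected node):

    rm -rf /etc/cni/net.d                                                        # CNI configuration, not cleaned by reset
    iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X    # iptables rules
    ipvsadm --clear                                                              # IPVS tables, if IPVS was in use
    rm -f $HOME/.kube/config                                                     # stale kubeconfig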

    xiaosage Did you run the wrong command? This output looks like a cluster-deletion operation.

      Jeff
      ./kk create cluster -f config-sample.yaml

      This is what I ran.


        Jeff
        ########## Ran it again; log below ##########

        INFO[19:37:25 CST] Installing kube binaries                     
        Push /ks3.0/kubekey/v1.18.6/amd64/kubeadm to 172.16.1.14:/tmp/kubekey/kubeadm   Done
        Push /ks3.0/kubekey/v1.18.6/amd64/kubeadm to 172.16.3.221:/tmp/kubekey/kubeadm   Done
        Push /ks3.0/kubekey/v1.18.6/amd64/kubeadm to 172.16.1.3:/tmp/kubekey/kubeadm   Done
        Push /ks3.0/kubekey/v1.18.6/amd64/kubeadm to 172.16.6.133:/tmp/kubekey/kubeadm   Done
        Push /ks3.0/kubekey/v1.18.6/amd64/kubelet to 172.16.3.221:/tmp/kubekey/kubelet   Done
        Push /ks3.0/kubekey/v1.18.6/amd64/kubectl to 172.16.3.221:/tmp/kubekey/kubectl   Done
        Push /ks3.0/kubekey/v1.18.6/amd64/kubelet to 172.16.1.3:/tmp/kubekey/kubelet   Done
        Push /ks3.0/kubekey/v1.18.6/amd64/helm to 172.16.3.221:/tmp/kubekey/helm   Done
        Push /ks3.0/kubekey/v1.18.6/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 172.16.3.221:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz   Done
        Push /ks3.0/kubekey/v1.18.6/amd64/kubelet to 172.16.1.14:/tmp/kubekey/kubelet   Done
        Push /ks3.0/kubekey/v1.18.6/amd64/kubectl to 172.16.1.14:/tmp/kubekey/kubectl   Done
        Push /ks3.0/kubekey/v1.18.6/amd64/kubectl to 172.16.1.3:/tmp/kubekey/kubectl   Done
        Push /ks3.0/kubekey/v1.18.6/amd64/helm to 172.16.1.14:/tmp/kubekey/helm   Done
        Push /ks3.0/kubekey/v1.18.6/amd64/helm to 172.16.1.3:/tmp/kubekey/helm   Done
        Push /ks3.0/kubekey/v1.18.6/amd64/kubelet to 172.16.6.133:/tmp/kubekey/kubelet   Done
        Push /ks3.0/kubekey/v1.18.6/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 172.16.1.3:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz   Done
        Push /ks3.0/kubekey/v1.18.6/amd64/kubectl to 172.16.6.133:/tmp/kubekey/kubectl   Done
        Push /ks3.0/kubekey/v1.18.6/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 172.16.1.14:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz   Done
        Push /ks3.0/kubekey/v1.18.6/amd64/helm to 172.16.6.133:/tmp/kubekey/helm   Done
        Push /ks3.0/kubekey/v1.18.6/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 172.16.6.133:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz   Done
        INFO[19:37:32 CST] Initializing kubernetes cluster              
        [ykj-prod-ks-master1 172.16.3.221] MSG:
        [reset] Reading configuration from the cluster...
        [reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
        W1228 19:42:38.132491   21761 reset.go:99] [reset] Unable to fetch the kubeadm-config ConfigMap from cluster: failed to get config map: an error on the server ("") has prevented the request from succeeding (get configmaps kubeadm-config)
        [preflight] Running pre-flight checks
        W1228 19:42:38.132611   21761 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
        [reset] No etcd config found. Assuming external etcd
        [reset] Please, manually reset etcd to prevent further issues
        [reset] Stopping the kubelet service
        [reset] Unmounting mounted directories in "/var/lib/kubelet"
        [reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
        [reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
        [reset] Deleting contents of stateful directories: [/var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]
        
        The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
        
        The reset process does not reset or clean up iptables rules or IPVS tables.
        If you wish to reset iptables, you must do so manually by using the "iptables" command.
        
        If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
        to reset your system's IPVS tables.
        
        The reset process does not clean your kubeconfig files and you must remove them manually.
        Please, check the contents of the $HOME/.kube/config file.
        [ykj-prod-ks-master1 172.16.3.221] MSG:
        [reset] Reading configuration from the cluster...
        [reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
        W1228 19:47:44.685537   24217 reset.go:99] [reset] Unable to fetch the kubeadm-config ConfigMap from cluster: failed to get config map: an error on the server ("") has prevented the request from succeeding (get configmaps kubeadm-config)
        [preflight] Running pre-flight checks
        W1228 19:47:44.685651   24217 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
        [reset] No etcd config found. Assuming external etcd
        [reset] Please, manually reset etcd to prevent further issues
        [reset] Stopping the kubelet service
        [reset] Unmounting mounted directories in "/var/lib/kubelet"
        [reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
        [reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
        [reset] Deleting contents of stateful directories: [/var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]
        
        The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
        
        The reset process does not reset or clean up iptables rules or IPVS tables.
        If you wish to reset iptables, you must do so manually by using the "iptables" command.
        
        If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
        to reset your system's IPVS tables.
        
        The reset process does not clean your kubeconfig files and you must remove them manually.
        Please, check the contents of the $HOME/.kube/config file.
        
        
        
        
        
        ERRO[19:52:14 CST] Failed to init kubernetes cluster: Failed to exec command: sudo -E /bin/sh -c "/usr/local/bin/kubeadm init --config=/etc/kubernetes/kubeadm-config.yaml" 
        W1228 19:47:45.846358   24603 utils.go:26] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
        W1228 19:47:45.846622   24603 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
        [init] Using Kubernetes version: v1.18.6
        [preflight] Running pre-flight checks
        [preflight] Pulling images required for setting up a Kubernetes cluster
        [preflight] This might take a minute or two, depending on the speed of your internet connection
        [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
        [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
        [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
        [kubelet-start] Starting the kubelet
        [certs] Using certificateDir folder "/etc/kubernetes/pki"
        [certs] Generating "ca" certificate and key
        [certs] Generating "apiserver" certificate and key
        [certs] apiserver serving cert is signed for DNS names [ykj-prod-ks-master1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.kubesphere.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost lb.kubesphere.local ykj-prod-ks-master1 ykj-prod-ks-master1.cluster.local ykj-prod-ks-master2 ykj-prod-ks-master2.cluster.local ykj-prod-ks-master3 ykj-prod-ks-master3.cluster.local ykj-prod-ks-node8 ykj-prod-ks-node8.cluster.local] and IPs [10.233.0.1 172.16.3.221 127.0.0.1 172.16.1.11 172.16.3.221 172.16.6.133 172.16.1.3 172.16.1.14 10.233.0.1]
        [certs] Generating "apiserver-kubelet-client" certificate and key
        [certs] Generating "front-proxy-ca" certificate and key
        [certs] Generating "front-proxy-client" certificate and key
        [certs] External etcd mode: Skipping etcd/ca certificate authority generation
        [certs] External etcd mode: Skipping etcd/server certificate generation
        [certs] External etcd mode: Skipping etcd/peer certificate generation
        [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation
        [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation
        [certs] Generating "sa" key and public key
        [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
        [kubeconfig] Writing "admin.conf" kubeconfig file
        [kubeconfig] Writing "kubelet.conf" kubeconfig file
        [kubeconfig] Writing "controller-manager.conf" kubeconfig file
        [kubeconfig] Writing "scheduler.conf" kubeconfig file
        [control-plane] Using manifest folder "/etc/kubernetes/manifests"
        [control-plane] Creating static Pod manifest for "kube-apiserver"
        W1228 19:47:49.104275   24603 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
        [control-plane] Creating static Pod manifest for "kube-controller-manager"
        W1228 19:47:49.111033   24603 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
        [control-plane] Creating static Pod manifest for "kube-scheduler"
        W1228 19:47:49.112761   24603 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
        [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
        [kubelet-check] Initial timeout of 40s passed.
        
        	Unfortunately, an error has occurred:
        		timed out waiting for the condition
        
        	This error is likely caused by:
        		- The kubelet is not running
        		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
        
        	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
        		- 'systemctl status kubelet'
        		- 'journalctl -xeu kubelet'
        
        	Additionally, a control plane component may have crashed or exited when started by the container runtime.
        	To troubleshoot, list all containers using your preferred container runtimes CLI.
        
        	Here is one example how you may list all Kubernetes containers running in docker:
        		- 'docker ps -a | grep kube | grep -v pause'
        		Once you have found the failing container, you can inspect its logs with:
        		- 'docker logs CONTAINERID'
        
        error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
        To see the stack trace of this error execute with --v=5 or higher: Process exited with status 1  node=172.16.3.221
        WARN[19:52:14 CST] Task failed ...                              
        WARN[19:52:14 CST] error: interrupted by error                  
        Error: Failed to init kubernetes cluster: interrupted by error
        Usage:
          kk create cluster [flags]
        
        Flags:
          -f, --filename string          Path to a configuration file
          -h, --help                     help for cluster
              --skip-pull-images         Skip pre pull images
              --with-kubernetes string   Specify a supported version of kubernetes
              --with-kubesphere          Deploy a specific version of kubesphere (default v3.0.0)
          -y, --yes                      Skip pre-check of the installation
        
        Global Flags:
              --debug   Print detailed information (default true)
        
        Failed to init kubernetes cluster: interrupted by error

        ########## /var/log/messages log ##########

        Dec 28 20:08:56 ykj-prod-ks-master1 kubelet: E1228 20:08:56.139912   24817 kubelet.go:2268] node "ykj-prod-ks-master1" not found
        Dec 28 20:08:56 ykj-prod-ks-master1 kubelet: E1228 20:08:56.240039   24817 kubelet.go:2268] node "ykj-prod-ks-master1" not found
        Dec 28 20:08:56 ykj-prod-ks-master1 kubelet: E1228 20:08:56.340173   24817 kubelet.go:2268] node "ykj-prod-ks-master1" not found
        Dec 28 20:08:56 ykj-prod-ks-master1 kubelet: E1228 20:08:56.440307   24817 kubelet.go:2268] node "ykj-prod-ks-master1" not found
        Dec 28 20:08:56 ykj-prod-ks-master1 kubelet: E1228 20:08:56.540437   24817 kubelet.go:2268] node "ykj-prod-ks-master1" not found
        Dec 28 20:08:56 ykj-prod-ks-master1 kubelet: E1228 20:08:56.640570   24817 kubelet.go:2268] node "ykj-prod-ks-master1" not found
        Dec 28 20:08:56 ykj-prod-ks-master1 kubelet: E1228 20:08:56.740682   24817 kubelet.go:2268] node "ykj-prod-ks-master1" not found
        Dec 28 20:08:56 ykj-prod-ks-master1 kubelet: E1228 20:08:56.840813   24817 kubelet.go:2268] node "ykj-prod-ks-master1" not found
        Dec 28 20:08:56 ykj-prod-ks-master1 kubelet: E1228 20:08:56.940937   24817 kubelet.go:2268] node "ykj-prod-ks-master1" not found
        Dec 28 20:08:57 ykj-prod-ks-master1 kubelet: E1228 20:08:57.041069   24817 kubelet.go:2268] node "ykj-prod-ks-master1" not found
        Dec 28 20:08:57 ykj-prod-ks-master1 kubelet: E1228 20:08:57.141202   24817 kubelet.go:2268] node "ykj-prod-ks-master1" not found
        Dec 28 20:08:57 ykj-prod-ks-master1 kubelet: E1228 20:08:57.241315   24817 kubelet.go:2268] node "ykj-prod-ks-master1" not found
        Dec 28 20:08:57 ykj-prod-ks-master1 kubelet: E1228 20:08:57.341469   24817 kubelet.go:2268] node "ykj-prod-ks-master1" not found
        Dec 28 20:08:57 ykj-prod-ks-master1 kubelet: E1228 20:08:57.342530   24817 kubelet.go:2188] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
        Dec 28 20:08:57 ykj-prod-ks-master1 kubelet: E1228 20:08:57.441757   24817 kubelet.go:2268] node "ykj-prod-ks-master1" not found
        Dec 28 20:08:57 ykj-prod-ks-master1 kubelet: E1228 20:08:57.541851   24817 kubelet.go:2268] node "ykj-prod-ks-master1" not found
        Dec 28 20:08:57 ykj-prod-ks-master1 kubelet: W1228 20:08:57.621064   24817 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
        Dec 28 20:08:57 ykj-prod-ks-master1 kubelet: E1228 20:08:57.641983   24817 kubelet.go:2268] node "ykj-prod-ks-master1" not found
        Dec 28 20:08:57 ykj-prod-ks-master1 kubelet: E1228 20:08:57.742087   24817 kubelet.go:2268] node "ykj-prod-ks-master1" not found
        Dec 28 20:08:57 ykj-prod-ks-master1 kubelet: E1228 20:08:57.842211   24817 kubelet.go:2268] node "ykj-prod-ks-master1" not found
        Dec 28 20:08:57 ykj-prod-ks-master1 kubelet: E1228 20:08:57.942342   24817 kubelet.go:2268] node "ykj-prod-ks-master1" not found
        Dec 28 20:08:58 ykj-prod-ks-master1 kubelet: E1228 20:08:58.042477   24817 kubelet.go:2268] node "ykj-prod-ks-master1" not found
        Dec 28 20:08:58 ykj-prod-ks-master1 kubelet: E1228 20:08:58.142634   24817 kubelet.go:2268] node "ykj-prod-ks-master1" not found
        Dec 28 20:08:58 ykj-prod-ks-master1 kubelet: E1228 20:08:58.242776   24817 kubelet.go:2268] node "ykj-prod-ks-master1" not found
        Dec 28 20:08:58 ykj-prod-ks-master1 kubelet: I1228 20:08:58.272240   24817 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
        Dec 28 20:08:58 ykj-prod-ks-master1 kubelet: I1228 20:08:58.311019   24817 kubelet_node_status.go:70] Attempting to register node ykj-prod-ks-master1
        Dec 28 20:08:58 ykj-prod-ks-master1 kubelet: E1228 20:08:58.312503   24817 kubelet_node_status.go:92] Unable to register node "ykj-prod-ks-master1" with API server: Post https://lb.kubesphere.local:6443/api/v1/nodes: EOF
        Dec 28 20:08:58 ykj-prod-ks-master1 kubelet: E1228 20:08:58.342917   24817 kubelet.go:2268] node "ykj-prod-ks-master1" not found
        Dec 28 20:08:58 ykj-prod-ks-master1 kubelet: E1228 20:08:58.443044   24817 kubelet.go:2268] node "ykj-prod-ks-master1" not found
        Dec 28 20:08:58 ykj-prod-ks-master1 kubelet: E1228 20:08:58.543167   24817 kubelet.go:2268] node "ykj-prod-ks-master1" not found
        Dec 28 20:08:58 ykj-prod-ks-master1 kubelet: E1228 20:08:58.643284   24817 kubelet.go:2268] node "ykj-prod-ks-master1" not found
        Dec 28 20:08:58 ykj-prod-ks-master1 kubelet: E1228 20:08:58.743398   24817 kubelet.go:2268] node "ykj-prod-ks-master1" not found
        Dec 28 20:08:58 ykj-prod-ks-master1 kubelet: E1228 20:08:58.843439   24817 event.go:269] Unable to write event: 'Patch https://lb.kubesphere.local:6443/api/v1/namespaces/default/events/ykj-prod-ks-master1.1654deeebb314644: EOF' (may retry after sleeping)
        Dec 28 20:08:58 ykj-prod-ks-master1 kubelet: E1228 20:08:58.843503   24817 kubelet.go:2268] node "ykj-prod-ks-master1" not found
        Dec 28 20:08:58 ykj-prod-ks-master1 kubelet: E1228 20:08:58.943615   24817 kubelet.go:2268] node "ykj-prod-ks-master1" not found
        Dec 28 20:08:59 ykj-prod-ks-master1 kubelet: E1228 20:08:59.043741   24817 kubelet.go:2268] node "ykj-prod-ks-master1" not found
        Dec 28 20:08:59 ykj-prod-ks-master1 kubelet: E1228 20:08:59.143872   24817 kubelet.go:2268] node "ykj-prod-ks-master1" not found
        Dec 28 20:08:59 ykj-prod-ks-master1 kubelet: E1228 20:08:59.243998   24817 kubelet.go:2268] node "ykj-prod-ks-master1" not found
        Dec 28 20:08:59 ykj-prod-ks-master1 kubelet: E1228 20:08:59.344124   24817 kubelet.go:2268] node "ykj-prod-ks-master1" not found
        Dec 28 20:08:59 ykj-prod-ks-master1 kubelet: E1228 20:08:59.444244   24817 kubelet.go:2268] node "ykj-prod-ks-master1" not found
        Dec 28 20:08:59 ykj-prod-ks-master1 kubelet: E1228 20:08:59.544377   24817 kubelet.go:2268] node "ykj-prod-ks-master1" not found
        Dec 28 20:08:59 ykj-prod-ks-master1 kubelet: E1228 20:08:59.644495   24817 kubelet.go:2268] node "ykj-prod-ks-master1" not found
        Dec 28 20:08:59 ykj-prod-ks-master1 kubelet: E1228 20:08:59.744622   24817 kubelet.go:2268] node "ykj-prod-ks-master1" not found
        Dec 28 20:08:59 ykj-prod-ks-master1 kubelet: E1228 20:08:59.844739   24817 kubelet.go:2268] node "ykj-prod-ks-master1" not found
        Dec 28 20:08:59 ykj-prod-ks-master1 kubelet: E1228 20:08:59.944867   24817 kubelet.go:2268] node "ykj-prod-ks-master1" not found
        Dec 28 20:09:00 ykj-prod-ks-master1 kubelet: E1228 20:09:00.044999   24817 kubelet.go:2268] node "ykj-prod-ks-master1" not found
        Dec 28 20:09:00 ykj-prod-ks-master1 kubelet: E1228 20:09:00.145141   24817 kubelet.go:2268] node "ykj-prod-ks-master1" not found
        Dec 28 20:09:00 ykj-prod-ks-master1 kubelet: E1228 20:09:00.245276   24817 kubelet.go:2268] node "ykj-prod-ks-master1" not found
        Dec 28 20:09:00 ykj-prod-ks-master1 kubelet: E1228 20:09:00.345401   24817 kubelet.go:2268] node "ykj-prod-ks-master1" not found
        Dec 28 20:09:00 ykj-prod-ks-master1 kubelet: E1228 20:09:00.445528   24817 kubelet.go:2268] node "ykj-prod-ks-master1" not found
        Dec 28 20:09:00 ykj-prod-ks-master1 kubelet: E1228 20:09:00.545659   24817 kubelet.go:2268] node "ykj-prod-ks-master1" not found
        Dec 28 20:09:00 ykj-prod-ks-master1 kubelet: E1228 20:09:00.645782   24817 kubelet.go:2268] node "ykj-prod-ks-master1" not found
        Dec 28 20:09:00 ykj-prod-ks-master1 kubelet: E1228 20:09:00.745937   24817 kubelet.go:2268] node "ykj-prod-ks-master1" not found
        Dec 28 20:09:00 ykj-prod-ks-master1 kubelet: E1228 20:09:00.846099   24817 kubelet.go:2268] node "ykj-prod-ks-master1" not found
        Dec 28 20:09:00 ykj-prod-ks-master1 kubelet: E1228 20:09:00.946243   24817 kubelet.go:2268] node "ykj-prod-ks-master1" not found
        Dec 28 20:09:01 ykj-prod-ks-master1 kubelet: E1228 20:09:01.046394   24817 kubelet.go:2268] node "ykj-prod-ks-master1" not found
        Dec 28 20:09:01 ykj-prod-ks-master1 kubelet: E1228 20:09:01.146560   24817 kubelet.go:2268] node "ykj-prod-ks-master1" not found
        Dec 28 20:09:01 ykj-prod-ks-master1 kubelet: E1228 20:09:01.246682   24817 kubelet.go:2268] node "ykj-prod-ks-master1" not found
        Dec 28 20:09:01 ykj-prod-ks-master1 kubelet: E1228 20:09:01.346832   24817 kubelet.go:2268] node "ykj-prod-ks-master1" not found
        Dec 28 20:09:01 ykj-prod-ks-master1 kubelet: E1228 20:09:01.446971   24817 kubelet.go:2268] node "ykj-prod-ks-master1" not found
        Dec 28 20:09:01 ykj-prod-ks-master1 kubelet: E1228 20:09:01.547106   24817 kubelet.go:2268] node "ykj-prod-ks-master1" not found
        Dec 28 20:09:01 ykj-prod-ks-master1 kubelet: E1228 20:09:01.647231   24817 kubelet.go:2268] node "ykj-prod-ks-master1" not found
        Dec 28 20:09:01 ykj-prod-ks-master1 kubelet: E1228 20:09:01.747371   24817 kubelet.go:2268] node "ykj-prod-ks-master1" not found
        Dec 28 20:09:01 ykj-prod-ks-master1 kubelet: E1228 20:09:01.847497   24817 kubelet.go:2268] node "ykj-prod-ks-master1" not found
        Dec 28 20:09:01 ykj-prod-ks-master1 kubelet: E1228 20:09:01.947624   24817 kubelet.go:2268] node "ykj-prod-ks-master1" not found
        Dec 28 20:09:02 ykj-prod-ks-master1 kubelet: E1228 20:09:02.047753   24817 kubelet.go:2268] node "ykj-prod-ks-master1" not found
        Dec 28 20:09:02 ykj-prod-ks-master1 kubelet: E1228 20:09:02.147894   24817 kubelet.go:2268] node "ykj-prod-ks-master1" not found
        Dec 28 20:09:02 ykj-prod-ks-master1 kubelet: E1228 20:09:02.248015   24817 kubelet.go:2268] node "ykj-prod-ks-master1" not found
        Dec 28 20:09:02 ykj-prod-ks-master1 kubelet: E1228 20:09:02.348171   24817 kubelet.go:2268] node "ykj-prod-ks-master1" not found
        Dec 28 20:09:02 ykj-prod-ks-master1 kubelet: E1228 20:09:02.357358   24817 kubelet.go:2188] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
        Dec 28 20:09:02 ykj-prod-ks-master1 kubelet: E1228 20:09:02.448296   24817 kubelet.go:2268] node "ykj-prod-ks-master1" not found
        Dec 28 20:09:02 ykj-prod-ks-master1 kubelet: E1228 20:09:02.548429   24817 kubelet.go:2268] node "ykj-prod-ks-master1" not found
        Dec 28 20:09:02 ykj-prod-ks-master1 kubelet: W1228 20:09:02.621862   24817 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
        Dec 28 20:09:02 ykj-prod-ks-master1 kubelet: E1228 20:09:02.648559   24817 kubelet.go:2268] node "ykj-prod-ks-master1" not found
        Dec 28 20:09:02 ykj-prod-ks-master1 kubelet: E1228 20:09:02.748708   24817 kubelet.go:2268] node "ykj-prod-ks-master1" not found
        Dec 28 20:09:02 ykj-prod-ks-master1 kubelet: E1228 20:09:02.848851   24817 kubelet.go:2268] node "ykj-prod-ks-master1" not found
        Dec 28 20:09:02 ykj-prod-ks-master1 kubelet: E1228 20:09:02.948991   24817 kubelet.go:2268] node "ykj-prod-ks-master1" not found
        Dec 28 20:09:03 ykj-prod-ks-master1 kubelet: E1228 20:09:03.049130   24817 kubelet.go:2268] node "ykj-prod-ks-master1" not found
        Dec 28 20:09:03 ykj-prod-ks-master1 kubelet: E1228 20:09:03.149259   24817 kubelet.go:2268] node "ykj-prod-ks-master1" not found
        Dec 28 20:09:03 ykj-prod-ks-master1 kubelet: E1228 20:09:03.249412   24817 kubelet.go:2268] node "ykj-prod-ks-master1" not found
        Dec 28 20:09:03 ykj-prod-ks-master1 kubelet: E1228 20:09:03.349578   24817 kubelet.go:2268] node "ykj-prod-ks-master1" not found
        Dec 28 20:09:03 ykj-prod-ks-master1 kubelet: E1228 20:09:03.449725   24817 kubelet.go:2268] node "ykj-prod-ks-master1" not found
        Dec 28 20:09:03 ykj-prod-ks-master1 kubelet: E1228 20:09:03.549858   24817 kubelet.go:2268] node "ykj-prod-ks-master1" not found
        Dec 28 20:09:03 ykj-prod-ks-master1 kubelet: E1228 20:09:03.649984   24817 kubelet.go:2268] node "ykj-prod-ks-master1" not found
        Dec 28 20:09:03 ykj-prod-ks-master1 kubelet: E1228 20:09:03.750099   24817 kubelet.go:2268] node "ykj-prod-ks-master1" not found
        Dec 28 20:09:03 ykj-prod-ks-master1 kubelet: E1228 20:09:03.838395   24817 eviction_manager.go:260] eviction manager: failed to get summary stats: failed to get node info: node "ykj-prod-ks-master1" not found
        Dec 28 20:09:03 ykj-prod-ks-master1 kubelet: E1228 20:09:03.850213   24817 kubelet.go:2268] node "ykj-prod-ks-master1" not found
        Dec 28 20:09:03 ykj-prod-ks-master1 kubelet: E1228 20:09:03.950341   24817 kubelet.go:2268] node "ykj-prod-ks-master1" not found
        Dec 28 20:09:04 ykj-prod-ks-master1 kubelet: E1228 20:09:04.050480   24817 kubelet.go:2268] node "ykj-prod-ks-master1" not found
        Dec 28 20:09:04 ykj-prod-ks-master1 kubelet: E1228 20:09:04.150626   24817 kubelet.go:2268] node "ykj-prod-ks-master1" not found
        Dec 28 20:09:04 ykj-prod-ks-master1 kubelet: E1228 20:09:04.250762   24817 kubelet.go:2268] node "ykj-prod-ks-master1" not found
        Dec 28 20:09:04 ykj-prod-ks-master1 kubelet: E1228 20:09:04.350885   24817 kubelet.go:2268] node "ykj-prod-ks-master1" not found
        Dec 28 20:09:04 ykj-prod-ks-master1 kubelet: E1228 20:09:04.451013   24817 kubelet.go:2268] node "ykj-prod-ks-master1" not found
        Dec 28 20:09:04 ykj-prod-ks-master1 kubelet: E1228 20:09:04.551148   24817 kubelet.go:2268] node "ykj-prod-ks-master1" not found
        Dec 28 20:09:04 ykj-prod-ks-master1 kubelet: E1228 20:09:04.651294   24817 kubelet.go:2268] node "ykj-prod-ks-master1" not found
        Dec 28 20:09:04 ykj-prod-ks-master1 kubelet: E1228 20:09:04.751431   24817 kubelet.go:2268] node "ykj-prod-ks-master1" not found
        Dec 28 20:09:04 ykj-prod-ks-master1 kubelet: E1228 20:09:04.755341   24817 controller.go:136] failed to ensure node lease exists, will retry in 7s, error: an error on the server ("") has prevented the request from succeeding (get leases.coordination.k8s.io ykj-prod-ks-master1)
        Dec 28 20:09:04 ykj-prod-ks-master1 kubelet: E1228 20:09:04.805546   24817 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.RuntimeClass: an error on the server ("") has prevented the request from succeeding (get runtimeclasses.node.k8s.io)
        Dec 28 20:09:04 ykj-prod-ks-master1 kubelet: E1228 20:09:04.851546   24817 kubelet.go:2268] node "ykj-prod-ks-master1" not found
        Dec 28 20:09:04 ykj-prod-ks-master1 kubelet: E1228 20:09:04.951659   24817 kubelet.go:2268] node "ykj-prod-ks-master1" not found
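
        The repeated EOF on https://lb.kubesphere.local:6443 points at the control-plane endpoint rather than the kubelet itself. A minimal check of name resolution and reachability could look like this (a sketch; the hostname comes from controlPlaneEndpoint in the config above):

            getent hosts lb.kubesphere.local              # should resolve to 172.16.1.11
            curl -kv https://lb.kubesphere.local:6443/healthz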

        ########## Existing containers ##########

        [root@ykj-prod-ks-master1 ks3.0]# docker ps  -a
        CONTAINER ID   IMAGE                     COMMAND                  CREATED          STATUS          PORTS     NAMES
        711423a78f85   ffce5e64d915              "kube-controller-man…"   22 minutes ago   Up 22 minutes             k8s_kube-controller-manager_kube-controller-manager-ykj-prod-ks-master1_kube-system_c114c9491ca019816d8f5ed82b8e9a2b_0
        a585c0a4e852   0e0972b2b5d1              "kube-scheduler --au…"   22 minutes ago   Up 22 minutes             k8s_kube-scheduler_kube-scheduler-ykj-prod-ks-master1_kube-system_44178452dcde1a1b2c23f8efd20fdf0e_0
        0d1d43569976   56acd67ea15a              "kube-apiserver --ad…"   22 minutes ago   Up 22 minutes             k8s_kube-apiserver_kube-apiserver-ykj-prod-ks-master1_kube-system_255423ce6846706cac2e8c371607b684_0
        cd183250a68d   kubesphere/pause:3.2      "/pause"                 22 minutes ago   Up 22 minutes             k8s_POD_kube-apiserver-ykj-prod-ks-master1_kube-system_255423ce6846706cac2e8c371607b684_0
        9627d42e0f08   kubesphere/pause:3.2      "/pause"                 22 minutes ago   Up 22 minutes             k8s_POD_kube-scheduler-ykj-prod-ks-master1_kube-system_44178452dcde1a1b2c23f8efd20fdf0e_0
        ccbc82be798d   kubesphere/pause:3.2      "/pause"                 22 minutes ago   Up 22 minutes             k8s_POD_kube-controller-manager-ykj-prod-ks-master1_kube-system_c114c9491ca019816d8f5ed82b8e9a2b_0
        5808e0118e7c   kubesphere/etcd:v3.3.12   "/usr/local/bin/etcd"    33 minutes ago   Up 33 minutes             etcd1

          xiaosage

          Since the static pods have already started, the kubelet is working, and kube-apiserver also looks like it is running fine, so this is most likely an LB problem.
          You can compare the output of curl -k https://{masterip}:6443 and curl -k https://{lbip}:6443 to narrow it down.
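
          For example, with the addresses in this thread (a sketch; if the SLB forwards 6443 correctly, both commands should return the same apiserver response rather than an EOF or timeout):

              curl -k https://172.16.3.221:6443/version   # master1 directly
              curl -k https://172.16.1.11:6443/version    # through the aliyun slb address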

            Cauchy
            Would it be convenient for you to take a look remotely?
            I did some troubleshooting: only port 6443 on the current install node is up; port 6443 on the other nodes is unreachable. I retried, and still get the following error log:

            [ykj-prod-ks-master1 172.16.3.221] MSG:
            [reset] Reading configuration from the cluster...
            [reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
            W1229 10:39:01.694145 23117 reset.go:99] [reset] Unable to fetch the kubeadm-config ConfigMap from cluster: failed to get config map: Get https://lb.kubesphere.local:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config?timeout=10s: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
            [preflight] Running pre-flight checks
            W1229 10:39:01.694266 23117 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
            [reset] No etcd config found. Assuming external etcd
            [reset] Please, manually reset etcd to prevent further issues
            [reset] Stopping the kubelet service
            [reset] Unmounting mounted directories in "/var/lib/kubelet"
            [reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
            [reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
            [reset] Deleting contents of stateful directories: [/var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]

            The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

            The reset process does not reset or clean up iptables rules or IPVS tables.
            If you wish to reset iptables, you must do so manually by using the "iptables" command.

            If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
            to reset your system's IPVS tables.

            The reset process does not clean your kubeconfig files and you must remove them manually.
            Please, check the contents of the $HOME/.kube/config file.

              xiaosage
              Port 6443 being reachable on the current node means kube-apiserver has started. You also need to make sure that port 6443 on the LB address 172.16.1.11 is reachable; that depends on the LB forwarding policy, firewall, or security-group rules.

              If the LB is definitely forwarding to the current master node's 6443, then the problem lies with the LB itself. Tencent Cloud does not support an internal LB for this and requires an external LB; it is worth trying the same on Alibaba Cloud, or opening a ticket with Alibaba Cloud to ask.
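
              A quick way to verify this is a TCP probe of each apiserver address plus the LB address (a sketch using the IPs from this thread; assumes nc is installed):

                  for ip in 172.16.3.221 172.16.6.133 172.16.1.3 172.16.1.11; do
                      nc -vz -w 3 "$ip" 6443
                  done

              Note that many cloud layer-4 LBs do not let a backend instance reach itself through the LB address (no hairpinning), so the probe of 172.16.1.11 is best also run from a machine outside the backend group.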