Error message

WARN[03:26:30 EDT] Task failed ...                              
WARN[03:26:30 EDT] error: Failed to upload kubeadm certs: Failed to exec command: sudo -E /bin/sh -c "/usr/local/bin/kubeadm init phase upload-certs --upload-certs" 
I0729 03:25:07.391990    9186 version.go:252] remote version is much newer: v1.21.3; falling back to: stable-1.18
W0729 03:25:10.262903    9186 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
error execution phase upload-certs: error uploading certs: error creating token: timed out waiting for the condition
To see the stack trace of this error execute with --v=5 or higher: Process exited with status 1 
Error: Failed to get cluster status: Failed to upload kubeadm certs: Failed to exec command: sudo -E /bin/sh -c "/usr/local/bin/kubeadm init phase upload-certs --upload-certs" 
I0729 03:25:07.391990    9186 version.go:252] remote version is much newer: v1.21.3; falling back to: stable-1.18
W0729 03:25:10.262903    9186 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
error execution phase upload-certs: error uploading certs: error creating token: timed out waiting for the condition
To see the stack trace of this error execute with --v=5 or higher: Process exited with status 1
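The upload-certs phase writes the kubeadm-certs Secret and a bootstrap token through the API server behind the controlPlaneEndpoint, so "timed out waiting for the condition" here usually means the load balancer address in the config below (10.1.6.35:6443) is unreachable or is not forwarding to a healthy kube-apiserver. A quick check from master1 (a hedged sketch; the address and domain come from the config below):

nc -zv 10.1.6.35 6443                               # is the load-balancer port open at all?
curl -k https://lb.kubesphere.local:6443/healthz    # does it reach a healthy kube-apiserver? (expects "ok")
kubectl -n kube-system get secret kubeadm-certs     # was the Secret ever created?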

Configuration file

apiVersion: kubekey.kubesphere.io/v1alpha1
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  # You should complete the SSH information of the hosts
  - {name: master1, address: 10.1.6.110, internalAddress: 10.1.6.110, username: root, password: xxx}
  - {name: master2, address: 10.1.6.115, internalAddress: 10.1.6.115, username: root, password: xxx}
  - {name: master3, address: 10.1.6.116, internalAddress: 10.1.6.116, username: root, password: xxx}
  - {name: node1, address: 10.1.6.111, internalAddress: 10.1.6.111, username: root, password: xxx}
  - {name: node2, address: 10.1.6.112, internalAddress: 10.1.6.112, username: root, password: xxx}
  - {name: node3, address: 10.1.6.113, internalAddress: 10.1.6.113, username: root, password: xxx}
  roleGroups:
    etcd:
    - master1
    - master2
    - master3
    master:
    - master1
    - master2
    - master3
    worker:
    - node1
    - node2
    - node3
  controlPlaneEndpoint:
    # If a load balancer is used, 'address' should be set to the load balancer's IP.
    domain: lb.kubesphere.local
    address: 10.1.6.35
    port: 6443
  kubernetes:
    version: v1.18.5
    clusterName: cluster.local
    proxyMode: ipvs
    masqueradeAll: false
    maxPods: 110
    nodeCidrMaskSize: 24
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
  registry:
    privateRegistry: ""
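With controlPlaneEndpoint.address set to 10.1.6.35, something must actually be listening on that address and forwarding TCP 6443 to the three masters; KubeKey does not create this external load balancer for you. A minimal HAProxy sketch for the machine at 10.1.6.35 (backend addresses are taken from the hosts list above; section names and timeouts are illustrative):

defaults
    mode tcp
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend kube-apiserver-frontend
    bind *:6443
    default_backend kube-apiserver-backend

backend kube-apiserver-backend
    balance roundrobin
    option tcp-check
    server master1 10.1.6.110:6443 check
    server master2 10.1.6.115:6443 check
    server master3 10.1.6.116:6443 check

If the load balancer is healthy, the nc/curl checks above should succeed from every node before re-running kk.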

No one answered this? I would also like to know how to replace a master node with KK.
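For what it's worth, the usual approach is to remove the old control-plane node and then join a replacement, assuming a KubeKey version whose kk delete node / kk add nodes subcommands handle control-plane (and etcd) members; check the docs for your kk version first. A rough sketch (the file name and node name are placeholders):

./kk delete node master2 -f config-sample.yaml   # remove the failed master from the cluster
# Edit config-sample.yaml: put the replacement machine into hosts and into the master/etcd roleGroups.
./kk add nodes -f config-sample.yaml             # join any nodes in the config that are not yet in the cluster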