Here is the log:

10:20:32 CST [DeleteNodeConfirmModule] Display confirmation form
Are you sure to delete this node? [yes/no]: yes
10:20:33 CST success: [LocalHost]
10:20:33 CST [CompareConfigAndClusterInfoModule] Find information about nodes that are expected to be deleted
10:20:33 CST stdout: [node2]
NAME
node1
node2
node3
node4
node5
10:20:33 CST message: [node2]
1. check the node name in the config-sample.yaml
2. check the node name in the Kubernetes cluster
3. check the node name is the first master and etcd node name
10:20:33 CST retry: [node2]
10:20:39 CST stdout: [node2]
NAME
node1
node2
node3
node4
node5
10:20:39 CST message: [node2]
1. check the node name in the config-sample.yaml
2. check the node name in the Kubernetes cluster
3. check the node name is the first master and etcd node name
10:20:39 CST retry: [node2]
10:20:44 CST stdout: [node2]
NAME
node1
node2
node3
node4
node5
10:20:44 CST message: [node2]
1. check the node name in the config-sample.yaml
2. check the node name in the Kubernetes cluster
3. check the node name is the first master and etcd node name
10:20:44 CST failed: [node2]
10:20:44 CST skipped: [node3]
10:20:44 CST skipped: [node4]
10:20:44 CST skipped: [node5]
error: Pipeline[DeleteNodePipeline] execute failed: Module[CompareConfigAndClusterInfoModule] exec failed:
failed: [node2] [FindNode] exec failed after 3 retries: 1. check the node name in the config-sample.yaml
2. check the node name in the Kubernetes cluster
3. check the node name is the first master and etcd node name
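
For reference, the delete was run with KubeKey's standard delete-node command, roughly as follows (the exact node name and config path here are assumptions, since the invocation itself isn't captured in the log):

./kk delete node node2 -f config-sample.yaml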

Here is the config-sample.yaml:

apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: node1, address: 10.220.0.123, internalAddress: 10.220.0.123, user: root, password: "9Jt585&2Bi"}
  - {name: node2, address: 10.220.0.9, internalAddress: 10.220.0.9, user: root, password: "9Jt585&2Bi"}
  - {name: node3, address: 10.220.0.80, internalAddress: 10.220.0.80, user: root, password: "9Jt585&2Bi"}
  - {name: node4, address: 10.220.0.65, internalAddress: 10.220.0.65, user: root, password: "9Jt585&2Bi"}
  - {name: node5, address: 10.220.0.87, internalAddress: 10.220.0.87, user: root, password: "9Jt585&2Bi"}
  roleGroups:
    etcd:
    - node2
    - node3
    - node4
    - node5
    control-plane:
    - node2
    - node3
    - node4
    - node5
    worker:
    - node1
    - node2
    - node3
    - node4
    - node5
  controlPlaneEndpoint:
    # Internal loadbalancer for apiservers
    internalLoadbalancer: haproxy
    domain: lb.kubesphere.local
    address: ""
    port: 6443
  kubernetes:
    version: v1.22.17
    clusterName: cluster.local
    autoRenewCerts: true
    containerManager: docker
  etcd:
    type: kubekey
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
    # multus support. https://github.com/k8snetworkplumbingwg/multus-cni
    multusCNI:
      enabled: false
  registry:
    privateRegistry: ""
    namespaceOverride: ""
    registryMirrors: []
    insecureRegistries: []
  addons: []

Nothing in it looks wrong, yet running the delete still fails with the node error. Why is that?

[root@node2 ~]# kubectl get node
NAME    STATUS                     ROLES                         AGE   VERSION
node1   Ready                      worker                        39h   v1.22.17
node2   Ready,SchedulingDisabled   control-plane,master,worker   39h   v1.22.17
node3   Ready                      control-plane,master,worker   39h   v1.22.17
node4   Ready                      control-plane,master,worker   39h   v1.22.17
node5   Ready                      control-plane,master,worker   22h   v1.22.17

I followed the documentation exactly, but it errors out at node2.

I also tested this, and it may have nothing to do with node2 specifically: if I move node3 to the top instead, it reports the same error.
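
For reference, the three checks suggested by the error message can also be run by hand; a minimal sketch, assuming the config file sits in the current directory on node2 and GNU grep is available:

# 1. node names declared in the hosts section of config-sample.yaml
grep -oP '\{name:\s*\K[^,]+' config-sample.yaml

# 2. node names actually registered in the Kubernetes cluster
kubectl get nodes -o custom-columns=NAME:.metadata.name --no-headers

# 3. first master / etcd node, i.e. the first name listed under roleGroups in config-sample.yaml
grep -A5 'roleGroups:' config-sample.yaml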