When filing a deployment issue, please follow the template below; the more information you provide, the easier it is to get a timely answer. Issues that do not follow the template may be closed by the maintainers.
Keep the post clearly formatted and readable, and wrap code in markdown code blocks.
If you spend only a minute writing the issue, you cannot expect others to spend half an hour answering it.
OS information
Virtual machines, Kylin OS 10V3, 4C/8G
Kubernetes version
Tested: v1.26.15, v1.27.16, v1.28.15, v1.29.15, v1.30.12, v1.31.8 (the same failure reproduces on all of them; see the error section below)
Container runtime
Output of `crictl version`:
```
Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: v1.7.13
RuntimeApiVersion: v1
```
KubeKey version
v3.1.9, installed online.
Problem summary: KubeKey fails to deploy a dual-stack cluster; the run errors out at the node-join step.
config.yaml:
```yaml
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: cluster-local
spec:
  hosts:
  - name: kylin-master-1
    address: 10.135.38.101
    internalAddress: 10.135.38.101
    ip6: 2406:440:600::1:0:121
    user: root
    publickey: /root/.ssh/id_rsa
  - name: kylin-master-2
    address: 10.135.38.102
    internalAddress: 10.135.38.102
    ip6: 2406:440:600::1:0:122
    user: root
    publickey: /root/.ssh/id_rsa
  - name: kylin-master-3
    address: 10.135.38.103
    internalAddress: 10.135.38.103
    ip6: 2406:440:600::1:0:123
    user: root
    publickey: /root/.ssh/id_rsa
  - name: kylin-worker-1
    address: 10.135.38.104
    internalAddress: 10.135.38.104
    ip6: 2406:440:600::1:0:104
    user: root
    publickey: /root/.ssh/id_rsa
  - name: kylin-worker-2
    address: 10.135.38.105
    internalAddress: 10.135.38.105
    ip6: 2406:440:600::1:0:105
    user: root
    publickey: /root/.ssh/id_rsa
  roleGroups:
    etcd:
    - kylin-master-1
    - kylin-master-2
    - kylin-master-3
    master:
    - kylin-master-1
    - kylin-master-2
    - kylin-master-3
    worker:
    - kylin-worker-1
    - kylin-worker-2
  controlPlaneEndpoint:
    # Internal loadbalancer for apiservers
    internalLoadbalancer: haproxy
    externalDNS: false
    domain: lb.kubesphere.local
    address: ""
    port: 6443
  kubernetes:
    version: v1.26.15
    apiserverCertExtraSans:
    - lb.kubespheredev.local
    containerManager: containerd
    clusterName: cluster-local
    autoRenewCerts: true
    masqueradeAll: false
    maxPods: 110
    podPidsLimit: 10000
    nodeCidrMaskSize: 24
    proxyMode: ipvs
    kubeProxyConfiguration:
      ipvs:
        excludeCIDR:
        - 172.16.0.2/24
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
  etcd:
    type: kubekey
    dataDir: "/var/lib/etcd"
    heartbeatInterval: 250
    electionTimeout: 5000
    snapshotCount: 10000
    autoCompactionRetention: 8
    metrics: basic
    quotaBackendBytes: 2147483648
    maxRequestBytes: 1572864
    maxSnapshots: 5
    maxWals: 5
    logLevel: info
  network:
    plugin: cilium
    cilium:
      ipv6: true
    kubePodsCIDR: 10.233.64.0/18,fd85:ee78:d8a6:8607::1:0000/64
    kubeServiceCIDR: 10.233.0.0/18,fd85:ee78:d8a6:8607::1000/116
```
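A side note on the host entries, offered as an assumption rather than a confirmed fix: the KubeKey dual-stack examples I have seen express both address families as a comma-separated internalAddress (mirroring the comma-separated kubePodsCIDR/kubeServiceCIDR above) rather than via a separate ip6 field. If that convention applies to v3.1.9, a host entry would look like this sketch:

```yaml
# Hypothetical dual-stack host entry, assuming internalAddress accepts a
# comma-separated "IPv4,IPv6" pair; not verified against KubeKey v3.1.9.
- name: kylin-master-1
  address: 10.135.38.101
  internalAddress: 10.135.38.101,2406:440:600::1:0:121  # both families here, no separate ip6 field
  user: root
  publickey: /root/.ssh/id_rsa
```

If an unrecognized ip6 field were silently ignored, the hosts would be treated as IPv4-only even though the CIDRs request dual-stack.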
Error message
Deploying a dual-stack cluster with KubeKey 3.1.9 fails, while single-stack deployments succeed. The failure reproduces on Kubernetes 1.26.15, 1.27.16, 1.28.15, 1.29.15, 1.30.12, and 1.31.8. The error appears when execution reaches the node-join step:

```
[JoinNodesModule] Join worker node
sudo -E /bin/bash -c "/opt/local/bin/kubeadm join --config=/etc/kubernetes/kubeadm-config.yaml --ignore-preflight-errors=FileExisting-crictl,ImagePull"
16:35:32 CST stdout: [kylin-worker-1]
[preflight] Running pre-flight checks
error execution phase preflight: couldn't validate the identity of the API Server: could not find a JWS signature in the cluster-info ConfigMap for token ID "sppfm8"
To see the stack trace of this error execute with --v=5 or higher
16:35:32 CST stderr: [kylin-worker-1]
Failed to exec command: sudo -E /bin/bash -c "/opt/local/bin/kubeadm join --config=/etc/kubernetes/kubeadm-config.yaml --ignore-preflight-errors=FileExisting-crictl,ImagePull"
[preflight] Running pre-flight checks
error execution phase preflight: couldn't validate the identity of the API Server: could not find a JWS signature in the cluster-info ConfigMap for token ID "sppfm8"
To see the stack trace of this error execute with --v=5 or higher: Process exited with status 1
16:35:32 CST command: [kylin-worker-1]
sudo -E /bin/bash -c "/opt/local/bin/kubeadm reset -f --cri-socket unix:///run/containerd/containerd.sock"
16:35:32 CST stdout: [kylin-worker-1]
[preflight] Running pre-flight checks
W0530 16:35:32.631649 54797 removeetcdmember.go:106] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.
The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
16:35:32 CST message: [kylin-worker-1]
join node failed: Failed to exec command: sudo -E /bin/bash -c "/opt/local/bin/kubeadm join --config=/etc/kubernetes/kubeadm-config.yaml --ignore-preflight-errors=FileExisting-crictl,ImagePull"
[preflight] Running pre-flight checks
error execution phase preflight: couldn't validate the identity of the API Server: could not find a JWS signature in the cluster-info ConfigMap for token ID "sppfm8"
To see the stack trace of this error execute with --v=5 or higher: Process exited with status 1
16:35:32 CST retry: [kylin-worker-1]
```
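For context on the error itself: kubeadm join with token-based discovery fetches the cluster-info ConfigMap from the kube-public namespace and expects it to carry a JWS signature under a data key named jws-kubeconfig-<token-id>. "could not find a JWS signature ... for token ID \"sppfm8\"" means that key is absent, which usually indicates the bootstrap token has expired or been deleted, or that the signature was never written. A quick way to check on the first control-plane node (plain kubeadm/kubectl commands, nothing KubeKey-specific):

```bash
# List current bootstrap tokens; the failing token ID was "sppfm8",
# so check whether a token with that ID exists and is unexpired.
kubeadm token list

# Dump the cluster-info ConfigMap; a signed token shows up as a data key
# named jws-kubeconfig-<token-id> alongside the embedded kubeconfig.
kubectl -n kube-public get configmap cluster-info -o yaml
```

If the jws-kubeconfig-* keys are missing for every token, the problem is on the control-plane side before any worker tries to join, which would help narrow down why only the dual-stack deployment fails.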