Background:
1 master, 2 workers
Version: 1.24
Adding a new worker node.
Is the procedure as follows: put the new node's information under `hosts`
and `roleGroups`
in the config file, then run
./kk add nodes -f sample.yaml
But my environment has no internet access. When adding a new node, which images are needed? Can they be taken from the existing cluster?
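In an air-gapped setup, one workaround (a sketch, not an official KubeKey procedure) is to export the images from a node of the existing cluster with containerd's `ctr` and import the tarball on the new node. The `/tmp` paths are arbitrary, and the `k8s.io` namespace is an assumption based on the containerd default that Kubernetes uses:

```shell
# Sketch: copy cluster images from an existing node to the new offline node.
# Assumes containerd and its k8s.io namespace (the default for Kubernetes).
set -u

if command -v ctr >/dev/null 2>&1; then
  # On an existing node: list and export every image in the k8s.io namespace.
  ctr -n k8s.io images ls -q | sort -u > /tmp/image-list.txt
  # -r: do nothing if the list is empty; all refs go into one tarball.
  xargs -r -a /tmp/image-list.txt ctr -n k8s.io images export /tmp/k8s-images.tar
  # Then copy /tmp/k8s-images.tar to the new node and import it there:
  # ctr -n k8s.io images import /tmp/k8s-images.tar
else
  # Not on a cluster node; nothing to export.
  echo "ctr not found; run this on a node of the existing cluster"
fi
```

Whether this covers everything `kk add nodes` pulls depends on the KubeKey version; comparing against `./kk images list` output (if your version has it) or the error log is safer than assuming.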
Error reported when adding the new node.
Dependency check:
+--------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+
| name | sudo | curl | openssl | ebtables | socat | ipset | ipvsadm | conntrack | chrony | docker | containerd | nfs client | ceph client | glusterfs client | time |
+--------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+
| master | y | y | y | y | y | | | y | | | v1.6.4 | | | | UTC 02:09:33 |
| node1 | y | y | y | y | y | | | y | | | v1.6.4 | | | | UTC 02:09:33 |
| node2 | y | y | y | y | y | | | y | | | v1.6.4 | | | | HKT 10:09:30 |
+--------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+
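The blank columns above matter: with `proxyMode: ipvs`, kube-proxy needs `ipset` and `ipvsadm`, and chrony is absent on all three nodes. A quick way to see what still has to be installed offline on a host (a sketch; package names vary by distro) is:

```shell
# List the kk precheck prerequisites that are missing on this host.
# Offline, each one has to come from a local repo or a copied .rpm/.deb.
missing=""
for cmd in socat ipset ipvsadm conntrack chronyc; do
  command -v "$cmd" >/dev/null 2>&1 || missing="$missing $cmd"
done
echo "missing:$missing"
```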
Also, the images have already been imported on the new machine.
The sample.yaml file:
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  ## You should complete the ssh information of the hosts
  - {name: master, address: 10.10.30.120, internalAddress: 10.10.30.120, user: test, password: 123456}
  - {name: node1, address: 10.10.30.121, internalAddress: 10.10.30.121, user: test, password: 123456}
  - {name: node2, address: 10.10.30.122, internalAddress: 10.10.30.122, user: test, password: 123456}
  roleGroups:
    etcd:
    - master
    master:
    - master
    worker:
    - node1
    - node2
  controlPlaneEndpoint:
    ## Internal loadbalancer for apiservers
    # internalLoadbalancer: haproxy
    ## If an external loadbalancer is used, 'address' should be set to the loadbalancer's ip.
    domain: lb.kubesphere.local
    address: ""
    port: 6443
  kubernetes:
    version: v1.26.5
    clusterName: cluster.local
    proxyMode: ipvs
    masqueradeAll: false
    maxPods: 110
    nodeCidrMaskSize: 24
    containerManager: containerd
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
  registry:
    privateRegistry: ""
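To answer the procedural question: yes, a new worker is appended under both `hosts` and `roleGroups.worker`, then `./kk add nodes -f sample.yaml` is rerun. A sketch of the additions (the name `node3` and address `10.10.30.123` are made up for illustration):

```yaml
# Additions to sample.yaml (node3 / 10.10.30.123 are hypothetical values)
spec:
  hosts:
  # ...the three existing hosts stay as they are...
  - {name: node3, address: 10.10.30.123, internalAddress: 10.10.30.123, user: test, password: 123456}
  roleGroups:
    worker:
    - node1
    - node2
    - node3
```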
Time synchronization is needed.
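The precheck table shows why: the chrony column is blank on every node, and node2 reports HKT 10:09:30 while the others report UTC 02:09:33 (the same instant in a different timezone, but a few seconds adrift). With no internet access, the nodes can sync against the master instead of a public NTP pool. A minimal sketch of the client config, assuming chronyd on the master is configured with `allow 10.10.30.0/24`:

```
# /etc/chrony.conf on node2 and the new node (sketch)
server 10.10.30.120 iburst   # master as the internal time source
makestep 1.0 3               # step the clock on a large initial offset
rtcsync                      # keep the hardware clock in sync
```

After restarting chronyd, `chronyc tracking` should show the master as the selected source.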