1.KubeKey
KubeKey (written in Go) is a brand-new installation tool that replaces the previous Ansible-based installer. KubeKey gives you flexible installation options: you can install Kubernetes only, or install Kubernetes and KubeSphere together.
Typical KubeKey use cases:
- Install Kubernetes only;
- Install Kubernetes and KubeSphere together with a single command;
- Scale a cluster in or out;
- Upgrade a cluster;
- Install Kubernetes-related add-ons (Chart or YAML).
How does KubeKey work?
Whether you use it to create, scale, or upgrade a cluster, you must prepare a configuration file with kk beforehand. This configuration file contains the cluster's basic parameters, such as host information, network settings (the CNI plugin plus the Pod and Service CIDRs), registry mirrors, add-ons (YAML or Chart), and the pluggable component options (if you install KubeSphere).
Once the configuration file is ready, you run the ./kk command with different flags for different operations. KubeKey automatically installs Docker and pulls all the images required for the installation. After installation you can also inspect the installation logs.
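As an illustration, a minimal kk workflow for this setup might look like the following (the version numbers match the ones used later in this document; config-sample.yaml is kk's default file name):
# Generate a cluster configuration file (config-sample.yaml by default)
./kk create config --with-kubernetes v1.20.4 --with-kubesphere v3.1.0
# Edit the generated file, then create the cluster from it
./kk create cluster -f config-sample.yaml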
Why KubeKey
- The previous Ansible-based installer depended on a lot of software, such as Python. KubeKey is written in Go, which removes the problems that arise in different environments and makes installation more reliable.
- KubeKey supports several installation options, such as all-in-one, multi-node, and offline installation.
- KubeKey uses kubeadm to install the Kubernetes cluster on the nodes with as much parallelism as possible, which makes installation simpler and more efficient. Compared with the old installer, it greatly reduces installation time.
- KubeKey is designed to install a cluster as an object, i.e. CaaO (Cluster as an Object).
Support Matrix
To install Kubernetes and KubeSphere v3.1.0 with KubeKey, see the table below for all supported Kubernetes versions.
KubeSphere version | Supported Kubernetes versions |
---|---|
v3.1.0 | v1.17.0,v1.17.4,v1.17.5,v1.17.6,v1.17.7,v1.17.8,v1.17.9,v1.18.3,v1.18.5,v1.18.6,v1.18.8,v1.19.0,v1.19.8,v1.19.9,v1.20.4 |
- You can also run ./kk version --show-supported-k8s to list all Kubernetes versions that KubeKey can install.
- The Kubernetes versions that KubeKey can install differ from the Kubernetes versions supported by KubeSphere v3.1.0. To install KubeSphere v3.1.0 on an existing Kubernetes cluster, your Kubernetes version must be v1.17.x, v1.18.x, v1.19.x, or v1.20.x.
2.Deployment Environment
Host_Name | IP | OS | KubeSphere_version | Kubernetes_version | Docker_version | Role |
---|---|---|---|---|---|---|
KubeSphere-01 | 193.169.100.50 | CentOS 7.6 | KubeSphere v3.1.0 | Kubernetes v1.20.4 | Docker 19.03.15 | etcd, master, worker |
KubeSphere-02 | 193.169.100.51 | CentOS 7.6 | KubeSphere v3.1.0 | Kubernetes v1.20.4 | Docker 19.03.15 | etcd, master, worker |
KubeSphere-03 | 193.169.100.52 | CentOS 7.6 | KubeSphere v3.1.0 | Kubernetes v1.20.4 | Docker 19.03.15 | etcd, master, worker |
KubeSphere-04 | 193.169.100.53 | CentOS 7.6 | KubeSphere v3.1.0 | Kubernetes v1.20.4 | Docker 19.03.15 | worker |
KubeSphere-05 | 193.169.100.54 | CentOS 7.6 | KubeSphere v3.1.0 | Kubernetes v1.20.4 | Docker 19.03.15 | worker |
KubeSphere-06 | 193.169.100.55 | CentOS 7.6 | KubeSphere v3.1.0 | Kubernetes v1.20.4 | Docker 19.03.15 | worker |
KubeSphere-HA01 | 193.169.100.56 | CentOS 7.6 | | | | Keepalived, HAProxy |
KubeSphere-HA02 | 193.169.100.57 | CentOS 7.6 | | | | Keepalived, HAProxy |
Ceph-stroage01 | 193.169.100.58 | CentOS 7.6 | | | | Ceph |
Ceph-stroage02 | 193.169.100.59 | CentOS 7.6 | | | | Ceph |
Ceph-stroage03 | 193.169.100.60 | CentOS 7.6 | | | | Ceph |
System requirements
- A clean operating system (with no other software installed) is recommended; otherwise there may be conflicts.
- Make sure each node has at least 100 GB of disk space.
- All nodes must be reachable over SSH.
- Time must be synchronized on all nodes.
- sudo, curl, openssl, ebtables, socat, ipset, conntrack, and docker must be available on all nodes.
- KubeKey uses /var/lib/docker as the default path for all Docker-related files (including images). It is recommended to attach extra storage volumes and mount at least 100 GB to each of /var/lib/docker and /mnt/registry.
Disk mount configuration:
step 1.Create the directories
mkdir /var/lib/docker
mkdir /mnt/registry
step 2.Add the mounts to /etc/fstab
cat >> /etc/fstab <<EOF
UUID=f8d43060-a846-4cc5-a915-9b0cef614891 /var/lib/docker xfs defaults 0 0
UUID=8351159c-8e83-45ef-97f2-455560f992cc /mnt/registry xfs defaults 0 0
EOF
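You can then mount the volumes and confirm the result (the UUIDs above come from this environment; look up your own with blkid):
blkid                                   # find the UUIDs of your data volumes
mount -a                                # mount everything declared in /etc/fstab
df -h /var/lib/docker /mnt/registry     # both mounts should now appear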
2.1.Configure the firewall and SELinux (all nodes)
systemctl stop firewalld
systemctl disable firewalld
setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
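A quick optional check that the changes took effect:
systemctl is-active firewalld   # expected output: inactive
getenforce                      # expected output: Permissive (Disabled after a reboot)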
2.2.Configure hosts (all nodes)
Set the hostname
nmcli g hostname kubesphere-01
Configure hostname resolution
cat >> /etc/hosts << EOF
# kubesphere Cluster
193.169.100.50 kubesphere-01
193.169.100.51 kubesphere-02
193.169.100.52 kubesphere-03
193.169.100.53 kubesphere-04
193.169.100.54 kubesphere-05
193.169.100.55 kubesphere-06
# etcd cluster
193.169.100.62 etcd-cluster01
193.169.100.63 etcd-cluster02
193.169.100.64 etcd-cluster03
# HA
193.169.100.56 kubesphere-ha01
193.169.100.57 kubesphere-ha02
# VIP
193.169.100.61 lb.kubesphere.local
# Ceph-stroage
193.169.100.58 ceph-stroage01
193.169.100.59 ceph-stroage02
193.169.100.60 ceph-stroage03
EOF
2.3.Configure passwordless SSH (all nodes, optional)
Generate an SSH key pair on the management node
[root@localhost yum.repos.d]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:YCK3yFzTa7qi+10Jwy0Og5wKeUrjMKQfeLZuG0JtnSo root@ceph-stroage01
The key's randomart image is:
+---[RSA 2048]----+
| |
| . |
| .. = + |
|+*o*.*.o |
|O*@o*o+ S |
|BBo*.* . |
|+E+.o o |
| o+o o |
|o=+oo |
+----[SHA256]-----+
[root@localhost yum.repos.d]#
Distribute the public key to the target nodes for authentication
[root@localhost yum.repos.d]# ssh-copy-id ceph-stroage01
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'ceph-stroage01 (193.169.100.78)' can't be established.
ECDSA key fingerprint is SHA256:R0iFewa45i6a74ftDpgS5VdhzOm6Cihfd5cIbs7jJPA.
ECDSA key fingerprint is MD5:42:4c:19:03:db:a6:e1:de:e1:98:7d:84:12:7f:b8:36.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@ceph-stroage01's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'ceph-stroage01'"
and check to make sure that only the key(s) you wanted were added.
[root@localhost yum.repos.d]#
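To push the key to every node in one pass, a simple loop such as the following can be used (hostnames assumed to match the /etc/hosts entries configured above; you will be prompted for each node's root password):
for host in kubesphere-0{1..6} kubesphere-ha0{1..2} ceph-stroage0{1..3}; do
  ssh-copy-id root@${host}
done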
2.4.Configure the yum repositories (all nodes, optional)
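This document does not mandate a specific repository; one common approach for an offline CentOS 7 environment is to build a local repository from the installation ISO (a sketch; the ISO path below is a placeholder):
mkdir -p /mnt/cdrom
mount -o loop /path/to/CentOS-7-x86_64-DVD.iso /mnt/cdrom
cat > /etc/yum.repos.d/local.repo <<EOF
[local-base]
name=CentOS 7 local ISO
baseurl=file:///mnt/cdrom
enabled=1
gpgcheck=0
EOF
yum clean all && yum makecache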
2.5.Network Configuration
step 1.Clear the existing NIC configuration
ifdown em1
ifdown em2
rm -rf /etc/sysconfig/network-scripts/ifcfg-em1
rm -rf /etc/sysconfig/network-scripts/ifcfg-em2
step 2.Create the bond interface
nmcli con add type bond con-name bond0 ifname bond0 mode 802.3ad ip4 192.168.60.152/24 gw4 192.168.60.254
step 3.Set the bond options
nmcli con mod id bond0 bond.options mode=802.3ad,miimon=100,lacp_rate=fast,xmit_hash_policy=layer2+3
step 4.Attach the physical NICs to the bond
nmcli con add type bond-slave ifname em1 con-name em1 master bond0
nmcli con add type bond-slave ifname em2 con-name em2 master bond0
step 5.Change the boot protocol of the bond interface to static
vi /etc/sysconfig/network-scripts/ifcfg-bond0
BOOTPROTO=static
step 6.Restart NetworkManager
systemctl restart NetworkManager
# Display NIC information
nmcli con
step 7.Set the hostname and DNS
hostnamectl set-hostname worker-1
vim /etc/resolv.conf
2.6.Time Synchronization (all nodes)
step 1.Install chrony on all nodes
yum install chrony -y
step 2.Set the time zone
timedatectl set-timezone Asia/Shanghai
step 3.Configure the time server
[root@localhost yum.repos.d]# cat /etc/chrony.conf
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
server 193.169.100.58 iburst
# Record the rate at which the system clock gains/losses time.
driftfile /var/lib/chrony/drift
# Allow the system clock to be stepped in the first three updates
# if its offset is larger than 1 second.
makestep 1.0 3
# Enable kernel synchronization of the real-time clock (RTC).
rtcsync
# Enable hardware timestamping on all interfaces that support it.
#hwtimestamp *
# Increase the minimum number of selectable sources required to adjust
# the system clock.
#minsources 2
# Allow NTP client access from local network.
#allow 192.168.0.0/16
allow 193.169.100.0/24
# Serve time even if not synchronized to a time source.
#local stratum 10
local stratum 3
# Specify file containing keys for NTP authentication.
#keyfile /etc/chrony.keys
# Specify directory for log files.
logdir /var/log/chrony
# Select which information is logged.
#log measurements statistics tracking
[root@localhost yum.repos.d]#
Check that the NTP server is working:
chronyc activity -v
step 4.Configure the clients
[root@localhost ~]# cat /etc/chrony.conf
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
server 193.169.100.58 iburst
# Record the rate at which the system clock gains/losses time.
driftfile /var/lib/chrony/drift
# Allow the system clock to be stepped in the first three updates
# if its offset is larger than 1 second.
makestep 1.0 3
# Enable kernel synchronization of the real-time clock (RTC).
rtcsync
# Enable hardware timestamping on all interfaces that support it.
#hwtimestamp *
# Increase the minimum number of selectable sources required to adjust
# the system clock.
#minsources 2
# Allow NTP client access from local network.
#allow 192.168.0.0/16
# Serve time even if not synchronized to a time source.
#local stratum 10
# Specify file containing keys for NTP authentication.
#keyfile /etc/chrony.keys
# Specify directory for log files.
logdir /var/log/chrony
# Select which information is logged.
#log measurements statistics tracking
[root@localhost ~]#
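On each client, start chronyd and confirm that it synchronizes against 193.169.100.58 (a quick sanity check):
systemctl enable --now chronyd
chronyc sources -v        # the 193.169.100.58 source should be selected (marked with '*')
timedatectl               # should eventually report "NTP synchronized: yes"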
2.7.Update the system and install dependencies (all nodes, optional)
Run the following command to update system packages and install the dependencies:
yum install openssl openssl-devel socat conntrack ebtables ipset conntrack-tools sudo curl nfs-utils -y
2.8.System Tuning
step 1.Add the required kernel boot parameters (K8s nodes)
/sbin/grubby --update-kernel=ALL --args='cgroup_enable=memory cgroup.memory=nokmem swapaccount=1'
step 2.Enable the overlay2 kernel module (K8s nodes)
echo "overlay2" | sudo tee -a /etc/modules-load.d/overlay.conf
step 3.Refresh the dynamically generated grub2 configuration (K8s nodes)
sudo grub2-set-default 0
step 4.Tune the kernel parameters and apply the changes (K8s nodes)
cat <<EOF | sudo tee -a /etc/sysctl.conf
vm.max_map_count = 262144
fs.may_detach_mounts = 1
net.ipv4.ip_forward = 1
vm.swappiness=1
kernel.pid_max =1000000
fs.inotify.max_user_instances=524288
EOF
sudo sysctl -p
step 5.Adjust system limits (all nodes)
vim /etc/security/limits.conf
* soft nofile 1024000
* hard nofile 1024000
* soft memlock unlimited
* hard memlock unlimited
root soft nofile 1024000
root hard nofile 1024000
root soft memlock unlimited
root hard memlock unlimited
step 6.Remove the old limits configuration (all nodes)
sudo rm /etc/security/limits.d/20-nproc.conf
step 7.Reboot the system
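After the reboot, it is worth confirming that the tuning is in effect, for example:
sysctl vm.max_map_count net.ipv4.ip_forward   # should print the values set above
ulimit -n                                     # expect 1024000 in a fresh login shell
cat /proc/cmdline                             # should contain cgroup_enable=memory and swapaccount=1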
3.Keepalived + HAProxy High Availability
The cluster has three master nodes, three worker nodes, two load-balancer nodes, and one virtual IP address (also called a floating IP address).
Keepalived provides a VRRP implementation and lets you configure Linux machines for load balancing while preventing a single point of failure. HAProxy provides reliable, high-performance load balancing and works perfectly together with Keepalived.
3.1.HAProxy
Run the following commands on both load-balancer machines to configure the proxy (HA):
step 1.Install the dependencies required to build HAProxy
# yum install -y gcc gcc-c++ glibc glibc-devel pcre pcre-devel openssl openssl-devel systemd-devel vim iotop bc zip unzip zlib-devel lrzsz tree screen lsof tcpdump wget
step 2.Build and install HAProxy
# Download and extract the source package
wget http://haproxy.1wt.eu/download/2.3.10/src/haproxy-2.3.10.tar.gz
tar xzf haproxy-2.3.10.tar.gz
cd haproxy-2.3.10
# Build the source
make ARCH=x86_64 TARGET=linux-glibc USE_PCRE=1 USE_OPENSSL=1 USE_ZLIB=1 USE_SYSTEMD=1 USE_CPU_AFFINITY=1 PREFIX=/usr/local/haproxy
# Install the program
make install PREFIX=/usr/local/haproxy
# Copy the binary to /usr/sbin
cp /usr/local/haproxy/sbin/haproxy /usr/sbin/
step 3.Verify the HAProxy version
[root@localhost haproxy-2.3.10]# haproxy -v
HA-Proxy version 2.3.10-4764f0e 2021/04/23 - https://haproxy.org/
Status: stable branch - will stop receiving fixes around Q1 2022.
Known bugs: http://www.haproxy.org/bugs/bugs-2.3.10.html
Running on: Linux 3.10.0-957.el7.x86_64 #1 SMP Thu Nov 8 23:39:32 UTC 2018 x86_64
[root@localhost haproxy-2.3.10]#
step 4.Create the directories HAProxy needs
mkdir /etc/haproxy
mkdir /etc/haproxy/conf
step 5.Configure the HAProxy service
cat > /etc/haproxy/haproxy.cfg <<EOF
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
# to have these messages end up in /var/log/haproxy.log you will
# need to:
# 1) configure syslog to accept network log events. This is done
# by adding the '-r' option to the SYSLOGD_OPTIONS in
# /etc/sysconfig/syslog
# 2) configure local2 events to go to the /var/log/haproxy.log
# file. A line like the following can be added to
# /etc/sysconfig/syslog
# local2.* /var/log/haproxy.log
#
log /dev/log local0
log /dev/log local1 notice
daemon
#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
mode http
log global
option httplog
option dontlognull
option http-server-close
option forwardfor except 127.0.0.0/8
option redispatch
retries 1
timeout http-request 10s
timeout queue 20s
timeout connect 5s
timeout client 20s
timeout server 20s
timeout http-keep-alive 10s
timeout check 10s
#---------------------------------------------------------------------
# apiserver frontend which proxys to the masters
#---------------------------------------------------------------------
frontend apiserver
bind *:6443
mode tcp
option tcplog
default_backend apiserver
#---------------------------------------------------------------------
# round robin balancing for apiserver
#---------------------------------------------------------------------
backend apiserver
option httpchk GET /healthz
http-check expect status 200
mode tcp
option ssl-hello-chk
balance roundrobin
# Replace with your own host names and addresses
server kubesphere-01 193.169.100.50:6443 check
server kubesphere-02 193.169.100.51:6443 check
server kubesphere-03 193.169.100.52:6443 check
#---------------------------------------------------------------------
# collection haproxy statistics message
#---------------------------------------------------------------------
listen stats
bind *:1080
stats auth admin:awesomePassword
stats refresh 5s
stats realm HAProxy\ Statistics
stats uri /admin?stats
EOF
Save the file.
step 6.After HAProxy is built and installed, prepare a service startup file; on CentOS 7 and later, services are started through systemd unit files.
cat > /etc/systemd/system/haproxy.service <<EOF
[Unit]
Description=HAProxy Load Balancer
After=syslog.target network.target
[Service]
ExecStartPre=/usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -f /etc/haproxy/conf -c -q
ExecStart=/usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -f /etc/haproxy/conf -p /run/haproxy.pid
ExecReload=/bin/kill -USR2 \$MAINPID
[Install]
WantedBy=multi-user.target
EOF
step 7.Start the HAProxy service
systemctl daemon-reload
systemctl start haproxy
systemctl enable haproxy
After the service starts, check the status of the backends:
[root@localhost haproxy-2.3.10]# systemctl start haproxy
Broadcast message from systemd-journald@kubesphere-ha01 (Sat 2021-06-26 01:24:01 CST):
haproxy[27305]: backend apiserver has no server available!
Broadcast message from systemd-journald@kubesphere-ha01 (Sat 2021-06-26 01:24:01 CST):
haproxy[27305]: backend apiserver has no server available!
Message from syslogd@localhost at Jun 26 01:24:01 ...
haproxy[27305]:backend apiserver has no server available!
Message from syslogd@localhost at Jun 26 01:24:01 ...
haproxy[27305]:backend apiserver has no server available!
[root@localhost haproxy-2.3.10]#
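The warnings above are expected at this point: the Kubernetes API servers have not been installed yet, so the apiserver backend has no live members. Once the control plane is up, the backends can be checked through the stats page defined in the configuration (credentials from the listen stats block):
curl -su admin:awesomePassword 'http://127.0.0.1:1080/admin?stats' | grep -o 'kubesphere-0[1-3]'   # lists the backend servers defined above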
3.2.Keepalived
step 1.Install the dependency packages
yum -y install libnl libnl-devel
step 2.Build and install Keepalived
# Download and extract the source package
wget https://www.keepalived.org/software/keepalived-2.2.2.tar.gz
tar xzf keepalived-2.2.2.tar.gz
cd keepalived-2.2.2
# Configure, build, and install
./configure --prefix=/usr/local/keepalived/
make && make install
step 3.Create the Keepalived configuration directory
The default Keepalived configuration file is /etc/keepalived/keepalived.conf.
mkdir /etc/keepalived
Copy the sample configuration file into /etc/keepalived:
cp /usr/local/keepalived/etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf
Copy the keepalived sysconfig file into /etc/sysconfig/:
cp /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/
step 4.Edit the configuration file:
Note: the configuration differs slightly between the two nodes, so adjust it accordingly.
cat > /etc/keepalived/keepalived.conf <<EOF
! Configuration File for keepalived
global_defs {
notification_email {
}
router_id LVS_DEVEL
vrrp_skip_check_adv_addr
vrrp_garp_interval 0
vrrp_gna_interval 0
}
vrrp_script chk_haproxy {
script "killall -0 haproxy"
interval 2
weight 2
}
vrrp_instance haproxy-vip {
state BACKUP
priority 100
# NIC that carries the address used by the HA service
interface ens18
virtual_router_id 60
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
# IP address of this node
unicast_src_ip 193.169.100.56
unicast_peer {
# IP address of the peer node
193.169.100.57
}
virtual_ipaddress {
# VIP address
193.169.100.61/24
}
track_script {
chk_haproxy
}
}
EOF
Notes on the configuration file:
- For the interface field you must provide your own NIC name. You can run ifconfig on the machine to find it.
- The IP address given for unicast_src_ip is the IP address of the current machine.
- For the other machines that also run HAProxy and Keepalived for load balancing, enter their IP addresses in the unicast_peer field.
step 5.Start the service
systemctl start keepalived.service
systemctl enable keepalived.service
step 6.Verify high availability
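A minimal manual check (interface name ens18 and VIP 193.169.100.61 taken from the configuration above):
# On the node that currently holds the VIP
ip addr show ens18 | grep 193.169.100.61
# Simulate a failure so Keepalived demotes this node
systemctl stop haproxy
# On the peer node, the VIP should now appear
ip addr show ens18 | grep 193.169.100.61
# Restore the service afterwards
systemctl start haproxy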
4.Storage Configuration
Install the NFS client:
https://kubesphere.com.cn/docs/installing-on-linux/persistent-storage-configurations/install-nfs-client/
Install GlusterFS:
https://kubesphere.com.cn/docs/installing-on-linux/persistent-storage-configurations/install-glusterfs/
Install Ceph:
https://kubesphere.com.cn/docs/installing-on-linux/persistent-storage-configurations/install-ceph-csi-rbd/
http://dbaselife.com/project-3/doc-752/
5.Installation Steps
5.1.Download the installation package
Note: the tarball is large because it contains all component images, so the download may take a long time on a slow network. Alternatively, you can import the images from the image list in this document into a private registry and then install with KubeKey.
# md5: 65e9a1158a682412faa1166c0cf06772
curl -Ok https://kubesphere-installer.pek3b.qingstor.com/offline/v3.1.0/kubesphere-all-v3.1.0-offline-linux-amd64.tar.gz
5.2.Create the configuration file
step 1.Extract the installation package
After extracting the package, enter kubesphere-all-v3.1.0-offline-linux-amd64:
[root@kubersphere-01 registry]# ll kubesphere-all-v3.1.0-offline-linux-amd64/
total 13364
drwxr-xr-x. 5 root root 76 May 6 12:59 charts
drwxr-xr-x. 2 root root 186 Jun 7 13:22 dependencies
-rw-r--r--. 1 root root 4833 Jun 16 16:54 images-list.txt
-rwxr-xr-x. 1 1001 polkitd 13662028 Apr 28 19:39 kk
drwxr-xr-x. 7 root root 97 Jun 16 17:36 kubekey
drwxr-xr-x. 2 root root 4096 May 6 14:39 kubesphere-images
-rwxr-xr-x. 1 root root 6762 May 8 10:56 offline-installation-tool.sh
[root@kubersphere-01 registry]#
step 2.Generate the configuration file
[root@kubesphere-01 ~]# cd kubesphere-all-v3.1.0-offline-linux-amd64
[root@kubesphere-01 kubesphere-all-v3.1.0-offline-linux-amd64]# ./kk create config --with-kubesphere v3.1.0 --with-kubernetes v1.20.4
[root@kubesphere-01 kubesphere-all-v3.1.0-offline-linux-amd64]#
step 3.Edit the generated configuration file
Edit the generated config-sample.yaml; you can also pass -f to use a custom configuration file path.
cat > /root/kubesphere-all-v3.1.0-offline-linux-amd64/xiangxun.yaml <<EOF
apiVersion: kubekey.kubesphere.io/v1alpha1
kind: Cluster
metadata:
name: sample
spec:
hosts:
- {name: kubesphere-01, address: 193.169.100.50, internalAddress: 193.169.100.50, user: root, password: 123456}
- {name: kubesphere-02, address: 193.169.100.51, internalAddress: 193.169.100.51, user: root, password: 123456}
- {name: kubesphere-03, address: 193.169.100.52, internalAddress: 193.169.100.52, user: root, password: 123456}
- {name: kubesphere-04, address: 193.169.100.53, internalAddress: 193.169.100.53, user: root, password: 123456}
- {name: kubesphere-05, address: 193.169.100.54, internalAddress: 193.169.100.54, user: root, password: 123456}
- {name: kubesphere-06, address: 193.169.100.55, internalAddress: 193.169.100.55, user: root, password: 123456}
- {name: etcd-cluster01, address: 193.169.100.62, internalAddress: 193.169.100.62, user: root, password: 123456}
- {name: etcd-cluster02, address: 193.169.100.63, internalAddress: 193.169.100.63, user: root, password: 123456}
- {name: etcd-cluster03, address: 193.169.100.64, internalAddress: 193.169.100.64, user: root, password: 123456}
roleGroups:
etcd:
- etcd-cluster01
- etcd-cluster02
- etcd-cluster03
master:
- kubesphere-01
- kubesphere-02
- kubesphere-03
worker:
- kubesphere-04
- kubesphere-05
- kubesphere-06
controlPlaneEndpoint:
domain: lb.kubesphere.local
address: "193.169.100.61"
port: 6443
kubernetes:
version: v1.20.4
imageRepo: kubesphere
clusterName: cluster.local
network:
plugin: calico
kubePodsCIDR: 10.233.64.0/18
kubeServiceCIDR: 10.233.0.0/18
registry:
registryMirrors: []
insecureRegistries: []
privateRegistry: dockerhub.kubekey.local
addons: []
---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
name: ks-installer
namespace: kubesphere-system
labels:
version: v3.1.0
spec:
persistence:
storageClass: ""
authentication:
jwtSecret: ""
zone: ""
local_registry: ""
etcd:
monitoring: false
endpointIps: localhost
port: 2379
tlsEnable: true
common:
redis:
enabled: false
redisVolumSize: 2Gi
openldap:
enabled: false
openldapVolumeSize: 2Gi
minioVolumeSize: 20Gi
monitoring:
endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090
es:
elasticsearchMasterVolumeSize: 4Gi
elasticsearchDataVolumeSize: 20Gi
logMaxAge: 7
elkPrefix: logstash
basicAuth:
enabled: false
username: ""
password: ""
externalElasticsearchUrl: ""
externalElasticsearchPort: ""
console:
enableMultiLogin: true
port: 30880
alerting:
enabled: false
# thanosruler:
# replicas: 1
# resources: {}
auditing:
enabled: false
devops:
enabled: false
jenkinsMemoryLim: 2Gi
jenkinsMemoryReq: 1500Mi
jenkinsVolumeSize: 8Gi
jenkinsJavaOpts_Xms: 512m
jenkinsJavaOpts_Xmx: 512m
jenkinsJavaOpts_MaxRAM: 2g
events:
enabled: false
ruler:
enabled: true
replicas: 2
logging:
enabled: false
logsidecar:
enabled: true
replicas: 2
metrics_server:
enabled: false
monitoring:
storageClass: ""
prometheusMemoryRequest: 400Mi
prometheusVolumeSize: 20Gi
multicluster:
clusterRole: none
network:
networkpolicy:
enabled: false
ippool:
type: none
topology:
type: none
notification:
enabled: false
openpitrix:
store:
enabled: false
servicemesh:
enabled: false
kubeedge:
enabled: false
cloudCore:
nodeSelector: {"node-role.kubernetes.io/worker": ""}
tolerations: []
cloudhubPort: "10000"
cloudhubQuicPort: "10001"
cloudhubHttpsPort: "10002"
cloudstreamPort: "10003"
tunnelPort: "10004"
cloudHub:
advertiseAddress:
- ""
nodeLimit: "100"
service:
cloudhubNodePort: "30000"
cloudhubQuicNodePort: "30001"
cloudhubHttpsNodePort: "30002"
cloudstreamNodePort: "30003"
tunnelNodePort: "30004"
edgeWatcher:
nodeSelector: {"node-role.kubernetes.io/worker": ""}
tolerations: []
edgeWatcherAgent:
nodeSelector: {"node-role.kubernetes.io/worker": ""}
tolerations: []
EOF
The following table describes the parameters above.
Parameter | Description |
---|---|
roleGroups | |
etcd | Names of the etcd nodes |
master | Names of the master nodes |
worker | Names of the worker nodes |
hosts | |
name | Hostname of the instance |
address | IP address used for SSH between the nodes (the management network) |
internalAddress | Private IP address of the instance |
port | Port 22 is the default SSH port, so you do not need to add it to the YAML file |
user | User name |
password | Password of that user |
privateKeyPath | Key file for passwordless login, e.g. privateKeyPath: "~/.ssh/id_rsa" |
arch | Platform/architecture of the target OS, e.g. arch: arm64 |
controlPlaneEndpoint | |
domain | Address of the external load balancer, e.g. domain: lb.kubesphere.local |
address | IP of the external load balancer; required when multiple masters are used |
port | Port of the external load balancer; defaults to 6443 |
addons | Storage add-ons for customizing persistent storage, such as the NFS client, Ceph RBD, GlusterFS, etc. |
kubernetes | |
version | The Kubernetes version to install. |
imageRepo | The Docker Hub repository from which images are pulled. |
clusterName | The Kubernetes cluster name. |
masqueradeAll* | If using the pure iptables proxy mode, masqueradeAll tells kube-proxy to SNAT everything. Defaults to false. |
maxPods* | Maximum number of Pods that can run on a kubelet. Defaults to 110. |
nodeCidrMaskSize* | Mask size of the node CIDRs in the cluster. Defaults to 24. |
proxyMode* | Proxy mode to use. Defaults to ipvs. |
network | |
plugin | CNI plugin to use. KubeKey installs Calico by default; Flannel can also be specified. Note that some features (such as Pod IP Pools) are only available when Calico is the CNI plugin. |
calico.ipipMode* | IPIP mode for the IPv4 pool created at startup. If set to a value other than Never, vxlanMode should be set to Never. Allowed values are Always, CrossSubnet, and Never. Defaults to Always. |
calico.vxlanMode* | VXLAN mode for the IPv4 pool created at startup. If set to a value other than Never, ipipMode should be set to Never. Allowed values are Always, CrossSubnet, and Never. Defaults to Never. |
calico.vethMTU* | The maximum transmission unit (MTU) determines the largest packet size that can be transmitted on your network. Defaults to 1440. |
kubePodsCIDR | A valid CIDR block for the Kubernetes Pod subnet. It should not overlap with your node subnet or the Kubernetes Service subnet. |
kubeServiceCIDR | A valid CIDR block for Kubernetes Services. It should not overlap with your node subnet or the Kubernetes Pod subnet. |
registry | |
registryMirrors | Docker registry mirrors to speed up downloads, e.g. { "registry-mirrors": ["https://<my-docker-mirror-host>"] } |
insecureRegistries | Insecure registry addresses, e.g. { "insecure-registries": ["dockerhub.kubekey.local:5000"] } |
privateRegistry* | If the private registry is created with KubeKey, set this to dockerhub.kubekey.local |
5.3.Environment Initialization (optional)
5.3.1.Deploy a self-signed private image registry based on docker registry (optional)
step 1.Install docker-ce
yum install docker-ce-19.03.15 docker-ce-cli-19.03.15 container-selinux-2.119.2 -y
Start and enable the service
systemctl enable --now docker
step 2.Use a self-signed certificate
Run the following commands to generate your own certificate:
mkdir -p certs
openssl req \
-newkey rsa:4096 -nodes -sha256 -keyout certs/domain.key \
-x509 -days 36500 -out certs/domain.crt
When generating your certificate, make sure the Common Name field is set to the registry domain: dockerhub.kubekey.local.
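If you prefer a non-interactive run, the same certificate can be generated in one shot with -subj (a sketch; only the CN matters here and it matches the registry domain used below):
openssl req \
-newkey rsa:4096 -nodes -sha256 -keyout certs/domain.key \
-x509 -days 36500 -out certs/domain.crt \
-subj "/CN=dockerhub.kubekey.local"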
step 3.Load the registry image locally
[root@kubesphere-01 kubesphere-all-v3.1.0-offline-linux-amd64]# docker load -i registry.tar.gz
9a5d14f9f550: Loading layer [==================================================>] 5.885MB/5.885MB
de9819405bcf: Loading layer [==================================================>] 818.7kB/818.7kB
b4592cba0628: Loading layer [==================================================>] 20.08MB/20.08MB
3764c3e89288: Loading layer [==================================================>] 4.096kB/4.096kB
7b9a3910f3c3: Loading layer [==================================================>] 2.048kB/2.048kB
Loaded image: registry:latest
[root@kubesphere-01 kubesphere-all-v3.1.0-offline-linux-amd64]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
registry latest 1fd8e1b0bb7e 2 months ago 26.2MB
[root@kubesphere-01 kubesphere-all-v3.1.0-offline-linux-amd64]#
Retag the image
# docker tag registry:latest registry:2
step 4.Start the Docker registry
docker run -d \
--restart=always \
--name registry \
-v "$(pwd)"/certs:/certs \
-v /mnt/registry:/var/lib/registry \
-e REGISTRY_HTTP_ADDR=0.0.0.0:443 \
-e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
-e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \
-p 443:443 \
registry:2
step 5.Configure the registry domain
Add an entry to /etc/hosts that maps the registry domain (dockerhub.kubekey.local) to the private IP address of the registry machine:
# docker registry
193.169.100.50 dockerhub.kubekey.local
step 6.Run the following commands to copy the certificate to the specified directory so that Docker trusts it.
mkdir -p /etc/docker/certs.d/dockerhub.kubekey.local
cp certs/domain.crt /etc/docker/certs.d/dockerhub.kubekey.local/ca.crt
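An optional way to confirm that the registry is reachable and trusted (requires the /etc/hosts entry from step 5; a freshly created registry returns an empty repository list):
curl --cacert /etc/docker/certs.d/dockerhub.kubekey.local/ca.crt https://dockerhub.kubekey.local/v2/_catalog
# expected output: {"repositories":[]}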
5.3.2.Create the image registry with kk
step 1.To create a self-signed image registry with kk, run:
./kk init registry -f xiangxun.yaml
The output looks like this:
[root@kubesphere-01 kubesphere-all-v3.1.0-offline-linux-amd64]# ./kk init registry -f xiangxun.yaml
INFO[11:41:19 CST] Init local registry
Local image registry created successfully. Address: dockerhub.kubekey.local:5000
INFO[11:41:24 CST] Init local registry successful.
[root@kubesphere-01 kubesphere-all-v3.1.0-offline-linux-amd64]#
After the command completes, inspect the log and the processes; the registry service is already running:
[root@kubesphere-01 kubesphere-all-v3.1.0-offline-linux-amd64]# ps -ef|grep registry
root 13819 1 0 23:22 ? 00:00:00 /usr/local/bin/registry serve /etc/kubekey/registry/config.yaml
root 19223 13863 0 23:58 pts/3 00:00:00 grep --color=auto registry
[root@kubesphere-01 kubesphere-all-v3.1.0-offline-linux-amd64]#
Registry service binary: /usr/local/bin/registry
Registry configuration file: /etc/kubekey/registry/config.yaml
step 2.Configure Docker to use the insecure registry (all nodes)
cat >/etc/docker/daemon.json <<EOF
{
"insecure-registries": ["dockerhub.kubekey.local:5000"]
}
EOF
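Restart Docker so the daemon.json change takes effect, and optionally verify it:
systemctl restart docker
docker info | grep -A2 'Insecure Registries'   # should list dockerhub.kubekey.local:5000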
5.3.3.Create the registry with the following command; the registry image must be loaded in advance.
step 1.Install docker-ce
yum install docker-ce-19.03.15 docker-ce-cli-19.03.15 container-selinux-2.119.2 -y
Start and enable the service
systemctl enable --now docker
step 2.Load the registry image locally
[root@kubesphere-01 kubesphere-all-v3.1.0-offline-linux-amd64]# docker load -i registry.tar.gz
9a5d14f9f550: Loading layer [==================================================>] 5.885MB/5.885MB
de9819405bcf: Loading layer [==================================================>] 818.7kB/818.7kB
b4592cba0628: Loading layer [==================================================>] 20.08MB/20.08MB
3764c3e89288: Loading layer [==================================================>] 4.096kB/4.096kB
7b9a3910f3c3: Loading layer [==================================================>] 2.048kB/2.048kB
Loaded image: registry:latest
[root@kubesphere-01 kubesphere-all-v3.1.0-offline-linux-amd64]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
registry latest 1fd8e1b0bb7e 2 months ago 26.2MB
[root@kubesphere-01 kubesphere-all-v3.1.0-offline-linux-amd64]#
Retag the image
# docker tag registry:latest registry:2
step 3.Initialize the nodes and create the image registry
./kk init os -f xiangxun.yaml -s ./dependencies/ --add-images-repo
After the command succeeds, you can see the image registry created by kk; the registry address dockerhub.kubekey.local is added to /etc/hosts.
[root@localhost kubesphere-all-v3.1.0-offline-linux-amd64]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
dde79ecd18b1 registry:2 "/entrypoint.sh /etc…" About a minute ago Up About a minute 0.0.0.0:443->443/tcp, :::443->443/tcp, 5000/tcp kubekey-registry
[root@localhost kubesphere-all-v3.1.0-offline-linux-amd64]#
5.4.Environment Initialization
- If the dependencies are already installed and the image registry is ready, you can skip this step. (To avoid dependency problems, it is recommended to install the dependencies in advance or to use a system image that already contains them.)
- If you use kk to create a self-signed image registry, a docker registry service is started on the current machine. Make sure the registry:2 image exists on this machine; if not, import it from kubesphere-images-v3.0.0/registry.tar with: docker load < registry.tar
- The registry started by kk listens on port 443, so make sure all machines can reach port 443 on this machine. The image data is stored locally under /mnt/registry (a dedicated disk is recommended).
- The dependencies directory only provides dependency packages for Ubuntu 16.04 (ubuntu-16.04-amd64-debs.tar.gz), Ubuntu 18.04 (ubuntu-18.04-amd64-debs.tar.gz), and CentOS 7 (centos-7-amd64-rpms.tar.gz). For other operating systems you can build your own rpm or deb dependency packages. The naming convention is ${releaseID}-${versionID}-${osArch}-${debs or rpms}.tar.gz
Install the dependencies on all nodes in the configuration file:
./kk init os -f xiangxun.yaml -s ./dependencies
Example run:
[root@kubesphere-01 kubesphere-all-v3.1.0-offline-linux-amd64]# ./kk init os -f xiangxun.yaml -s ./dependencies
INFO[11:54:38 CST] Init operating system
INFO[11:54:39 CST] Start initializing kubesphere-01 [193.169.100.50] node=193.169.100.50
INFO[11:54:40 CST] Start initializing kubesphere-03 [193.169.100.52] node=193.169.100.52
INFO[11:54:40 CST] Start initializing kubesphere-02 [193.169.100.51] node=193.169.100.51
INFO[11:54:41 CST] Start initializing kubesphere-04 [193.169.100.53] node=193.169.100.53
INFO[11:54:42 CST] Start initializing kubesphere-05 [193.169.100.54] node=193.169.100.54
INFO[11:54:42 CST] Start initializing kubesphere-06 [193.169.100.55] node=193.169.100.55
Push /root/kubesphere-all-v3.1.0-offline-linux-amd64/dependencies/centos-7-amd64-rpms.tar.gz to 193.169.100.55:/tmp Done
Push /root/kubesphere-all-v3.1.0-offline-linux-amd64/dependencies/centos-7-amd64-rpms.tar.gz to 193.169.100.53:/tmp Done
Push /root/kubesphere-all-v3.1.0-offline-linux-amd64/dependencies/centos-7-amd64-rpms.tar.gz to 193.169.100.50:/tmp Done
Push /root/kubesphere-all-v3.1.0-offline-linux-amd64/dependencies/centos-7-amd64-rpms.tar.gz to 193.169.100.54:/tmp Done
Push /root/kubesphere-all-v3.1.0-offline-linux-amd64/dependencies/centos-7-amd64-rpms.tar.gz to 193.169.100.52:/tmp Done
Push /root/kubesphere-all-v3.1.0-offline-linux-amd64/dependencies/centos-7-amd64-rpms.tar.gz to 193.169.100.51:/tmp Done
INFO[12:00:22 CST] Complete initialization kubesphere-02 [193.169.100.51] node=193.169.100.51
INFO[12:01:12 CST] Complete initialization kubesphere-04 [193.169.100.53] node=193.169.100.53
INFO[12:01:19 CST] Complete initialization kubesphere-06 [193.169.100.55] node=193.169.100.55
INFO[12:01:29 CST] Complete initialization kubesphere-05 [193.169.100.54] node=193.169.100.54
INFO[12:01:30 CST] Complete initialization kubesphere-01 [193.169.100.50] node=193.169.100.50
INFO[12:01:38 CST] Complete initialization kubesphere-03 [193.169.100.52] node=193.169.100.52
INFO[12:01:38 CST] Init operating system successful.
[root@kubesphere-01 kubesphere-all-v3.1.0-offline-linux-amd64]#
5.5.Import the images
Enter kubesphere-all-v3.1.0-offline-linux-amd64/kubesphere-images:
[root@kubesphere-01 kubesphere-all-v3.1.0-offline-linux-amd64]# cd kubesphere-images/
[root@kubesphere-01 kubesphere-images]# ll
total 8502032
-rw-r--r--. 1 root root 454462873 May 6 11:16 csi-images.tar.gz
-rw-r--r--. 1 root root 1352109590 May 6 15:33 example-images-images.tar.gz
-rw-r--r--. 1 root root 559227048 May 6 11:45 istio-images.tar.gz
-rw-r--r--. 1 root root 813403333 Jun 16 17:03 k8s-images.tar.gz
-rw-r--r--. 1 root root 58786774 May 6 12:22 kubeedge-images.tar.gz
-rw-r--r--. 1 root root 3558946714 May 17 15:49 kubesphere-devops-images.tar.gz
-rw-r--r--. 1 root root 1107488410 May 8 10:54 kubesphere-images.tar.gz
-rw-r--r--. 1 root root 736609633 May 6 11:38 kubesphere-logging-images.tar.gz
-rw-r--r--. 1 root root 34754037 May 6 12:20 openpitrix-images.tar.gz
-rw-r--r--. 1 root root 30274818 May 6 12:21 weave-scope-images.tar.gz
[root@kubesphere-01 kubesphere-images]#
Use offline-installation-tool.sh to push the images into the registry prepared earlier; replace the registry address at the end with your real registry address:
./offline-installation-tool.sh -l images-list.txt -d ./kubesphere-images -r dockerhub.kubekey.local
To import images manually, take kubesphere/kube-apiserver:v1.17.9 as an example:
docker tag kubesphere/kube-apiserver:v1.17.9 dockerhub.kubekey.local/kubesphere/kube-apiserver:v1.17.9
Note: keep the original image's namespace when retagging.
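As a sketch, a small loop like the following retags and pushes several images while preserving the namespace (the image names are examples from images-list.txt; use dockerhub.kubekey.local:5000 instead if you are pushing to the registry created by kk on port 5000):
REGISTRY=dockerhub.kubekey.local
for img in kubesphere/kube-apiserver:v1.20.4 kubesphere/kube-proxy:v1.20.4; do
  docker tag ${img} ${REGISTRY}/${img}
  docker push ${REGISTRY}/${img}
done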
5.6.Deploy KubeSphere
Run the deployment; KubeKey first checks the installation environment:
[root@kubesphere-01 kubesphere-all-v3.1.0-offline-linux-amd64]# ./kk create cluster -f xiangxun.yaml
+----------------+------+------+---------+----------+-------+-------+-----------+----------+------------+-------------+------------------+--------------+
| name | sudo | curl | openssl | ebtables | socat | ipset | conntrack | docker | nfs client | ceph client | glusterfs client | time |
+----------------+------+------+---------+----------+-------+-------+-----------+----------+------------+-------------+------------------+--------------+
| kubesphere-02 | y | y | y | y | y | y | y | 19.03.15 | y | y | y | CST 13:04:51 |
| kubesphere-06 | y | y | y | y | y | y | y | 19.03.15 | y | y | y | CST 13:04:52 |
| kubesphere-05 | y | y | y | y | y | y | y | 19.03.15 | y | y | y | CST 13:04:52 |
| etcd-cluster01 | y | y | y | y | y | y | y | 19.03.15 | y | y | y | CST 13:04:52 |
| etcd-cluster03 | y | y | y | y | y | y | y | 19.03.15 | y | y | y | CST 13:04:51 |
| kubesphere-04 | y | y | y | y | y | y | y | 19.03.15 | y | y | y | CST 13:04:51 |
| kubesphere-01 | y | y | y | y | y | y | y | 19.03.15 | y | y | y | CST 13:04:51 |
| kubesphere-03 | y | y | y | y | y | y | y | 19.03.15 | y | y | y | CST 13:04:52 |
| etcd-cluster02 | y | y | y | y | y | y | y | 19.03.15 | y | y | y | CST 13:04:52 |
+----------------+------+------+---------+----------+-------+-------+-----------+----------+------------+-------------+------------------+--------------+
This is a simple check of your environment.
Before installation, you should ensure that your machines meet all requirements specified at
https://github.com/kubesphere/kubekey#requirements-and-recommendations
Continue this installation? [yes/no]:
You can use "--skip-pull-images" to skip pulling the images.
During initialization:
[root@kubesphere-01 ssl]# kubectl get pod -n kubesphere-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
ks-installer-7f449cf566-6n7gm 0/1 Pending 0 22m <none> <none> <none> <none>
[root@kubesphere-01 ssl]#
Run the following command to check the installation logs.
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
When you see the following message, the highly available cluster has been created successfully.
#####################################################
### Welcome to KubeSphere! ###
#####################################################
Console: http://193.169.100.62:30880
Account: admin
Password: P@88w0rd
NOTES:
1. After you log into the console, please check the
monitoring status of service components in
"Cluster Management". If any service is not
ready, please wait patiently until all components
are up and running.
2. Please change the default password after login.
#####################################################
https://kubesphere.io 2021-06-29 13:00:55
#####################################################
INFO[13:01:15 CST] Installation is complete.
Please check the result using the command:
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
6.Add or Remove Nodes
6.1.Add new nodes
Since KubeSphere v3.0.0 you can use the new installer KubeKey to add new nodes to a cluster. Fundamentally, the operation is based on kubelet's registration mechanism.
KubeSphere supports hybrid environments, which means newly added hosts can run either CentOS or Ubuntu.
Add worker nodes
step 1.Retrieve the cluster information with KubeKey
./kk create config --from-cluster
step 2.In the configuration file, add the new node's information under hosts and roleGroups.
···
spec:
hosts:
- {name: master1, address: 192.168.0.3, internalAddress: 192.168.0.3, user: root, password: Qcloud@123}
- {name: node1, address: 192.168.0.4, internalAddress: 192.168.0.4, user: root, password: Qcloud@123}
- {name: node2, address: 192.168.0.5, internalAddress: 192.168.0.5, user: root, password: Qcloud@123}
roleGroups:
etcd:
- master1
master:
- master1
worker:
- node1
- node2
···
step 3.Run the following command:
./kk add nodes -f sample.yaml
After installation completes, you can view the new node and its information on the KubeSphere console. On the cluster management page, select Cluster Nodes under Node Management in the left menu, or run kubectl get node to check the change.
$ kubectl get node
NAME STATUS ROLES AGE VERSION
master1 Ready master,worker 20d v1.17.9
node1 Ready worker 31h v1.17.9
node2 Ready worker 31h v1.17.9
Add master nodes for high availability
The steps for adding master nodes are largely the same as for adding worker nodes, but you need to configure a load balancer for the cluster.
step 1.Create the configuration file with KubeKey
./kk create config --from-cluster
step 2.Add the information of the new nodes and the load balancer to the file
apiVersion: kubekey.kubesphere.io/v1alpha1
kind: Cluster
metadata:
name: sample
spec:
hosts:
# You should complete the ssh information of the hosts
- {name: master1, address: 172.16.0.2, internalAddress: 172.16.0.2, user: root, password: Testing123}
- {name: master2, address: 172.16.0.5, internalAddress: 172.16.0.5, user: root, password: Testing123}
- {name: master3, address: 172.16.0.6, internalAddress: 172.16.0.6, user: root, password: Testing123}
- {name: worker1, address: 172.16.0.3, internalAddress: 172.16.0.3, user: root, password: Testing123}
- {name: worker2, address: 172.16.0.4, internalAddress: 172.16.0.4, user: root, password: Testing123}
- {name: worker3, address: 172.16.0.7, internalAddress: 172.16.0.7, user: root, password: Testing123}
roleGroups:
etcd:
- master1
- master2
- master3
master:
- master1
- master2
- master3
worker:
- worker1
- worker2
- worker3
controlPlaneEndpoint:
# If loadbalancer is used, 'address' should be set to loadbalancer's ip.
domain: lb.kubesphere.local
address: 172.16.0.253
port: 6443
kubernetes:
version: v1.17.9
imageRepo: kubesphere
clusterName: cluster.local
proxyMode: ipvs
masqueradeAll: false
maxPods: 110
nodeCidrMaskSize: 24
network:
plugin: calico
kubePodsCIDR: 10.233.64.0/18
kubeServiceCIDR: 10.233.0.0/18
registry:
privateRegistry: ""
step 3.Save the file and run the following command to apply the configuration.
./kk add nodes -f sample.yaml
6.2.Remove nodes
Cordon a node
Marking a node as unschedulable prevents the scheduler from placing new Pods on it, while the node's existing Pods are not affected. This is useful as a preparation step before a node reboot or other maintenance.
Log in to the console as admin and go to the cluster management page. To mark a node as unschedulable, select Cluster Nodes under Node Management in the left menu, find the node you want to remove from the cluster, and click the Cordon button. Alternatively, run kubectl cordon $NODENAME directly.
DaemonSet Pods can still run on unschedulable nodes: DaemonSets typically provide node-local services that should keep running even while workload applications are being evicted.
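Before removing a node it is common to cordon and then drain it so its workloads are rescheduled elsewhere first (standard kubectl commands; kubesphere-06 is just an example node name):
kubectl cordon kubesphere-06
kubectl drain kubesphere-06 --ignore-daemonsets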
Delete a node
To delete a node, you first need the cluster's configuration file (the one used when the cluster was set up).
./kk create config --from-cluster
Make sure all the host information is present in that configuration file, then run the following command to delete the node.
./kk delete node <nodeName> -f config-sample.yaml
7.Uninstall KubeSphere and Kubernetes
Uninstalling KubeSphere and Kubernetes removes them from your machines. The operation is irreversible and makes no backups, so proceed with caution.
To delete the cluster, run the following commands.
If KubeSphere was installed following the quick start (all-in-one):
./kk delete cluster
If KubeSphere was installed in advanced mode (created with a configuration file):
./kk delete cluster [-f config-sample.yaml]
8.Cluster Upgrade
Single-node clusters
Upgrade the cluster to a specified version.
./kk upgrade [--with-kubernetes version] [--with-kubesphere version]
- --with-kubernetes specifies the target Kubernetes version.
- --with-kubesphere specifies the target KubeSphere version.
Multi-node clusters
Upgrade the cluster using the specified configuration file.
./kk upgrade [--with-kubernetes version] [--with-kubesphere version] [(-f | --file) path]
- --with-kubernetes specifies the target Kubernetes version.
- --with-kubesphere specifies the target KubeSphere version.
- -f specifies the configuration file created when the cluster was installed.
Note: upgrading a multi-node cluster requires a configuration file. If the cluster was not created by KubeKey, or the configuration file created at installation time has been lost, regenerate it with the following method.
Get the cluster info and generate KubeKey's configuration file (optional):
./kk create config [--from-cluster] [(-f | --file) path] [--kubeconfig path]
- --from-cluster generates the configuration file from an existing cluster.
- -f specifies the path of the generated configuration file.
- --kubeconfig specifies the cluster kubeconfig file.
Since the full cluster configuration cannot be retrieved automatically, please complete the generated configuration file according to the actual cluster information.
9.Enable kubectl Autocompletion
KubeKey does not enable kubectl autocompletion. Make sure bash-completion is installed and working (the apt-get command below is for Ubuntu/Debian; on CentOS install it with yum install bash-completion).
# Install bash-completion
apt-get install bash-completion
# Source the completion script in your ~/.bashrc file
echo 'source <(kubectl completion bash)' >>~/.bashrc
# Add the completion script to the /etc/bash_completion.d directory
kubectl completion bash >/etc/bash_completion.d/kubectl
KubeSphere v3.1.0 Image List
##k8s-images
kubesphere/kube-apiserver:v1.20.4
kubesphere/kube-scheduler:v1.20.4
kubesphere/kube-proxy:v1.20.4
kubesphere/kube-controller-manager:v1.20.4
kubesphere/kube-apiserver:v1.19.8
kubesphere/kube-scheduler:v1.19.8
kubesphere/kube-proxy:v1.19.8
kubesphere/kube-controller-manager:v1.19.8
kubesphere/kube-apiserver:v1.19.9
kubesphere/kube-scheduler:v1.19.9
kubesphere/kube-proxy:v1.19.9
kubesphere/kube-controller-manager:v1.19.9
kubesphere/kube-apiserver:v1.18.6
kubesphere/kube-scheduler:v1.18.6
kubesphere/kube-proxy:v1.18.6
kubesphere/kube-controller-manager:v1.18.6
kubesphere/kube-apiserver:v1.17.9
kubesphere/kube-scheduler:v1.17.9
kubesphere/kube-proxy:v1.17.9
kubesphere/kube-controller-manager:v1.17.9
kubesphere/pause:3.1
kubesphere/pause:3.2
kubesphere/etcd:v3.4.13
calico/cni:v3.16.3
calico/kube-controllers:v3.16.3
calico/node:v3.16.3
calico/pod2daemon-flexvol:v3.16.3
coredns/coredns:1.6.9
kubesphere/k8s-dns-node-cache:1.15.12
openebs/provisioner-localpv:2.3.0
openebs/linux-utils:2.3.0
kubesphere/nfs-client-provisioner:v3.1.0-k8s1.11
##csi-images
csiplugin/csi-neonsan:v1.2.0
csiplugin/csi-neonsan-ubuntu:v1.2.0
csiplugin/csi-neonsan-centos:v1.2.0
csiplugin/csi-provisioner:v1.5.0
csiplugin/csi-attacher:v2.1.1
csiplugin/csi-resizer:v0.4.0
csiplugin/csi-snapshotter:v2.0.1
csiplugin/csi-node-driver-registrar:v1.2.0
csiplugin/csi-qingcloud:v1.2.0
##kubesphere-images
kubesphere/ks-apiserver:v3.1.0
kubesphere/ks-console:v3.1.0
kubesphere/ks-controller-manager:v3.1.0
kubesphere/ks-installer:v3.1.0
kubesphere/kubectl:v1.19.0
redis:5.0.5-alpine
alpine:3.10.4
haproxy:2.0.4
nginx:1.14-alpine
minio/minio:RELEASE.2019-08-07T01-59-21Z
minio/mc:RELEASE.2019-08-07T23-14-43Z
mirrorgooglecontainers/defaultbackend-amd64:1.4
kubesphere/nginx-ingress-controller:v0.35.0
osixia/openldap:1.3.0
csiplugin/snapshot-controller:v2.0.1
kubesphere/kubefed:v0.7.0
kubesphere/tower:v0.2.0
kubesphere/prometheus-config-reloader:v0.42.1
kubesphere/prometheus-operator:v0.42.1
prom/alertmanager:v0.21.0
prom/prometheus:v2.26.0
prom/node-exporter:v0.18.1
kubesphere/ks-alerting-migration:v3.1.0
jimmidyson/configmap-reload:v0.3.0
kubesphere/notification-manager-operator:v1.0.0
kubesphere/notification-manager:v1.0.0
kubesphere/metrics-server:v0.4.2
kubesphere/kube-rbac-proxy:v0.8.0
kubesphere/kube-state-metrics:v1.9.7
openebs/provisioner-localpv:2.3.0
thanosio/thanos:v0.18.0
grafana/grafana:7.4.3
##kubesphere-logging-images
kubesphere/elasticsearch-oss:6.7.0-1
kubesphere/elasticsearch-curator:v5.7.6
kubesphere/fluentbit-operator:v0.5.0
kubesphere/fluentbit-operator:migrator
kubesphere/fluent-bit:v1.6.9
elastic/filebeat:6.7.0
kubesphere/kube-auditing-operator:v0.1.2
kubesphere/kube-auditing-webhook:v0.1.2
kubesphere/kube-events-exporter:v0.1.0
kubesphere/kube-events-operator:v0.1.0
kubesphere/kube-events-ruler:v0.2.0
kubesphere/log-sidecar-injector:1.1
docker:19.03
##istio-images
istio/pilot:1.6.10
istio/proxyv2:1.6.10
jaegertracing/jaeger-agent:1.17
jaegertracing/jaeger-collector:1.17
jaegertracing/jaeger-es-index-cleaner:1.17
jaegertracing/jaeger-operator:1.17.1
jaegertracing/jaeger-query:1.17
kubesphere/kiali:v1.26.1
kubesphere/kiali-operator:v1.26.1
##kubesphere-devops-images
kubesphere/ks-jenkins:2.249.1
jenkins/jnlp-slave:3.27-1
kubesphere/s2ioperator:v3.1.0
kubesphere/s2irun:v2.1.1
kubesphere/builder-base:v3.1.0
kubesphere/builder-nodejs:v3.1.0
kubesphere/builder-maven:v3.1.0
kubesphere/builder-go:v3.1.0
kubesphere/s2i-binary:v2.1.0
kubesphere/tomcat85-java11-centos7:v2.1.0
kubesphere/tomcat85-java11-runtime:v2.1.0
kubesphere/tomcat85-java8-centos7:v2.1.0
kubesphere/tomcat85-java8-runtime:v2.1.0
kubesphere/java-11-centos7:v2.1.0
kubesphere/java-8-centos7:v2.1.0
kubesphere/java-8-runtime:v2.1.0
kubesphere/java-11-runtime:v2.1.0
kubesphere/nodejs-8-centos7:v2.1.0
kubesphere/nodejs-6-centos7:v2.1.0
kubesphere/nodejs-4-centos7:v2.1.0
kubesphere/python-36-centos7:v2.1.0
kubesphere/python-35-centos7:v2.1.0
kubesphere/python-34-centos7:v2.1.0
kubesphere/python-27-centos7:v2.1.0
##openpitrix-images
kubesphere/openpitrix-jobs:v3.1.0
##weave-scope-images
weaveworks/scope:1.13.0
##kubeedge-images
kubeedge/cloudcore:v1.6.1
kubesphere/edge-watcher:v0.1.0
kubesphere/kube-rbac-proxy:v0.5.0
kubesphere/edge-watcher-agent:v0.1.0
##example-images-images
kubesphere/examples-bookinfo-productpage-v1:1.16.2
kubesphere/examples-bookinfo-reviews-v1:1.16.2
kubesphere/examples-bookinfo-reviews-v2:1.16.2
kubesphere/examples-bookinfo-reviews-v3:1.16.2
kubesphere/examples-bookinfo-details-v1:1.16.2
kubesphere/examples-bookinfo-ratings-v1:1.16.3
busybox:1.31.1
joosthofman/wget:1.0
kubesphere/netshoot:v1.0
nginxdemos/hello:plain-text
wordpress:4.8-apache
mirrorgooglecontainers/hpa-example:latest
java:openjdk-8-jre-alpine
fluent/fluentd:v1.4.2-2.0
perl:latest