Step 1: Prepare the Ubuntu 18.04 System
Install Ubuntu 18.04 and log in. You can see it is a clean 18.04.6 LTS system:
PS C:\Users\Matt> ssh jackie@192.168.117.128
The authenticity of host '192.168.117.128 (192.168.117.128)' can't be established.
ECDSA key fingerprint is SHA256:xguq091Nii0er2EJ6r6EzQ2BilQG452sIfAu+plJXRY.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '192.168.117.128' (ECDSA) to the list of known hosts.
jackie@192.168.117.128's password:
Welcome to Ubuntu 18.04.6 LTS (GNU/Linux 4.15.0-177-generic x86_64)
Install dependencies
Install the dependency packages, including socat, conntrack, ebtables, and ipset.
jackie@jackiehost:~$ sudo apt install socat
[sudo] password for jackie:
Reading package lists... Done
Building dependency tree
Reading state information... Done
...
jackie@jackiehost:~$ sudo apt install conntrack
Reading package lists... Done
Building dependency tree
Reading state information...
...
jackie@jackiehost:~$ sudo apt install ebtables ipset
Reading package lists... Done
Building dependency tree
Reading state information... Done
...
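The four packages can also be installed in a single command (`sudo apt install -y socat conntrack ebtables ipset`). A quick way to confirm they all ended up on the PATH — a small sketch, not part of the original session:

```shell
# Sanity-check that the dependencies KubeKey expects are on PATH.
# (Based on the packages installed above; adjust the list as needed.)
for bin in socat conntrack ebtables ipset; do
  if command -v "$bin" > /dev/null 2>&1; then
    echo "$bin: ok"
  else
    echo "$bin: missing"
  fi
done
```

Any line reporting "missing" means the corresponding apt install did not succeed.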
Install and configure a registry mirror
To pull and install Docker images faster, you can configure a registry mirror (accelerator).
The mirror works through the Docker daemon, so Docker needs to be installed first.
jackie@jackiehost:~$ sudo apt install docker.io
Reading package lists... Done
Building dependency tree
Reading state information... Done
Configure the Alibaba Cloud registry mirror
jackie@jackiehost:~$ sudo mkdir -p /etc/docker
jackie@jackiehost:~$ sudo vi /etc/docker/daemon.json
jackie@jackiehost:~$ cat /etc/docker/daemon.json
{
"registry-mirrors": ["https://xxxxxx.mirror.aliyuncs.com"]
}
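A syntax error in daemon.json prevents the Docker daemon from starting, so it is worth validating the file before restarting Docker. A minimal sketch (the /tmp path is illustrative; the mirror URL placeholder is the one from above):

```shell
# Write the mirror config to a scratch path and check it parses as JSON;
# a malformed /etc/docker/daemon.json stops dockerd from starting.
cat > /tmp/daemon.json <<'EOF'
{
  "registry-mirrors": ["https://xxxxxx.mirror.aliyuncs.com"]
}
EOF
python3 -m json.tool /tmp/daemon.json > /dev/null && echo "daemon.json: valid JSON"
# Then apply it for real:
#   sudo systemctl daemon-reload && sudo systemctl restart docker
```

After restarting, `docker info` should list the mirror under "Registry Mirrors".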
To allow the nodes to connect to each other, enable the OpenSSH service and create an SSH key pair.
jackie@jackiehost:~$ sudo su
root@jackiehost:/home/jackie# ssh-keygen -t rsa -C "jackiewuuuu@163.com"
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:JVY5JkSC0dXuXkHarpKV4CXqL3i0P4kcjEkpYgc29e0 jackiewuuuu@163.com
The key's randomart image is:
+---[RSA 2048]----+
| ...+.++... |
| + o o...= . |
|. o o .o+.= |
|....o ..oo+ o |
|...o + ES= o . |
| o = . + o |
| = + = o |
| . B = o |
| . +oo |
+----[SHA256]-----+
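In scripts, the interactive prompts above can be skipped by passing the key path and an empty passphrase on the command line. A sketch (the /tmp output path is illustrative; the comment string is the one used above):

```shell
# Non-interactive equivalent of the ssh-keygen session above:
# -N "" sets an empty passphrase, -f names the key file, -q silences output.
ssh-keygen -q -t rsa -b 2048 -N "" -f /tmp/demo_id_rsa -C "jackiewuuuu@163.com"
ls -l /tmp/demo_id_rsa /tmp/demo_id_rsa.pub
```

For a multi-node cluster, copy the public key to each node (e.g. with `ssh-copy-id root@<node-ip>`, where the IP is your own) so KubeKey can log in without a password.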
Step 2: Install KubeKey
First set the environment variable, then download the KubeKey binary.
jackie@jackiehost:~$ export KKZONE=cn
jackie@jackiehost:~$ curl -sfL https://get-kk.kubesphere.io | VERSION=v2.0.0 sh -
Downloading kubekey v2.0.0 from https://kubernetes.pek3b.qingstor.com/kubekey/releases/download/v2.0.0/kubekey-v2.0.0-linux-amd64.tar.gz ...
Kubekey v2.0.0 Download Complete!
jackie@jackiehost:~$ chmod +x kk
Step 3: Install KubeSphere
Everything is now in place, so we can use the kk binary to install KubeSphere and Kubernetes in all-in-one mode.
jackie@jackiehost:~$ sudo su
root@jackiehost:/home/jackie# ./kk create cluster --with-kubernetes v1.21.5 --with-kubesphere v3.2.1
_ __ _ _ __
| | / / | | | | / /
| |/ / _ _| |__ ___| |/ / ___ _ _
| \| | | | '_ \ / _ \ \ / _ \ | | |
| |\ \ |_| | |_) | __/ |\ \ __/ |_| |
\_| \_/\__,_|_.__/ \___\_| \_/\___|\__, |
__/ |
|___/
17:07:10 UTC [NodePreCheckModule] A pre-check on nodes
17:07:11 UTC success: [jackiehost]
17:07:11 UTC [ConfirmModule] Display confirmation form
+------------+------+------+---------+----------+-------+-------+-----------+--------+---------+------------+-------------+------------------+--------------+
| name | sudo | curl | openssl | ebtables | socat | ipset | conntrack | chrony | docker | nfs client | ceph client | glusterfs client | time |
+------------+------+------+---------+----------+-------+-------+-----------+--------+---------+------------+-------------+------------------+--------------+
| jackiehost | y | y | y | y | y | y | y | | 20.10.7 | | | | UTC 17:07:11 |
+------------+------+------+---------+----------+-------+-------+-----------+--------+---------+------------+-------------+------------------+--------------+
This is a simple check of your environment.
Before installation, you should ensure that your machines meet all requirements specified at
https://github.com/kubesphere/kubekey#requirements-and-recommendations
Continue this installation? [yes/no]:
After confirming that the environment checks pass, type yes to start downloading and installing KubeSphere.
Continue this installation? [yes/no]: yes
17:08:02 UTC success: [LocalHost]
17:08:02 UTC [NodeBinariesModule] Download installation binaries
17:08:02 UTC message: [localhost]
downloading amd64 kubeadm v1.21.5 ...
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 42.7M 100 42.7M 0 0 3161k 0 0:00:13 0:00:13 --:--:-- 4007k
17:08:21 UTC message: [localhost]
downloading amd64 kubelet v1.21.5 ...
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 112M 100 112M 0 0 4799k 0 0:00:24 0:00:24 --:--:-- 5506k
17:09:01 UTC message: [localhost]
downloading amd64 kubectl v1.21.5 ...
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 44.4M 100 44.4M 0 0 173k 0 0:04:23 0:04:23 --:--:-- 366k
17:13:24 UTC message: [localhost]
downloading amd64 helm v3.6.3 ...
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
2 13.0M 2 384k 0 0 1082 0 3:31:03 0:06:03 3:25:00 0
15:19:47 UTC success: [LocalHost]
15:19:47 UTC [NodeBinariesModule] Download installation binaries
15:19:47 UTC message: [localhost]
downloading amd64 kubeadm v1.21.5 ...
15:19:47 UTC message: [localhost]
kubeadm is existed
15:19:47 UTC message: [localhost]
downloading amd64 kubelet v1.21.5 ...
15:19:48 UTC message: [localhost]
kubelet is existed
15:19:48 UTC message: [localhost]
downloading amd64 kubectl v1.21.5 ...
15:19:48 UTC message: [localhost]
kubectl is existed
15:19:48 UTC message: [localhost]
downloading amd64 helm v3.6.3 ...
15:19:48 UTC message: [localhost]
helm is existed
15:19:48 UTC message: [localhost]
downloading amd64 kubecni v0.9.1 ...
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- 0:00:01 --:--:-- 0
4 37.9M 4 1873k 0 0 11143 0 0:59:29 0:02:52 0:56:37 16762
8 37.9M 8 3460k 0 0 6246 0 1:46:07 0:09:27 1:36:40 0
8 37.9M 8 3460k 0 0 6235 0 1:46:18 0:09:28 1:36:50 0
8 37.9M 8 3460k 0 0 492 0 22:27:16 1:59:58 20:27:18 017:19:47 UTC failed: [LocalHost]
error: Pipeline[CreateClusterPipeline] execute failed: Module[NodeBinariesModule] exec failed:
failed: [LocalHost] execute task timeout, Timeout=7200000000000
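The Timeout value in the error is a Go duration printed in nanoseconds; converting it suggests KubeKey allows two hours for this download module (a quick check, not part of the original session):

```shell
# Convert the Timeout from the error message: nanoseconds -> seconds -> hours
timeout_ns=7200000000000
echo "$(( timeout_ns / 1000000000 )) seconds"        # 7200 seconds
echo "$(( timeout_ns / 1000000000 / 3600 )) hours"   # 2 hours
```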
The first installation run failed and exited. Judging from the log, it was caused by a timeout, most likely due to a poor network connection.
Re-run the command and try again!
root@jackiehost:/home/jackie# ./kk create cluster --with-kubernetes v1.21.5 --with-kubesphere v3.2.1
_ __ _ _ __
| | / / | | | | / /
| |/ / _ _| |__ ___| |/ / ___ _ _
| \| | | | '_ \ / _ \ \ / _ \ | | |
| |\ \ |_| | |_) | __/ |\ \ __/ |_| |
\_| \_/\__,_|_.__/ \___\_| \_/\___|\__, |
__/ |
|___/
23:22:13 UTC [NodePreCheckModule] A pre-check on nodes
23:22:13 UTC success: [jackiehost]
23:22:13 UTC [ConfirmModule] Display confirmation form
+------------+------+------+---------+----------+-------+-------+-----------+--------+---------+------------+-------------+------------------+--------------+
| name | sudo | curl | openssl | ebtables | socat | ipset | conntrack | chrony | docker | nfs client | ceph client | glusterfs client | time |
+------------+------+------+---------+----------+-------+-------+-----------+--------+---------+------------+-------------+------------------+--------------+
| jackiehost | y | y | y | y | y | y | y | | 20.10.7 | | | | UTC 23:22:13 |
+------------+------+------+---------+----------+-------+-------+-----------+--------+---------+------------+-------------+------------------+--------------+
This is a simple check of your environment.
Before installation, you should ensure that your machines meet all requirements specified at
https://github.com/kubesphere/kubekey#requirements-and-recommendations
Continue this installation? [yes/no]: yes
23:22:18 UTC success: [LocalHost]
23:22:18 UTC [NodeBinariesModule] Download installation binaries
23:22:18 UTC message: [localhost]
downloading amd64 kubeadm v1.21.5 ...
23:22:18 UTC message: [localhost]
kubeadm is existed
23:22:18 UTC message: [localhost]
downloading amd64 kubelet v1.21.5 ...
23:22:18 UTC message: [localhost]
kubelet is existed
23:22:18 UTC message: [localhost]
downloading amd64 kubectl v1.21.5 ...
23:22:18 UTC message: [localhost]
kubectl is existed
23:22:18 UTC message: [localhost]
downloading amd64 helm v3.6.3 ...
23:22:18 UTC message: [localhost]
helm is existed
23:22:18 UTC message: [localhost]
downloading amd64 kubecni v0.9.1 ...
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
100 37.9M 100 37.9M 0 0 5848k 0 0:00:06 0:00:06 --:--:-- 7812k
23:22:25 UTC message: [localhost]
downloading amd64 docker 20.10.8 ...
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 58.1M 100 58.1M 0 0 6842k 0 0:00:08 0:00:08 --:--:-- 10.2M
23:22:34 UTC message: [localhost]
downloading amd64 crictl v1.22.0 ...
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
100 17.8M 100 17.8M 0 0 4463k 0 0:00:04 0:00:04 --:--:-- 7952k
23:22:38 UTC message: [localhost]
downloading amd64 etcd v3.4.13 ...
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
100 16.5M 100 16.5M 0 0 4407k 0 0:00:03 0:00:03 --:--:-- 8639k
23:22:42 UTC success: [LocalHost]
23:22:42 UTC [ConfigureOSModule] Prepare to init OS
23:22:43 UTC success: [jackiehost]
23:22:43 UTC [ConfigureOSModule] Generate init os script
23:22:43 UTC success: [jackiehost]
23:22:43 UTC [ConfigureOSModule] Exec init os script
23:22:45 UTC stdout: [jackiehost]
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
vm.max_map_count = 262144
vm.swappiness = 1
fs.inotify.max_user_instances = 524288
kernel.pid_max = 65535
no crontab for root
23:22:45 UTC success: [jackiehost]
23:22:45 UTC [ConfigureOSModule] configure the ntp server for each node
23:22:45 UTC skipped: [jackiehost]
23:22:45 UTC [KubernetesStatusModule] Get kubernetes cluster status
23:22:45 UTC success: [jackiehost]
23:22:45 UTC [InstallContainerModule] Sync docker binaries
23:22:45 UTC skipped: [jackiehost]
23:22:45 UTC [InstallContainerModule] Generate containerd service
23:22:45 UTC skipped: [jackiehost]
23:22:45 UTC [InstallContainerModule] Enable containerd
23:22:45 UTC skipped: [jackiehost]
23:22:45 UTC [InstallContainerModule] Generate docker service
23:22:45 UTC skipped: [jackiehost]
23:22:45 UTC [InstallContainerModule] Generate docker config
23:22:45 UTC skipped: [jackiehost]
23:22:45 UTC [InstallContainerModule] Enable docker
23:22:45 UTC skipped: [jackiehost]
23:22:45 UTC [InstallContainerModule] Add auths to container runtime
23:22:45 UTC skipped: [jackiehost]
23:22:45 UTC [PullModule] Start to pull images on all nodes
23:22:45 UTC message: [jackiehost]
downloading image: kubesphere/pause:3.4.1
23:23:07 UTC message: [jackiehost]
downloading image: kubesphere/kube-apiserver:v1.21.5
23:23:30 UTC message: [jackiehost]
downloading image: kubesphere/kube-controller-manager:v1.21.5
23:23:53 UTC message: [jackiehost]
downloading image: kubesphere/kube-scheduler:v1.21.5
23:24:13 UTC message: [jackiehost]
downloading image: kubesphere/kube-proxy:v1.21.5
23:24:36 UTC message: [jackiehost]
downloading image: coredns/coredns:1.8.0
23:24:54 UTC message: [jackiehost]
downloading image: kubesphere/k8s-dns-node-cache:1.15.12
23:25:17 UTC message: [jackiehost]
downloading image: calico/kube-controllers:v3.20.0
23:25:38 UTC message: [jackiehost]
downloading image: calico/cni:v3.20.0
23:25:47 UTC message: [jackiehost]
downloading image: calico/node:v3.20.0
23:26:15 UTC message: [jackiehost]
downloading image: calico/pod2daemon-flexvol:v3.20.0
23:26:34 UTC success: [jackiehost]
23:26:34 UTC [ETCDPreCheckModule] Get etcd status
23:26:34 UTC success: [jackiehost]
23:26:34 UTC [CertsModule] Fetcd etcd certs
23:26:34 UTC success: [jackiehost]
23:26:34 UTC [CertsModule] Generate etcd Certs
[certs] Generating "ca" certificate and key
[certs] admin-jackiehost serving cert is signed for DNS names [etcd etcd.kube-system etcd.kube-system.svc etcd.kube-system.svc.cluster.local jackiehost lb.kubesphere.local localhost] and IPs [127.0.0.1 ::1 192.168.117.128]
[certs] member-jackiehost serving cert is signed for DNS names [etcd etcd.kube-system etcd.kube-system.svc etcd.kube-system.svc.cluster.local jackiehost lb.kubesphere.local localhost] and IPs [127.0.0.1 ::1 192.168.117.128]
[certs] node-jackiehost serving cert is signed for DNS names [etcd etcd.kube-system etcd.kube-system.svc etcd.kube-system.svc.cluster.local jackiehost lb.kubesphere.local localhost] and IPs [127.0.0.1 ::1 192.168.117.128]
23:26:34 UTC success: [LocalHost]
23:26:34 UTC [CertsModule] Synchronize certs file
23:26:35 UTC success: [jackiehost]
23:26:35 UTC [CertsModule] Synchronize certs file to master
23:26:35 UTC skipped: [jackiehost]
23:26:35 UTC [InstallETCDBinaryModule] Install etcd using binary
23:26:35 UTC success: [jackiehost]
23:26:35 UTC [InstallETCDBinaryModule] Generate etcd service
23:26:35 UTC success: [jackiehost]
23:26:35 UTC [InstallETCDBinaryModule] Generate access address
23:26:35 UTC success: [jackiehost]
23:26:35 UTC [ETCDConfigureModule] Health check on exist etcd
23:26:35 UTC skipped: [jackiehost]
23:26:35 UTC [ETCDConfigureModule] Generate etcd.env config on new etcd
23:26:35 UTC success: [jackiehost]
23:26:35 UTC [ETCDConfigureModule] Refresh etcd.env config on all etcd
23:26:35 UTC success: [jackiehost]
23:26:35 UTC [ETCDConfigureModule] Restart etcd
23:26:42 UTC stdout: [jackiehost]
Created symlink /etc/systemd/system/multi-user.target.wants/etcd.service → /etc/systemd/system/etcd.service.
23:26:42 UTC success: [jackiehost]
23:26:42 UTC [ETCDConfigureModule] Health check on all etcd
23:26:42 UTC success: [jackiehost]
23:26:42 UTC [ETCDConfigureModule] Refresh etcd.env config to exist mode on all etcd
23:26:42 UTC success: [jackiehost]
23:26:42 UTC [ETCDConfigureModule] Health check on all etcd
23:26:42 UTC success: [jackiehost]
23:26:42 UTC [ETCDBackupModule] Backup etcd data regularly
23:26:48 UTC success: [jackiehost]
23:26:48 UTC [InstallKubeBinariesModule] Synchronize kubernetes binaries
23:27:01 UTC success: [jackiehost]
23:27:01 UTC [InstallKubeBinariesModule] Synchronize kubelet
23:27:01 UTC success: [jackiehost]
23:27:01 UTC [InstallKubeBinariesModule] Generate kubelet service
23:27:01 UTC success: [jackiehost]
23:27:01 UTC [InstallKubeBinariesModule] Enable kubelet service
23:27:01 UTC success: [jackiehost]
23:27:01 UTC [InstallKubeBinariesModule] Generate kubelet env
23:27:01 UTC success: [jackiehost]
23:27:01 UTC [InitKubernetesModule] Generate kubeadm config
23:27:01 UTC success: [jackiehost]
23:27:01 UTC [InitKubernetesModule] Init cluster using kubeadm
23:27:15 UTC stdout: [jackiehost]
W0519 23:27:01.962580 37524 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
W0519 23:27:01.997650 37524 kubelet.go:215] detected "cgroupfs" as the Docker cgroup driver, the provided value "systemd" in "KubeletConfiguration" will be overrided
[init] Using Kubernetes version: v1.21.5
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [jackiehost jackiehost.cluster.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.kubesphere.local localhost] and IPs [10.233.0.1 192.168.117.128 127.0.0.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] External etcd mode: Skipping etcd/ca certificate authority generation
[certs] External etcd mode: Skipping etcd/server certificate generation
[certs] External etcd mode: Skipping etcd/peer certificate generation
[certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation
[certs] External etcd mode: Skipping apiserver-etcd-client certificate generation
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 10.501881 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.21" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node jackiehost as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node jackiehost as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 8f8ltg.xpmur5a7wzfbuact
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:
kubeadm join lb.kubesphere.local:6443 --token 8f8ltg.xpmur5a7wzfbuact \
--discovery-token-ca-cert-hash sha256:e26575fbe165991985b644dffa48b98146dad6f3eeb891714acaf00d5096890e \
--control-plane
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join lb.kubesphere.local:6443 --token 8f8ltg.xpmur5a7wzfbuact \
--discovery-token-ca-cert-hash sha256:e26575fbe165991985b644dffa48b98146dad6f3eeb891714acaf00d5096890e
23:27:15 UTC success: [jackiehost]
23:27:15 UTC [InitKubernetesModule] Copy admin.conf to ~/.kube/config
23:27:16 UTC success: [jackiehost]
23:27:16 UTC [InitKubernetesModule] Remove master taint
23:27:16 UTC stdout: [jackiehost]
node/jackiehost untainted
23:27:16 UTC success: [jackiehost]
23:27:16 UTC [InitKubernetesModule] Add worker label
23:27:16 UTC stdout: [jackiehost]
node/jackiehost labeled
23:27:16 UTC success: [jackiehost]
23:27:16 UTC [ClusterDNSModule] Generate coredns service
23:27:17 UTC success: [jackiehost]
23:27:17 UTC [ClusterDNSModule] Override coredns service
23:27:17 UTC stdout: [jackiehost]
service "kube-dns" deleted
23:27:17 UTC stdout: [jackiehost]
service/coredns created
Warning: resource clusterroles/system:coredns is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
clusterrole.rbac.authorization.k8s.io/system:coredns configured
23:27:17 UTC success: [jackiehost]
23:27:17 UTC [ClusterDNSModule] Generate nodelocaldns
23:27:17 UTC success: [jackiehost]
23:27:17 UTC [ClusterDNSModule] Deploy nodelocaldns
23:27:17 UTC stdout: [jackiehost]
serviceaccount/nodelocaldns created
daemonset.apps/nodelocaldns created
23:27:17 UTC success: [jackiehost]
23:27:17 UTC [ClusterDNSModule] Generate nodelocaldns configmap
23:27:18 UTC success: [jackiehost]
23:27:18 UTC [ClusterDNSModule] Apply nodelocaldns configmap
23:27:18 UTC stdout: [jackiehost]
configmap/nodelocaldns created
23:27:18 UTC success: [jackiehost]
23:27:18 UTC [KubernetesStatusModule] Get kubernetes cluster status
23:27:18 UTC stdout: [jackiehost]
v1.21.5
23:27:18 UTC stdout: [jackiehost]
jackiehost v1.21.5 [map[address:192.168.117.128 type:InternalIP] map[address:jackiehost type:Hostname]]
23:27:22 UTC stdout: [jackiehost]
I0519 23:27:20.518007 39372 version.go:254] remote version is much newer: v1.24.0; falling back to: stable-1.21
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
25e2d65baa8ddfd2d56ae98e9e3fcb2d23374222f1ad384754409af9b7a5664f
23:27:22 UTC stdout: [jackiehost]
secret/kubeadm-certs patched
23:27:22 UTC stdout: [jackiehost]
secret/kubeadm-certs patched
23:27:22 UTC stdout: [jackiehost]
secret/kubeadm-certs patched
23:27:22 UTC stdout: [jackiehost]
a9angq.zuw64ssl5xpbsww2
23:27:22 UTC success: [jackiehost]
23:27:22 UTC [JoinNodesModule] Generate kubeadm config
23:27:22 UTC skipped: [jackiehost]
23:27:22 UTC [JoinNodesModule] Join control-plane node
23:27:22 UTC skipped: [jackiehost]
23:27:22 UTC [JoinNodesModule] Join worker node
23:27:22 UTC skipped: [jackiehost]
23:27:22 UTC [JoinNodesModule] Copy admin.conf to ~/.kube/config
23:27:22 UTC skipped: [jackiehost]
23:27:22 UTC [JoinNodesModule] Remove master taint
23:27:22 UTC skipped: [jackiehost]
23:27:22 UTC [JoinNodesModule] Add worker label to master
23:27:22 UTC skipped: [jackiehost]
23:27:22 UTC [JoinNodesModule] Synchronize kube config to worker
23:27:22 UTC skipped: [jackiehost]
23:27:22 UTC [JoinNodesModule] Add worker label to worker
23:27:22 UTC skipped: [jackiehost]
23:27:22 UTC [DeployNetworkPluginModule] Generate calico
23:27:22 UTC success: [jackiehost]
23:27:22 UTC [DeployNetworkPluginModule] Deploy calico
23:27:23 UTC stdout: [jackiehost]
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
Warning: policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
poddisruptionbudget.policy/calico-kube-controllers created
23:27:23 UTC success: [jackiehost]
23:27:23 UTC [ConfigureKubernetesModule] Configure kubernetes
23:27:23 UTC success: [jackiehost]
23:27:23 UTC [ChownModule] Chown user $HOME/.kube dir
23:27:23 UTC success: [jackiehost]
23:27:23 UTC [AutoRenewCertsModule] Generate k8s certs renew script
23:27:23 UTC success: [jackiehost]
23:27:23 UTC [AutoRenewCertsModule] Generate k8s certs renew service
23:27:23 UTC success: [jackiehost]
23:27:23 UTC [AutoRenewCertsModule] Generate k8s certs renew timer
23:27:23 UTC success: [jackiehost]
23:27:23 UTC [AutoRenewCertsModule] Enable k8s certs renew service
23:27:24 UTC success: [jackiehost]
23:27:24 UTC [SaveKubeConfigModule] Save kube config as a configmap
23:27:24 UTC success: [LocalHost]
23:27:24 UTC [AddonsModule] Install addons
23:27:24 UTC success: [LocalHost]
23:27:24 UTC [DeployStorageClassModule] Generate OpenEBS manifest
23:27:24 UTC success: [jackiehost]
23:27:24 UTC [DeployStorageClassModule] Deploy OpenEBS as cluster default StorageClass
23:27:25 UTC success: [jackiehost]
23:27:25 UTC [DeployKubeSphereModule] Generate KubeSphere ks-installer crd manifests
23:27:25 UTC success: [jackiehost]
23:27:25 UTC [DeployKubeSphereModule] Apply ks-installer
23:27:25 UTC stdout: [jackiehost]
namespace/kubesphere-system created
serviceaccount/ks-installer created
customresourcedefinition.apiextensions.k8s.io/clusterconfigurations.installer.kubesphere.io created
clusterrole.rbac.authorization.k8s.io/ks-installer created
clusterrolebinding.rbac.authorization.k8s.io/ks-installer created
deployment.apps/ks-installer created
23:27:25 UTC success: [jackiehost]
23:27:25 UTC [DeployKubeSphereModule] Add config to ks-installer manifests
23:27:25 UTC success: [jackiehost]
23:27:25 UTC [DeployKubeSphereModule] Create the kubesphere namespace
23:27:25 UTC success: [jackiehost]
23:27:25 UTC [DeployKubeSphereModule] Setup ks-installer config
23:27:25 UTC stdout: [jackiehost]
secret/kube-etcd-client-certs created
23:27:25 UTC success: [jackiehost]
23:27:25 UTC [DeployKubeSphereModule] Apply ks-installer
23:27:28 UTC stdout: [jackiehost]
namespace/kubesphere-system unchanged
serviceaccount/ks-installer unchanged
customresourcedefinition.apiextensions.k8s.io/clusterconfigurations.installer.kubesphere.io unchanged
clusterrole.rbac.authorization.k8s.io/ks-installer unchanged
clusterrolebinding.rbac.authorization.k8s.io/ks-installer unchanged
deployment.apps/ks-installer unchanged
clusterconfiguration.installer.kubesphere.io/ks-installer created
23:27:28 UTC success: [jackiehost]
#####################################################
### Welcome to KubeSphere! ###
#####################################################
Console: http://192.168.117.128:30880
Account: admin
Password: P@88w0rd
NOTES:
1. After you log into the console, please check the
monitoring status of service components in
"Cluster Management". If any service is not
ready, please wait patiently until all components
are up and running.
2. Please change the default password after login.
#####################################################
https://kubesphere.io 2022-05-19 23:34:07
#####################################################
23:34:09 UTC success: [jackiehost]
23:34:09 UTC Pipeline[CreateClusterPipeline] execute successful
Installation is complete.
Please check the result using the command:
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
root@jackiehost:/home/jackie#
This time the installation succeeded!
The process went through without any problems. Open the KubeSphere console via the link in the installation report, and the familiar interface appears again.
Log in with the default password to see the latest console UI.
That basically completes the installation. To summarize the points worth noting:
- The installation is demanding on the network, especially download speed from sites such as GitHub, so using a registry mirror speeds up the process.
- Individual runs may fail, but that does not prevent re-running the installer; after a few attempts the installation can succeed.
- The installation can take quite a long time; keep the server's network connection stable throughout, and the rest is just patience.
That wraps up this out-of-the-box installation walkthrough; the whole process was quite smooth.
Now you can start enjoying KubeSphere.