Environment: Ubuntu 20.04, 4C/16G, 4 machines
Kubernetes v1.20.4, multi-node (one node per machine)
KubeSphere v3.1.0

config-sample.yaml:
apiVersion: kubekey.kubesphere.io/v1alpha1
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: inslabs-16g1, address: 10.0.23.21, internalAddress: 10.0.23.21, user: root, privateKeyPath: "~/.ssh/id_rsa"}
  - {name: inslabs-16g2, address: 10.0.23.22, internalAddress: 10.0.23.22, user: root, privateKeyPath: "~/.ssh/id_rsa"}
  - {name: inslabs-12g1, address: 10.0.23.24, internalAddress: 10.0.23.24, user: root, privateKeyPath: "~/.ssh/id_rsa"}
  - {name: inslabs-16g3, address: 10.0.23.26, internalAddress: 10.0.23.26, user: root, privateKeyPath: "~/.ssh/id_rsa"}
  roleGroups:
    etcd:
    - inslabs-16g2
    master:
    - inslabs-16g2
    worker:
    - inslabs-16g1
    - inslabs-16g2
    - inslabs-16g3
    - inslabs-12g1
  controlPlaneEndpoint:
    domain: inslabs-16g2
    address: ""
    port: 6443
  kubernetes:
    version: v1.20.4
    imageRepo: kubesphere
    clusterName: cluster.local
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
  registry:
    registryMirrors: []
    insecureRegistries: []
  addons: []
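Note that this config only describes the Kubernetes cluster itself: there is no KubeSphere section and addons is empty. For comparison, the KubeKey docs deploy KubeSphere together with the cluster via the --with-kubesphere flag (documented kk usage, shown as a sketch; this is not the command that was run below):

# Sketch per the KubeKey v1.x docs; not what was executed in this log.
./kk create cluster -f config-sample.yaml --with-kubesphere v3.1.0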
Installation log:
➜ kubesphere ./kk create cluster -f config-sample.yaml
+--------------+------+------+---------+----------+-------+-------+-----------+---------+------------+-------------+------------------+--------------+
| name         | sudo | curl | openssl | ebtables | socat | ipset | conntrack | docker  | nfs client | ceph client | glusterfs client | time         |
+--------------+------+------+---------+----------+-------+-------+-----------+---------+------------+-------------+------------------+--------------+
| inslabs-16g3 | y    | y    | y       | y        | y     | y     | y         | 20.10.6 |            |             |                  | CST 12:01:28 |
| inslabs-16g2 | y    | y    | y       | y        | y     | y     | y         | 20.10.6 | y          |             |                  | CST 12:01:28 |
| inslabs-12g1 | y    | y    | y       | y        | y     | y     | y         | 20.10.6 |            |             |                  | CST 12:01:28 |
| inslabs-16g1 | y    | y    | y       | y        | y     | y     | y         | 20.10.6 |            |             |                  | CST 12:01:28 |
+--------------+------+------+---------+----------+-------+-------+-----------+---------+------------+-------------+------------------+--------------+
This is a simple check of your environment.
Before installation, you should ensure that your machines meet all requirements specified at
https://github.com/kubesphere/kubekey#requirements-and-recommendations
Continue this installation? [yes/no]: yes
INFO[12:01:31 CST] Downloading Installation Files
INFO[12:01:31 CST] Downloading kubeadm ...
INFO[12:01:31 CST] Downloading kubelet ...
INFO[12:01:32 CST] Downloading kubectl ...
INFO[12:01:32 CST] Downloading helm ...
INFO[12:01:32 CST] Downloading kubecni ...
INFO[12:01:32 CST] Configuring operating system ...
[inslabs-16g2 10.0.23.22] MSG:
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
vm.max_map_count = 262144
vm.swappiness = 1
fs.inotify.max_user_instances = 524288
[inslabs-16g1 10.0.23.21] MSG:
net.ipv4.ip_forward = 1
vm.swappiness = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
vm.max_map_count = 262144
fs.inotify.max_user_instances = 524288
[inslabs-16g3 10.0.23.26] MSG:
net.ipv4.ip_forward = 1
vm.swappiness = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
vm.max_map_count = 262144
fs.inotify.max_user_instances = 524288
[inslabs-12g1 10.0.23.24] MSG:
net.ipv4.ip_forward = 1
vm.swappiness = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
vm.max_map_count = 262144
fs.inotify.max_user_instances = 524288
INFO[12:01:36 CST] Installing docker ...
INFO[12:01:40 CST] Start to download images on all nodes
[inslabs-16g3] Downloading image: kubesphere/pause:3.2
[inslabs-16g1] Downloading image: kubesphere/pause:3.2
[inslabs-16g2] Downloading image: kubesphere/etcd:v3.4.13
[inslabs-12g1] Downloading image: kubesphere/pause:3.2
[inslabs-16g2] Downloading image: kubesphere/pause:3.2
[inslabs-16g1] Downloading image: kubesphere/kube-proxy:v1.20.4
[inslabs-16g3] Downloading image: kubesphere/kube-proxy:v1.20.4
[inslabs-12g1] Downloading image: kubesphere/kube-proxy:v1.20.4
[inslabs-16g2] Downloading image: kubesphere/kube-apiserver:v1.20.4
[inslabs-16g2] Downloading image: kubesphere/kube-controller-manager:v1.20.4
[inslabs-16g1] Downloading image: coredns/coredns:1.6.9
[inslabs-16g3] Downloading image: coredns/coredns:1.6.9
[inslabs-16g1] Downloading image: kubesphere/k8s-dns-node-cache:1.15.12
[inslabs-16g3] Downloading image: kubesphere/k8s-dns-node-cache:1.15.12
[inslabs-16g1] Downloading image: calico/kube-controllers:v3.16.3
[inslabs-16g3] Downloading image: calico/kube-controllers:v3.16.3
[inslabs-12g1] Downloading image: coredns/coredns:1.6.9
[inslabs-16g1] Downloading image: calico/cni:v3.16.3
[inslabs-16g3] Downloading image: calico/cni:v3.16.3
[inslabs-12g1] Downloading image: kubesphere/k8s-dns-node-cache:1.15.12
[inslabs-16g1] Downloading image: calico/node:v3.16.3
[inslabs-16g2] Downloading image: kubesphere/kube-scheduler:v1.20.4
[inslabs-16g3] Downloading image: calico/node:v3.16.3
[inslabs-12g1] Downloading image: calico/kube-controllers:v3.16.3
[inslabs-16g1] Downloading image: calico/pod2daemon-flexvol:v3.16.3
[inslabs-16g3] Downloading image: calico/pod2daemon-flexvol:v3.16.3
[inslabs-12g1] Downloading image: calico/cni:v3.16.3
[inslabs-12g1] Downloading image: calico/node:v3.16.3
[inslabs-16g2] Downloading image: kubesphere/kube-proxy:v1.20.4
[inslabs-12g1] Downloading image: calico/pod2daemon-flexvol:v3.16.3
[inslabs-16g2] Downloading image: coredns/coredns:1.6.9
[inslabs-16g2] Downloading image: kubesphere/k8s-dns-node-cache:1.15.12
[inslabs-16g2] Downloading image: calico/kube-controllers:v3.16.3
[inslabs-16g2] Downloading image: calico/cni:v3.16.3
[inslabs-16g2] Downloading image: calico/node:v3.16.3
[inslabs-16g2] Downloading image: calico/pod2daemon-flexvol:v3.16.3
INFO[12:03:22 CST] Generating etcd certs
INFO[12:03:23 CST] Synchronizing etcd certs
INFO[12:03:23 CST] Creating etcd service
[inslabs-16g2 10.0.23.22] MSG:
etcd will be installed
INFO[12:03:26 CST] Starting etcd cluster
[inslabs-16g2 10.0.23.22] MSG:
Configuration file will be created
INFO[12:03:27 CST] Refreshing etcd configuration
INFO[12:03:29 CST] Backup etcd data regularly
INFO[12:03:36 CST] Get cluster status
[inslabs-16g2 10.0.23.22] MSG:
Cluster will be created.
INFO[12:03:36 CST] Installing kube binaries
Push /opt/kubesphere/kubekey/v1.20.4/amd64/kubeadm to 10.0.23.22:/tmp/kubekey/kubeadm Done
Push /opt/kubesphere/kubekey/v1.20.4/amd64/kubelet to 10.0.23.22:/tmp/kubekey/kubelet Done
Push /opt/kubesphere/kubekey/v1.20.4/amd64/kubectl to 10.0.23.22:/tmp/kubekey/kubectl Done
Push /opt/kubesphere/kubekey/v1.20.4/amd64/helm to 10.0.23.22:/tmp/kubekey/helm Done
Push /opt/kubesphere/kubekey/v1.20.4/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 10.0.23.22:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz Done
Push /opt/kubesphere/kubekey/v1.20.4/amd64/kubeadm to 10.0.23.26:/tmp/kubekey/kubeadm Done
Push /opt/kubesphere/kubekey/v1.20.4/amd64/kubeadm to 10.0.23.21:/tmp/kubekey/kubeadm Done
Push /opt/kubesphere/kubekey/v1.20.4/amd64/kubeadm to 10.0.23.24:/tmp/kubekey/kubeadm Done
Push /opt/kubesphere/kubekey/v1.20.4/amd64/kubelet to 10.0.23.26:/tmp/kubekey/kubelet Done
Push /opt/kubesphere/kubekey/v1.20.4/amd64/kubelet to 10.0.23.21:/tmp/kubekey/kubelet Done
Push /opt/kubesphere/kubekey/v1.20.4/amd64/kubelet to 10.0.23.24:/tmp/kubekey/kubelet Done
Push /opt/kubesphere/kubekey/v1.20.4/amd64/kubectl to 10.0.23.26:/tmp/kubekey/kubectl Done
Push /opt/kubesphere/kubekey/v1.20.4/amd64/kubectl to 10.0.23.21:/tmp/kubekey/kubectl Done
Push /opt/kubesphere/kubekey/v1.20.4/amd64/kubectl to 10.0.23.24:/tmp/kubekey/kubectl Done
Push /opt/kubesphere/kubekey/v1.20.4/amd64/helm to 10.0.23.26:/tmp/kubekey/helm Done
Push /opt/kubesphere/kubekey/v1.20.4/amd64/helm to 10.0.23.21:/tmp/kubekey/helm Done
Push /opt/kubesphere/kubekey/v1.20.4/amd64/helm to 10.0.23.24:/tmp/kubekey/helm Done
Push /opt/kubesphere/kubekey/v1.20.4/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 10.0.23.26:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz Done
Push /opt/kubesphere/kubekey/v1.20.4/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 10.0.23.21:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz Done
Push /opt/kubesphere/kubekey/v1.20.4/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 10.0.23.24:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz Done
INFO[12:04:05 CST] Initializing kubernetes cluster
[inslabs-16g2 10.0.23.22] MSG:
W0517 12:04:05.917551 1224584 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
[init] Using Kubernetes version: v1.20.4
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.6. Latest validated version: 19.03
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [inslabs-12g1 inslabs-12g1.cluster.local inslabs-16g1 inslabs-16g1.cluster.local inslabs-16g2 inslabs-16g2.cluster.local inslabs-16g3 inslabs-16g3.cluster.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.233.0.1 10.0.23.22 127.0.0.1 10.0.23.21 10.0.23.24 10.0.23.26]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] External etcd mode: Skipping etcd/ca certificate authority generation
[certs] External etcd mode: Skipping etcd/server certificate generation
[certs] External etcd mode: Skipping etcd/peer certificate generation
[certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation
[certs] External etcd mode: Skipping apiserver-etcd-client certificate generation
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 64.502217 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node inslabs-16g2 as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
[mark-control-plane] Marking the node inslabs-16g2 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: n9mlfe.zmclpyz5ajh88zt0
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:
kubeadm join inslabs-16g2:6443 --token n9mlfe.zmclpyz5ajh88zt0 \
--discovery-token-ca-cert-hash sha256:7bdaeed0646fa8b3941c2e9e38ae6dc1a98536163c036539e0769778a9730f9d \
--control-plane
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join inslabs-16g2:6443 --token n9mlfe.zmclpyz5ajh88zt0 \
--discovery-token-ca-cert-hash sha256:7bdaeed0646fa8b3941c2e9e38ae6dc1a98536163c036539e0769778a9730f9d
[inslabs-16g2 10.0.23.22] MSG:
node/inslabs-16g2 untainted
[inslabs-16g2 10.0.23.22] MSG:
node/inslabs-16g2 labeled
[inslabs-16g2 10.0.23.22] MSG:
service "kube-dns" deleted
[inslabs-16g2 10.0.23.22] MSG:
service/coredns created
[inslabs-16g2 10.0.23.22] MSG:
serviceaccount/nodelocaldns created
daemonset.apps/nodelocaldns created
[inslabs-16g2 10.0.23.22] MSG:
configmap/nodelocaldns created
[inslabs-16g2 10.0.23.22] MSG:
I0517 12:05:38.831820 1226360 version.go:254] remote version is much newer: v1.21.1; falling back to: stable-1.20
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
7fa25445163c19e424390247fe9c8374ac130120243382cbc3821ecc783fdd0c
[inslabs-16g2 10.0.23.22] MSG:
secret/kubeadm-certs patched
[inslabs-16g2 10.0.23.22] MSG:
secret/kubeadm-certs patched
[inslabs-16g2 10.0.23.22] MSG:
secret/kubeadm-certs patched
[inslabs-16g2 10.0.23.22] MSG:
kubeadm join inslabs-16g2:6443 --token jm78ys.60gf8pyyste0ueyd --discovery-token-ca-cert-hash sha256:7bdaeed0646fa8b3941c2e9e38ae6dc1a98536163c036539e0769778a9730f9d
[inslabs-16g2 10.0.23.22] MSG:
inslabs-16g2 v1.20.4 [map[address:10.0.23.22 type:InternalIP] map[address:inslabs-16g2 type:Hostname]]
INFO[12:05:40 CST] Joining nodes to cluster
[inslabs-16g1 10.0.23.21] MSG:
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.6. Latest validated version: 19.03
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W0517 12:05:42.448839 295590 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
[inslabs-16g3 10.0.23.26] MSG:
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.6. Latest validated version: 19.03
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W0517 12:05:42.558612 298756 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
[inslabs-12g1 10.0.23.24] MSG:
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.6. Latest validated version: 19.03
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W0517 12:05:42.849012 299431 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
[inslabs-16g1 10.0.23.21] MSG:
node/inslabs-16g1 labeled
[inslabs-12g1 10.0.23.24] MSG:
node/inslabs-12g1 labeled
[inslabs-16g3 10.0.23.26] MSG:
node/inslabs-16g3 labeled
INFO[12:06:25 CST] Deploying network plugin ...
[inslabs-16g2 10.0.23.22] MSG:
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
INFO[12:06:26 CST] Congratulations! Installation is successful.
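The installation reports success, but note the two warnings kubeadm printed on every node: Docker uses the cgroupfs cgroup driver rather than the recommended systemd driver, and Docker 20.10.6 is newer than the last validated version (19.03). The cgroup-driver warning has a commonly documented remedy (a sketch only, assuming the stock /etc/docker/daemon.json path; merge with any existing content there, and note it was not applied in this run):

# Switch Docker to the systemd cgroup driver (merge with existing daemon.json!):
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
sudo systemctl restart docker

With the cluster up, the next step was to follow the ks-installer logs using the verification command from the KubeSphere docs, which failed: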
➜ kubesphere kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
error: error executing jsonpath "{.items[0].metadata.name}": Error executing template: array index out of bounds: index 0, length 0. Printing more information for debugging the template:
template was:
{.items[0].metadata.name}
object given to jsonpath engine was:
map[string]interface {}{"apiVersion":"v1", "items":[]interface {}{}, "kind":"List", "metadata":map[string]interface {}{"resourceVersion":"", "selfLink":""}}
error: expected 'logs [-f] [-p] (POD | TYPE/NAME) [-c CONTAINER]'.
POD or TYPE/NAME is a required argument for the logs command
See 'kubectl logs -h' for help and examples
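The jsonpath error itself only means the label selector app=ks-install matched zero pods: items is an empty list, so index 0 is out of bounds. A quick way to confirm whether anything was deployed into the namespace at all (plain kubectl, as a diagnostic sketch):

# Does the namespace exist, and does it contain any pods?
kubectl get ns kubesphere-system
kubectl -n kubesphere-system get pods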
➜ kubesphere kubectl get node
NAME           STATUS   ROLES                         AGE   VERSION
inslabs-12g1   Ready    worker                        63m   v1.20.4
inslabs-16g1   Ready    worker                        63m   v1.20.4
inslabs-16g2   Ready    control-plane,master,worker   64m   v1.20.4
inslabs-16g3   Ready    worker                        63m   v1.20.4
➜ kubesphere
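The Kubernetes cluster itself came up fine: all four nodes are Ready. The control-plane pods can be double-checked the same way (routine kubectl, for completeness):

kubectl -n kube-system get pods -o wide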
Viewing the installer logs does not work either:
➜ kubesphere kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}'
error: error executing jsonpath "{.items[0].metadata.name}": Error executing template: array index out of bounds: index 0, length 0. Printing more information for debugging the template:
template was:
{.items[0].metadata.name}
object given to jsonpath engine was:
map[string]interface {}{"apiVersion":"v1", "items":[]interface {}{}, "kind":"List", "metadata":map[string]interface {}{"resourceVersion":"", "selfLink":""}}
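Since the config above has no KubeSphere section and kk was run without --with-kubesphere, it looks like only Kubernetes was provisioned and ks-installer was never deployed, which would explain the empty selector result. For reference, the KubeSphere v3.1.0 docs describe installing it onto an existing cluster with two manifests (the documented commands, untested in this environment):

kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.1.0/kubesphere-installer.yaml
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.1.0/cluster-configuration.yaml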