When creating a deployment issue, please follow the template below. The more information you provide, the easier it is to get a timely answer.
If you spend only a minute writing your question, you cannot expect others to spend half an hour answering it.
Before posting, click the Preview (👀) button next to Create Topic to make sure the post is formatted correctly.
Operating system information
Virtual machines, Ubuntu 18.04, 8C/16G
Kubernetes version information
v1.21.5, multi-node (3 control-plane nodes, 2 workers)
Container runtime
docker 20.10.17 / containerd 1.6.6
KubeSphere version information
v2.1.1/v3.2.0. Online installation, full installation.
What is the problem
The KubeSphere cluster cannot be installed: KubeKey's CreateClusterPipeline fails at the DeployStorageClassModule step (full log below).
cat config-sample.yaml
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: master1, address: 192.168.1.201, internalAddress: 192.168.1.201, user: edge, password: "1"}
  - {name: master2, address: 192.168.1.204, internalAddress: 192.168.1.204, user: edge, password: "1"}
  - {name: master3, address: 192.168.1.205, internalAddress: 192.168.1.205, user: edge, password: "1"}
  - {name: node1, address: 192.168.1.202, internalAddress: 192.168.1.202, user: edge, password: "1"}
  - {name: node2, address: 192.168.1.203, internalAddress: 192.168.1.203, user: edge, password: "1"}
  roleGroups:
    etcd:
    - master1
    - master2
    - master3
    control-plane:
    - master1
    - master2
    - master3
    worker:
    - node1
    - node2
  controlPlaneEndpoint:
    ## Internal loadbalancer for apiservers
    internalLoadbalancer: haproxy
    domain: lb.kubesphere.local
    address: ""
    port: 6443
  kubernetes:
    version: v1.21.5
    clusterName: cluster.local
    autoRenewCerts: true
    containerManager: docker
  etcd:
    type: kubeadm
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
    multusCNI:
      enabled: false
  registry:
    privateRegistry: ""
    namespaceOverride: ""
    registryMirrors: []
    insecureRegistries: []
  addons:
  - name: nfs-client
    namespace: kube-system
    sources:
      chart:
        name: nfs-client-provisioner
        repo: https://charts.kubesphere.io/main
        valuesFile: /home/edge/nfs-client.yaml
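The cluster is created from this config with KubeKey. An invocation along these lines matches this setup; the kk path and the --with-kubesphere version here are assumptions on my part, not a verbatim copy of the command that was run:

# Assumed KubeKey invocation for this config; adjust the KubeSphere version as needed.
export KKZONE=cn   # optional: pull images from the CN mirror
./kk create cluster -f config-sample.yaml --with-kubesphere v3.2.0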
cat /home/edge/nfs-client.yaml
nfs:
  server: "192.168.1.201"
  path: "/mnt/demo"
storageClass:
  defaultClass: true
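The nfs-client addon needs the export above to be mountable from every node. A quick reachability check along these lines (a sketch; assumes Ubuntu nodes with nfs-common available) can rule the NFS server in or out:

# Sketch: verify the NFS export on 192.168.1.201 from any node.
sudo apt-get install -y nfs-common                                   # assumes Debian/Ubuntu
showmount -e 192.168.1.201                                           # should list /mnt/demo
sudo mount -t nfs 192.168.1.201:/mnt/demo /mnt && sudo umount /mnt   # mount/unmount test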
Error log
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run “kubectl apply -f [podnetwork].yaml” with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:
kubeadm join lb.kubesphere.local:6443 --token a04ekp.dg96juxuj7jv0p0n \
    --discovery-token-ca-cert-hash sha256:ed1b856234c6305f8b64d9e0b6d02731405f80c4d7449cd54d6c4e65631aa12b \
    --control-plane
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join lb.kubesphere.local:6443 --token a04ekp.dg96juxuj7jv0p0n \
    --discovery-token-ca-cert-hash sha256:ed1b856234c6305f8b64d9e0b6d02731405f80c4d7449cd54d6c4e65631aa12b
12:07:18 CST skipped: [master3]
12:07:18 CST skipped: [master2]
12:07:18 CST success: [master1]
12:07:18 CST [InitKubernetesModule] Copy admin.conf to ~/.kube/config
12:07:20 CST skipped: [master3]
12:07:20 CST skipped: [master2]
12:07:20 CST success: [master1]
12:07:20 CST [InitKubernetesModule] Remove master taint
12:07:20 CST skipped: [master3]
12:07:20 CST skipped: [master2]
12:07:20 CST skipped: [master1]
12:07:20 CST [InitKubernetesModule] Add worker label
12:07:20 CST skipped: [master3]
12:07:20 CST skipped: [master2]
12:07:20 CST skipped: [master1]
12:07:20 CST [ClusterDNSModule] Generate coredns service
12:07:24 CST skipped: [master3]
12:07:24 CST skipped: [master2]
12:07:24 CST success: [master1]
12:07:24 CST [ClusterDNSModule] Override coredns service
12:07:24 CST stdout: [master1]
service “kube-dns” deleted
12:07:26 CST stdout: [master1]
service/coredns created
Warning: resource clusterroles/system:coredns is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
clusterrole.rbac.authorization.k8s.io/system:coredns configured
12:07:26 CST skipped: [master3]
12:07:26 CST skipped: [master2]
12:07:26 CST success: [master1]
12:07:26 CST [ClusterDNSModule] Generate nodelocaldns
12:07:27 CST skipped: [master3]
12:07:27 CST skipped: [master2]
12:07:27 CST success: [master1]
12:07:27 CST [ClusterDNSModule] Deploy nodelocaldns
12:07:27 CST stdout: [master1]
serviceaccount/nodelocaldns created
daemonset.apps/nodelocaldns created
12:07:27 CST skipped: [master3]
12:07:27 CST skipped: [master2]
12:07:27 CST success: [master1]
12:07:27 CST [ClusterDNSModule] Generate nodelocaldns configmap
12:07:27 CST skipped: [master3]
12:07:27 CST skipped: [master2]
12:07:27 CST success: [master1]
12:07:27 CST [ClusterDNSModule] Apply nodelocaldns configmap
12:07:28 CST stdout: [master1]
configmap/nodelocaldns created
12:07:28 CST skipped: [master3]
12:07:28 CST skipped: [master2]
12:07:28 CST success: [master1]
12:07:28 CST [KubernetesStatusModule] Get kubernetes cluster status
12:07:28 CST stdout: [master1]
v1.21.5
12:07:28 CST stdout: [master1]
master1 v1.21.5 [map[address:192.168.1.201 type:InternalIP] map[address:master1 type:Hostname]]
12:07:31 CST stdout: [master1]
I0811 12:07:30.180566 56869 version.go:254] remote version is much newer: v1.24.3; falling back to: stable-1.21
[upload-certs] Storing the certificates in Secret “kubeadm-certs” in the “kube-system” Namespace
[upload-certs] Using certificate key:
7c18e5773d2d54657a3af8b21e31b2d97d582d488d84797453986316448bc3aa
12:07:31 CST stdout: [master1]
secret/kubeadm-certs patched
12:07:31 CST stdout: [master1]
secret/kubeadm-certs patched
12:07:32 CST stdout: [master1]
secret/kubeadm-certs patched
12:07:33 CST stdout: [master1]
p5e7ir.l5vd33au76klluim
12:07:33 CST success: [master1]
12:07:33 CST success: [master2]
12:07:33 CST success: [master3]
12:07:33 CST [JoinNodesModule] Generate kubeadm config
12:07:49 CST skipped: [master1]
12:07:49 CST success: [node2]
12:07:49 CST success: [master3]
12:07:49 CST success: [node1]
12:07:49 CST success: [master2]
12:07:49 CST [JoinNodesModule] Join control-plane node
12:09:24 CST stdout: [master3]
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster…
[preflight] FYI: You can look at this config file with ‘kubectl -n kube-system get cm kubeadm-config -o yaml’
W0811 12:07:53.271746 32111 utils.go:69] The recommended value for “clusterDNS” in “KubeletConfiguration” is: [10.233.0.10]; the provided value is: [169.254.25.10]
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using ‘kubeadm config images pull’
[download-certs] Downloading the certificates in Secret “kubeadm-certs” in the “kube-system” Namespace
[certs] Using certificateDir folder “/etc/kubernetes/pki”
[certs] Generating “etcd/healthcheck-client” certificate and key
[certs] Generating “etcd/server” certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master3] and IPs [192.168.1.205 127.0.0.1 ::1]
[certs] Generating “etcd/peer” certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master3] and IPs [192.168.1.205 127.0.0.1 ::1]
[certs] Generating “apiserver-etcd-client” certificate and key
[certs] Generating “apiserver” certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.kubesphere.local localhost master1 master1.cluster.local master2 master2.cluster.local master3 master3.cluster.local node1 node1.cluster.local node2 node2.cluster.local] and IPs [10.233.0.1 192.168.1.205 127.0.0.1 192.168.1.201 192.168.1.204 192.168.1.202 192.168.1.203]
[certs] Generating “apiserver-kubelet-client” certificate and key
[certs] Generating “front-proxy-client” certificate and key
[certs] Valid certificates and keys now exist in “/etc/kubernetes/pki”
[certs] Using the existing “sa” key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder “/etc/kubernetes”
[kubeconfig] Writing “admin.conf” kubeconfig file
[kubeconfig] Writing “controller-manager.conf” kubeconfig file
[kubeconfig] Writing “scheduler.conf” kubeconfig file
[control-plane] Using manifest folder “/etc/kubernetes/manifests”
[control-plane] Creating static Pod manifest for “kube-apiserver”
[control-plane] Creating static Pod manifest for “kube-controller-manager”
[control-plane] Creating static Pod manifest for “kube-scheduler”
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Writing kubelet configuration to file “/var/lib/kubelet/config.yaml”
[kubelet-start] Writing kubelet environment file with flags to file “/var/lib/kubelet/kubeadm-flags.env”
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap…
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for “etcd”
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
[kubelet-check] Initial timeout of 40s passed.
[upload-config] Storing the configuration used in ConfigMap “kubeadm-config” in the “kube-system” Namespace
[mark-control-plane] Marking the node master3 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master3 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
This node has joined the cluster and a new control plane instance was created:
* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.
To start administering your cluster from this node, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Run ‘kubectl get nodes’ to see this node join the cluster.
12:09:42 CST stdout: [master2]
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster…
[preflight] FYI: You can look at this config file with ‘kubectl -n kube-system get cm kubeadm-config -o yaml’
W0811 12:07:53.277195 46909 utils.go:69] The recommended value for “clusterDNS” in “KubeletConfiguration” is: [10.233.0.10]; the provided value is: [169.254.25.10]
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using ‘kubeadm config images pull’
[download-certs] Downloading the certificates in Secret “kubeadm-certs” in the “kube-system” Namespace
[certs] Using certificateDir folder “/etc/kubernetes/pki”
[certs] Generating “apiserver-etcd-client” certificate and key
[certs] Generating “etcd/server” certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master2] and IPs [192.168.1.204 127.0.0.1 ::1]
[certs] Generating “etcd/peer” certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master2] and IPs [192.168.1.204 127.0.0.1 ::1]
[certs] Generating “etcd/healthcheck-client” certificate and key
[certs] Generating “apiserver” certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.kubesphere.local localhost master1 master1.cluster.local master2 master2.cluster.local master3 master3.cluster.local node1 node1.cluster.local node2 node2.cluster.local] and IPs [10.233.0.1 192.168.1.204 127.0.0.1 192.168.1.201 192.168.1.205 192.168.1.202 192.168.1.203]
[certs] Generating “apiserver-kubelet-client” certificate and key
[certs] Generating “front-proxy-client” certificate and key
[certs] Valid certificates and keys now exist in “/etc/kubernetes/pki”
[certs] Using the existing “sa” key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder “/etc/kubernetes”
[kubeconfig] Writing “admin.conf” kubeconfig file
[kubeconfig] Writing “controller-manager.conf” kubeconfig file
[kubeconfig] Writing “scheduler.conf” kubeconfig file
[control-plane] Using manifest folder “/etc/kubernetes/manifests”
[control-plane] Creating static Pod manifest for “kube-apiserver”
[control-plane] Creating static Pod manifest for “kube-controller-manager”
[control-plane] Creating static Pod manifest for “kube-scheduler”
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Writing kubelet configuration to file “/var/lib/kubelet/config.yaml”
[kubelet-start] Writing kubelet environment file with flags to file “/var/lib/kubelet/kubeadm-flags.env”
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap…
[kubelet-check] Initial timeout of 40s passed.
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for “etcd”
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
[upload-config] Storing the configuration used in ConfigMap “kubeadm-config” in the “kube-system” Namespace
[mark-control-plane] Marking the node master2 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master2 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
This node has joined the cluster and a new control plane instance was created:
* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.
To start administering your cluster from this node, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Run ‘kubectl get nodes’ to see this node join the cluster.
12:09:42 CST skipped: [master1]
12:09:42 CST success: [master3]
12:09:42 CST success: [master2]
12:09:42 CST [JoinNodesModule] Join worker node
12:10:03 CST stdout: [node1]
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster…
[preflight] FYI: You can look at this config file with ‘kubectl -n kube-system get cm kubeadm-config -o yaml’
W0811 12:09:46.579853 4587 utils.go:69] The recommended value for “clusterDNS” in “KubeletConfiguration” is: [10.233.0.10]; the provided value is: [169.254.25.10]
[kubelet-start] Writing kubelet configuration to file “/var/lib/kubelet/config.yaml”
[kubelet-start] Writing kubelet environment file with flags to file “/var/lib/kubelet/kubeadm-flags.env”
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap…
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run ‘kubectl get nodes’ on the control-plane to see this node join the cluster.
12:10:03 CST stdout: [node2]
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster…
[preflight] FYI: You can look at this config file with ‘kubectl -n kube-system get cm kubeadm-config -o yaml’
W0811 12:09:46.572009 57508 utils.go:69] The recommended value for “clusterDNS” in “KubeletConfiguration” is: [10.233.0.10]; the provided value is: [169.254.25.10]
[kubelet-start] Writing kubelet configuration to file “/var/lib/kubelet/config.yaml”
[kubelet-start] Writing kubelet environment file with flags to file “/var/lib/kubelet/kubeadm-flags.env”
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap…
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run ‘kubectl get nodes’ on the control-plane to see this node join the cluster.
12:10:03 CST success: [node1]
12:10:03 CST success: [node2]
12:10:03 CST [JoinNodesModule] Copy admin.conf to ~/.kube/config
12:10:07 CST skipped: [master1]
12:10:07 CST success: [master2]
12:10:07 CST success: [master3]
12:10:07 CST [JoinNodesModule] Remove master taint
12:10:07 CST skipped: [master3]
12:10:07 CST skipped: [master2]
12:10:07 CST skipped: [master1]
12:10:07 CST [JoinNodesModule] Add worker label to master
12:10:07 CST skipped: [master3]
12:10:07 CST skipped: [master1]
12:10:07 CST skipped: [master2]
12:10:07 CST [JoinNodesModule] Synchronize kube config to worker
12:10:14 CST success: [node2]
12:10:14 CST success: [node1]
12:10:14 CST [JoinNodesModule] Add worker label to worker
12:10:30 CST stdout: [node2]
node/node2 labeled
12:10:34 CST stdout: [node1]
node/node1 labeled
12:10:34 CST success: [node2]
12:10:34 CST success: [node1]
12:10:34 CST [InternalLoadbalancerModule] Generate haproxy.cfg
12:10:35 CST success: [node1]
12:10:35 CST success: [node2]
12:10:35 CST [InternalLoadbalancerModule] Calculate the MD5 value according to haproxy.cfg
12:10:35 CST success: [node1]
12:10:35 CST success: [node2]
12:10:35 CST [InternalLoadbalancerModule] Generate haproxy manifest
12:10:35 CST success: [node1]
12:10:35 CST success: [node2]
12:10:35 CST [InternalLoadbalancerModule] Update kubelet config
12:10:35 CST stdout: [master1]
server: https://lb.kubesphere.local:6443
12:10:36 CST stdout: [master3]
server: https://lb.kubesphere.local:6443
12:10:36 CST stdout: [node2]
server: https://lb.kubesphere.local:6443
12:10:36 CST stdout: [master2]
server: https://lb.kubesphere.local:6443
12:10:36 CST stdout: [node1]
server: https://lb.kubesphere.local:6443
12:10:38 CST success: [master1]
12:10:38 CST success: [master3]
12:10:38 CST success: [master2]
12:10:38 CST success: [node2]
12:10:38 CST success: [node1]
12:10:38 CST [InternalLoadbalancerModule] Update kube-proxy configmap
12:10:40 CST success: [master1]
12:10:40 CST [InternalLoadbalancerModule] Update /etc/hosts
12:10:40 CST success: [node1]
12:10:40 CST success: [master1]
12:10:40 CST success: [node2]
12:10:40 CST success: [master2]
12:10:40 CST success: [master3]
12:10:40 CST [DeployNetworkPluginModule] Generate calico
12:10:41 CST skipped: [master3]
12:10:41 CST skipped: [master2]
12:10:41 CST success: [master1]
12:10:41 CST [DeployNetworkPluginModule] Deploy calico
12:11:25 CST stdout: [master1]
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
Warning: policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
poddisruptionbudget.policy/calico-kube-controllers created
Error from server: error when retrieving current configuration of:
Resource: “apiextensions.k8s.io/v1, Resource=customresourcedefinitions”, GroupVersionKind: “apiextensions.k8s.io/v1, Kind=CustomResourceDefinition”
Name: “globalnetworksets.crd.projectcalico.org”, Namespace: ""
from server for: “/etc/kubernetes/network-plugin.yaml”: etcdserver: request timed out
Error from server: error when retrieving current configuration of:
Resource: “rbac.authorization.k8s.io/v1, Resource=clusterroles”, GroupVersionKind: “rbac.authorization.k8s.io/v1, Kind=ClusterRole”
Name: “calico-node”, Namespace: ""
from server for: “/etc/kubernetes/network-plugin.yaml”: etcdserver: request timed out
Error from server: error when creating “/etc/kubernetes/network-plugin.yaml”: etcdserver: request timed out
12:11:25 CST message: [master1]
deploy network plugin failed: Failed to exec command: sudo -E /bin/bash -c "/usr/local/bin/kubectl apply -f /etc/kubernetes/network-plugin.yaml --force"
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
Warning: policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
poddisruptionbudget.policy/calico-kube-controllers created
Error from server: error when retrieving current configuration of:
Resource: “apiextensions.k8s.io/v1, Resource=customresourcedefinitions”, GroupVersionKind: “apiextensions.k8s.io/v1, Kind=CustomResourceDefinition”
Name: “globalnetworksets.crd.projectcalico.org”, Namespace: ""
from server for: “/etc/kubernetes/network-plugin.yaml”: etcdserver: request timed out
Error from server: error when retrieving current configuration of:
Resource: “rbac.authorization.k8s.io/v1, Resource=clusterroles”, GroupVersionKind: “rbac.authorization.k8s.io/v1, Kind=ClusterRole”
Name: “calico-node”, Namespace: ""
from server for: “/etc/kubernetes/network-plugin.yaml”: etcdserver: request timed out
Error from server: error when creating “/etc/kubernetes/network-plugin.yaml”: etcdserver: request timed out: Process exited with status 1
12:11:25 CST retry: [master1]
12:11:32 CST stdout: [master1]
configmap/calico-config unchanged
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org configured
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers unchanged
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers unchanged
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node unchanged
daemonset.apps/calico-node created
serviceaccount/calico-node unchanged
deployment.apps/calico-kube-controllers unchanged
serviceaccount/calico-kube-controllers unchanged
Warning: policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
poddisruptionbudget.policy/calico-kube-controllers configured
12:11:32 CST skipped: [master3]
12:11:32 CST skipped: [master2]
12:11:32 CST success: [master1]
12:11:32 CST [ConfigureKubernetesModule] Configure kubernetes
12:11:32 CST success: [master3]
12:11:32 CST success: [master1]
12:11:32 CST success: [master2]
12:11:32 CST success: [node1]
12:11:32 CST success: [node2]
12:11:32 CST [ChownModule] Chown user $HOME/.kube dir
12:11:32 CST success: [master2]
12:11:32 CST success: [master1]
12:11:32 CST success: [master3]
12:11:32 CST success: [node1]
12:11:32 CST success: [node2]
12:11:32 CST [AutoRenewCertsModule] Generate k8s certs renew script
12:11:33 CST success: [master2]
12:11:33 CST success: [master3]
12:11:33 CST success: [master1]
12:11:33 CST [AutoRenewCertsModule] Generate k8s certs renew service
12:11:33 CST success: [master3]
12:11:33 CST success: [master2]
12:11:33 CST success: [master1]
12:11:33 CST [AutoRenewCertsModule] Generate k8s certs renew timer
12:11:33 CST success: [master2]
12:11:33 CST success: [master3]
12:11:33 CST success: [master1]
12:11:33 CST [AutoRenewCertsModule] Enable k8s certs renew service
12:11:34 CST success: [master1]
12:11:34 CST success: [master3]
12:11:34 CST success: [master2]
12:11:34 CST [SaveKubeConfigModule] Save kube config as a configmap
12:11:37 CST success: [LocalHost]
12:11:37 CST [AddonsModule] Install addons
12:11:37 CST message: [LocalHost]
Install addon [1-0]: nfs-client
Release “nfs-client” does not exist. Installing it now.
NAME: nfs-client
LAST DEPLOYED: Thu Aug 11 12:11:56 2022
NAMESPACE: kube-system
STATUS: deployed
REVISION: 1
12:12:23 CST success: [LocalHost]
12:12:23 CST [DeployStorageClassModule] Generate OpenEBS manifest
12:12:27 CST message: [master1]
Default storageClass in cluster is not unique!
12:12:27 CST skipped: [master3]
12:12:27 CST skipped: [master2]
12:12:27 CST skipped: [master1]
12:12:27 CST [DeployStorageClassModule] Deploy OpenEBS as cluster default StorageClass
12:12:42 CST message: [master1]
deploy local-volume.yaml failed: Failed to exec command: sudo -E /bin/bash -c “/usr/local/bin/kubectl apply -f /etc/kubernetes/addons/local-volume.yaml”
error: the path “/etc/kubernetes/addons/local-volume.yaml” does not exist: Process exited with status 1
12:12:42 CST retry: [master1]
12:12:48 CST message: [master1]
deploy local-volume.yaml failed: Failed to exec command: sudo -E /bin/bash -c “/usr/local/bin/kubectl apply -f /etc/kubernetes/addons/local-volume.yaml”
error: the path “/etc/kubernetes/addons/local-volume.yaml” does not exist: Process exited with status 1
12:12:48 CST retry: [master1]
12:12:55 CST message: [master1]
deploy local-volume.yaml failed: Failed to exec command: sudo -E /bin/bash -c “/usr/local/bin/kubectl apply -f /etc/kubernetes/addons/local-volume.yaml”
error: the path “/etc/kubernetes/addons/local-volume.yaml” does not exist: Process exited with status 1
12:12:55 CST skipped: [master3]
12:12:55 CST skipped: [master2]
12:12:55 CST failed: [master1]
error: Pipeline[CreateClusterPipeline] execute failed: Module[DeployStorageClassModule] exec failed:
failed: [master1] [DeployOpenEBS] exec failed after 3 retires: deploy local-volume.yaml failed: Failed to exec command: sudo -E /bin/bash -c "/usr/local/bin/kubectl apply -f /etc/kubernetes/addons/local-volume.yaml"
error: the path "/etc/kubernetes/addons/local-volume.yaml" does not exist: Process exited with status 1
root@master1:/home/edge#
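The pipeline ultimately fails in DeployStorageClassModule: KubeKey reports "Default storageClass in cluster is not unique!" during the OpenEBS manifest step and then fails because /etc/kubernetes/addons/local-volume.yaml does not exist. Checks along these lines (a sketch, assuming kubectl is usable on master1 with the admin kubeconfig) would show the current StorageClass and addon state:

# Sketch: follow-up checks on master1.
kubectl get storageclass                    # how many classes are marked "(default)"?
kubectl -n kube-system get pods -o wide     # calico and nfs-client-provisioner status
kubectl get nodes -o wide                   # are all five nodes Ready?
ls -l /etc/kubernetes/addons/               # directory KubeKey says local-volume.yaml is missing from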