After configuring ceph-csi storage following this document (https://kubesphere.com.cn/docs/installing-on-linux/introduction/storage-configuration/), the install command fails with an error and exits.
Installation log:

[root@kubesphere-1 ~]# ./kk create cluster -f config-sample.yaml
+--------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+
| name   | sudo | curl | openssl | ebtables | socat | ipset | conntrack | docker | nfs client | ceph client | glusterfs client | time         |
+--------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+
| node3  | y    | y    | y       |          |       |       |           | y      | y          |             |                  | CST 17:36:46 |
| master | y    | y    | y       |          |       |       |           | y      | y          |             |                  | CST 17:36:46 |
| node1  | y    | y    | y       |          |       |       |           | y      | y          |             |                  | CST 17:36:46 |
| node2  | y    | y    | y       |          |       |       |           | y      | y          |             |                  | CST 17:36:46 |
+--------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+

This is a simple check of your environment.
Before installation, you should ensure that your machines meet all requirements specified at
https://github.com/kubesphere/kubekey#requirements-and-recommendations

Continue this installation? [yes/no]: yes
INFO[17:36:49 CST] Downloading Installation Files               
INFO[17:36:49 CST] Downloading kubeadm ...                      
INFO[17:36:49 CST] Downloading kubelet ...                      
INFO[17:36:49 CST] Downloading kubectl ...                      
INFO[17:36:49 CST] Downloading kubecni ...                      
INFO[17:36:49 CST] Downloading helm ...                         
INFO[17:36:49 CST] Configurating operating system ...           
[node3 10.0.3.115] MSG:
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
[node1 10.0.3.21] MSG:
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
[master 10.0.3.208] MSG:
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
[node2 10.0.3.69] MSG:
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
INFO[17:36:53 CST] Installing docker ...                        
INFO[17:36:59 CST] Start to download images on all nodes        
[node1] Downloading image: kubesphere/pause:3.1
[node3] Downloading image: kubesphere/pause:3.1
[node2] Downloading image: kubesphere/pause:3.1
[master] Downloading image: kubesphere/etcd:v3.3.12
[node3] Downloading image: kubesphere/kube-proxy:v1.17.9
[node1] Downloading image: kubesphere/kube-proxy:v1.17.9
[node3] Downloading image: coredns/coredns:1.6.9
[node2] Downloading image: kubesphere/kube-proxy:v1.17.9
[node1] Downloading image: coredns/coredns:1.6.9
[node2] Downloading image: coredns/coredns:1.6.9
[node3] Downloading image: kubesphere/k8s-dns-node-cache:1.15.12
[master] Downloading image: kubesphere/pause:3.1
[node1] Downloading image: kubesphere/k8s-dns-node-cache:1.15.12
[master] Downloading image: kubesphere/kube-apiserver:v1.17.9
[node3] Downloading image: calico/kube-controllers:v3.15.1
[node2] Downloading image: kubesphere/k8s-dns-node-cache:1.15.12
[node1] Downloading image: calico/kube-controllers:v3.15.1
[master] Downloading image: kubesphere/kube-controller-manager:v1.17.9
[node3] Downloading image: calico/cni:v3.15.1
[node2] Downloading image: calico/kube-controllers:v3.15.1
[node1] Downloading image: calico/cni:v3.15.1
[master] Downloading image: kubesphere/kube-scheduler:v1.17.9
[node3] Downloading image: calico/node:v3.15.1
[node1] Downloading image: calico/node:v3.15.1
[node2] Downloading image: calico/cni:v3.15.1
[master] Downloading image: kubesphere/kube-proxy:v1.17.9
[node3] Downloading image: calico/pod2daemon-flexvol:v3.15.1
[node1] Downloading image: calico/pod2daemon-flexvol:v3.15.1
[node2] Downloading image: calico/node:v3.15.1
[master] Downloading image: coredns/coredns:1.6.9
[node2] Downloading image: calico/pod2daemon-flexvol:v3.15.1
[master] Downloading image: kubesphere/k8s-dns-node-cache:1.15.12
[master] Downloading image: calico/kube-controllers:v3.15.1
[master] Downloading image: calico/cni:v3.15.1
[master] Downloading image: calico/node:v3.15.1
[master] Downloading image: calico/pod2daemon-flexvol:v3.15.1
INFO[17:37:07 CST] Generating etcd certs                        
INFO[17:37:09 CST] Synchronizing etcd certs                     
INFO[17:37:09 CST] Creating etcd service                        
INFO[17:37:16 CST] Starting etcd cluster                        
[master 10.0.3.208] MSG:
Configuration file will be created
INFO[17:37:17 CST] Refreshing etcd configuration                
Waiting for etcd to start
INFO[17:37:24 CST] Get cluster status                           
[master 10.0.3.208] MSG:
Cluster will be created.
INFO[17:37:25 CST] Installing kube binaries                     
Push /root/kubekey/v1.17.9/amd64/kubeadm to 10.0.3.208:/tmp/kubekey/kubeadm   Done
Push /root/kubekey/v1.17.9/amd64/kubeadm to 10.0.3.69:/tmp/kubekey/kubeadm   Done
Push /root/kubekey/v1.17.9/amd64/kubeadm to 10.0.3.21:/tmp/kubekey/kubeadm   Done
Push /root/kubekey/v1.17.9/amd64/kubeadm to 10.0.3.115:/tmp/kubekey/kubeadm   Done
Push /root/kubekey/v1.17.9/amd64/kubelet to 10.0.3.208:/tmp/kubekey/kubelet   Done
Push /root/kubekey/v1.17.9/amd64/kubelet to 10.0.3.69:/tmp/kubekey/kubelet   Done
Push /root/kubekey/v1.17.9/amd64/kubelet to 10.0.3.115:/tmp/kubekey/kubelet   Done
Push /root/kubekey/v1.17.9/amd64/kubelet to 10.0.3.21:/tmp/kubekey/kubelet   Done
Push /root/kubekey/v1.17.9/amd64/kubectl to 10.0.3.21:/tmp/kubekey/kubectl   Done
Push /root/kubekey/v1.17.9/amd64/kubectl to 10.0.3.208:/tmp/kubekey/kubectl   Done
Push /root/kubekey/v1.17.9/amd64/kubectl to 10.0.3.69:/tmp/kubekey/kubectl   Done
Push /root/kubekey/v1.17.9/amd64/kubectl to 10.0.3.115:/tmp/kubekey/kubectl   Done
Push /root/kubekey/v1.17.9/amd64/helm to 10.0.3.208:/tmp/kubekey/helm   Done
Push /root/kubekey/v1.17.9/amd64/helm to 10.0.3.21:/tmp/kubekey/helm   Done
Push /root/kubekey/v1.17.9/amd64/helm to 10.0.3.69:/tmp/kubekey/helm   Done
Push /root/kubekey/v1.17.9/amd64/helm to 10.0.3.115:/tmp/kubekey/helm   Done
Push /root/kubekey/v1.17.9/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 10.0.3.208:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz   Done
Push /root/kubekey/v1.17.9/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 10.0.3.69:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz   Done
Push /root/kubekey/v1.17.9/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 10.0.3.21:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz   Done
Push /root/kubekey/v1.17.9/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 10.0.3.115:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz   Done
INFO[17:37:38 CST] Initializing kubernetes cluster              
[master 10.0.3.208] MSG:
W1120 17:37:38.737915   10728 defaults.go:186] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
W1120 17:37:38.738278   10728 validation.go:28] Cannot validate kube-proxy config - no validator is available
W1120 17:37:38.738297   10728 validation.go:28] Cannot validate kubelet config - no validator is available
[init] Using Kubernetes version: v1.17.9
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING FileExisting-ebtables]: ebtables not found in system path
	[WARNING FileExisting-socat]: socat not found in system path
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.kubesphere.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost lb.kubesphere.local master master.cluster.local node1 node1.cluster.local node2 node2.cluster.local node3 node3.cluster.local] and IPs [10.233.0.1 10.0.3.208 127.0.0.1 10.0.3.208 10.0.3.21 10.0.3.69 10.0.3.115 10.233.0.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] External etcd mode: Skipping etcd/ca certificate authority generation
[certs] External etcd mode: Skipping etcd/server certificate generation
[certs] External etcd mode: Skipping etcd/peer certificate generation
[certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation
[certs] External etcd mode: Skipping apiserver-etcd-client certificate generation
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[controlplane] Adding extra host path mount "host-time" to "kube-controller-manager"
W1120 17:37:44.179999   10728 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[controlplane] Adding extra host path mount "host-time" to "kube-controller-manager"
W1120 17:37:44.201325   10728 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[controlplane] Adding extra host path mount "host-time" to "kube-controller-manager"
W1120 17:37:44.202817   10728 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 20.503836 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.17" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: re95bv.6jhu8a860t2oxc78
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join lb.kubesphere.local:6443 --token re95bv.6jhu8a860t2oxc78 \
    --discovery-token-ca-cert-hash sha256:d7196ddb35a4fabeaf15cc7966462eee3f6abad5177ee2cc11fd175c435e64b1 \
    --control-plane 

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join lb.kubesphere.local:6443 --token re95bv.6jhu8a860t2oxc78 \
    --discovery-token-ca-cert-hash sha256:d7196ddb35a4fabeaf15cc7966462eee3f6abad5177ee2cc11fd175c435e64b1
[master 10.0.3.208] MSG:
service "kube-dns" deleted
[master 10.0.3.208] MSG:
service/coredns created
[master 10.0.3.208] MSG:
serviceaccount/nodelocaldns created
daemonset.apps/nodelocaldns created
[master 10.0.3.208] MSG:
configmap/nodelocaldns created
[master 10.0.3.208] MSG:
W1120 17:38:49.630658   13561 version.go:101] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
W1120 17:38:49.631552   13561 version.go:102] falling back to the local client version: v1.17.9
W1120 17:38:49.631851   13561 validation.go:28] Cannot validate kube-proxy config - no validator is available
W1120 17:38:49.631875   13561 validation.go:28] Cannot validate kubelet config - no validator is available
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
f1bc3ab4145b8e6001ba5964df24d563fbf671ee201abfaed1d7f2eebf4bfdb4
[master 10.0.3.208] MSG:
secret/kubeadm-certs patched
[master 10.0.3.208] MSG:
secret/kubeadm-certs patched
[master 10.0.3.208] MSG:
secret/kubeadm-certs patched
[master 10.0.3.208] MSG:
W1120 17:38:51.132323   13997 validation.go:28] Cannot validate kubelet config - no validator is available
W1120 17:38:51.132403   13997 validation.go:28] Cannot validate kube-proxy config - no validator is available
kubeadm join lb.kubesphere.local:6443 --token b9857s.t4ngfxnl1w3spbtg     --discovery-token-ca-cert-hash sha256:d7196ddb35a4fabeaf15cc7966462eee3f6abad5177ee2cc11fd175c435e64b1
[master 10.0.3.208] MSG:
NAME     STATUS     ROLES    AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION               CONTAINER-RUNTIME
master   NotReady   master   49s   v1.17.9   10.0.3.208    <none>        CentOS Linux 7 (Core)   3.10.0-957.12.2.el7.x86_64   docker://19.3.13
INFO[17:38:51 CST] Deploying network plugin ...                 
[master 10.0.3.208] MSG:
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
INFO[17:38:55 CST] Joining nodes to cluster                     
[node1 10.0.3.21] MSG:
W1120 17:38:55.656580   20324 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING FileExisting-ebtables]: ebtables not found in system path
	[WARNING FileExisting-socat]: socat not found in system path
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
W1120 17:38:58.906753   20324 defaults.go:186] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
[node1 10.0.3.21] MSG:
node/node1 labeled
[node2 10.0.3.69] MSG:
W1120 17:38:55.749156   22084 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING FileExisting-ebtables]: ebtables not found in system path
	[WARNING FileExisting-socat]: socat not found in system path
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
W1120 17:39:00.833714   22084 defaults.go:186] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
[node3 10.0.3.115] MSG:
W1120 17:38:55.511592   27917 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING FileExisting-ebtables]: ebtables not found in system path
	[WARNING FileExisting-socat]: socat not found in system path
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
W1120 17:39:00.840400   27917 defaults.go:186] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
[node2 10.0.3.69] MSG:
node/node2 labeled
[node3 10.0.3.115] MSG:
node/node3 labeled
INFO[17:39:47 CST] Installing addon [2-1]: ceph-csi-rbd         
WARN[17:39:47 CST] Task failed ...                              
WARN[17:39:47 CST] error: Kubernetes cluster unreachable: Get "https://10.0.3.208:6443/version?timeout=32s": Forbidden 
Error: Failed to deploy addons: Kubernetes cluster unreachable: Get "https://10.0.3.208:6443/version?timeout=32s": Forbidden
Usage:
  kk create cluster [flags]

Flags:
  -f, --filename string          Path to a configuration file
  -h, --help                     help for cluster
      --skip-pull-images         Skip pre pull images
      --with-kubernetes string   Specify a supported version of kubernetes
      --with-kubesphere          Deploy a specific version of kubesphere (default v3.0.0)
  -y, --yes                      Skip pre-check of the installation

Global Flags:
      --debug   Print detailed information (default true)

Failed to deploy addons: Kubernetes cluster unreachable: Get "https://10.0.3.208:6443/version?timeout=32s": Forbidden

My configuration files are as follows:

[root@kubesphere-1 ~]# cat config-sample.yaml
apiVersion: kubekey.kubesphere.io/v1alpha1
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: master, address: 10.0.3.208, internalAddress: 10.0.3.208, user: root, password: 12345}
  - {name: node1, address: 10.0.3.21, internalAddress: 10.0.3.21, user: root, password: 12345}
  - {name: node2, address: 10.0.3.69, internalAddress: 10.0.3.69, user: root, password: 12345}
  - {name: node3, address: 10.0.3.115, internalAddress: 10.0.3.115, user: root, password: 12345}
  roleGroups:
    etcd:
    - master
    master: 
    - master
    worker:
    - node1
    - node2
    - node3
  controlPlaneEndpoint:
    domain: lb.kubesphere.local
    address: ""
    port: "6443"
  kubernetes:
    version: v1.17.9
    imageRepo: kubesphere
    clusterName: cluster.local
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
  registry:
    registryMirrors: ["http://192.168.18.253:4003","http://192.168.18.253:4002","http://192.168.18.253:4004"]
    insecureRegistries: ["192.168.18.253:4003","192.168.18.253:4004","192.168.18.253:4002"]
  addons:
  - name: ceph-csi-rbd
    namespace: kube-system
    sources:
      chart:
        name: ceph-csi-rbd
        repo: https://ceph.github.io/csi-charts
        values: /root/ceph-csi-rbd.yaml
  - name: ceph-csi-rbd-sc
    sources:
      yaml:
        path:
        - /root/ceph-csi-rbd-sc.yaml


---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.0.0
spec:
  local_registry: ""
  persistence:
    storageClass: ""
  authentication:
    jwtSecret: ""
  etcd:
    monitoring: true
    endpointIps: localhost
    port: 2379
    tlsEnable: true
  common:
    es:
      elasticsearchDataVolumeSize: 20Gi
      elasticsearchMasterVolumeSize: 4Gi
      elkPrefix: logstash
      logMaxAge: 7
    mysqlVolumeSize: 20Gi
    minioVolumeSize: 20Gi
    etcdVolumeSize: 20Gi
    openldapVolumeSize: 2Gi
    redisVolumSize: 2Gi
  console:
    enableMultiLogin: false  # enable/disable multi login
    port: 30880
  alerting:
    enabled: true
  auditing:
    enabled: true
  devops:
    enabled: true
    jenkinsMemoryLim: 2Gi
    jenkinsMemoryReq: 1500Mi
    jenkinsVolumeSize: 8Gi
    jenkinsJavaOpts_Xms: 512m
    jenkinsJavaOpts_Xmx: 512m
    jenkinsJavaOpts_MaxRAM: 2g
  events:
    enabled: true
    ruler:
      enabled: true
      replicas: 2
  logging:
    enabled: true
    logsidecarReplicas: 2
  metrics_server:
    enabled: true
  monitoring:
    prometheusMemoryRequest: 400Mi
    prometheusVolumeSize: 20Gi
  multicluster:
    clusterRole: none  # host | member | none
  networkpolicy:
    enabled: false
  notification:
    enabled: true
  openpitrix:
    enabled: true
  servicemesh:
    enabled: true
[root@kubesphere-1 ~]# cat ceph-csi-rbd.yaml 
csiConfig:
  - clusterID: "d2a65b2e-c18f-4c6c-8fef-5ffa789b2a08"
    monitors:
      - "192.168.17.81:6789"     # <--TobeReplaced-->
      - "192.168.17.81:6789"     # <--TobeReplaced-->
      - "192.168.17.83:6789"    # <--TobeReplaced-->

[root@kubesphere-1 ~]# cat ceph-csi-rbd-sc.yaml 
apiVersion: v1
kind: Secret
metadata:
  name: csi-rbd-secret
  namespace: kube-system
stringData:
  userID: admin
  userKey: "AQBaZKVdvbXOKRAA9BtOa8JGn8kWwtBIgKTUUA=="    
  encryptionPassphrase: test_passphrase
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
   name: csi-rbd-sc
   annotations:
     storageclass.beta.kubernetes.io/is-default-class: "true"
     storageclass.kubesphere.io/supported-access-modes: '["ReadWriteOnce","ReadOnlyMany","ReadWriteMany"]'
provisioner: rbd.csi.ceph.com
parameters:
   clusterID: "d2a65b2e-c18f-4c6c-8fef-5ffa789b2a08"
   pool: "kube2"    # <--ToBeReplaced-->
   imageFeatures: layering
   csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
   csi.storage.k8s.io/provisioner-secret-namespace: kube-system
   csi.storage.k8s.io/controller-expand-secret-name: csi-rbd-secret
   csi.storage.k8s.io/controller-expand-secret-namespace: kube-system
   csi.storage.k8s.io/node-stage-secret-name: csi-rbd-secret
   csi.storage.k8s.io/node-stage-secret-namespace: kube-system
   csi.storage.k8s.io/fstype: ext4
reclaimPolicy: Delete
allowVolumeExpansion: true
mountOptions:
   - discard

    kopnono: Get "https://10.0.3.208:6443/version?timeout=32s": Forbidden. Normally this API is not authenticated, so a 403 should never appear here. curl the API directly and check the actual response:

    curl -k https://10.0.3.208:6443/version
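
    (For reference, a reachable apiserver answers /version with a small JSON object, roughly the following for the v1.17.9 cluster in this log; the exact field values are illustrative:

        {
          "major": "1",
          "minor": "17",
          "gitVersion": "v1.17.9",
          ...
        }

    A plain 403 instead of JSON suggests that something sitting between the client and the apiserver is answering.)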

      hongming

      [root@kubesphere-1 ~]# curl -k https://10.0.3.208:6443/version
      curl: (56) Received HTTP code 403 from proxy after CONNECT

      Do I need to reinstall?

      kopnono

      Was the installation run on the master?
      There is a config file in the kubekey directory; try whether that one works:
      kubectl get node --kubeconfig=./config

        Cauchy
        Yes, it was run on the master, and that config indeed does not work:

        [root@kubesphere-1 ~]# kubectl get node --kubeconfig=./.kube/config
        Unable to connect to the server: Forbidden

        The one on the node works:

        [root@kubesphere-2 ~]# kubectl get node --kubeconfig=./.kube/config
        NAME    STATUS   ROLES    AGE   VERSION
        node1   Ready    master   97m   v1.17.9
        node2   Ready    worker   96m   v1.17.9
        node3   Ready    worker   96m   v1.17.9
        node4   Ready    worker   95m   v1.17.9

        hongming
        I noticed that before starting the installation I had exported https_proxy so that wget could download kk. That was probably the root of the problem; after removing the export, everything went back to normal.
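
        For anyone hitting the same symptom, a minimal sketch of the check and fix, assuming the proxy was exported as an environment variable in the shell that runs kk/kubectl (the exact variable names and the no_proxy entries below are assumptions; adjust them to your environment):

            # Check which proxy variables are set in the current shell
            env | grep -i proxy

            # Option 1: drop the proxy entirely before running kk/kubectl
            unset https_proxy HTTPS_PROXY http_proxy HTTP_PROXY

            # Option 2: keep the proxy for downloads but exclude cluster traffic,
            # so requests to the apiserver go direct instead of via the proxy
            export no_proxy=localhost,127.0.0.1,10.0.3.208,10.0.3.21,10.0.3.69,10.0.3.115,lb.kubesphere.local

            # Verify the apiserver now answers directly
            curl -k https://10.0.3.208:6443/version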