ERRO[01:17:06 CST] Failed to init kubernetes cluster: Failed to exec command: sudo -E /bin/sh -c "/usr/local/bin/kubeadm init --config=/etc/kubernetes/kubeadm-config.yaml"
W0905 01:17:05.347872 5003 utils.go:26] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
W0905 01:17:05.348208 5003 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.6
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING FileExisting-socat]: socat not found in system path
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR FileExisting-conntrack]: conntrack not found in system path
[preflight] If you know what you are doing, you can make a check non-fatal with --ignore-preflight-errors=...
To see the stack trace of this error execute with --v=5 or higher: Process exited with status 1 node=10.10.10.21
WARN[01:17:06 CST] Task failed ...
WARN[01:17:06 CST] error: interrupted by error
Error: Failed to init kubernetes cluster: interrupted by error
KubeSphere 3.0 installation failed ~~ please help
rayzhou2017
You need to install conntrack.
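For reference, on CentOS/RHEL nodes something like this should clear the conntrack error and the socat warning from the preflight checks (package names assumed; run on every node, and use apt on Debian/Ubuntu):

```bash
# Install the tools kubeadm's preflight checks look for (assumed package names).
sudo yum install -y conntrack-tools socat
# Debian/Ubuntu equivalent:
# sudo apt-get install -y conntrack socat
```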
rayzhou2017 After installing it, the deployment still fails.
[master-21 10.10.10.21] MSG:
v1.18.6
WARN[13:29:27 CST] Task failed ...
WARN[13:29:27 CST] error: Failed to upload kubeadm certs: Failed to exec command: sudo -E /bin/sh -c "/usr/local/bin/kubeadm init phase upload-certs --upload-certs"
I0905 13:28:06.466595 116078 version.go:252] remote version is much newer: v1.19.0; falling back to: stable-1.18
W0905 13:28:07.173851 116078 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
error execution phase upload-certs: error uploading certs: error creating token: timed out waiting for the condition
To see the stack trace of this error execute with --v=5 or higher: Process exited with status 1
Error: Failed to get cluster status: Failed to upload kubeadm certs: Failed to exec command: sudo -E /bin/sh -c "/usr/local/bin/kubeadm init phase upload-certs --upload-certs"
I0905 13:28:06.466595 116078 version.go:252] remote version is much newer: v1.19.0; falling back to: stable-1.18
W0905 13:28:07.173851 116078 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
error execution phase upload-certs: error uploading certs: error creating token: timed out waiting for the condition
To see the stack trace of this error execute with --v=5 or higher: Process exited with status 1
Usage:
kk create cluster [flags]
Flags:
-f, --filename string Path to a configuration file
-h, --help help for cluster
--skip-pull-images Skip pre pull images
--with-kubernetes string Specify a supported version of kubernetes
--with-kubesphere Deploy a specific version of kubesphere (default v3.0.0)
-y, –yes Skip pre-check of the installation
Global Flags:
--debug Print detailed information (default true)
Failed to get cluster status: Failed to upload kubeadm certs: Failed to exec command: sudo -E /bin/sh -c "/usr/local/bin/kubeadm init phase upload-certs --upload-certs"
I0905 13:28:06.466595 116078 version.go:252] remote version is much newer: v1.19.0; falling back to: stable-1.18
W0905 13:28:07.173851 116078 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
error execution phase upload-certs: error uploading certs: error creating token: timed out waiting for the condition
To see the stack trace of this error execute with --v=5 or higher: Process exited with status 1
rayzhou2017
1.19.0 is not supported. Please read the prerequisites carefully.
rayzhou2017 What is on version 1.19, docker-ce? I never specified a version when installing.
Feynman
leejor Please read the installation prerequisites and requirements carefully first: https://kubesphere.com.cn/en/docs/quick-start/all-in-one-on-linux/
rayzhou2017
What is your K8s version?
rayzhou2017 v1.18.6
rayzhou2017
Run ./kk delete cluster -f <config yaml> first to clean up the environment, then try installing again.
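Roughly like this, assuming the config file you passed to kk is named config-sample.yaml:

```bash
# Tear down the half-installed cluster described by the config, then retry.
# The file name config-sample.yaml is an assumption; use your own.
./kk delete cluster -f config-sample.yaml
./kk create cluster -f config-sample.yaml
```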
rayzhou2017 OK, I'll retry. By the way, the vSphere deployment doc is ambiguous: 8 VMs should be prepared, and the VIP should not be counted as one of them.
rayzhou2017
[master-21 10.10.10.21] MSG:
[reset] Reading configuration from the cluster...
[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
W0906 12:58:47.629895 40112 reset.go:99] [reset] Unable to fetch the kubeadm-config ConfigMap from cluster: failed to get config map: Get https://lb.kubesphere.local:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config?timeout=10s: dial tcp 10.10.10.20:6443: connect: connection refused
[preflight] Running pre-flight checks
W0906 12:58:47.630260 40112 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]
The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system’s IPVS tables.
The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
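For reference, the manual cleanup that the reset output above asks for would look roughly like this on each node (a sketch based only on the messages above; adjust to your environment):

```bash
# Clean up what "kubeadm reset" leaves behind, per the output above.
sudo rm -rf /etc/cni/net.d                              # CNI configuration
sudo iptables -F && sudo iptables -t nat -F \
  && sudo iptables -t mangle -F && sudo iptables -X     # iptables rules
sudo ipvsadm --clear                                    # IPVS tables (only if IPVS was used)
rm -f $HOME/.kube/config                                # stale kubeconfig
```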
rayzhou2017
Please paste your config yaml file.
rayzhou2017 I switched to v1.17.9 and the installation still fails.
Configuration file:
```yaml
apiVersion: kubekey.kubesphere.io/v1alpha1
kind: Cluster
metadata:
  name: config-sample
spec:
  hosts:
  - {name: master-21, address: 10.10.10.21, internalAddress: 10.10.10.21, password: an@123@#!}
  - {name: master-22, address: 10.10.10.22, internalAddress: 10.10.10.22, password: an@123@#!}
  - {name: master-23, address: 10.10.10.23, internalAddress: 10.10.10.23, password: an@123@#!}
  - {name: node-24, address: 10.10.10.24, internalAddress: 10.10.10.24, password: an@123@#!}
  - {name: node-25, address: 10.10.10.25, internalAddress: 10.10.10.25, password: an@123@#!}
  - {name: node-26, address: 10.10.10.26, internalAddress: 10.10.10.26, password: an@123@#!}
  - {name: node-27, address: 10.10.10.27, internalAddress: 10.10.10.27, password: an@123@#!}
  - {name: node-28, address: 10.10.10.28, internalAddress: 10.10.10.28, password: an@123@#!}
  roleGroups:
    etcd:
    - master-21
    - master-22
    - master-23
    master:
    - master-21
    - master-22
    - master-23
    worker:
    - node-24
    - node-25
    - node-26
    - node-27
    - node-28
  controlPlaneEndpoint:
    domain: lb.kubesphere.local
    # vip
    address: "10.10.10.20"
    port: "6443"
  kubernetes:
    version: v1.17.9
    imageRepo: kubesphere
    clusterName: cluster.local
    masqueradeAll: false  # masqueradeAll tells kube-proxy to SNAT everything if using the pure iptables proxy mode. [Default: false]
    maxPods: 110  # maxPods is the number of pods that can run on this Kubelet. [Default: 110]
    nodeCidrMaskSize: 24  # internal network node size allocation. This is the size allocated to each node on your network. [Default: 24]
    proxyMode: ipvs  # mode specifies which proxy mode to use. [Default: ipvs]
  network:
    plugin: calico
    calico:
      ipipMode: Always  # IPIP Mode to use for the IPv4 POOL created at start up. If set to a value other than Never, vxlanMode should be set to "Never". [Always | CrossSubnet | Never] [Default: Always]
      vxlanMode: Never  # VXLAN Mode to use for the IPv4 POOL created at start up. If set to a value other than Never, ipipMode should be set to "Never". [Always | CrossSubnet | Never] [Default: Never]
      vethMTU: 1440  # The maximum transmission unit (MTU) setting determines the largest packet size that can be transmitted through your network. [Default: 1440]
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
  registry:
    registryMirrors: []
    insecureRegistries: []
  addons: []  # add your persistent storage and LoadBalancer plugin configuration here if you have, see https://kubesphere.io/docs/installing-on-linux/introduction/storage-configuration
```
rayzhou2017 I've installed it countless times without success. My head hurts.
zackzhang
What error are you getting?
zackzhang The error above. It feels like an lb communication problem, but ping to it works.
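A ping only proves the VIP answers ICMP; the upload-certs timeout suggests nothing is actually answering on lb.kubesphere.local:6443. A quick check from a master node (the hostname and VIP are taken from the config above) would be something like:

```bash
# Does the load-balancer domain resolve to the VIP from the config?
getent hosts lb.kubesphere.local

# Can we open a TCP connection to the API server port behind the VIP?
# Any HTTP(S) response means the port is reachable; "connection refused"
# means nothing is listening on or forwarding 6443 at 10.10.10.20.
curl -k https://lb.kubesphere.local:6443/healthz
# or: nc -vz 10.10.10.20 6443
```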