[kube01 10.252.120.103] MSG:
[preflight] Running pre-flight checks
W0714 18:49:08.104047 9577 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
W0714 18:49:08.107167 9577 cleanupnode.go:99] [reset] Failed to evaluate the "/var/lib/kubelet" directory. Skipping its unmount and cleanup: lstat /var/lib/kubelet: no such file or directory
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/dockershim /var/run/kubernetes /var/lib/cni]
The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.
The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
[kube01 10.252.120.103] MSG:
[preflight] Running pre-flight checks
W0714 18:49:08.520815 9747 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
W0714 18:49:08.523866 9747 cleanupnode.go:99] [reset] Failed to evaluate the "/var/lib/kubelet" directory. Skipping its unmount and cleanup: lstat /var/lib/kubelet: no such file or directory
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/dockershim /var/run/kubernetes /var/lib/cni]
The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.
The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
ERRO[18:49:08 CST] Failed to init kubernetes cluster: Failed to exec command: sudo env PATH=$PATH /bin/sh -c "/usr/local/bin/kubeadm init --config=/etc/kubernetes/kubeadm-config.yaml --ignore-preflight-errors=FileExisting-crictl"
W0714 18:49:08.617805 9787 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
W0714 18:49:08.655380 9787 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.19.9
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING FileExisting-ebtables]: ebtables not found in system path
[WARNING FileExisting-ethtool]: ethtool not found in system path
[WARNING FileExisting-tc]: tc not found in system path
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR FileExisting-conntrack]: conntrack not found in system path
[ERROR FileExisting-ip]: ip not found in system path
[ERROR FileExisting-iptables]: iptables not found in system path
[preflight] If you know what you are doing, you can make a check non-fatal with --ignore-preflight-errors=...
To see the stack trace of this error execute with --v=5 or higher: Process exited with status 1 node=10.252.120.103
WARN[18:49:08 CST] Task failed …
WARN[18:49:08 CST] error: interrupted by error
Error: Failed to init kubernetes cluster: interrupted by error
Usage:
kk create cluster [flags]
Flags:
--download-cmd string The user defined command to download the necessary binary files. The first param '%s' is output path, the second param '%s', is the URL (default "curl -L -o %s %s")
-f, --filename string Path to a configuration file
-h, –help help for cluster
--skip-pull-images Skip pre pull images
--with-kubernetes string Specify a supported version of kubernetes (default "v1.19.8")
--with-kubesphere Deploy a specific version of kubesphere (default v3.1.0)
--with-local-storage Deploy a local PV provisioner
-y, --yes Skip pre-check of the installation
Global Flags:
--debug Print detailed information (default true)
--in-cluster Running inside the cluster
Failed to init kubernetes cluster: interrupted by error
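The fatal preflight errors above mean kubeadm could not find conntrack, ip, and iptables on the node (the ebtables, ethtool, and tc entries are only warnings). A minimal sketch of how to satisfy these checks on CentOS 7.6, assuming the tools are simply not installed; the package names are the standard CentOS 7 ones, and if the binaries do exist under /sbin or /usr/sbin, the real problem may instead be the PATH of the scapp user, since kk runs kubeadm through "sudo env PATH=$PATH":

# Install the binaries that kubeadm preflight looks for (CentOS 7 package names)
sudo yum install -y conntrack-tools iptables iproute ebtables ethtool

# Verify they are visible through the same invocation kk uses
sudo env PATH=$PATH /bin/sh -c 'command -v conntrack ip iptables ebtables ethtool tc'

If the second command still reports missing binaries, add /sbin and /usr/sbin to the PATH of the scapp user before rerunning kk.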
Configuration file:
apiVersion: kubekey.kubesphere.io/v1alpha1
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: kube01, address: 10.252.120.103, internalAddress: 10.252.120.103, port: 22, user: scapp, password: 111}
  - {name: kube02, address: 10.252.120.104, internalAddress: 10.252.120.104, port: 22, user: scapp, password: 111}
  - {name: kube03, address: 10.252.120.105, internalAddress: 10.252.120.105, port: 22, user: scapp, password: 111}
  - {name: kube04, address: 10.252.120.106, internalAddress: 10.252.120.106, port: 22, user: scapp, password: 111}
  - {name: kube05, address: 10.252.120.107, internalAddress: 10.252.120.107, port: 22, user: scapp, password: 111}
  - {name: kube06, address: 10.252.120.108, internalAddress: 10.252.120.108, port: 22, user: scapp, password: 111}
  - {name: kube07, address: 10.252.120.109, internalAddress: 10.252.120.109, port: 22, user: scapp, password: 111}
  - {name: kube08, address: 10.252.120.110, internalAddress: 10.252.120.110, port: 22, user: scapp, password: 111}
  - {name: kube09, address: 10.252.120.111, internalAddress: 10.252.120.103, port: 22, user: scapp, password: 111}
  - {name: kube10, address: 10.252.120.112, internalAddress: 10.252.120.112, port: 22, user: scapp, password: 111}
  - {name: kube11, address: 10.252.120.113, internalAddress: 10.252.120.113, port: 22, user: scapp, password: 111}
  roleGroups:
    etcd:
    - kube01
    - kube02
    - kube03
    master:
    - kube01
    - kube02
    - kube03
    worker:
    - kube04
    - kube05
    - kube06
    - kube07
    - kube08
    - kube09
    - kube10
    - kube11
  controlPlaneEndpoint:
    domain: lb.kubesphere.local
    address: "10.252.120.115"
    port: 6443
  kubernetes:
    version: v1.19.9
    imageRepo: kubesphere
    clusterName: cluster.local
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
  registry:
    registryMirrors: []
    insecureRegistries: [10.252.120.115]
  addons: []
---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.1.0
spec:
  persistence:
    storageClass: ""
  authentication:
    jwtSecret: ""
  zone: ""
  local_registry: ""
  etcd:
    monitoring: true
    endpointIps: localhost
    port: 2379
    tlsEnable: true
  common:
    redis:
      enabled: true
    redisVolumSize: 2Gi
    openldap:
      enabled: false
    openldapVolumeSize: 2Gi
    minioVolumeSize: 20Gi
    monitoring:
      endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090
    es:
      elasticsearchMasterVolumeSize: 4Gi
      elasticsearchDataVolumeSize: 20Gi
      logMaxAge: 7
      elkPrefix: logstash
      basicAuth:
        enabled: false
        username: ""
        password: ""
      externalElasticsearchUrl: ""
      externalElasticsearchPort: ""
  console:
    enableMultiLogin: true
    port: 30880
  alerting:
    enabled: true
    thanosruler:
      replicas: 1
      resources: {}
  auditing:
    enabled: true
  devops:
    enabled: true
    jenkinsMemoryLim: 2Gi
    jenkinsMemoryReq: 1500Mi
    jenkinsVolumeSize: 8Gi
    jenkinsJavaOpts_Xms: 512m
    jenkinsJavaOpts_Xmx: 512m
    jenkinsJavaOpts_MaxRAM: 2g
  events:
    enabled: true
    ruler:
      enabled: true
      replicas: 2
  logging:
    enabled: true
    logsidecar:
      enabled: true
      replicas: 2
  metrics_server:
    enabled: false
  monitoring:
    storageClass: ""
    prometheusMemoryRequest: 400Mi
    prometheusVolumeSize: 20Gi
  multicluster:
    clusterRole: none
  network:
    networkpolicy:
      enabled: true
    ippool:
      type: calico
    topology:
      type: weave-scope
  openpitrix:
    store:
      enabled: true
  servicemesh:
    enabled: true
  kubeedge:
    enabled: true
    cloudCore:
      nodeSelector: {"node-role.kubernetes.io/worker": ""}
      tolerations: []
      cloudhubPort: "10000"
      cloudhubQuicPort: "10001"
      cloudhubHttpsPort: "10002"
      cloudstreamPort: "10003"
      tunnelPort: "10004"
      cloudHub:
        advertiseAddress:
        - ""
        nodeLimit: "100"
      service:
        cloudhubNodePort: "30000"
        cloudhubQuicNodePort: "30001"
        cloudhubHttpsNodePort: "30002"
        cloudstreamNodePort: "30003"
        tunnelNodePort: "30004"
    edgeWatcher:
      nodeSelector: {"node-role.kubernetes.io/worker": ""}
      tolerations: []
      edgeWatcherAgent:
        nodeSelector: {"node-role.kubernetes.io/worker": ""}
        tolerations: []
The operating system is CentOS 7.6, and Docker was installed manually from binaries. The docker.service unit is:
[Unit]
Description=Docker Application Container Engine
Documentation=http://docs.docker.io
[Service]
Type=notify
WorkingDirectory=/data/k8s/docker
Environment="PATH=/usr/local/bin:/bin:/sbin:/usr/bin:/usr/sbin"
ExecStart=/usr/local/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
Restart=on-failure
RestartSec=5
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
Delegate=yes
KillMode=process
[Install]
WantedBy=multi-user.target
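The IsDockerSystemdCheck warning in the log comes from Docker running with the cgroupfs driver while kubeadm recommends systemd. Since this unit starts dockerd without a --config-file flag, it should read the default /etc/docker/daemon.json; a minimal sketch of switching the driver, assuming no daemon.json exists yet (if one already exists, merge the key instead of overwriting, and make sure the kubelet's cgroupDriver is set to match):

# /etc/docker/daemon.json: switch Docker to the systemd cgroup driver
sudo mkdir -p /etc/docker
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

# restart the manually installed docker.service so the new driver takes effect
sudo systemctl restart docker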