```
WARN[12:15:00 CST] Task failed …
WARN[12:15:00 CST] error: Failed to start etcd cluster: Failed to exec command: sudo -E /bin/sh -c "export ETCDCTL_API=2;export ETCDCTL_CERT_FILE='/etc/ssl/etcd/ssl/admin-node2.pem';export ETCDCTL_KEY_FILE='/etc/ssl/etcd/ssl/admin-node2-key.pem';export ETCDCTL_CA_FILE='/etc/ssl/etcd/ssl/ca.pem';/usr/local/bin/etcdctl --endpoints=https://192.168.1.113:2379,https://192.168.1.114:2379,https://192.168.1.166:2379 cluster-health | grep -q 'cluster is healthy'"
: Process exited with status 1
Error: Failed to start etcd cluster: Failed to start etcd cluster: Failed to exec command: sudo -E /bin/sh -c "export ETCDCTL_API=2;export ETCDCTL_CERT_FILE='/etc/ssl/etcd/ssl/admin-node2.pem';export ETCDCTL_KEY_FILE='/etc/ssl/etcd/ssl/admin-node2-key.pem';export ETCDCTL_CA_FILE='/etc/ssl/etcd/ssl/ca.pem';/usr/local/bin/etcdctl --endpoints=https://192.168.1.113:2379,https://192.168.1.114:2379,https://192.168.1.166:2379 cluster-health | grep -q 'cluster is healthy'"
: Process exited with status 1
Usage:
  kk add nodes [flags]

Flags:
      --download-cmd string   The user defined command to download the necessary binary files. The first param '%s' is output path, the second param '%s', is the URL (default "curl -L -o %s %s")
  -f, --filename string       Path to a configuration file
  -h, --help                  help for nodes
      --skip-pull-images      Skip pre pull images
  -y, --yes                   Skip pre-check of the installation

Global Flags:
      --debug        Print detailed information (default true)
      --in-cluster   Running inside the cluster

Failed to start etcd cluster: Failed to start etcd cluster: Failed to exec command: sudo -E /bin/sh -c "export ETCDCTL_API=2;export ETCDCTL_CERT_FILE='/etc/ssl/etcd/ssl/admin-node2.pem';export ETCDCTL_KEY_FILE='/etc/ssl/etcd/ssl/admin-node2-key.pem';export ETCDCTL_CA_FILE='/etc/ssl/etcd/ssl/ca.pem';/usr/local/bin/etcdctl --endpoints=https://192.168.1.113:2379,https://192.168.1.114:2379,https://192.168.1.166:2379 cluster-health | grep -q 'cluster is healthy'"
: Process exited with status 1
```
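To surface the underlying error, the failing health check from the log can be rerun by hand on one of the etcd nodes. The `grep -q` suppresses all output, so dropping it shows what etcdctl actually reports. A minimal sketch, reusing the certificate paths and endpoints from the command above:

```sh
# Re-run the health check kk executes, without the grep, so the real
# etcdctl error is visible (cert paths/endpoints copied from the log).
export ETCDCTL_API=2
export ETCDCTL_CERT_FILE='/etc/ssl/etcd/ssl/admin-node2.pem'
export ETCDCTL_KEY_FILE='/etc/ssl/etcd/ssl/admin-node2-key.pem'
export ETCDCTL_CA_FILE='/etc/ssl/etcd/ssl/ca.pem'

/usr/local/bin/etcdctl \
  --endpoints=https://192.168.1.113:2379,https://192.168.1.114:2379,https://192.168.1.166:2379 \
  cluster-health
```

If a member is reported unreachable, `systemctl status etcd` and `journalctl -u etcd` on that node (assuming the systemd unit kk installs) usually show why it failed to start.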
Configuration file:
```yaml
apiVersion: kubekey.kubesphere.io/v1alpha1
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  # You should complete the ssh information of the hosts
  - {name: node6, address: 192.168.1.113, internalAddress: 192.168.1.113}
  - {name: node1, address: 192.168.1.114, internalAddress: 192.168.1.114}
  - {name: node2, address: 192.168.1.166, internalAddress: 192.168.1.166}
  roleGroups:
    etcd:
    - node6
    - node1
    - node2
    master:
    - node6
    - node1
    worker:
    - node6
    - node1
  controlPlaneEndpoint:
    # If loadbalancer was used, 'address' should be set to loadbalancer's ip.
    domain: lb.kubesphere.local
    address: ""
    port: 6443
  kubernetes:
    version: v1.20.4
    clusterName: cluster.local
    proxyMode: ipvs
    masqueradeAll: false
    maxPods: 120
    nodeCidrMaskSize: 24
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
  registry:
    privateRegistry: ""
```
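Before retrying `kk add nodes`, it is also worth confirming that every endpoint listed in the config answers on the etcd client port 2379. A quick probe of etcd's `/health` endpoint, assuming the same client certificates as in the log above:

```sh
# Probe each etcd member's /health endpoint with the client certs
# from the log; a healthy member typically answers {"health":"true"}.
for ip in 192.168.1.113 192.168.1.114 192.168.1.166; do
  echo "--- $ip ---"
  curl -s --max-time 5 \
    --cacert /etc/ssl/etcd/ssl/ca.pem \
    --cert /etc/ssl/etcd/ssl/admin-node2.pem \
    --key /etc/ssl/etcd/ssl/admin-node2-key.pem \
    "https://$ip:2379/health" || echo "no response from $ip"
done
```

With three members in the `etcd` role group, the cluster needs a quorum of two, so a single unreachable endpoint here is enough to explain the `cluster is healthy` check failing.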