WARN[17:17:46 CST] error: Failed to start etcd cluster: Failed to exec command: sudo -E /bin/sh -c "export ETCDCTL_API=2;export ETCDCTL_CERT_FILE='/etc/ssl/etcd/ssl/admin-master.pem';export ETCDCTL_KEY_FILE='/etc/ssl/etcd/ssl/admin-master-key.pem';export ETCDCTL_CA_FILE='/etc/ssl/etcd/ssl/ca.pem';/usr/local/bin/etcdctl --endpoints=https://116.196.101.194:2379 cluster-health | grep -q 'cluster is healthy'"
Error: client: etcd cluster is unavailable or misconfigured; error #0: dial tcp 116.196.101.194:2379: connect: connection refused

error #0: dial tcp 116.196.101.194:2379: connect: connection refused: Process exited with status 1
Error: Failed to start etcd cluster: Failed to start etcd cluster: Failed to exec command: sudo -E /bin/sh -c "export ETCDCTL_API=2;export ETCDCTL_CERT_FILE='/etc/ssl/etcd/ssl/admin-master.pem';export ETCDCTL_KEY_FILE='/etc/ssl/etcd/ssl/admin-master-key.pem';export ETCDCTL_CA_FILE='/etc/ssl/etcd/ssl/ca.pem';/usr/local/bin/etcdctl --endpoints=https://116.196.101.194:2379 cluster-health | grep -q 'cluster is healthy'"
Error: client: etcd cluster is unavailable or misconfigured; error #0: dial tcp 116.196.101.194:2379: connect: connection refused

See the attachment for details.

    Forest-L
    apiVersion: kubekey.kubesphere.io/v1alpha1
    kind: Cluster
    metadata:
      name: jdd-kubesphere
    spec:
      hosts:
      - {name: master, address: 116.196.101.194, internalAddress: 116.196.101.194, user: admin, password: Kubesphere@1}
      - {name: node1, address: 116.196.105.242, internalAddress: 116.196.105.242, user: admin, password: Kubesphere@1}
      roleGroups:
        etcd:
        - master
        master:
        - master
        worker:
        - node1
      controlPlaneEndpoint:
        domain: lb.kubesphere.local
        address: ""
        port: "6443"
      kubernetes:
        version: v1.18.6
        imageRepo: kubesphere
        clusterName: cluster.local
      network:
        plugin: calico
        kubePodsCIDR: 10.233.64.0/18
        kubeServiceCIDR: 10.233.0.0/18
      registry:
        registryMirrors: []
        insecureRegistries: []
      addons: []


    apiVersion: installer.kubesphere.io/v1alpha1
    kind: ClusterConfiguration
    metadata:
      name: ks-installer
      namespace: kubesphere-system
      labels:
        version: v3.0.0
    spec:
      local_registry: ""
      persistence:
        storageClass: ""
      authentication:
        jwtSecret: ""
      etcd:
        monitoring: true
        endpointIps: localhost
        port: 2379
        tlsEnable: true
      common:
        es:
          elasticsearchDataVolumeSize: 20Gi
          elasticsearchMasterVolumeSize: 4Gi
          elkPrefix: logstash
          logMaxAge: 7
        mysqlVolumeSize: 20Gi
        minioVolumeSize: 20Gi
        etcdVolumeSize: 20Gi
        openldapVolumeSize: 2Gi
        redisVolumSize: 2Gi
      console:
        enableMultiLogin: false # enable/disable multi login
        port: 30880
      alerting:
        enabled: false
      auditing:
        enabled: false
      devops:
        enabled: false
        jenkinsMemoryLim: 2Gi
        jenkinsMemoryReq: 1500Mi
        jenkinsVolumeSize: 8Gi
        jenkinsJavaOpts_Xms: 512m
        jenkinsJavaOpts_Xmx: 512m
        jenkinsJavaOpts_MaxRAM: 2g
      events:
        enabled: false
        ruler:
          enabled: true
          replicas: 2
      logging:
        enabled: false
        logsidecarReplicas: 2
      metrics_server:
        enabled: true
      monitoring:
        prometheusMemoryRequest: 400Mi
        prometheusVolumeSize: 20Gi
      multicluster:
        clusterRole: none # host | member | none
      networkpolicy:
        enabled: false
      notification:
        enabled: true
      openpitrix:
        enabled: true
      servicemesh:
        enabled: true

    [root@master servers]# ./kk create cluster -f config-sample.yaml
    +--------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+
    | name   | sudo | curl | openssl | ebtables | socat | ipset | conntrack | docker | nfs client | ceph client | glusterfs client | time         |
    +--------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+
    | node1  | y    | y    | y       | y        | y     | y     | y         | y      |            |             |                  | CST 18:18:40 |
    | master | y    | y    | y       | y        | y     | y     | y         | y      |            |             |                  | CST 18:18:40 |
    +--------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+

    This is a simple check of your environment.
    Before installation, you should ensure that your machines meet all requirements specified at
    https://github.com/kubesphere/kubekey#requirements-and-recommendations

    Continue this installation? [yes/no]: yes
    INFO[18:18:43 CST] Downloading Installation Files
    INFO[18:18:43 CST] Downloading kubeadm …
    INFO[18:18:43 CST] Downloading kubelet …
    INFO[18:18:45 CST] Downloading kubectl …
    INFO[18:18:45 CST] Downloading helm …
    INFO[18:18:46 CST] Downloading kubecni …
    INFO[18:18:46 CST] Configurating operating system …
    [node1 116.196.105.242] MSG:
    net.ipv4.ip_forward = 1
    net.bridge.bridge-nf-call-arptables = 1
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    net.ipv4.ip_local_reserved_ports = 30000-32767
    [master 116.196.101.194] MSG:
    net.ipv4.ip_forward = 1
    net.bridge.bridge-nf-call-arptables = 1
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    net.ipv4.ip_local_reserved_ports = 30000-32767
    INFO[18:18:48 CST] Installing docker …
    INFO[18:18:48 CST] Start to download images on all nodes
    [node1] Downloading image: kubesphere/pause:3.2
    [master] Downloading image: kubesphere/etcd:v3.3.12
    [node1] Downloading image: kubesphere/kube-proxy:v1.18.6
    [master] Downloading image: kubesphere/pause:3.2
    [node1] Downloading image: coredns/coredns:1.6.9
    [master] Downloading image: kubesphere/kube-apiserver:v1.18.6
    [node1] Downloading image: kubesphere/k8s-dns-node-cache:1.15.12
    [master] Downloading image: kubesphere/kube-controller-manager:v1.18.6
    [node1] Downloading image: calico/kube-controllers:v3.15.1
    [master] Downloading image: kubesphere/kube-scheduler:v1.18.6
    [node1] Downloading image: calico/cni:v3.15.1
    [master] Downloading image: kubesphere/kube-proxy:v1.18.6
    [node1] Downloading image: calico/node:v3.15.1
    [master] Downloading image: coredns/coredns:1.6.9
    [master] Downloading image: kubesphere/k8s-dns-node-cache:1.15.12
    [node1] Downloading image: calico/pod2daemon-flexvol:v3.15.1
    [master] Downloading image: calico/kube-controllers:v3.15.1
    [master] Downloading image: calico/cni:v3.15.1
    [master] Downloading image: calico/node:v3.15.1
    [master] Downloading image: calico/pod2daemon-flexvol:v3.15.1
    INFO[18:19:39 CST] Generating etcd certs
    INFO[18:19:40 CST] Synchronizing etcd certs
    INFO[18:19:40 CST] Creating etcd service
    INFO[18:19:41 CST] Starting etcd cluster
    [master 116.196.101.194] MSG:
    Configuration file already exists
    Waiting for etcd to start
    Waiting for etcd to start
    Waiting for etcd to start
    Waiting for etcd to start
    Waiting for etcd to start
    Waiting for etcd to start
    Waiting for etcd to start
    Waiting for etcd to start
    Waiting for etcd to start
    Waiting for etcd to start
    Waiting for etcd to start
    Waiting for etcd to start
    Waiting for etcd to start
    Waiting for etcd to start
    Waiting for etcd to start
    Waiting for etcd to start
    Waiting for etcd to start
    Waiting for etcd to start
    Waiting for etcd to start
    Waiting for etcd to start
    WARN[18:21:18 CST] Task failed …
    WARN[18:21:18 CST] error: Failed to start etcd cluster: Failed to exec command: sudo -E /bin/sh -c "export ETCDCTL_API=2;export ETCDCTL_CERT_FILE='/etc/ssl/etcd/ssl/admin-master.pem';export ETCDCTL_KEY_FILE='/etc/ssl/etcd/ssl/admin-master-key.pem';export ETCDCTL_CA_FILE='/etc/ssl/etcd/ssl/ca.pem';/usr/local/bin/etcdctl --endpoints=https://116.196.101.194:2379 cluster-health | grep -q 'cluster is healthy'"
    Error: client: etcd cluster is unavailable or misconfigured; error #0: dial tcp 116.196.101.194:2379: connect: connection refused

    error #0: dial tcp 116.196.101.194:2379: connect: connection refused: Process exited with status 1
    Error: Failed to start etcd cluster: Failed to start etcd cluster: Failed to exec command: sudo -E /bin/sh -c "export ETCDCTL_API=2;export ETCDCTL_CERT_FILE='/etc/ssl/etcd/ssl/admin-master.pem';export ETCDCTL_KEY_FILE='/etc/ssl/etcd/ssl/admin-master-key.pem';export ETCDCTL_CA_FILE='/etc/ssl/etcd/ssl/ca.pem';/usr/local/bin/etcdctl --endpoints=https://116.196.101.194:2379 cluster-health | grep -q 'cluster is healthy'"
    Error: client: etcd cluster is unavailable or misconfigured; error #0: dial tcp 116.196.101.194:2379: connect: connection refused

    error #0: dial tcp 116.196.101.194:2379: connect: connection refused: Process exited with status 1
    Usage:
    kk create cluster [flags]

    Flags:
      -f, --filename string          Path to a configuration file
      -h, --help                     help for cluster
          --skip-pull-images         Skip pre pull images
          --with-kubernetes string   Specify a supported version of kubernetes
          --with-kubesphere          Deploy a specific version of kubesphere (default v3.0.0)
      -y, --yes                      Skip pre-check of the installation

    Global Flags:
          --debug   Print detailed information (default true)

    Failed to start etcd cluster: Failed to start etcd cluster: Failed to exec command: sudo -E /bin/sh -c "export ETCDCTL_API=2;export ETCDCTL_CERT_FILE='/etc/ssl/etcd/ssl/admin-master.pem';export ETCDCTL_KEY_FILE='/etc/ssl/etcd/ssl/admin-master-key.pem';export ETCDCTL_CA_FILE='/etc/ssl/etcd/ssl/ca.pem';/usr/local/bin/etcdctl --endpoints=https://116.196.101.194:2379 cluster-health | grep -q 'cluster is healthy'"
    Error: client: etcd cluster is unavailable or misconfigured; error #0: dial tcp 116.196.101.194:2379: connect: connection refused

    error #0: dial tcp 116.196.101.194:2379: connect: connection refused: Process exited with status 1

      Forest-L

      [root@master servers]# ./kk delete cluster
      Are you sure to delete this cluster? [yes/no]: yes
      INFO[20:52:35 CST] Resetting kubernetes cluster ...             
      [master 10.0.0.14] MSG:
      sudo -E /bin/sh -c "iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat && ip link del kube-ipvs0 && ip link del nodelocaldns"
      INFO[20:52:36 CST] Successful.                                  
      [root@master servers]# ./kk create cluster -f config-sample.yaml 
      +--------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+
      | name   | sudo | curl | openssl | ebtables | socat | ipset | conntrack | docker | nfs client | ceph client | glusterfs client | time         |
      +--------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+
      | node1  | y    | y    | y       | y        | y     | y     | y         | y      |            |             |                  | CST 20:52:55 |
      | master | y    | y    | y       | y        | y     | y     | y         | y      |            |             |                  | CST 20:52:55 |
      +--------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+
      
      This is a simple check of your environment.
      Before installation, you should ensure that your machines meet all requirements specified at
      https://github.com/kubesphere/kubekey#requirements-and-recommendations
      
      Continue this installation? [yes/no]: yes
      INFO[20:52:57 CST] Downloading Installation Files               
      INFO[20:52:57 CST] Downloading kubeadm ...                      
      INFO[20:52:58 CST] Downloading kubelet ...                      
      INFO[20:52:59 CST] Downloading kubectl ...                      
      INFO[20:52:59 CST] Downloading helm ...                         
      INFO[20:53:00 CST] Downloading kubecni ...                      
      INFO[20:53:00 CST] Configurating operating system ...           
      [node1 116.196.105.242] MSG:
      net.ipv4.ip_forward = 1
      net.bridge.bridge-nf-call-arptables = 1
      net.bridge.bridge-nf-call-ip6tables = 1
      net.bridge.bridge-nf-call-iptables = 1
      net.ipv4.ip_local_reserved_ports = 30000-32767
      [master 116.196.101.194] MSG:
      net.ipv4.ip_forward = 1
      net.bridge.bridge-nf-call-arptables = 1
      net.bridge.bridge-nf-call-ip6tables = 1
      net.bridge.bridge-nf-call-iptables = 1
      net.ipv4.ip_local_reserved_ports = 30000-32767
      INFO[20:53:02 CST] Installing docker ...                        
      INFO[20:53:02 CST] Start to download images on all nodes        
      [node1] Downloading image: kubesphere/pause:3.2
      [master] Downloading image: kubesphere/etcd:v3.3.12
      [master] Downloading image: kubesphere/pause:3.2
      [node1] Downloading image: kubesphere/kube-proxy:v1.18.6
      [master] Downloading image: kubesphere/kube-apiserver:v1.18.6
      [node1] Downloading image: coredns/coredns:1.6.9
      [master] Downloading image: kubesphere/kube-controller-manager:v1.18.6
      [node1] Downloading image: kubesphere/k8s-dns-node-cache:1.15.12
      [master] Downloading image: kubesphere/kube-scheduler:v1.18.6
      [node1] Downloading image: calico/kube-controllers:v3.15.1
      [master] Downloading image: kubesphere/kube-proxy:v1.18.6
      [node1] Downloading image: calico/cni:v3.15.1
      [node1] Downloading image: calico/node:v3.15.1
      [master] Downloading image: coredns/coredns:1.6.9
      [node1] Downloading image: calico/pod2daemon-flexvol:v3.15.1
      [master] Downloading image: kubesphere/k8s-dns-node-cache:1.15.12
      [master] Downloading image: calico/kube-controllers:v3.15.1
      [master] Downloading image: calico/cni:v3.15.1
      [master] Downloading image: calico/node:v3.15.1
      [master] Downloading image: calico/pod2daemon-flexvol:v3.15.1
      INFO[20:54:16 CST] Generating etcd certs                        
      INFO[20:54:17 CST] Synchronizing etcd certs                     
      INFO[20:54:17 CST] Creating etcd service                        
      INFO[20:54:19 CST] Starting etcd cluster                        
      [master 116.196.101.194] MSG:
      Configuration file will be created
      INFO[20:54:19 CST] Refreshing etcd configuration                
      Waiting for etcd to start
      Waiting for etcd to start
      Waiting for etcd to start
      Waiting for etcd to start
      Waiting for etcd to start
      Waiting for etcd to start
      Waiting for etcd to start
      Waiting for etcd to start
      Waiting for etcd to start
      Waiting for etcd to start
      Waiting for etcd to start
      Waiting for etcd to start
      Waiting for etcd to start
      Waiting for etcd to start
      Waiting for etcd to start
      Waiting for etcd to start
      Waiting for etcd to start
      Waiting for etcd to start
      Waiting for etcd to start
      Waiting for etcd to start
      ERRO[20:55:56 CST] Failed to start etcd cluster: Failed to exec command: sudo -E /bin/sh -c "export ETCDCTL_API=2;export ETCDCTL_CERT_FILE='/etc/ssl/etcd/ssl/admin-master.pem';export ETCDCTL_KEY_FILE='/etc/ssl/etcd/ssl/admin-master-key.pem';export ETCDCTL_CA_FILE='/etc/ssl/etcd/ssl/ca.pem';/usr/local/bin/etcdctl --endpoints=https://116.196.101.194:2379 cluster-health | grep -q 'cluster is healthy'" 
      Error:  client: etcd cluster is unavailable or misconfigured; error #0: dial tcp 116.196.101.194:2379: connect: connection refused
      
      error #0: dial tcp 116.196.101.194:2379: connect: connection refused: Process exited with status 1  node=116.196.101.194
      WARN[20:55:56 CST] Task failed ...                              
      WARN[20:55:56 CST] error: interrupted by error                  
      Error: Failed to refresh etcd configuration: interrupted by error
      Usage:
        kk create cluster [flags]
      
      Flags:
        -f, --filename string          Path to a configuration file
        -h, --help                     help for cluster
            --skip-pull-images         Skip pre pull images
            --with-kubernetes string   Specify a supported version of kubernetes
            --with-kubesphere          Deploy a specific version of kubesphere (default v3.0.0)
        -y, --yes                      Skip pre-check of the installation
      
      Global Flags:
            --debug   Print detailed information (default true)
      
      Failed to refresh etcd configuration: interrupted by error

        linyi

        error #0: dial tcp 116.196.101.194:2379: connect: connection refused: Process exited with status 1  node=116.196.101.194

        Looking at this error message, are you sure the node's ports are actually reachable?
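A quick TCP probe answers that question from the install machine without needing etcdctl; a minimal sketch (the host and port are the ones from this thread; the `port_open` helper is illustrative, not part of kk):

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds.

    This mirrors the dial step that produced the error above:
    'connection refused' means nothing is listening on the port,
    while a timeout usually points at a firewall or security group.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# The master etcd client endpoint from this thread:
# port_open("116.196.101.194", 2379)
```

If this returns False on the etcd node itself, the problem is the etcd process (check `journalctl -u etcd`), not the network.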

          Jeff
          The etcd service never came up at all, so the port is obviously not reachable. The firewall is already turned off.

          6 days later

          Jeff
          Installing with v1.18.6 fails, but with v1.17.9 it works.