v4.1.2 offline installation fails: etcd health check failed

@Cauchy

Error message:

15:13:34 CST [ETCDConfigureModule] Health check on exist etcd
15:13:34 CST message: [coverity-ms]
etcd health check failed: Failed to exec command: sudo -E /bin/bash -c "export ETCDCTL_API=2;export ETCDCTL_CERT_FILE='/etc/ssl/etcd/ssl/admin-coverity-ms.pem';export ETCDCTL_KEY_FILE='/etc/ssl/etcd/ssl/admin-coverity-ms-key.pem';export ETCDCTL_CA_FILE='/etc/ssl/etcd/ssl/ca.pem';/usr/local/bin/etcdctl --endpoints=https://172.1.30.21:2379 cluster-health | grep -q 'cluster is healthy'" 
Error:  client: etcd cluster is unavailable or misconfigured; error #0: dial tcp 172.1.30.21:2379: connect: connection refused

error #0: dial tcp 172.1.30.21:2379: connect: connection refused: Process exited with status 1
15:13:34 CST retry: [coverity-ms]
[... the same "etcd health check failed ... connection refused" message repeats every ~5 seconds for a total of 20 retries ...]
15:15:11 CST failed: [coverity-ms]
error: Pipeline[CreateClusterPipeline] execute failed: Module[ETCDConfigureModule] exec failed: 
failed: [coverity-ms] [ExistETCDHealthCheck] exec failed after 20 retries: etcd health check failed: Failed to exec command: sudo -E /bin/bash -c "export ETCDCTL_API=2;export ETCDCTL_CERT_FILE='/etc/ssl/etcd/ssl/admin-coverity-ms.pem';export ETCDCTL_KEY_FILE='/etc/ssl/etcd/ssl/admin-coverity-ms-key.pem';export ETCDCTL_CA_FILE='/etc/ssl/etcd/ssl/ca.pem';/usr/local/bin/etcdctl --endpoints=https://172.1.30.21:2379 cluster-health | grep -q 'cluster is healthy'" 
Error:  client: etcd cluster is unavailable or misconfigured; error #0: dial tcp 172.1.30.21:2379: connect: connection refused

error #0: dial tcp 172.1.30.21:2379: connect: connection refused: Process exited with status 1
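
A refused connection on 2379 means nothing is listening on the etcd client port at all, so the first thing to check on the node, before rerunning kk, is the etcd service itself (the same checks sylvia asks for below; etcd is the systemd unit name KubeKey installs for etcd type kubekey):

sudo ss -tlnp | grep -E ':2379|:2380'      # anything listening on the etcd ports?
sudo systemctl status etcd                 # unit state
sudo journalctl -u etcd --no-pager -n 50   # recent startup errors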


    Chain KUBE-FIREWALL (2 references)
    target     prot opt source               destination
    DROP       all  --  !127.0.0.0/8         127.0.0.0/8          /* block incoming localnet connections */ ! ctstate RELATED,ESTABLISHED,DNAT

    My firewall is disabled.
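
    The KUBE-FIREWALL rule above only drops traffic from non-loopback sources to 127.0.0.0/8, so it cannot affect connections to 172.1.30.21:2379. Two quick checks that rule out packet filtering entirely (run on coverity-ms; a local "connection refused" means nothing is listening, firewall or not):

    sudo iptables -S | grep 2379    # any rule mentioning the etcd client port?
    nc -vz 127.0.0.1 2379           # loopback test, no network path involved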

    cici What is 172.1.30.21 in the error? That IP isn't in the cluster config you posted.


      sylvia
      It's the IP of the machine I'm deploying on.
      I have switched machines since; here is the full config:

      apiVersion: kubekey.kubesphere.io/v1alpha2
      kind: Cluster
      metadata:
        name: sample
      spec:
        hosts:
          - {name: coverity-ms, address: 172.1.30.21, internalAddress: 172.1.30.21, user: h00283, password: "@Cc901109"}
        roleGroups:
          etcd:
          - coverity-ms
          control-plane: 
          - coverity-ms
          worker:
          - coverity-ms
        controlPlaneEndpoint:
          ## Internal loadbalancer for apiservers 
          # internalLoadbalancer: haproxy
      
          domain: lb.kubesphere.local
          address: ""
          port: 6443
        kubernetes:
          version: v1.28.0
          clusterName: cluster.local
          autoRenewCerts: true
          containerManager: containerd
        etcd:
          type: kubekey
        network:
          plugin: calico
          kubePodsCIDR: 10.233.64.0/18
          kubeServiceCIDR: 10.233.0.0/18
          ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
          multusCNI:
            enabled: false
        registry:
          privateRegistry: ""
          namespaceOverride: ""
          registryMirrors: []
          insecureRegistries: []
        addons: []
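
      For reference, this is the file handed to kk on create and delete; the filename config-sample.yaml is a placeholder (Cauchy uses the same delete form further down):

      ./kk create cluster -f config-sample.yaml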

        cici Check the state of port 2379 on that machine, then look at how etcd is running.


          sylvia

          h00283@coverity-ms:~/kubesphere$ sudo netstat -tuln | grep 2379
          h00283@coverity-ms:~/kubesphere$ sudo systemctl status etcd
          ● etcd.service - etcd
               Loaded: loaded (/etc/systemd/system/etcd.service; enabled; vendor preset: enabled)
               Active: activating (auto-restart) (Result: exit-code) since Sat 2025-02-08 16:32:38 CST; 2s ago
              Process: 33563 ExecStart=/usr/local/bin/etcd (code=exited, status=1/FAILURE)
             Main PID: 33563 (code=exited, status=1/FAILURE)
                  CPU: 25ms
          h00283@coverity-ms:~/kubesphere$ sudo journalctl -f -u etcd
          [sudo] password for h00283: 
          Feb 08 16:55:01 coverity-ms etcd[34961]: {"level":"info","ts":"2025-02-08T16:55:01.66247+0800","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.1.30.21:2379"]}
          Feb 08 16:55:01 coverity-ms etcd[34961]: {"level":"info","ts":"2025-02-08T16:55:01.662596+0800","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.13","git-sha":"c9063a0dc","go-version":"go1.21.8","go-os":"linux","go-arch":"amd64","max-cpu-set":16,"max-cpu-available":16,"member-initialized":false,"name":"etcd-coverity-ms","data-dir":"/var/lib/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/etcd/member","force-new-cluster":false,"heartbeat-interval":"250ms","election-timeout":"5s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.1.30.21:2380"],"listen-peer-urls":["https://172.1.30.21:2380"],"advertise-client-urls":["https://172.1.30.21:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.1.30.21:2379"],"listen-metrics-urls":[],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"etcd-coverity-ms=https://172.1.30.21:2380","initial-cluster-state":"existing","initial-cluster-token":"k8s_etcd","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":false,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"8h0m0s","auto-compaction-interval":"8h0m0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
          Feb 08 16:55:01 coverity-ms etcd[34961]: {"level":"warn","ts":"2025-02-08T16:55:01.662674+0800","caller":"fileutil/fileutil.go:53","msg":"check file permission","error":"directory \"/var/lib/etcd\" exist, but the permission is \"drwxr-xr-x\". The recommended permission is \"-rwx------\" to prevent possible unprivileged access to the data"}
          Feb 08 16:55:01 coverity-ms etcd[34961]: {"level":"info","ts":"2025-02-08T16:55:01.663982+0800","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/etcd/member/snap/db","took":"1.178268ms"}
          Feb 08 16:55:01 coverity-ms etcd[34961]: {"level":"info","ts":"2025-02-08T16:55:01.665027+0800","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"etcd-coverity-ms","data-dir":"/var/lib/etcd","advertise-peer-urls":["https://172.1.30.21:2380"],"advertise-client-urls":["https://172.1.30.21:2379"]}
          Feb 08 16:55:01 coverity-ms etcd[34961]: {"level":"info","ts":"2025-02-08T16:55:01.665092+0800","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"etcd-coverity-ms","data-dir":"/var/lib/etcd","advertise-peer-urls":["https://172.1.30.21:2380"],"advertise-client-urls":["https://172.1.30.21:2379"]}
          Feb 08 16:55:01 coverity-ms etcd[34961]: {"level":"fatal","ts":"2025-02-08T16:55:01.66511+0800","caller":"etcdmain/etcd.go:204","msg":"discovery failed","error":"cannot fetch cluster info from peer urls: could not retrieve cluster information from the given URLs","stacktrace":"go.etcd.io/etcd/server/v3/etcdmain.startEtcdOrProxyV2\n\tgo.etcd.io/etcd/server/v3/etcdmain/etcd.go:204\ngo.etcd.io/etcd/server/v3/etcdmain.Main\n\tgo.etcd.io/etcd/server/v3/etcdmain/main.go:40\nmain.main\n\tgo.etcd.io/etcd/server/v3/main.go:31\nruntime.main\n\truntime/proc.go:267"}
          Feb 08 16:55:01 coverity-ms systemd[1]: etcd.service: Main process exited, code=exited, status=1/FAILURE
          Feb 08 16:55:01 coverity-ms systemd[1]: etcd.service: Failed with result 'exit-code'.
          Feb 08 16:55:01 coverity-ms systemd[1]: Failed to start etcd.
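
          The fatal line explains the crash loop: "initial-cluster-state":"existing" together with "member-initialized":false means this etcd thinks it must join an already-running cluster, so at startup it tries to fetch cluster info from its peer URL (itself, which is down) and exits, and nothing ever listens on 2379. That combination usually comes from inconsistent leftovers of an earlier install attempt. A hedged cleanup sketch before retrying (paths are the KubeKey defaults visible in the log; config-sample.yaml is a placeholder):

          sudo systemctl stop etcd
          sudo rm -rf /var/lib/etcd                    # stale member state from the failed attempt
          grep INITIAL_CLUSTER_STATE /etc/etcd.env     # if present, 'existing' here confirms the stale config
          ./kk delete cluster -f config-sample.yaml    # then re-run the create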

            cici Looks like etcd isn't coming up.

            cici Dig into the etcd logs and find the error.


              sylvia

              The journalctl -u etcd output is the same as I posted above; the fatal line is:

              Feb 08 16:55:01 coverity-ms etcd[34961]: {"level":"fatal","ts":"2025-02-08T16:55:01.66511+0800","caller":"etcdmain/etcd.go:204","msg":"discovery failed","error":"cannot fetch cluster info from peer urls: could not retrieve cluster information from the given URLs","stacktrace":"go.etcd.io/etcd/server/v3/etcdmain.startEtcdOrProxyV2\n\tgo.etcd.io/etcd/server/v3/etcdmain/etcd.go:204\ngo.etcd.io/etcd/server/v3/etcdmain.Main\n\tgo.etcd.io/etcd/server/v3/etcdmain/main.go:40\nmain.main\n\tgo.etcd.io/etcd/server/v3/main.go:31\nruntime.main\n\truntime/proc.go:267"}

              Can anyone help with this?

              • CauchyK零SK壹S

              You can try uninstalling first and then reinstalling; if it still fails, take another look at the etcd logs.
              Uninstall with ./kk delete cluster -f xxx.yaml


                Cauchy I already uninstalled; it fails the same way.

                Right now it looks like etcd simply won't install.

                Not sure why, but that error suddenly went away.

                Now it is stuck at "Init cluster using kubeadm". What should I do?

                13:51:02 CST [InitKubernetesModule] Init cluster using kubeadm
                14:13:32 CST stdout: [coverity-ms]
                W0210 13:51:02.303972    9282 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
                [init] Using Kubernetes version: v1.28.0
                [preflight] Running pre-flight checks
                [preflight] Pulling images required for setting up a Kubernetes cluster
                [preflight] This might take a minute or two, depending on the speed of your internet connection
                [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
                W0210 14:02:01.427098    9282 checks.go:835] detected that the sandbox image "registry.k8s.io/pause:3.8" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "kubesphere/pause:3.9" as the CRI sandbox image.
                        [WARNING ImagePull]: failed to pull image kubesphere/kube-apiserver:v1.28.0: output: E0210 13:54:01.691356    9407 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = DeadlineExceeded desc = failed to pull and unpack image \"docker.io/kubesphere/kube-apiserver:v1.28.0\": failed to resolve reference \"docker.io/kubesphere/kube-apiserver:v1.28.0\": failed to authorize: failed to fetch anonymous token: Get \"https://auth.docker.io/token?scope=repository%!A(MISSING)kubesphere%!F(MISSING)kube-apiserver%!A(MISSING)pull&service=registry.docker.io\": dial tcp 3.94.224.37:443: i/o timeout" image="kubesphere/kube-apiserver:v1.28.0"
                time="2025-02-10T13:54:01+08:00" level=fatal msg="pulling image: rpc error: code = DeadlineExceeded desc = failed to pull and unpack image \"docker.io/kubesphere/kube-apiserver:v1.28.0\": failed to resolve reference \"docker.io/kubesphere/kube-apiserver:v1.28.0\": failed to authorize: failed to fetch anonymous token: Get \"https://auth.docker.io/token?scope=repository%!A(MISSING)kubesphere%!F(MISSING)kube-apiserver%!A(MISSING)pull&service=registry.docker.io\": dial tcp 3.94.224.37:443: i/o timeout"
                , error: exit status 1
                        [WARNING ImagePull]: failed to pull image kubesphere/kube-controller-manager:v1.28.0: output: E0210 13:56:41.015272    9487 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/kubesphere/kube-controller-manager:v1.28.0\": failed to copy: httpReadSeeker: failed open: failed to do request: Get \"https://docker-images-prod.6aa30f8b08e16409b46e0173d6de2f56.r2.cloudflarestorage.com/registry-v2/docker/registry/v2/blobs/sha256/4b/4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=f1baa2dd9b876aeb89efebbfc9e5d5f4%2F20250210%2Fauto%2Fs3%2Faws4_request&X-Amz-Date=20250210T055630Z&X-Amz-Expires=1200&X-Amz-SignedHeaders=host&X-Amz-Signature=da96e5ce4ceb76b34be79fd7b2a2803430266da58aa7709d86fd3dac8e7def9d\": net/http: TLS handshake timeout" image="kubesphere/kube-controller-manager:v1.28.0"
                time="2025-02-10T13:56:41+08:00" level=fatal msg="pulling image: failed to pull and unpack image \"docker.io/kubesphere/kube-controller-manager:v1.28.0\": failed to copy: httpReadSeeker: failed open: failed to do request: Get \"https://docker-images-prod.6aa30f8b08e16409b46e0173d6de2f56.r2.cloudflarestorage.com/registry-v2/docker/registry/v2/blobs/sha256/4b/4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=f1baa2dd9b876aeb89efebbfc9e5d5f4%2F20250210%2Fauto%2Fs3%2Faws4_request&X-Amz-Date=20250210T055630Z&X-Amz-Expires=1200&X-Amz-SignedHeaders=host&X-Amz-Signature=da96e5ce4ceb76b34be79fd7b2a2803430266da58aa7709d86fd3dac8e7def9d\": net/http: TLS handshake timeout"
                , error: exit status 1
                        [WARNING ImagePull]: failed to pull image kubesphere/kube-scheduler:v1.28.0: output: E0210 13:59:11.180038    9566 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/kubesphere/kube-scheduler:v1.28.0\": failed to copy: httpReadSeeker: failed open: failed to do request: Get \"https://docker-images-prod.6aa30f8b08e16409b46e0173d6de2f56.r2.cloudflarestorage.com/registry-v2/docker/registry/v2/blobs/sha256/f6/f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=f1baa2dd9b876aeb89efebbfc9e5d5f4%2F20250210%2Fauto%2Fs3%2Faws4_request&X-Amz-Date=20250210T055901Z&X-Amz-Expires=1200&X-Amz-SignedHeaders=host&X-Amz-Signature=b056026a752e2085934e266a8b0522a05983a7497a1504da32f2ff393515f164\": net/http: TLS handshake timeout" image="kubesphere/kube-scheduler:v1.28.0"
                time="2025-02-10T13:59:11+08:00" level=fatal msg="pulling image: failed to pull and unpack image \"docker.io/kubesphere/kube-scheduler:v1.28.0\": failed to copy: httpReadSeeker: failed open: failed to do request: Get \"https://docker-images-prod.6aa30f8b08e16409b46e0173d6de2f56.r2.cloudflarestorage.com/registry-v2/docker/registry/v2/blobs/sha256/f6/f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=f1baa2dd9b876aeb89efebbfc9e5d5f4%2F20250210%2Fauto%2Fs3%2Faws4_request&X-Amz-Date=20250210T055901Z&X-Amz-Expires=1200&X-Amz-SignedHeaders=host&X-Amz-Signature=b056026a752e2085934e266a8b0522a05983a7497a1504da32f2ff393515f164\": net/http: TLS handshake timeout"
                , error: exit status 1
                        [WARNING ImagePull]: failed to pull image kubesphere/kube-proxy:v1.28.0: output: E0210 14:02:01.404848    9644 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/kubesphere/kube-proxy:v1.28.0\": failed to copy: httpReadSeeker: failed open: failed to do request: Get \"https://production.cloudflare.docker.com/registry-v2/docker/registry/v2/blobs/sha256/ea/ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a/data?expires=1739170311&signature=xP7oC3XNxUDmlYXy71tSLQBqlJc%3D&version=2\": net/http: TLS handshake timeout" image="kubesphere/kube-proxy:v1.28.0"
                time="2025-02-10T14:02:01+08:00" level=fatal msg="pulling image: failed to pull and unpack image \"docker.io/kubesphere/kube-proxy:v1.28.0\": failed to copy: httpReadSeeker: failed open: failed to do request: Get \"https://production.cloudflare.docker.com/registry-v2/docker/registry/v2/blobs/sha256/ea/ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a/data?expires=1739170311&signature=xP7oC3XNxUDmlYXy71tSLQBqlJc%3D&version=2\": net/http: TLS handshake timeout"
                , error: exit status 1
                        [WARNING ImagePull]: failed to pull image kubesphere/pause:3.9: output: E0210 14:05:41.084287    9737 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/kubesphere/pause:3.9\": failed to copy: httpReadSeeker: failed open: failed to do request: Get \"https://production.cloudflare.docker.com/registry-v2/docker/registry/v2/blobs/sha256/e6/e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c/data?expires=1739170530&signature=AkrKij8WFu9ksi4JaGQGAj%2BxIoI%3D&version=2\": net/http: TLS handshake timeout" image="kubesphere/pause:3.9"
                time="2025-02-10T14:05:41+08:00" level=fatal msg="pulling image: failed to pull and unpack image \"docker.io/kubesphere/pause:3.9\": failed to copy: httpReadSeeker: failed open: failed to do request: Get \"https://production.cloudflare.docker.com/registry-v2/docker/registry/v2/blobs/sha256/e6/e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c/data?expires=1739170530&signature=AkrKij8WFu9ksi4JaGQGAj%2BxIoI%3D&version=2\": net/http: TLS handshake timeout"
                , error: exit status 1
                        [WARNING ImagePull]: failed to pull image coredns/coredns:1.9.3: output: E0210 14:09:29.816684    9809 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/coredns/coredns:1.9.3\": failed to copy: httpReadSeeker: failed open: failed to do request: Get \"https://production.cloudflare.docker.com/registry-v2/docker/registry/v2/blobs/sha256/51/5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a/data?expires=1739170753&signature=SPv%2Fsv93eXwYzLF7U0LXu%2BsEAAY%3D&version=2\": net/http: TLS handshake timeout" image="coredns/coredns:1.9.3"
                time="2025-02-10T14:09:29+08:00" level=fatal msg="pulling image: failed to pull and unpack image \"docker.io/coredns/coredns:1.9.3\": failed to copy: httpReadSeeker: failed open: failed to do request: Get \"https://production.cloudflare.docker.com/registry-v2/docker/registry/v2/blobs/sha256/51/5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a/data?expires=1739170753&signature=SPv%2Fsv93eXwYzLF7U0LXu%2BsEAAY%3D&version=2\": net/http: TLS handshake timeout"
                , error: exit status 1
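
Every one of these ImagePull warnings is the node reaching out to docker.io directly (auth.docker.io, production.cloudflare.docker.com) and timing out. That is expected on an air-gapped host, but it means none of the control-plane images are available when kubeadm needs them: for an offline install, the images have to be on the node (or in a reachable private registry) before kubeadm init runs. A minimal check, assuming containerd's default CRI socket and that crictl and kubeadm are on the PATH:

    # What kubeadm will try to pull for this cluster configuration
    kubeadm config images list --config /etc/kubernetes/kubeadm-config.yaml

    # Which of those images are already present in containerd
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock images \
        | grep -E 'kube-apiserver|kube-controller-manager|kube-scheduler|kube-proxy|pause|coredns'

Any image that appears in the first list but not the second is one kubeadm will try, and on this host fail, to pull.
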
                [certs] Using certificateDir folder "/etc/kubernetes/pki"
                [certs] Generating "ca" certificate and key
                [certs] Generating "apiserver" certificate and key
                [certs] apiserver serving cert is signed for DNS names [coverity-ms coverity-ms.cluster.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.kubesphere.local localhost] and IPs [10.233.0.1 172.1.30.21 127.0.0.1]
                [certs] Generating "apiserver-kubelet-client" certificate and key
                [certs] Generating "front-proxy-ca" certificate and key
                [certs] Generating "front-proxy-client" certificate and key
                [certs] External etcd mode: Skipping etcd/ca certificate authority generation
                [certs] External etcd mode: Skipping etcd/server certificate generation
                [certs] External etcd mode: Skipping etcd/peer certificate generation
                [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation
                [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation
                [certs] Generating "sa" key and public key
                [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
                [kubeconfig] Writing "admin.conf" kubeconfig file
                [kubeconfig] Writing "kubelet.conf" kubeconfig file
                [kubeconfig] Writing "controller-manager.conf" kubeconfig file
                [kubeconfig] Writing "scheduler.conf" kubeconfig file
                [control-plane] Using manifest folder "/etc/kubernetes/manifests"
                [control-plane] Creating static Pod manifest for "kube-apiserver"
                [control-plane] Creating static Pod manifest for "kube-controller-manager"
                [control-plane] Creating static Pod manifest for "kube-scheduler"
                [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
                [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
                [kubelet-start] Starting the kubelet
                [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
                [kubelet-check] Initial timeout of 40s passed.
                
                Unfortunately, an error has occurred:
                        timed out waiting for the condition
                
                This error is likely caused by:
                        - The kubelet is not running
                        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
                
                If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
                        - 'systemctl status kubelet'
                        - 'journalctl -xeu kubelet'
                
                Additionally, a control plane component may have crashed or exited when started by the container runtime.
                To troubleshoot, list all containers using your preferred container runtimes CLI.
                Here is one example how you may list all running Kubernetes containers by using crictl:
                        - 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
                        Once you have found the failing container, you can inspect its logs with:
                        - 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
                error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
                To see the stack trace of this error execute with --v=5 or higher
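
The wait-control-plane timeout above is the downstream symptom: with no control-plane images present, the kubelet has nothing to start, so the apiserver never listens on 6443 (which is also why the reset step below gets "connection refused" from lb.kubesphere.local:6443). The hints kubeadm prints are still the right first checks; consolidated into one pass, assuming a systemd host with containerd:

    systemctl status kubelet --no-pager
    journalctl -xeu kubelet --no-pager | tail -n 100
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause

If the last command prints nothing, no control-plane container was ever created, which points back at the failed image pulls rather than at a crashing component.
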
                14:13:33 CST stdout: [coverity-ms]
                [reset] Reading configuration from the cluster...
                [reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
                W0210 14:13:33.119954    9920 reset.go:120] [reset] Unable to fetch the kubeadm-config ConfigMap from cluster: failed to get config map: Get "https://lb.kubesphere.local:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config?timeout=10s": dial tcp 172.1.30.21:6443: connect: connection refused
                [preflight] Running pre-flight checks
                W0210 14:13:33.120038    9920 removeetcdmember.go:106] [reset] No kubeadm config, using etcd pod spec to get data directory
                [reset] Deleted contents of the etcd data directory: /var/lib/etcd
                [reset] Stopping the kubelet service
                [reset] Unmounting mounted directories in "/var/lib/kubelet"
                [reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
                [reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
                
                The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
                
                The reset process does not reset or clean up iptables rules or IPVS tables.
                If you wish to reset iptables, you must do so manually by using the "iptables" command.
                
                If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
                to reset your system's IPVS tables.
                
                The reset process does not clean your kubeconfig files and you must remove them manually.
                Please, check the contents of the $HOME/.kube/config file.
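
The reset output above lists what kubeadm deliberately leaves behind. Clearing it before retrying avoids a stale CNI config or leftover rules interfering with the next attempt. A minimal cleanup sketch, to be reviewed before running as root:

    rm -rf /etc/cni/net.d           # CNI configuration (per the note above)
    iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
    ipvsadm --clear                 # only if kube-proxy was using IPVS
    rm -f $HOME/.kube/config        # stale kubeconfig
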
                14:13:33 CST message: [coverity-ms]
                init kubernetes cluster failed: Failed to exec command: sudo -E /bin/bash -c "/usr/local/bin/kubeadm init --config=/etc/kubernetes/kubeadm-config.yaml --ignore-preflight-errors=FileExisting-crictl,ImagePull" 
                W0210 13:51:02.303972    9282 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
                [init] Using Kubernetes version: v1.28.0
                [preflight] Running pre-flight checks
                [preflight] Pulling images required for setting up a Kubernetes cluster
                [preflight] This might take a minute or two, depending on the speed of your internet connection
                [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
                W0210 14:02:01.427098    9282 checks.go:835] detected that the sandbox image "registry.k8s.io/pause:3.8" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "kubesphere/pause:3.9" as the CRI sandbox image.
                [... the error message then repeats the same ImagePull warnings, certificate generation, kubeconfig/control-plane setup, and wait-control-plane timeout output shown above; duplicate log omitted ...]
                error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
                To see the stack trace of this error execute with --v=5 or higher: Process exited with status 1
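
Taken together, the kubeadm failure is secondary: every single image pull goes out to docker.io and times out, so init can only end in the wait-control-plane timeout. For a genuinely offline install, the images must come from the offline artifact or a private registry instead of docker.io, so it is worth checking that the installer was pointed at one. A sketch of the relevant registry section of the KubeKey cluster configuration (field names as in config-sample.yaml; the registry address below is hypothetical and must match wherever the offline images were pushed):

    apiVersion: kubekey.kubesphere.io/v1alpha2
    kind: Cluster
    spec:
      registry:
        privateRegistry: "dockerhub.kubekey.local"      # hypothetical offline registry address
        namespaceOverride: ""
        registryMirrors: []
        insecureRegistries: ["dockerhub.kubekey.local"]

If privateRegistry is left empty, images are pulled by their default docker.io names, which would match exactly what these logs show. Also confirm the cluster was created with the offline artifact attached (the -a flag of kk create cluster); otherwise nothing preloads the images onto the node.
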