error #0: x509: certificate is valid for 127.0.0.1

The problem:

./kk create cluster -f config-sample.yaml


[KubeKey ASCII art banner]

14:12:12 UTC [GreetingsModule] Greetings

14:12:12 UTC message: [worker4]

Greetings, KubeKey!

14:12:13 UTC message: [worker1]

Greetings, KubeKey!

14:12:13 UTC message: [worker3]

Greetings, KubeKey!

14:12:14 UTC message: [worker2]

Greetings, KubeKey!

14:12:14 UTC message: [master]

Greetings, KubeKey!

14:12:14 UTC success: [worker4]

14:12:14 UTC success: [worker1]

14:12:14 UTC success: [worker3]

14:12:14 UTC success: [worker2]

14:12:14 UTC success: [master]

14:12:14 UTC [NodePreCheckModule] A pre-check on nodes

14:12:14 UTC success: [worker1]

14:12:14 UTC success: [worker3]

14:12:14 UTC success: [worker4]

14:12:14 UTC success: [master]

14:12:14 UTC success: [worker2]

14:12:14 UTC [ConfirmModule] Display confirmation form

+---------+------+------+---------+----------+-------+-------+---------+-----------+--------+---------+------------+------------+-------------+------------------+--------------+
| name    | sudo | curl | openssl | ebtables | socat | ipset | ipvsadm | conntrack | chrony | docker  | containerd | nfs client | ceph client | glusterfs client | time         |
+---------+------+------+---------+----------+-------+-------+---------+-----------+--------+---------+------------+------------+-------------+------------------+--------------+
| master  | y    | y    | y       | y        | y     | y     |         | y         |        | 20.10.8 | v1.4.9     |            |             |                  | UTC 14:12:14 |
| worker1 | y    | y    | y       | y        | y     | y     |         | y         |        | 20.10.8 | v1.4.9     |            |             |                  | UTC 14:12:14 |
| worker2 | y    | y    | y       | y        | y     | y     |         | y         |        | 20.10.8 | v1.4.9     |            |             |                  | UTC 14:12:14 |
| worker3 | y    | y    | y       | y        | y     | y     |         | y         |        | 20.10.8 | v1.4.9     |            |             |                  | UTC 14:12:14 |
| worker4 | y    | y    | y       | y        | y     | y     |         | y         |        | 20.10.8 | v1.4.9     |            |             |                  | UTC 14:12:14 |
+---------+------+------+---------+----------+-------+-------+---------+-----------+--------+---------+------------+------------+-------------+------------------+--------------+

This is a simple check of your environment.

Before installation, ensure that your machines meet all requirements specified at

https://github.com/kubesphere/kubekey#requirements-and-recommendations

Continue this installation? [yes/no]: yes

14:12:17 UTC success: [LocalHost]

14:12:17 UTC [NodeBinariesModule] Download installation binaries

14:12:17 UTC message: [localhost]

downloading amd64 kubeadm v1.23.10 …

14:12:19 UTC message: [localhost]

kubeadm is existed

14:12:19 UTC message: [localhost]

downloading amd64 kubelet v1.23.10 …

14:12:23 UTC message: [localhost]

kubelet is existed

14:12:23 UTC message: [localhost]

downloading amd64 kubectl v1.23.10 …

14:12:25 UTC message: [localhost]

kubectl is existed

14:12:25 UTC message: [localhost]

downloading amd64 helm v3.9.0 …

14:12:27 UTC message: [localhost]

helm is existed

14:12:27 UTC message: [localhost]

downloading amd64 kubecni v1.2.0 …

14:12:28 UTC message: [localhost]

kubecni is existed

14:12:28 UTC message: [localhost]

downloading amd64 crictl v1.24.0 …

14:12:29 UTC message: [localhost]

crictl is existed

14:12:29 UTC message: [localhost]

downloading amd64 etcd v3.4.13 …

14:12:29 UTC message: [localhost]

etcd is existed

14:12:29 UTC message: [localhost]

downloading amd64 docker 20.10.8 …

14:12:31 UTC message: [localhost]

docker is existed

14:12:31 UTC message: [localhost]

downloading amd64 calicoctl v3.23.2 …

14:12:33 UTC message: [localhost]

calicoctl is existed

14:12:33 UTC success: [LocalHost]

14:12:33 UTC [ConfigureOSModule] Get OS release

14:12:33 UTC success: [worker4]

14:12:33 UTC success: [worker2]

14:12:33 UTC success: [master]

14:12:33 UTC success: [worker3]

14:12:33 UTC success: [worker1]

14:12:33 UTC [ConfigureOSModule] Prepare to init OS

14:12:34 UTC success: [worker3]

14:12:34 UTC success: [worker1]

14:12:34 UTC success: [master]

14:12:34 UTC success: [worker4]

14:12:34 UTC success: [worker2]

14:12:34 UTC [ConfigureOSModule] Generate init os script

14:12:34 UTC success: [master]

14:12:34 UTC success: [worker3]

14:12:34 UTC success: [worker2]

14:12:34 UTC success: [worker4]

14:12:34 UTC success: [worker1]

14:12:34 UTC [ConfigureOSModule] Exec init os script

14:12:36 UTC stdout: [worker3]

net.ipv4.conf.default.rp_filter = 1

net.ipv4.conf.all.rp_filter = 1

net.ipv4.ip_forward = 1

net.bridge.bridge-nf-call-arptables = 1

net.bridge.bridge-nf-call-ip6tables = 1

net.bridge.bridge-nf-call-iptables = 1

net.ipv4.ip_local_reserved_ports = 30000-32767

net.core.netdev_max_backlog = 65535

net.core.rmem_max = 33554432

net.core.wmem_max = 33554432

net.core.somaxconn = 32768

net.ipv4.tcp_max_syn_backlog = 1048576

net.ipv4.neigh.default.gc_thresh1 = 512

net.ipv4.neigh.default.gc_thresh2 = 2048

net.ipv4.neigh.default.gc_thresh3 = 4096

net.ipv4.tcp_retries2 = 15

net.ipv4.tcp_max_tw_buckets = 1048576

net.ipv4.tcp_max_orphans = 65535

net.ipv4.udp_rmem_min = 131072

net.ipv4.udp_wmem_min = 131072

net.ipv4.conf.all.arp_accept = 1

net.ipv4.conf.default.arp_accept = 1

net.ipv4.conf.all.arp_ignore = 1

net.ipv4.conf.default.arp_ignore = 1

vm.max_map_count = 262144

vm.swappiness = 0

vm.overcommit_memory = 0

fs.inotify.max_user_instances = 524288

fs.inotify.max_user_watches = 524288

fs.pipe-max-size = 4194304

fs.aio-max-nr = 262144

kernel.pid_max = 65535

kernel.watchdog_thresh = 5

kernel.hung_task_timeout_secs = 5

14:12:36 UTC stdout: [worker4]

net.ipv4.conf.default.rp_filter = 1

net.ipv4.conf.all.rp_filter = 1

net.ipv4.ip_forward = 1

net.bridge.bridge-nf-call-arptables = 1

net.bridge.bridge-nf-call-ip6tables = 1

net.bridge.bridge-nf-call-iptables = 1

net.ipv4.ip_local_reserved_ports = 30000-32767

net.core.netdev_max_backlog = 65535

net.core.rmem_max = 33554432

net.core.wmem_max = 33554432

net.core.somaxconn = 32768

net.ipv4.tcp_max_syn_backlog = 1048576

net.ipv4.neigh.default.gc_thresh1 = 512

net.ipv4.neigh.default.gc_thresh2 = 2048

net.ipv4.neigh.default.gc_thresh3 = 4096

net.ipv4.tcp_retries2 = 15

net.ipv4.tcp_max_tw_buckets = 1048576

net.ipv4.tcp_max_orphans = 65535

net.ipv4.udp_rmem_min = 131072

net.ipv4.udp_wmem_min = 131072

net.ipv4.conf.all.arp_accept = 1

net.ipv4.conf.default.arp_accept = 1

net.ipv4.conf.all.arp_ignore = 1

net.ipv4.conf.default.arp_ignore = 1

vm.max_map_count = 262144

vm.swappiness = 0

vm.overcommit_memory = 0

fs.inotify.max_user_instances = 524288

fs.inotify.max_user_watches = 524288

fs.pipe-max-size = 4194304

fs.aio-max-nr = 262144

kernel.pid_max = 65535

kernel.watchdog_thresh = 5

kernel.hung_task_timeout_secs = 5

14:12:37 UTC stdout: [worker1]

net.ipv4.conf.default.rp_filter = 1

net.ipv4.conf.all.rp_filter = 1

net.ipv4.ip_forward = 1

net.bridge.bridge-nf-call-arptables = 1

net.bridge.bridge-nf-call-ip6tables = 1

net.bridge.bridge-nf-call-iptables = 1

net.ipv4.ip_local_reserved_ports = 30000-32767

net.core.netdev_max_backlog = 65535

net.core.rmem_max = 33554432

net.core.wmem_max = 33554432

net.core.somaxconn = 32768

net.ipv4.tcp_max_syn_backlog = 1048576

net.ipv4.neigh.default.gc_thresh1 = 512

net.ipv4.neigh.default.gc_thresh2 = 2048

net.ipv4.neigh.default.gc_thresh3 = 4096

net.ipv4.tcp_retries2 = 15

net.ipv4.tcp_max_tw_buckets = 1048576

net.ipv4.tcp_max_orphans = 65535

net.ipv4.udp_rmem_min = 131072

net.ipv4.udp_wmem_min = 131072

net.ipv4.conf.all.arp_accept = 1

net.ipv4.conf.default.arp_accept = 1

net.ipv4.conf.all.arp_ignore = 1

net.ipv4.conf.default.arp_ignore = 1

vm.max_map_count = 262144

vm.swappiness = 0

vm.overcommit_memory = 0

fs.inotify.max_user_instances = 524288

fs.inotify.max_user_watches = 524288

fs.pipe-max-size = 4194304

fs.aio-max-nr = 262144

kernel.pid_max = 65535

kernel.watchdog_thresh = 5

kernel.hung_task_timeout_secs = 5

14:12:37 UTC stdout: [worker2]

net.ipv4.conf.default.rp_filter = 1

net.ipv4.conf.all.rp_filter = 1

net.ipv4.ip_forward = 1

net.bridge.bridge-nf-call-arptables = 1

net.bridge.bridge-nf-call-ip6tables = 1

net.bridge.bridge-nf-call-iptables = 1

net.ipv4.ip_local_reserved_ports = 30000-32767

net.core.netdev_max_backlog = 65535

net.core.rmem_max = 33554432

net.core.wmem_max = 33554432

net.core.somaxconn = 32768

net.ipv4.tcp_max_syn_backlog = 1048576

net.ipv4.neigh.default.gc_thresh1 = 512

net.ipv4.neigh.default.gc_thresh2 = 2048

net.ipv4.neigh.default.gc_thresh3 = 4096

net.ipv4.tcp_retries2 = 15

net.ipv4.tcp_max_tw_buckets = 1048576

net.ipv4.tcp_max_orphans = 65535

net.ipv4.udp_rmem_min = 131072

net.ipv4.udp_wmem_min = 131072

net.ipv4.conf.all.arp_accept = 1

net.ipv4.conf.default.arp_accept = 1

net.ipv4.conf.all.arp_ignore = 1

net.ipv4.conf.default.arp_ignore = 1

vm.max_map_count = 262144

vm.swappiness = 0

vm.overcommit_memory = 0

fs.inotify.max_user_instances = 524288

fs.inotify.max_user_watches = 524288

fs.pipe-max-size = 4194304

fs.aio-max-nr = 262144

kernel.pid_max = 65535

kernel.watchdog_thresh = 5

kernel.hung_task_timeout_secs = 5

14:12:37 UTC stdout: [master]

net.ipv4.conf.default.rp_filter = 1

net.ipv4.conf.all.rp_filter = 1

net.ipv4.ip_forward = 1

net.bridge.bridge-nf-call-arptables = 1

net.bridge.bridge-nf-call-ip6tables = 1

net.bridge.bridge-nf-call-iptables = 1

net.ipv4.ip_local_reserved_ports = 30000-32767

net.core.netdev_max_backlog = 65535

net.core.rmem_max = 33554432

net.core.wmem_max = 33554432

net.core.somaxconn = 32768

net.ipv4.tcp_max_syn_backlog = 1048576

net.ipv4.neigh.default.gc_thresh1 = 512

net.ipv4.neigh.default.gc_thresh2 = 2048

net.ipv4.neigh.default.gc_thresh3 = 4096

net.ipv4.tcp_retries2 = 15

net.ipv4.tcp_max_tw_buckets = 1048576

net.ipv4.tcp_max_orphans = 65535

net.ipv4.udp_rmem_min = 131072

net.ipv4.udp_wmem_min = 131072

net.ipv4.conf.all.arp_accept = 1

net.ipv4.conf.default.arp_accept = 1

net.ipv4.conf.all.arp_ignore = 1

net.ipv4.conf.default.arp_ignore = 1

vm.max_map_count = 262144

vm.swappiness = 0

vm.overcommit_memory = 0

fs.inotify.max_user_instances = 524288

fs.inotify.max_user_watches = 524288

fs.pipe-max-size = 4194304

fs.aio-max-nr = 262144

kernel.pid_max = 65535

kernel.watchdog_thresh = 5

kernel.hung_task_timeout_secs = 5

14:12:37 UTC success: [worker3]

14:12:37 UTC success: [worker4]

14:12:37 UTC success: [worker1]

14:12:37 UTC success: [worker2]

14:12:37 UTC success: [master]

14:12:37 UTC [ConfigureOSModule] configure the ntp server for each node

14:12:37 UTC skipped: [worker4]

14:12:37 UTC skipped: [master]

14:12:37 UTC skipped: [worker1]

14:12:37 UTC skipped: [worker2]

14:12:37 UTC skipped: [worker3]

14:12:37 UTC [KubernetesStatusModule] Get kubernetes cluster status

14:12:37 UTC success: [master]

14:12:37 UTC [InstallContainerModule] Sync docker binaries

14:12:38 UTC skipped: [master]

14:12:38 UTC skipped: [worker4]

14:12:38 UTC skipped: [worker3]

14:12:38 UTC skipped: [worker2]

14:12:38 UTC skipped: [worker1]

14:12:38 UTC [InstallContainerModule] Generate docker service

14:12:38 UTC skipped: [worker1]

14:12:38 UTC skipped: [master]

14:12:38 UTC skipped: [worker2]

14:12:38 UTC skipped: [worker3]

14:12:38 UTC skipped: [worker4]

14:12:38 UTC [InstallContainerModule] Generate docker config

14:12:38 UTC skipped: [worker1]

14:12:38 UTC skipped: [worker2]

14:12:38 UTC skipped: [master]

14:12:38 UTC skipped: [worker3]

14:12:38 UTC skipped: [worker4]

14:12:38 UTC [InstallContainerModule] Enable docker

14:12:38 UTC skipped: [worker1]

14:12:38 UTC skipped: [master]

14:12:38 UTC skipped: [worker2]

14:12:38 UTC skipped: [worker3]

14:12:38 UTC skipped: [worker4]

14:12:38 UTC [InstallContainerModule] Add auths to container runtime

14:12:38 UTC skipped: [worker1]

14:12:38 UTC skipped: [worker2]

14:12:38 UTC skipped: [master]

14:12:38 UTC skipped: [worker3]

14:12:38 UTC skipped: [worker4]

14:12:38 UTC [PullModule] Start to pull images on all nodes

14:12:38 UTC message: [worker1]

downloading image: kubesphere/pause:3.6

14:12:38 UTC message: [worker4]

downloading image: kubesphere/pause:3.6

14:12:38 UTC message: [worker2]

downloading image: kubesphere/pause:3.6

14:12:38 UTC message: [worker3]

downloading image: kubesphere/pause:3.6

14:12:38 UTC message: [master]

downloading image: kubesphere/pause:3.6

14:12:42 UTC message: [worker3]

downloading image: kubesphere/kube-proxy:v1.23.10

14:12:42 UTC message: [worker2]

downloading image: kubesphere/kube-proxy:v1.23.10

14:12:42 UTC message: [master]

downloading image: kubesphere/kube-apiserver:v1.23.10

14:12:42 UTC message: [worker4]

downloading image: kubesphere/kube-proxy:v1.23.10

14:12:42 UTC message: [worker1]

downloading image: kubesphere/kube-proxy:v1.23.10

14:12:43 UTC message: [worker3]

downloading image: coredns/coredns:1.8.6

14:12:43 UTC message: [worker2]

downloading image: coredns/coredns:1.8.6

14:12:43 UTC message: [master]

downloading image: kubesphere/kube-controller-manager:v1.23.10

14:12:43 UTC message: [worker1]

downloading image: coredns/coredns:1.8.6

14:12:43 UTC message: [worker4]

downloading image: coredns/coredns:1.8.6

14:12:45 UTC message: [master]

downloading image: kubesphere/kube-scheduler:v1.23.10

14:12:46 UTC message: [worker3]

downloading image: kubesphere/k8s-dns-node-cache:1.15.12

14:12:46 UTC message: [worker2]

downloading image: kubesphere/k8s-dns-node-cache:1.15.12

14:12:46 UTC message: [worker1]

downloading image: kubesphere/k8s-dns-node-cache:1.15.12

14:12:46 UTC message: [worker4]

downloading image: kubesphere/k8s-dns-node-cache:1.15.12

14:12:46 UTC message: [master]

downloading image: kubesphere/kube-proxy:v1.23.10

14:12:47 UTC message: [master]

downloading image: coredns/coredns:1.8.6

14:12:48 UTC message: [worker3]

downloading image: calico/kube-controllers:v3.23.2

14:12:48 UTC message: [worker2]

downloading image: calico/kube-controllers:v3.23.2

14:12:48 UTC message: [worker4]

downloading image: calico/kube-controllers:v3.23.2

14:12:48 UTC message: [worker1]

downloading image: calico/kube-controllers:v3.23.2

14:12:50 UTC message: [master]

downloading image: kubesphere/k8s-dns-node-cache:1.15.12

14:12:50 UTC message: [worker3]

downloading image: calico/cni:v3.23.2

14:12:50 UTC message: [worker4]

downloading image: calico/cni:v3.23.2

14:12:50 UTC message: [worker1]

downloading image: calico/cni:v3.23.2

14:12:50 UTC message: [worker2]

downloading image: calico/cni:v3.23.2

14:12:52 UTC message: [master]

downloading image: calico/kube-controllers:v3.23.2

14:12:52 UTC message: [worker3]

downloading image: calico/node:v3.23.2

14:12:52 UTC message: [worker1]

downloading image: calico/node:v3.23.2

14:12:52 UTC message: [worker4]

downloading image: calico/node:v3.23.2

14:12:52 UTC message: [worker2]

downloading image: calico/node:v3.23.2

14:12:54 UTC message: [master]

downloading image: calico/cni:v3.23.2

14:12:54 UTC message: [worker3]

downloading image: calico/pod2daemon-flexvol:v3.23.2

14:12:54 UTC message: [worker4]

downloading image: calico/pod2daemon-flexvol:v3.23.2

14:12:55 UTC message: [worker1]

downloading image: calico/pod2daemon-flexvol:v3.23.2

14:12:55 UTC message: [worker2]

downloading image: calico/pod2daemon-flexvol:v3.23.2

14:12:56 UTC message: [master]

downloading image: calico/node:v3.23.2

14:12:57 UTC message: [worker3]

downloading image: library/haproxy:2.3

14:12:57 UTC message: [worker1]

downloading image: library/haproxy:2.3

14:12:57 UTC message: [worker4]

downloading image: library/haproxy:2.3

14:12:57 UTC message: [worker2]

downloading image: library/haproxy:2.3

14:12:58 UTC message: [master]

downloading image: calico/pod2daemon-flexvol:v3.23.2

14:13:01 UTC success: [worker3]

14:13:01 UTC success: [worker1]

14:13:01 UTC success: [worker4]

14:13:01 UTC success: [worker2]

14:13:01 UTC success: [master]

14:13:01 UTC [ETCDPreCheckModule] Get etcd status

14:13:01 UTC success: [master]

14:13:01 UTC [CertsModule] Fetch etcd certs

14:13:01 UTC success: [master]

14:13:01 UTC [CertsModule] Generate etcd Certs

[certs] Using existing ca certificate authority

[certs] Using existing admin-master certificate and key on disk

[certs] Using existing member-master certificate and key on disk

[certs] Using existing node-master certificate and key on disk

14:13:01 UTC success: [LocalHost]

14:13:01 UTC [CertsModule] Synchronize certs file

14:13:01 UTC success: [master]

14:13:01 UTC [CertsModule] Synchronize certs file to master

14:13:01 UTC skipped: [master]

14:13:01 UTC [InstallETCDBinaryModule] Install etcd using binary

14:13:03 UTC success: [master]

14:13:03 UTC [InstallETCDBinaryModule] Generate etcd service

14:13:03 UTC success: [master]

14:13:03 UTC [InstallETCDBinaryModule] Generate access address

14:13:03 UTC success: [master]

14:13:03 UTC [ETCDConfigureModule] Health check on exist etcd

14:13:03 UTC skipped: [master]

14:13:03 UTC [ETCDConfigureModule] Generate etcd.env config on new etcd

14:13:03 UTC success: [master]

14:13:03 UTC [ETCDConfigureModule] Refresh etcd.env config on all etcd

14:13:03 UTC success: [master]

14:13:03 UTC [ETCDConfigureModule] Restart etcd

14:13:07 UTC success: [master]

14:13:07 UTC [ETCDConfigureModule] Health check on all etcd

14:13:07 UTC message: [master]

etcd health check failed: Failed to exec command: sudo -E /bin/bash -c "export ETCDCTL_API=2;export ETCDCTL_CERT_FILE='/etc/ssl/etcd/ssl/admin-master.pem';export ETCDCTL_KEY_FILE='/etc/ssl/etcd/ssl/admin-master-key.pem';export ETCDCTL_CA_FILE='/etc/ssl/etcd/ssl/ca.pem';/usr/local/bin/etcdctl --endpoints=https://172.31.0.129:2379 cluster-health | grep -q 'cluster is healthy'"

Error: client: etcd cluster is unavailable or misconfigured; error #0: x509: certificate is valid for 127.0.0.1, ::1, 185.32.15.139, 185.32.14.86, 185.32.15.176, 185.32.14.83, 185.32.15.145, not 172.31.0.129

error #0: x509: certificate is valid for 127.0.0.1, ::1, 185.32.15.139, 185.32.14.86, 185.32.15.176, 185.32.14.83, 185.32.15.145, not 172.31.0.129: Process exited with status 1
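
From what I can tell, the error says that the etcd certificate already present on the master was issued for the public IPs of an earlier cluster (the 185.32.x.x addresses) and does not cover the new internal address 172.31.0.129 that etcdctl is told to connect to. One way to confirm this, assuming the certificate path shown in the failing health-check command above, is to dump the certificate's SANs on the master:

# On the master node: show which IPs/DNS names the existing etcd cert covers.
# The path is taken from the health check in the log; adjust it if yours differs.
sudo openssl x509 -in /etc/ssl/etcd/ssl/admin-master.pem -noout -text \
  | grep -A1 'Subject Alternative Name'

If the output lists only 127.0.0.1, ::1 and the old public addresses but not 172.31.0.129, the certificate is a leftover from a previous installation rather than one generated for this config.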

My config-sample.yaml:

apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: master, address: 185.32.15.139, internalAddress: 172.31.0.129, user: ec2-user, privateKeyPath: "/home/ec2-user/ssh1.pem"}
  - {name: worker1, address: 185.32.14.86, internalAddress: 172.31.0.133, user: ec2-user, privateKeyPath: "/home/ec2-user/ssh1.pem"}
  - {name: worker2, address: 185.32.15.176, internalAddress: 172.31.0.130, user: ec2-user, privateKeyPath: "/home/ec2-user/ssh1.pem"}
  - {name: worker3, address: 185.32.14.83, internalAddress: 172.31.0.131, user: ec2-user, privateKeyPath: "/home/ec2-user/ssh1.pem"}
  - {name: worker4, address: 185.32.15.145, internalAddress: 172.31.0.132, user: ec2-user, privateKeyPath: "/home/ec2-user/ssh1.pem"}
  roleGroups:
    etcd:
    - master
    control-plane:
    - master
    worker:
    - worker1
    - worker2
    - worker3
    - worker4
  controlPlaneEndpoint:
    internalLoadbalancer: haproxy
    domain: lb.kubesphere.local
    address: ""
    port: 6443
  kubernetes:
    version: 1.23.10
    clusterName: cluster.local
    autoRenewCerts: true
    containerManager: docker
  etcd:
    type: kubekey
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
    ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
    multusCNI:
      enabled: false
  registry:
    privateRegistry: ""
    namespaceOverride: ""
    registryMirrors: []
    insecureRegistries: []
  addons: []

---

apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.3.2
spec:
  persistence:
    storageClass: ""
  authentication:
    jwtSecret: ""
  zone: ""
  local_registry: ""
  namespace_override: ""
  dev_tag: ""
  etcd:
    monitoring: false
    endpointIps: localhost
    port: 2379
    tlsEnable: true
  common:
    core:
      console:
        enableMultiLogin: true
        port: 30880
        type: NodePort
    # apiserver:
    #   resources: {}
    # controllerManager:
    #   resources: {}
    redis:
      enabled: false
      volumeSize: 2Gi
    openldap:
      enabled: false
      volumeSize: 2Gi
    minio:
      volumeSize: 20Gi
    monitoring:
      # type: external
      endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090
      GPUMonitoring:
        enabled: false
    gpu:
      kinds:
      - resourceName: "nvidia.com/gpu"
        resourceType: "GPU"
        default: true
    es:
      # master:
      #   volumeSize: 4Gi
      #   replicas: 1
      #   resources: {}
      # data:
      #   volumeSize: 20Gi
      #   replicas: 1
      #   resources: {}
      logMaxAge: 7
      elkPrefix: logstash
      basicAuth:
        enabled: false
        username: ""
        password: ""
      externalElasticsearchHost: ""
      externalElasticsearchPort: ""
  alerting:
    enabled: false
    # thanosruler:
    #   replicas: 1
    #   resources: {}
  auditing:
    enabled: false
    # operator:
    #   resources: {}
    # webhook:
    #   resources: {}
  devops:
    enabled: false
    # resources: {}
    jenkinsMemoryLim: 8Gi
    jenkinsMemoryReq: 4Gi
    jenkinsVolumeSize: 8Gi
  events:
    enabled: false
    # operator:
    #   resources: {}
    # exporter:
    #   resources: {}
    # ruler:
    #   enabled: true
    #   replicas: 2
    #   resources: {}
  logging:
    enabled: false
    logsidecar:
      enabled: true
      replicas: 2
      # resources: {}
  metrics_server:
    enabled: false
  monitoring:
    storageClass: ""
    node_exporter:
      port: 9100
      # resources: {}
    # kube_rbac_proxy:
    #   resources: {}
    # kube_state_metrics:
    #   resources: {}
    # prometheus:
    #   replicas: 1
    #   volumeSize: 20Gi
    #   resources: {}
    #   operator:
    #     resources: {}
    # alertmanager:
    #   replicas: 1
    #   resources: {}
    # notification_manager:
    #   resources: {}
    #   operator:
    #     resources: {}
    #   proxy:
    #     resources: {}
    gpu:
      nvidia_dcgm_exporter:
        enabled: false
        # resources: {}
  multicluster:
    clusterRole: none
  network:
    networkpolicy:
      enabled: false
    ippool:
      type: none
    topology:
      type: none
  openpitrix:
    store:
      enabled: false
  servicemesh:
    enabled: false
    istio:
      components:
        ingressGateways:
        - name: istio-ingressgateway
          enabled: false
        cni:
          enabled: false
  edgeruntime:
    enabled: false
    kubeedge:
      enabled: false
      cloudCore:
        cloudHub:
          advertiseAddress:
            - ""
        service:
          cloudhubNodePort: "30000"
          cloudhubQuicNodePort: "30001"
          cloudhubHttpsNodePort: "30002"
          cloudstreamNodePort: "30003"
          tunnelNodePort: "30004"
        # resources: {}
        # hostNetWork: false
      iptables-manager:
        enabled: true
        mode: "external"
        # resources: {}
      # edgeService:
      #   resources: {}
  terminal:
    timeout: 600
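
Since etcd is of type kubekey, the etcd certificates should be generated from the hosts listed above, and the health check connects to the master's internalAddress. A small diagnostic sketch (assuming the same certificate path as in the log) to cross-check the internalAddress values in this file against what the deployed certificate actually covers:

# Run on the master (copy config-sample.yaml there first, or adapt the path).
# 1. Which internal addresses does this config declare for the nodes?
grep -o 'internalAddress: [0-9.]*' config-sample.yaml | awk '{print $2}'

# 2. Which addresses does the deployed etcd certificate actually cover?
sudo openssl x509 -in /etc/ssl/etcd/ssl/admin-master.pem -noout -text \
  | grep -A1 'Subject Alternative Name'

At minimum the etcd node's internalAddress (172.31.0.129 here) must appear in the SAN list; in the failing run it does not, which matches the health-check error.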

What should I do?

1 year later

I reinstalled kubekey and was then able to create the cluster normally. It seems kubekey keeps the certificates of the old cluster, so it is best to treat an installation as one-off. I would recommend reinstalling kubekey before every cluster creation to avoid this problem.
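
For anyone hitting the same error, a minimal cleanup sketch along those lines, assuming KubeKey's default local work directory (./kubekey, created next to the kk binary) and the certificate path shown in the log; treat it as a starting point rather than an official procedure:

# 1. Tear down whatever the failed run left behind (prompts for confirmation).
./kk delete cluster -f config-sample.yaml

# 2. Remove KubeKey's local work directory so no certificates from a previous
#    cluster are reused on the next run (assumption: default ./kubekey dir;
#    the cached binaries will simply be downloaded again).
rm -rf ./kubekey

# 3. On the master, remove any leftover etcd certificates from the old cluster
#    if the delete step left them behind (path taken from the failing check).
sudo rm -rf /etc/ssl/etcd/ssl

# 4. Re-run the installation; fresh etcd certs should now be generated with the
#    new internalAddress in their SANs.
./kk create cluster -f config-sample.yaml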