The k8s cluster installs successfully, but every time it gets to the KubeSphere installation step it waits a long time and then fails with the timeout error below:
namespace/kubesphere-system unchanged
serviceaccount/ks-installer unchanged
Warning: apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
customresourcedefinition.apiextensions.k8s.io/clusterconfigurations.installer.kubesphere.io unchanged
clusterrole.rbac.authorization.k8s.io/ks-installer unchanged
clusterrolebinding.rbac.authorization.k8s.io/ks-installer unchanged
deployment.apps/ks-installer unchanged
clusterconfiguration.installer.kubesphere.io/ks-installer unchanged
WARN[15:59:51 CST] Task failed ...
WARN[15:59:51 CST] error: KubeSphere startup timeout.
Error: Failed to deploy kubesphere: KubeSphere startup timeout.

    leons9th

    Check the logs:

    kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f

      Cauchy

      failed: [localhost] (item={'path': 'ks-apiserver', 'file': 'ks-apiserver.yml'}) => {"ansible_loop_var": "item", "changed": true, "cmd": ["/usr/local/bin/kubectl", "apply", "-f", "/kubesphere/kubesphere/ks-apiserver/ks-apiserver.yml", "--force"], "delta": "0:01:49.465708", "end": "2021-05-10 02:31:55.197076", "failed_when_result": true, "item": {"file": "ks-apiserver.yml", "path": "ks-apiserver"}, "msg": "non-zero return code", "rc": 1, "start": "2021-05-10 02:30:05.731368", "stderr": "error: error when creating \"/kubesphere/kubesphere/ks-apiserver/ks-apiserver.yml\": Post https://10.233.0.1:443/apis/apps/v1/namespaces/kubesphere-system/deployments: stream error: stream ID 7; INTERNAL_ERROR", "stderr_lines": ["error: error when creating \"/kubesphere/kubesphere/ks-apiserver/ks-apiserver.yml\": Post https://10.233.0.1:443/apis/apps/v1/namespaces/kubesphere-system/deployments: stream error: stream ID 7; INTERNAL_ERROR"], "stdout": "service/ks-apiserver created", "stdout_lines": ["service/ks-apiserver created"]}
      changed: [localhost] => (item={'path': 'ks-controller-manager', 'file': 'ks-controller-manager.yaml'})
      changed: [localhost] => (item={'path': 'ks-console', 'file': 'ks-console-config.yml'})
      changed: [localhost] => (item={'path': 'ks-console', 'file': 'sample-bookinfo-configmap.yaml'})
      changed: [localhost] => (item={'path': 'ks-console', 'file': 'ks-console-deployment.yml'})
      
      PLAY RECAP *********************************************************************
      localhost                  : ok=27   changed=21   unreachable=0    failed=1    skipped=14   rescued=0    ignored=0
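
      The stdout above shows service/ks-apiserver was created, but the Deployment POST died mid-stream, so only part of the manifest landed. A quick way to see what actually exists before retrying (a minimal sketch, not KubeSphere-specific tooling):

      # List what the failed apply left behind in the namespace
      kubectl get deploy,svc -n kubesphere-system
      # 10.233.0.1:443 is the in-cluster apiserver address; a stream error on a
      # POST there usually means the control plane or overlay network hiccuped.
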
      • Jeff replied to this post

        Try exec-ing into the ks-installer pod and applying that yaml manually:

        kubectl exec-it -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') /bin/bash
        kubectl apply -f /kubesphere/kubesphere/ks-apiserver/ks-apiserver.yml

          Cauchy
          The command's arguments were rejected:

          kubectl exec-it -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') /bin/bash
          Error: unknown command "exec-it" for "kubectl"
          Run 'kubectl --help' for usage.

          I see — kubectl exec -it had been run together as exec-it; re-ran it with the space and it worked. After exec-ing in and applying the yml file, it reports:

          bash-5.1$ kubectl apply -f /kubesphere/kubesphere/ks-apiserver/ks-apiserver.yml
          deployment.apps/ks-apiserver created
          service/ks-apiserver unchanged

          Cauchy: Now that the apply succeeded, how should I proceed — run the installation again, or execute some other command?

            leons9th

            Restart ks-installer:

            kubectl rollout restart deploy -n kubesphere-system ks-installer

            Then keep watching the ks-installer logs.
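
            Combining both steps (same label selector as the log command earlier in this thread; right after the restart the selector may briefly match the terminating pod, so re-run if needed):

            kubectl rollout restart deploy -n kubesphere-system ks-installer
            # Follow the new installer pod's log
            kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f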

              Cauchy
              After the restart there is a new error — the connection was lost, though I'm not sure exactly which address dropped.

              Start installing monitoring
              Start installing multicluster
              Start installing openpitrix
              Start installing network
              **************************************************
              Waiting for all tasks to be completed ...
              task multicluster status is successful  (1/4)
              task openpitrix status is successful  (2/4)
              task network status is successful  (3/4)
              
              error: http2: client connection lost

              Checked the logs again and found that the monitoring task failed:

              Start installing monitoring
              Start installing multicluster
              Start installing openpitrix
              Start installing network
              **************************************************
              Waiting for all tasks to be completed ...
              task multicluster status is successful  (1/4)
              task openpitrix status is successful  (2/4)
              task network status is successful  (3/4)
              task monitoring status is failed  (4/4)
              **************************************************
              Collecting installation results ...
              
              
              Task 'monitoring' failed:
              ******************************************************************************************************************************************************
              {
                "counter": 75,
                "created": "2021-05-10T09:29:39.402331",
                "end_line": 74,
                "event": "runner_on_failed",
                "event_data": {
                  "duration": 117.614838,
                  "end": "2021-05-10T09:29:39.402089",
                  "event_loop": null,
                  "host": "localhost",
                  "ignore_errors": null,
                  "play": "localhost",
                  "play_pattern": "localhost",
                  "play_uuid": "42453aca-4651-eb6e-4fd2-000000000005",
                  "playbook": "/kubesphere/playbooks/monitoring.yaml",
                  "playbook_uuid": "661aa835-1511-4178-ba20-17547f31be9b",
                  "remote_addr": "127.0.0.1",
                  "res": {
                    "_ansible_no_log": false,
                    "changed": true,
                    "cmd": "/usr/local/bin/kubectl apply -f /kubesphere/kubesphere/prometheus/node-exporter --force",
                    "delta": "0:01:57.271006",
                    "end": "2021-05-10 05:29:39.379806",
                    "invocation": {
                      "module_args": {
                        "_raw_params": "/usr/local/bin/kubectl apply -f /kubesphere/kubesphere/prometheus/node-exporter --force",
                        "_uses_shell": true,
                        "argv": null,
                        "chdir": null,
                        "creates": null,
                        "executable": null,
                        "removes": null,
                        "stdin": null,
                        "stdin_add_newline": true,
                        "strip_empty_ends": true,
                        "warn": true
                      }
                    },
                    "msg": "non-zero return code",
                    "rc": 1,
                    "start": "2021-05-10 05:27:42.108800",
                    "stderr": "error when retrieving current configuration of:\nResource: \"rbac.authorization.k8s.io/v1, Resource=clusterroles\", GroupVersionKind: \"rbac.authorization.k8s.io/v1, Kind=ClusterRole\"\nName: \"kubesphere-node-exporter\", Namespace: \"\"\nfrom server for: \"/kubesphere/kubesphere/prometheus/node-exporter/node-exporter-clusterRole.yaml\": Get https://10.233.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/kubesphere-node-exporter: http2: client connection lost\nerror when retrieving current configuration of:\nResource: \"rbac.authorization.k8s.io/v1, Resource=clusterrolebindings\", GroupVersionKind: \"rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding\"\nName: \"kubesphere-node-exporter\", Namespace: \"\"\nfrom server for: \"/kubesphere/kubesphere/prometheus/node-exporter/node-exporter-clusterRoleBinding.yaml\": Get https://10.233.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/kubesphere-node-exporter: net/http: TLS handshake timeout\nerror when retrieving current configuration of:\nResource: \"apps/v1, Resource=daemonsets\", GroupVersionKind: \"apps/v1, Kind=DaemonSet\"\nName: \"node-exporter\", Namespace: \"kubesphere-monitoring-system\"\nfrom server for: \"/kubesphere/kubesphere/prometheus/node-exporter/node-exporter-daemonset.yaml\": Get https://10.233.0.1:443/apis/apps/v1/namespaces/kubesphere-monitoring-system/daemonsets/node-exporter: net/http: TLS handshake timeout\nerror when retrieving current configuration of:\nResource: \"/v1, Resource=services\", GroupVersionKind: \"/v1, Kind=Service\"\nName: \"node-exporter\", Namespace: \"kubesphere-monitoring-system\"\nfrom server for: \"/kubesphere/kubesphere/prometheus/node-exporter/node-exporter-service.yaml\": Get https://10.233.0.1:443/api/v1/namespaces/kubesphere-monitoring-system/services/node-exporter: net/http: TLS handshake timeout\nerror when retrieving current configuration of:\nResource: \"/v1, Resource=serviceaccounts\", GroupVersionKind: \"/v1, Kind=ServiceAccount\"\nName: \"node-exporter\", Namespace: \"kubesphere-monitoring-system\"\nfrom server for: \"/kubesphere/kubesphere/prometheus/node-exporter/node-exporter-serviceAccount.yaml\": Get https://10.233.0.1:443/api/v1/namespaces/kubesphere-monitoring-system/serviceaccounts/node-exporter: net/http: TLS handshake timeout\nerror when retrieving current configuration of:\nResource: \"monitoring.coreos.com/v1, Resource=servicemonitors\", GroupVersionKind: \"monitoring.coreos.com/v1, Kind=ServiceMonitor\"\nName: \"node-exporter\", Namespace: \"kubesphere-monitoring-system\"\nfrom server for: \"/kubesphere/kubesphere/prometheus/node-exporter/node-exporter-serviceMonitor.yaml\": Get https://10.233.0.1:443/apis/monitoring.coreos.com/v1/namespaces/kubesphere-monitoring-system/servicemonitors/node-exporter: dial tcp 10.233.0.1:443: i/o timeout",
                    "stderr_lines": [
                      "error when retrieving current configuration of:",
                      "Resource: \"rbac.authorization.k8s.io/v1, Resource=clusterroles\", GroupVersionKind: \"rbac.authorization.k8s.io/v1, Kind=ClusterRole\"",
                      "Name: \"kubesphere-node-exporter\", Namespace: \"\"",
                      "from server for: \"/kubesphere/kubesphere/prometheus/node-exporter/node-exporter-clusterRole.yaml\": Get https://10.233.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/kubesphere-node-exporter: http2: client connection lost",
                      "error when retrieving current configuration of:",
                      "Resource: \"rbac.authorization.k8s.io/v1, Resource=clusterrolebindings\", GroupVersionKind: \"rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding\"",
                      "Name: \"kubesphere-node-exporter\", Namespace: \"\"",
                      "from server for: \"/kubesphere/kubesphere/prometheus/node-exporter/node-exporter-clusterRoleBinding.yaml\": Get https://10.233.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/kubesphere-node-exporter: net/http: TLS handshake timeout",
                      "error when retrieving current configuration of:",
                      "Resource: \"apps/v1, Resource=daemonsets\", GroupVersionKind: \"apps/v1, Kind=DaemonSet\"",
                      "Name: \"node-exporter\", Namespace: \"kubesphere-monitoring-system\"",
                      "from server for: \"/kubesphere/kubesphere/prometheus/node-exporter/node-exporter-daemonset.yaml\": Get https://10.233.0.1:443/apis/apps/v1/namespaces/kubesphere-monitoring-system/daemonsets/node-exporter: net/http: TLS handshake timeout",
                      "error when retrieving current configuration of:",
                      "Resource: \"/v1, Resource=services\", GroupVersionKind: \"/v1, Kind=Service\"",
                      "Name: \"node-exporter\", Namespace: \"kubesphere-monitoring-system\"",
                      "from server for: \"/kubesphere/kubesphere/prometheus/node-exporter/node-exporter-service.yaml\": Get https://10.233.0.1:443/api/v1/namespaces/kubesphere-monitoring-system/services/node-exporter: net/http: TLS handshake timeout",
                      "error when retrieving current configuration of:",
                      "Resource: \"/v1, Resource=serviceaccounts\", GroupVersionKind: \"/v1, Kind=ServiceAccount\"",
                      "Name: \"node-exporter\", Namespace: \"kubesphere-monitoring-system\"",
                      "from server for: \"/kubesphere/kubesphere/prometheus/node-exporter/node-exporter-serviceAccount.yaml\": Get https://10.233.0.1:443/api/v1/namespaces/kubesphere-monitoring-system/serviceaccounts/node-exporter: net/http: TLS handshake timeout",
                      "error when retrieving current configuration of:",
                      "Resource: \"monitoring.coreos.com/v1, Resource=servicemonitors\", GroupVersionKind: \"monitoring.coreos.com/v1, Kind=ServiceMonitor\"",
                      "Name: \"node-exporter\", Namespace: \"kubesphere-monitoring-system\"",
                      "from server for: \"/kubesphere/kubesphere/prometheus/node-exporter/node-exporter-serviceMonitor.yaml\": Get https://10.233.0.1:443/apis/monitoring.coreos.com/v1/namespaces/kubesphere-monitoring-system/servicemonitors/node-exporter: dial tcp 10.233.0.1:443: i/o timeout"
                    ],
                    "stdout": "",
                    "stdout_lines": []
                  },
                  "role": "ks-monitor",
                  "start": "2021-05-10T09:27:41.787251",
                  "task": "Monitoring | Installing node-exporter",
                  "task_action": "command",
                  "task_args": "",
                  "task_path": "/kubesphere/installer/roles/ks-monitor/tasks/node-exporter.yaml:2",
                  "task_uuid": "42453aca-4651-eb6e-4fd2-000000000033",
                  "uuid": "7213fbbc-9a99-4995-8c76-c7d45a908a11"
                },
                "parent_uuid": "42453aca-4651-eb6e-4fd2-000000000033",
                "pid": 3959,
                "runner_ident": "monitoring",
                "start_line": 73,
                "stdout": "fatal: [localhost]: FAILED! => {\"changed\": true, \"cmd\": \"/usr/local/bin/kubectl apply -f /kubesphere/kubesphere/prometheus/node-exporter --force\", \"delta\": \"0:01:57.271006\", \"end\": \"2021-05-10 05:29:39.379806\", \"msg\": \"non-zero return code\", \"rc\": 1, \"start\": \"2021-05-10 05:27:42.108800\", \"stderr\": \"error when retrieving current configuration of:\\nResource: \\\"rbac.authorization.k8s.io/v1, Resource=clusterroles\\\", GroupVersionKind: \\\"rbac.authorization.k8s.io/v1, Kind=ClusterRole\\\"\\nName: \\\"kubesphere-node-exporter\\\", Namespace: \\\"\\\"\\nfrom server for: \\\"/kubesphere/kubesphere/prometheus/node-exporter/node-exporter-clusterRole.yaml\\\": Get https://10.233.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/kubesphere-node-exporter: http2: client connection lost\\nerror when retrieving current configuration of:\\nResource: \\\"rbac.authorization.k8s.io/v1, Resource=clusterrolebindings\\\", GroupVersionKind: \\\"rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding\\\"\\nName: \\\"kubesphere-node-exporter\\\", Namespace: \\\"\\\"\\nfrom server for: \\\"/kubesphere/kubesphere/prometheus/node-exporter/node-exporter-clusterRoleBinding.yaml\\\": Get https://10.233.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/kubesphere-node-exporter: net/http: TLS handshake timeout\\nerror when retrieving current configuration of:\\nResource: \\\"apps/v1, Resource=daemonsets\\\", GroupVersionKind: \\\"apps/v1, Kind=DaemonSet\\\"\\nName: \\\"node-exporter\\\", Namespace: \\\"kubesphere-monitoring-system\\\"\\nfrom server for: \\\"/kubesphere/kubesphere/prometheus/node-exporter/node-exporter-daemonset.yaml\\\": Get https://10.233.0.1:443/apis/apps/v1/namespaces/kubesphere-monitoring-system/daemonsets/node-exporter: net/http: TLS handshake timeout\\nerror when retrieving current configuration of:\\nResource: \\\"/v1, Resource=services\\\", GroupVersionKind: \\\"/v1, Kind=Service\\\"\\nName: \\\"node-exporter\\\", Namespace: \\\"kubesphere-monitoring-system\\\"\\nfrom server for: \\\"/kubesphere/kubesphere/prometheus/node-exporter/node-exporter-service.yaml\\\": Get https://10.233.0.1:443/api/v1/namespaces/kubesphere-monitoring-system/services/node-exporter: net/http: TLS handshake timeout\\nerror when retrieving current configuration of:\\nResource: \\\"/v1, Resource=serviceaccounts\\\", GroupVersionKind: \\\"/v1, Kind=ServiceAccount\\\"\\nName: \\\"node-exporter\\\", Namespace: \\\"kubesphere-monitoring-system\\\"\\nfrom server for: \\\"/kubesphere/kubesphere/prometheus/node-exporter/node-exporter-serviceAccount.yaml\\\": Get https://10.233.0.1:443/api/v1/namespaces/kubesphere-monitoring-system/serviceaccounts/node-exporter: net/http: TLS handshake timeout\\nerror when retrieving current configuration of:\\nResource: \\\"monitoring.coreos.com/v1, Resource=servicemonitors\\\", GroupVersionKind: \\\"monitoring.coreos.com/v1, Kind=ServiceMonitor\\\"\\nName: \\\"node-exporter\\\", Namespace: \\\"kubesphere-monitoring-system\\\"\\nfrom server for: \\\"/kubesphere/kubesphere/prometheus/node-exporter/node-exporter-serviceMonitor.yaml\\\": Get https://10.233.0.1:443/apis/monitoring.coreos.com/v1/namespaces/kubesphere-monitoring-system/servicemonitors/node-exporter: dial tcp 10.233.0.1:443: i/o timeout\", \"stderr_lines\": [\"error when retrieving current configuration of:\", \"Resource: \\\"rbac.authorization.k8s.io/v1, Resource=clusterroles\\\", GroupVersionKind: \\\"rbac.authorization.k8s.io/v1, Kind=ClusterRole\\\"\", \"Name: 
\\\"kubesphere-node-exporter\\\", Namespace: \\\"\\\"\", \"from server for: \\\"/kubesphere/kubesphere/prometheus/node-exporter/node-exporter-clusterRole.yaml\\\": Get https://10.233.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/kubesphere-node-exporter: http2: client connection lost\", \"error when retrieving current configuration of:\", \"Resource: \\\"rbac.authorization.k8s.io/v1, Resource=clusterrolebindings\\\", GroupVersionKind: \\\"rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding\\\"\", \"Name: \\\"kubesphere-node-exporter\\\", Namespace: \\\"\\\"\", \"from server for: \\\"/kubesphere/kubesphere/prometheus/node-exporter/node-exporter-clusterRoleBinding.yaml\\\": Get https://10.233.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/kubesphere-node-exporter: net/http: TLS handshake timeout\", \"error when retrieving current configuration of:\", \"Resource: \\\"apps/v1, Resource=daemonsets\\\", GroupVersionKind: \\\"apps/v1, Kind=DaemonSet\\\"\", \"Name: \\\"node-exporter\\\", Namespace: \\\"kubesphere-monitoring-system\\\"\", \"from server for: \\\"/kubesphere/kubesphere/prometheus/node-exporter/node-exporter-daemonset.yaml\\\": Get https://10.233.0.1:443/apis/apps/v1/namespaces/kubesphere-monitoring-system/daemonsets/node-exporter: net/http: TLS handshake timeout\", \"error when retrieving current configuration of:\", \"Resource: \\\"/v1, Resource=services\\\", GroupVersionKind: \\\"/v1, Kind=Service\\\"\", \"Name: \\\"node-exporter\\\", Namespace: \\\"kubesphere-monitoring-system\\\"\", \"from server for: \\\"/kubesphere/kubesphere/prometheus/node-exporter/node-exporter-service.yaml\\\": Get https://10.233.0.1:443/api/v1/namespaces/kubesphere-monitoring-system/services/node-exporter: net/http: TLS handshake timeout\", \"error when retrieving current configuration of:\", \"Resource: \\\"/v1, Resource=serviceaccounts\\\", GroupVersionKind: \\\"/v1, Kind=ServiceAccount\\\"\", \"Name: \\\"node-exporter\\\", Namespace: \\\"kubesphere-monitoring-system\\\"\", \"from server for: \\\"/kubesphere/kubesphere/prometheus/node-exporter/node-exporter-serviceAccount.yaml\\\": Get https://10.233.0.1:443/api/v1/namespaces/kubesphere-monitoring-system/serviceaccounts/node-exporter: net/http: TLS handshake timeout\", \"error when retrieving current configuration of:\", \"Resource: \\\"monitoring.coreos.com/v1, Resource=servicemonitors\\\", GroupVersionKind: \\\"monitoring.coreos.com/v1, Kind=ServiceMonitor\\\"\", \"Name: \\\"node-exporter\\\", Namespace: \\\"kubesphere-monitoring-system\\\"\", \"from server for: \\\"/kubesphere/kubesphere/prometheus/node-exporter/node-exporter-serviceMonitor.yaml\\\": Get https://10.233.0.1:443/apis/monitoring.coreos.com/v1/namespaces/kubesphere-monitoring-system/servicemonitors/node-exporter: dial tcp 10.233.0.1:443: i/o timeout\"], \"stdout\": \"\", \"stdout_lines\": []}",
                "uuid": "7213fbbc-9a99-4995-8c76-c7d45a908a11"
              }
              ******************************************************************************************************************************************************
              Failed to ansible-playbook ks-config.yaml
              E0510 05:36:26.988333       1 reflector.go:284] pkg/mod/k8s.io/client-go@v0.0.0-20190411052641-7a6b4715b709/tools/cache/reflector.go:99: Failed to watch *unstructured.Unstructured: Get "https://10.233.0.1:443/apis/installer.kubesphere.io/v1alpha1/namespaces/kubesphere-system/clusterconfigurations?fieldSelector=metadata.name%3Dks-installer&resourceVersion=48786&timeoutSeconds=499&watch=true": dial tcp 10.233.0.1:443: connect: connection refused
              E0510 05:36:27.990356       1 reflector.go:131] pkg/mod/k8s.io/client-go@v0.0.0-20190411052641-7a6b4715b709/tools/cache/reflector.go:99: Failed to list *unstructured.Unstructured: Get "https://10.233.0.1:443/apis/installer.kubesphere.io/v1alpha1/namespaces/kubesphere-system/clusterconfigurations?fieldSelector=metadata.name%3Dks-installer&limit=500&resourceVersion=0": dial tcp 10.233.0.1:443: connect: connection refused
              E0510 05:36:28.991146       1 reflector.go:131] pkg/mod/k8s.io/client-go@v0.0.0-20190411052641-7a6b4715b709/tools/cache/reflector.go:99: Failed to list *unstructured.Unstructured: Get "https://10.233.0.1:443/apis/installer.kubesphere.io/v1alpha1/namespaces/kubesphere-system/clusterconfigurations?fieldSelector=metadata.name%3Dks-installer&limit=500&resourceVersion=0": dial tcp 10.233.0.1:443: connect: connection refused
              E0510 05:36:29.991965       1 reflector.go:131] pkg/mod/k8s.io/client-go@v0.0.0-20190411052641-7a6b4715b709/tools/cache/reflector.go:99: Failed to list *unstructured.Unstructured: Get "https://10.233.0.1:443/apis/installer.kubesphere.io/v1alpha1/namespaces/kubesphere-system/clusterconfigurations?fieldSelector=metadata.name%3Dks-installer&limit=500&resourceVersion=0": dial tcp 10.233.0.1:443: connect: connection refused
              E0510 05:36:30.992636       1 reflector.go:131] pkg/mod/k8s.io/client-go@v0.0.0-20190411052641-7a6b4715b709/tools/cache/reflector.go:99: Failed to list *unstructured.Unstructured: Get "https://10.233.0.1:443/apis/installer.kubesphere.io/v1alpha1/namespaces/kubesphere-system/clusterconfigurations?fieldSelector=metadata.name%3Dks-installer&limit=500&resourceVersion=0": dial tcp 10.233.0.1:443: connect: connection refused
              E0510 05:36:31.993368       1 reflector.go:131] pkg/mod/k8s.io/client-go@v0.0.0-20190411052641-7a6b4715b709/tools/cache/reflector.go:99: Failed to list *unstructured.Unstructured: Get "https://10.233.0.1:443/apis/installer.kubesphere.io/v1alpha1/namespaces/kubesphere-system/clusterconfigurations?fieldSelector=metadata.name%3Dks-installer&limit=500&resourceVersion=0": dial tcp 10.233.0.1:443: connect: connection refused
              E0510 05:36:32.994235       1 reflector.go:131] pkg/mod/k8s.io/client-go@v0.0.0-20190411052641-7a6b4715b709/tools/cache/reflector.go:99: Failed to list *unstructured.Unstructured: Get "https://10.233.0.1:443/apis/installer.kubesphere.io/v1alpha1/namespaces/kubesphere-system/clusterconfigurations?fieldSelector=metadata.name%3Dks-installer&limit=500&resourceVersion=0": dial tcp 10.233.0.1:443: connect: connection refused
              E0510 05:36:33.998121       1 reflector.go:131] pkg/mod/k8s.io/client-go@v0.0.0-20190411052641-7a6b4715b709/tools/cache/reflector.go:99: Failed to list *unstructured.Unstructured: Get "https://10.233.0.1:443/apis/installer.kubesphere.io/v1alpha1/namespaces/kubesphere-system/clusterconfigurations?fieldSelector=metadata.name%3Dks-installer&limit=500&resourceVersion=0": dial tcp 10.233.0.1:443: connect: connection refused
              E0510 05:36:34.998777       1 reflector.go:131] pkg/mod/k8s.io/client-go@v0.0.0-20190411052641-7a6b4715b709/tools/cache/reflector.go:99: Failed to list *unstructured.Unstructured: Get "https://10.233.0.1:443/apis/installer.kubesphere.io/v1alpha1/namespaces/kubesphere-system/clusterconfigurations?fieldSelector=metadata.name%3Dks-installer&limit=500&resourceVersion=0": dial tcp 10.233.0.1:443: connect: connection refused
              E0510 05:36:35.999464       1 reflector.go:131] pkg/mod/k8s.io/client-go@v0.0.0-20190411052641-7a6b4715b709/tools/cache/reflector.go:99: Failed to list *unstructured.Unstructured: Get "https://10.233.0.1:443/apis/installer.kubesphere.io/v1alpha1/namespaces/kubesphere-system/clusterconfigurations?fieldSelector=metadata.name%3Dks-installer&limit=500&resourceVersion=0": dial tcp 10.233.0.1:443: connect: connection refused
              E0510 05:36:37.000144       1 reflector.go:131] pkg/mod/k8s.io/client-go@v0.0.0-20190411052641-7a6b4715b709/tools/cache/reflector.go:99: Failed to list *unstructured.Unstructured: Get "https://10.233.0.1:443/apis/installer.kubesphere.io/v1alpha1/namespaces/kubesphere-system/clusterconfigurations?fieldSelector=metadata.name%3Dks-installer&limit=500&resourceVersion=0": dial tcp 10.233.0.1:443: connect: connection refused
              E0510 05:36:38.000891       1 reflector.go:131] pkg/mod/k8s.io/client-go@v0.0.0-20190411052641-7a6b4715b709/tools/cache/reflector.go:99: Failed to list *unstructured.Unstructured: Get "https://10.233.0.1:443/apis/installer.kubesphere.io/v1alpha1/namespaces/kubesphere-system/clusterconfigurations?fieldSelector=metadata.name%3Dks-installer&limit=500&resourceVersion=0": dial tcp 10.233.0.1:443: connect: connection refused
              E0510 05:36:39.001655       1 reflector.go:131] pkg/mod/k8s.io/client-go@v0.0.0-20190411052641-7a6b4715b709/tools/cache/reflector.go:99: Failed to list *unstructured.Unstructured: Get "https://10.233.0.1:443/apis/installer.kubesphere.io/v1alpha1/namespaces/kubesphere-system/clusterconfigurations?fieldSelector=metadata.name%3Dks-installer&limit=500&resourceVersion=0": dial tcp 10.233.0.1:443: connect: connection refused
              E0510 05:36:40.002446       1 reflector.go:131] pkg/mod/k8s.io/client-go@v0.0.0-20190411052641-7a6b4715b709/tools/cache/reflector.go:99: Failed to list *unstructured.Unstructured: Get "https://10.233.0.1:443/apis/installer.kubesphere.io/v1alpha1/namespaces/kubesphere-system/clusterconfigurations?fieldSelector=metadata.name%3Dks-installer&limit=500&resourceVersion=0": dial tcp 10.233.0.1:443: connect: connection refused
              E0510 05:36:41.003137       1 reflector.go:131] pkg/mod/k8s.io/client-go@v0.0.0-20190411052641-7a6b4715b709/tools/cache/reflector.go:99: Failed to list *unstructured.Unstructured: Get "https://10.233.0.1:443/apis/installer.kubesphere.io/v1alpha1/namespaces/kubesphere-system/clusterconfigurations?fieldSelector=metadata.name%3Dks-installer&limit=500&resourceVersion=0": dial tcp 10.233.0.1:443: connect: connection refused
              E0510 05:36:42.005667       1 reflector.go:131] pkg/mod/k8s.io/client-go@v0.0.0-20190411052641-7a6b4715b709/tools/cache/reflector.go:99: Failed to list *unstructured.Unstructured: Get "https://10.233.0.1:443/apis/installer.kubesphere.io/v1alpha1/namespaces/kubesphere-system/clusterconfigurations?fieldSelector=metadata.name%3Dks-installer&limit=500&resourceVersion=0": dial tcp 10.233.0.1:443: connect: connection refused
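
              Every failing request above targets https://10.233.0.1:443 — the in-cluster kubernetes Service — with errors ranging from TLS handshake timeouts to connection refused, which points at the apiserver or the pod network (Calico here) rather than KubeSphere itself. Some hedged checks (assuming curl is available inside the installer image):

              # Confirm the apiserver Service and endpoints match the failing address
              kubectl get svc kubernetes -n default
              kubectl get endpoints kubernetes -n default
              # Has kube-apiserver been restarting?
              kubectl get pods -n kube-system -o wide | grep kube-apiserver
              # Probe the endpoint from inside a pod; any HTTP response (even 401/403)
              # means the address is reachable
              kubectl exec -it -n kubesphere-system deploy/ks-installer -- curl -sk https://10.233.0.1:443/healthz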

                Cauchy
                The installation has now succeeded, and it tells me to access the web console on port 30880.

                But I can't reach it — the connection times out. Why would that be?

                telnet can connect to the port, and server resources are fine: CPU is at 6% and only 2 G of memory is in use.

                The server has 16 G of RAM, 6 CPU cores, and a 120 G disk.

                The firewall is completely disabled; telnet can reach the port but curl cannot.

                Pod info:

                NAME                                     READY   STATUS    RESTARTS   AGE
                ks-apiserver-7d4f466f75-5wc5v            1/1     Running   0          29m
                ks-console-5576fccbb8-4mqpt              1/1     Running   0          20h
                ks-controller-manager-7d9b98f467-r2km6   1/1     Running   0          29m
                ks-installer-8d7b5cfc4-lkffh             1/1     Running   0          32m
                NAMESPACE                      NAME                                               READY   STATUS      RESTARTS   AGE   IP                NODE      NOMINATED NODE   READINESS GATES
                kube-system                    calico-kube-controllers-8545b68dd4-8bzjr           1/1     Running     0          22h   10.233.90.1       node1     <none>           <none>
                kube-system                    calico-node-bhf2k                                  1/1     Running     0          22h   192.168.113.121   node1     <none>           <none>
                kube-system                    calico-node-k2r5k                                  1/1     Running     1          22h   192.168.113.120   master1   <none>           <none>
                kube-system                    calico-node-t7dbs                                  1/1     Running     0          22h   192.168.113.122   node2     <none>           <none>
                kube-system                    coredns-867b49865c-krk2l                           1/1     Running     1          22h   10.233.97.5       master1   <none>           <none>
                kube-system                    coredns-867b49865c-txt7n                           1/1     Running     1          22h   10.233.97.4       master1   <none>           <none>
                kube-system                    init-pvc-1dcd1a64-38c6-4250-8043-1f91ccc025a5      0/1     Completed   0          17h   10.233.96.4       node2     <none>           <none>
                kube-system                    init-pvc-c6e7bfc1-5485-4a59-89d8-691015562d39      0/1     Completed   0          17h   10.233.90.8       node1     <none>           <none>
                kube-system                    kube-apiserver-master1                             1/1     Running     2          22h   192.168.113.120   master1   <none>           <none>
                kube-system                    kube-controller-manager-master1                    1/1     Running     6          22h   192.168.113.120   master1   <none>           <none>
                kube-system                    kube-proxy-dj8nx                                   1/1     Running     1          22h   192.168.113.120   master1   <none>           <none>
                kube-system                    kube-proxy-mqmgw                                   1/1     Running     0          22h   192.168.113.121   node1     <none>           <none>
                kube-system                    kube-proxy-qhxwt                                   1/1     Running     0          22h   192.168.113.122   node2     <none>           <none>
                kube-system                    kube-scheduler-master1                             1/1     Running     5          22h   192.168.113.120   master1   <none>           <none>
                kube-system                    nodelocaldns-bkdlp                                 1/1     Running     0          22h   192.168.113.122   node2     <none>           <none>
                kube-system                    nodelocaldns-rsdn6                                 1/1     Running     0          22h   192.168.113.121   node1     <none>           <none>
                kube-system                    nodelocaldns-xkl8q                                 1/1     Running     1          22h   192.168.113.120   master1   <none>           <none>
                kube-system                    openebs-localpv-provisioner-c46f4fbd5-ntkw8        1/1     Running     57         20h   10.233.96.1       node2     <none>           <none>
                kube-system                    snapshot-controller-0                              1/1     Running     0          20h   10.233.96.2       node2     <none>           <none>
                kubesphere-controls-system     default-http-backend-76d9fb4bb7-pkkl7              1/1     Running     0          20h   10.233.90.4       node1     <none>           <none>
                kubesphere-controls-system     kubectl-admin-7b69cb97d5-6brvg                     1/1     Running     0          13h   10.233.90.12      node1     <none>           <none>
                kubesphere-monitoring-system   alertmanager-main-0                                2/2     Running     0          17h   10.233.90.9       node1     <none>           <none>
                kubesphere-monitoring-system   alertmanager-main-1                                2/2     Running     0          17h   10.233.96.5       node2     <none>           <none>
                kubesphere-monitoring-system   alertmanager-main-2                                2/2     Running     0          17h   10.233.96.6       node2     <none>           <none>
                kubesphere-monitoring-system   kube-state-metrics-687c7c4d86-nb4d4                3/3     Running     2          17h   10.233.90.7       node1     <none>           <none>
                kubesphere-monitoring-system   node-exporter-525tt                                2/2     Running     0          17h   192.168.113.121   node1     <none>           <none>
                kubesphere-monitoring-system   node-exporter-7dqgb                                2/2     Running     0          17h   192.168.113.122   node2     <none>           <none>
                kubesphere-monitoring-system   node-exporter-vgrhq                                2/2     Running     0          17h   192.168.113.120   master1   <none>           <none>
                kubesphere-monitoring-system   notification-manager-deployment-7bd887ffb4-pwjg8   1/1     Running     0          13h   10.233.90.13      node1     <none>           <none>
                kubesphere-monitoring-system   notification-manager-deployment-7bd887ffb4-vxhrp   1/1     Running     0          13h   10.233.96.8       node2     <none>           <none>
                kubesphere-monitoring-system   notification-manager-operator-78595d8666-d9vfn     2/2     Running     1          13h   10.233.90.11      node1     <none>           <none>
                kubesphere-monitoring-system   prometheus-k8s-0                                   0/3     Pending     0          17h   <none>            <none>    <none>           <none>
                kubesphere-monitoring-system   prometheus-k8s-1                                   0/3     Pending     0          17h   <none>            <none>    <none>           <none>
                kubesphere-monitoring-system   prometheus-operator-d7fdfccbf-7p2pv                2/2     Running     0          17h   10.233.96.3       node2     <none>           <none>
                kubesphere-system              ks-apiserver-7d4f466f75-5wc5v                      1/1     Running     0          29m   10.233.97.8       master1   <none>           <none>
                kubesphere-system              ks-console-5576fccbb8-4mqpt                        1/1     Running     0          20h   10.233.90.3       node1     <none>           <none>
                kubesphere-system              ks-controller-manager-7d9b98f467-r2km6             1/1     Running     0          29m   10.233.97.9       master1   <none>           <none>
                kubesphere-system              ks-installer-8d7b5cfc4-lkffh                       1/1     Running     0          33m   10.233.90.15      node1     <none>           <none>
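
                Two things stand out in this listing: ks-console is Running (it backs the 30880 NodePort), while both prometheus-k8s pods have been Pending for 17h. A couple of hedged checks, using the resource names as shown above:

                # Pending usually means unbound PVCs or unsatisfiable resource requests
                kubectl describe pod -n kubesphere-monitoring-system prometheus-k8s-0
                kubectl get pvc -n kubesphere-monitoring-system
                # Verify the console Service really maps NodePort 30880
                kubectl get svc -n kubesphere-system ks-console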
                • warn replied to this post

                  Reinstalled once more; it succeeded, but the console is still unreachable — as before, telnet works and curl does not.
                  Cauchy @Jeff

                  #####################################################
                  ###              Welcome to KubeSphere!           ###
                  #####################################################
                  
                  Console: http://192.168.113.120:30880
                  Account: admin
                  Password: P@88w0rd
                  
                  NOTES:
                    1. After you log into the console, please check the
                       monitoring status of service components in
                       "Cluster Management". If any service is not
                       ready, please wait patiently until all components
                       are up and running.
                    2. Please change the default password after login.
                  
                  #####################################################
                  https://kubesphere.io             2021-05-11 12:06:41
                  #####################################################

                  Pod info:

                  [root@master1 kubesphere]# kubectl get pods -n kubesphere-system
                  NAME                                    READY   STATUS      RESTARTS   AGE
                  ks-apiserver-75958c7c85-wqdkr           1/1     Running     0          27m
                  ks-console-5576fccbb8-ssgqs             1/1     Running     0          29m
                  ks-controller-manager-fb858c67b-gsv29   1/1     Running     3          27m
                  ks-installer-7fb74c656c-8887s           1/1     Running     0          34m
                  minio-f69748945-j7qhm                   1/1     Running     0          31m
                  openldap-0                              1/1     Running     1          31m
                  openpitrix-import-job-kc22l             0/1     Completed   0          29m
                  redis-658988fc5b-rfnnf                  1/1     Running     0          32m
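
                  With every pod Running, "telnet connects but curl hangs" usually means the TCP handshake succeeds but the HTTP response never makes it back — often an MTU or kube-proxy/NAT problem rather than the console itself. One way to narrow it down (a sketch; the in-cluster Service name and default port 80 are assumptions based on KubeSphere defaults):

                  # Hit the console Service from inside the cluster, bypassing the NodePort
                  kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- curl -s -o /dev/null -w '%{http_code}\n' http://ks-console.kubesphere-system.svc
                  # Then hit the NodePort from the node itself
                  curl -v http://192.168.113.120:30880
                  # If the in-cluster request works but the NodePort stalls, look at
                  # kube-proxy and the Calico MTU settings.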
                    leons9th changed the title to "Unable to access the console page after successfully installing KubeSphere 3.1.0"
                    2 months later

                     leons9th "The installation has now succeeded"

                     How did you solve it? Mine also keeps hitting KubeSphere startup timeout.

                     • warn replied to this post

                      warn

                       clusterconfiguration.installer.kubesphere.io/ks-installer unchanged
                       WARN[17:35:42 HKT] Task failed ...
                       WARN[17:35:42 HKT] error: KubeSphere startup timeout.
                       Error: Failed to deploy kubesphere: KubeSphere startup timeout.
                       Usage:
                         kk create cluster [flags]

                       Flags:
                             --download-cmd string      The user defined command to download the necessary binary files. The first param '%s' is output path, the second param '%s', is the URL (default "curl -L -o %s %s")
                         -f, --filename string          Path to a configuration file
                         -h, --help                     help for cluster
                             --skip-pull-images         Skip pre pull images
                             --with-kubernetes string   Specify a supported version of kubernetes (default "v1.19.8")
                             --with-kubesphere          Deploy a specific version of kubesphere (default v3.1.0)
                             --with-local-storage       Deploy a local PV provisioner
                         -y, --yes                      Skip pre-check of the installation

                       Global Flags:
                             --debug        Print detailed information (default true)
                             --in-cluster   Running inside the cluster

                       Failed to deploy kubesphere: KubeSphere startup timeout.
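
                       Note that kk's timeout only means kk stopped waiting; the ks-installer pod keeps running in the cluster, so re-attaching to its log (the same command used earlier in this thread) shows whether it eventually finishes and which task actually failed:

                       kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f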

                       9 days later

                       Same problem here: Failed to deploy kubesphere: KubeSphere startup timeout.

                       Is a firewall blocking it? Check whether the firewall has been turned off.
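
                       For example, on CentOS-family nodes (a hedged sketch — adjust for your distro; Ubuntu would use ufw instead):

                       # Run on every node
                       systemctl status firewalld
                       systemctl stop firewalld && systemctl disable firewalld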