magese
You can start by checking the logs of the minio-make-bucket-job-s5n7h pod. This kind of failure is usually an environment DNS or storage problem, though it can also be caused by clock skew between cluster nodes.
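A minimal sketch of how each of these can be checked (the pod name is the one from this thread; adjust for your environment):

```shell
# 1. Inspect the make-bucket job's logs for the underlying error
kubectl -n kubesphere-system logs minio-make-bucket-job-s5n7h --tail=20

# 2. Rule out storage: confirm the minio PVC is Bound
kubectl -n kubesphere-system get pvc

# 3. Rule out clock skew: run this on every node and compare
date -u
```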

    Cauchy
    The job logs are as follows:

    [centos@k8s-node1 ~]$ kubectl logs minio-make-bucket-job-s5n7h -n kubesphere-system --tail=100
    Connecting to Minio server: http://minio:9000
    mc: <ERROR> Unable to initialize new config from the provided credentials. Get http://minio:9000/probe-bucket-sign-nhxof1bbipkq/?location=: dial tcp: i/o timeout.
    "Failed attempts: 1"
    mc: <ERROR> Unable to initialize new config from the provided credentials. Get http://minio:9000/probe-bucket-sign-mo0x33zvocb6/?location=: dial tcp: i/o timeout.
    "Failed attempts: 2"
    mc: <ERROR> Unable to initialize new config from the provided credentials. Get http://minio:9000/probe-bucket-sign-wr0i4qwpswv5/?location=: dial tcp: i/o timeout.
    "Failed attempts: 3"

    I have confirmed that the time is consistent across all cluster nodes.

    How can I confirm whether this is a DNS problem?

    Cauchy
    The /etc/resolv.conf file is configured as follows:

    ; generated by /usr/sbin/dhclient-script
    search ap-east-1.compute.internal
    nameserver 172.31.0.2
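
    One way to confirm whether it is a DNS problem is to resolve the minio service from inside the cluster (the busybox image and test pod name below are just examples):

    ```shell
    # Run a throwaway pod and resolve the minio service via cluster DNS
    kubectl run dns-test --rm -it --restart=Never --image=busybox:1.28 -- \
      nslookup minio.kubesphere-system.svc.cluster.local

    # If the lookup times out, check that the CoreDNS pods are healthy
    kubectl -n kube-system get pods -l k8s-app=kube-dns
    ```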

      rayzhou2017
      1. Storage is OpenEBS installed per the documentation; the pods are all Running (pasted above).

      [centos@k8s-node1 ~]$ kubectl get sc
      NAME                         PROVISIONER                                                RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
      openebs-device               openebs.io/local                                           Delete          WaitForFirstConsumer   false                  4d19h
      openebs-hostpath (default)   openebs.io/local                                           Delete          WaitForFirstConsumer   false                  4d19h
      openebs-jiva-default         openebs.io/provisioner-iscsi                               Delete          Immediate              false                  4d19h
      openebs-snapshot-promoter    volumesnapshot.external-storage.k8s.io/snapshot-promoter   Delete          Immediate              false                  4d19h

      2. DNS has never been modified; the configuration is pasted above.

      3. I have also switched Helm versions several times and run helm del --purge ks-minio to reinstall many times; nothing works.

      I changed the /etc/resolv.conf configuration to

      nameserver 8.8.8.8

      Then I ran the minimal installation again, and now ks-apigateway fails to start at all...

      [centos@k8s-node1 k8s]$ kubectl get pod -n kubesphere-system
      NAME                                     READY   STATUS             RESTARTS   AGE
      ks-account-596657f8c6-t6lwh              1/1     Running            2          6m7s
      ks-apigateway-78bcdc8ffc-hlrdg           0/1     CrashLoopBackOff   5          6m8s
      ks-apiserver-5b548d7c5c-p7bpv            1/1     Running            0          6m7s
      ks-console-78bcf96dbf-xvrnk              1/1     Running            0          6m3s
      ks-controller-manager-696986f8d9-4qjkx   1/1     Running            0          6m6s
      ks-installer-75b8d89dff-28jz5            1/1     Running            0          7m28s
      openldap-0                               1/1     Running            0          6m28s
      redis-6fd6c6d6f9-vk6k6                   1/1     Running            0          6m37s

      The ks-apigateway logs are as follows:

      [centos@k8s-node1 k8s]$ kubectl logs ks-apigateway-78bcdc8ffc-hlrdg -n kubesphere-system
      2020/06/30 08:17:01 [INFO][cache:0xc00078c050] Started certificate maintenance routine
      [DEV NOTICE] Registered directive 'authenticate' before 'jwt'
      [DEV NOTICE] Registered directive 'authentication' before 'jwt'
      [DEV NOTICE] Registered directive 'swagger' before 'jwt'
      Activating privacy features... done.
      E0630 08:17:06.752403       1 redis.go:51] unable to reach redis hostdial tcp: i/o timeout
      2020/06/30 08:17:06 dial tcp: i/o timeout

      I am at my wits' end. Please help me out, everyone. Forest-L @Cauchy @rayzhou2017

        magese ks-account is already up, so ks-apigateway should recover if you wait a bit; if you don't want to wait, you can delete that pod so it is recreated.

        Note that the DNS needs to be valid on every node.
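
        For example (the pod name is the one pasted above; the loop assumes nslookup is installed on the node):

        ```shell
        # Delete the crashing pod; its Deployment will recreate it
        kubectl -n kubesphere-system delete pod ks-apigateway-78bcdc8ffc-hlrdg

        # On each node, verify that every configured nameserver answers
        for ns in $(awk '/^nameserver/ {print $2}' /etc/resolv.conf); do
          nslookup kubernetes.io "$ns" || echo "nameserver $ns is not responding"
        done
        ```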

          Oh no, I could cry. I changed /etc/resolv.conf back to its original configuration and restarted CoreDNS, yet ks-account and ks-apigateway keep failing and restarting endlessly.

          kubesphere-system              ks-account-596657f8c6-pklvp                    1/1     Running            4          7m55s
          kubesphere-system              ks-apigateway-78bcdc8ffc-z49d6                 0/1     CrashLoopBackOff   6          7m57s
          kubesphere-system              ks-apiserver-5b548d7c5c-nv2wp                  1/1     Running            0          7m56s
          kubesphere-system              ks-console-78bcf96dbf-l9rz9                    1/1     Running            0          7m52s
          kubesphere-system              ks-controller-manager-696986f8d9-98xp5         1/1     Running            0          7m55s
          kubesphere-system              ks-installer-75b8d89dff-cd4kk                  1/1     Running            0          9m18s
          kubesphere-system              openldap-0                                     1/1     Running            0          8m16s
          kubesphere-system              redis-6fd6c6d6f9-g6q5b                         1/1     Running            0          8m26s

          Cauchy
          Now ks-account and ks-apigateway keep cycling between Error, CrashLoopBackOff, and Running.

            hongming
            I tried kubectl -n kube-system edit configmap coredns; there is no proxy or upstream in the configuration. The installation still fails...

            # Please edit the object below. Lines beginning with a '#' will be ignored,
            # and an empty file will abort the edit. If an error occurs while saving this file will be
            # reopened with the relevant failures.
            #
            apiVersion: v1
            data:
              Corefile: |
                .:53 {
                    errors
                    health {
                       lameduck 5s
                    }
                    ready
                    kubernetes cluster.local in-addr.arpa ip6.arpa {
                       pods insecure
                       fallthrough in-addr.arpa ip6.arpa
                       ttl 30
                    }
                    prometheus :9153
                    forward . /etc/resolv.conf
                    cache 30
                    loop
                    reload
                    loadbalance
                }
            kind: ConfigMap
            metadata:
              creationTimestamp: "2020-06-25T06:31:18Z"
              name: coredns
              namespace: kube-system
              resourceVersion: "175"
              selfLink: /api/v1/namespaces/kube-system/configmaps/coredns
              uid: 8fd545c4-9718-4537-a2fc-ebd6139547a

              Cauchy
              I have sent the email again; please check it when you have a moment. Thank you very much!

                I reinstalled Kubernetes once more and the minimal installation succeeded, but the monitoring center shows no CPU or RAM information.

                Checking the Prometheus status shows a request timeout.

                The ks-apiserver logs are also full of request timeouts:

                ...
                
                E0701 09:41:37.153095 1 prometheus.go:61] Get http://prometheus-k8s.kubesphere-monitoring-system.svc:9090/api/v1/query_range?end=1593596482.969&query=round%28sum+by+%28namespace%2C+pod%2C+container%29+%28irate%28container_cpu_usage_seconds_total%7Bjob%3D%22kubelet%22%2C+container%21%3D%22POD%22%2C+container%21%3D%22%22%2C+image%21%3D%22%22%2C+pod%3D%22minio-845b7bd867-wc8fb%22%2C+namespace%3D%22kubesphere-system%22%2C+container%3D%22minio%22%7D%5B5m%5D%29%29%2C+0.001%29&start=1593578482.969&step=300s: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
                
                E0701 09:41:37.153149 1 prometheus.go:61] Get http://prometheus-k8s.kubesphere-monitoring-system.svc:9090/api/v1/query_range?end=1593596482.969&query=sum+by+%28namespace%2C+pod%2C+container%29+%28container_memory_working_set_bytes%7Bjob%3D%22kubelet%22%2C+container%21%3D%22POD%22%2C+container%21%3D%22%22%2C+image%21%3D%22%22%2C+pod%3D%22minio-845b7bd867-wc8fb%22%2C+namespace%3D%22kubesphere-system%22%2C+container%3D%22minio%22%7D%29&start=1593578482.969&step=300s: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
                
                E0701 09:41:48.518936 1 prometheus.go:61] Get http://prometheus-k8s.kubesphere-monitoring-system.svc:9090/api/v1/query_range?end=1593596494.492&query=round%28sum+by+%28namespace%2C+pod%29+%28irate%28container_cpu_usage_seconds_total%7Bjob%3D%22kubelet%22%2C+pod%21%3D%22%22%2C+image%21%3D%22%22%7D%5B5m%5D%29%29+%2A+on+%28namespace%2C+pod%29+group_left%28owner_kind%2C+owner_name%29+kube_pod_owner%7B%7D+%2A+on+%28namespace%2C+pod%29+group_left%28node%29+kube_pod_info%7Bpod%3D~%22minio-make-bucket-job-rjl6v%7Cminio-845b7bd867-wc8fb%24%22%2C+namespace%3D%22kubesphere-system%22%7D%2C+0.001%29&start=1593594694.492&step=60s: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
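
                To separate a Prometheus problem from a network problem, one can port-forward the service and query it directly (a sketch; the query is deliberately trivial):

                ```shell
                # Forward the Prometheus service to localhost in the background
                kubectl -n kubesphere-monitoring-system port-forward svc/prometheus-k8s 9090:9090 &

                # A prompt answer here means Prometheus itself is reachable
                curl -s 'http://127.0.0.1:9090/api/v1/query?query=up' | head -c 200
                ```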

                After modifying ks-installer to add DevOps, the logs show that the MinIO deployment still fails:

                TASK [common : Kubesphere | Deploy minio] **************************************
                fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/helm upgrade --install ks-minio /etc/kubesphere/minio-ha -f /etc/kubesphere/custom-values-minio.yaml --set fullnameOverride=minio --namespace kubesphere-system --wait --timeout 1800\n", "delta": "0:30:25.859334", "end": "2020-07-01 08:55:10.418415", "msg": "non-zero return code", "rc": 1, "start": "2020-07-01 08:24:44.559081", "stderr": "Error: timed out waiting for the condition", "stderr_lines": ["Error: timed out waiting for the condition"], "stdout": "Release \"ks-minio\" does not exist. Installing it now.", "stdout_lines": ["Release \"ks-minio\" does not exist. Installing it now."]}
                ...ignoring
                
                TASK [common : debug] **********************************************************
                ok: [localhost] => {
                    "msg": [
                        "1. check the storage configuration and storage server", 
                        "2. make sure the DNS address in /etc/resolv.conf is available.", 
                        "3. execute 'helm del --purge ks-minio && kubectl delete job -n kubesphere-system ks-minio-make-bucket-job'", 
                        "4. Restart the installer pod in kubesphere-system namespace"
                    ]
                }
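
                The recovery steps in the debug message above can be run roughly as follows (helm 2 syntax, as used in this thread; the grep for the installer pod name is an assumption):

                ```shell
                # Step 3: remove the failed release and the stuck job
                helm del --purge ks-minio
                kubectl delete job -n kubesphere-system ks-minio-make-bucket-job

                # Step 4: delete the installer pod so it is recreated and retries
                kubectl -n kubesphere-system delete pod \
                  $(kubectl -n kubesphere-system get pod -o name | grep ks-installer)
                ```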

                In the console, the minio workload already shows a normal status:

                But one request keeps timing out:

                The minio mc container's logs keep showing request timeouts:

                Connecting to Minio server: http://minio:9000
                
                mc: Unable to initialize new config from the provided credentials. Get http://minio:9000/probe-bucket-sign-eoko62s6i0sr/?location=: dial tcp: i/o timeout.
                
                "Failed attempts: 1"
                
                mc: Unable to initialize new config from the provided credentials. Get http://minio:9000/probe-bucket-sign-pwots1nwkmkw/?location=: dial tcp: i/o timeout.
                
                "Failed attempts: 2"
                
                mc: Unable to initialize new config from the provided credentials. Get http://minio:9000/probe-bucket-sign-oiez46q5nibc/?location=: dial tcp: i/o timeout.
                
                "Failed attempts: 3"
                
                ... ...

                I also tried @Forest-L's suggestion, and I cannot find a solution in the community either.
                What on earth is causing this? I am completely out of ideas; please take a look.
                @Cauchy @rainwu

                  @hongming
                  The console has been deployed and is reachable from the public network; could you help look into the cause of the error?