I0430 16:26:20.648003   11692 request.go:655] Throttling request took 1.182909372s, request: GET:https://lb.kubesphere.local:6443/apis/rbac.authorization.k8s.io/v1beta1?timeout=32s
Name:           istiod-1-6-10-5c8cc86fb4-hbb2f
Namespace:      istio-system
Priority:       0
Node:           <none>
Labels:         app=istiod
                istio=istiod
                istio.io/rev=1-6-10
                pod-template-hash=5c8cc86fb4
Annotations:    sidecar.istio.io/inject: false
Status:         Pending
IP:             
IPs:            <none>
Controlled By:  ReplicaSet/istiod-1-6-10-5c8cc86fb4
Containers:
  discovery:
    Image:       istio/pilot:1.6.10
    Ports:       8080/TCP, 15010/TCP, 15017/TCP, 15053/TCP
    Host Ports:  0/TCP, 0/TCP, 0/TCP, 0/TCP
    Args:
      discovery
      --monitoringAddr=:15014
      --log_output_level=default:info
      --domain
      cluster.local
      --trust-domain=cluster.local
      --keepaliveMaxServerConnectionAge
      30m
    Requests:
      cpu:      500m
      memory:   2Gi
    Readiness:  http-get http://:8080/ready delay=1s timeout=5s period=3s #success=1 #failure=3
    Environment:
      REVISION:                                     1-6-10
      JWT_POLICY:                                   first-party-jwt
      PILOT_CERT_PROVIDER:                          istiod
      POD_NAME:                                     istiod-1-6-10-5c8cc86fb4-hbb2f (v1:metadata.name)
      POD_NAMESPACE:                                istio-system (v1:metadata.namespace)
      SERVICE_ACCOUNT:                               (v1:spec.serviceAccountName)
      PILOT_TRACE_SAMPLING:                         1
      PILOT_ENABLE_PROTOCOL_SNIFFING_FOR_OUTBOUND:  true
      PILOT_ENABLE_PROTOCOL_SNIFFING_FOR_INBOUND:   true
      INJECTION_WEBHOOK_CONFIG_NAME:                istio-sidecar-injector-1-6-10
      ISTIOD_ADDR:                                  istiod-1-6-10.istio-system.svc:15012
      PILOT_ENABLE_ANALYSIS:                        false
      CLUSTER_ID:                                   Kubernetes
      CENTRAL_ISTIOD:                               false
    Mounts:
      /etc/cacerts from cacerts (ro)
      /etc/istio/config from config-volume (rw)
      /var/lib/istio/inject from inject (ro)
      /var/run/secrets/istio-dns from local-certs (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from istiod-service-account-token-gq5sc (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  local-certs:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     Memory
    SizeLimit:  <unset>
  cacerts:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  cacerts
    Optional:    true
  inject:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      istio-sidecar-injector-1-6-10
    Optional:  true
  config-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      istio-1-6-10
    Optional:  false
  istiod-service-account-token-gq5sc:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  istiod-service-account-token-gq5sc
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                 From               Message
  ----     ------            ----                ----               -------
  Warning  FailedScheduling  30m                 default-scheduler  0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient memory.
  Warning  FailedScheduling  30m                 default-scheduler  0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient memory.
  Warning  FailedScheduling  64s (x20 over 23m)  default-scheduler  0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient memory.
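For reference, the events above show two separate blockers: the two worker nodes don't have 2Gi of memory free for istiod's request, and the third node carries the master taint, which this pod does not tolerate. A quick way to list the taints (a minimal sketch):

# Which nodes are tainted, and with what?
kubectl describe nodes | grep -i taints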

    HOTTIN This is insufficient memory. You can add nodes; take a look at your node resources.
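    For reference, one way to take that look (a minimal sketch; kubectl top needs metrics-server installed):

    # How much memory is already requested on each node vs. its allocatable?
    kubectl describe nodes | grep -A 8 "Allocated resources"
    # Live per-node usage, if metrics-server is installed
    kubectl top nodes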

      I currently have three machines with 4 cores and 8GB each. So the memory really isn't enough? I've only enabled devops and istio so far, and haven't deployed any applications yet...

      Jeff

        Type     Reason       Age                   From               Message
        ----     ------       ----                  ----               -------
        Normal   Scheduled    79m                   default-scheduler  Successfully assigned istio-system/istio-ingressgateway-7d7fd96b9-84mxb to node1
        Warning  FailedMount  65m (x2 over 72m)     kubelet            Unable to attach or mount volumes: unmounted volumes=[istiod-ca-cert], unattached volumes=[ingressgateway-ca-certs istio-ingressgateway-service-account-token-gfxc8 istio-envoy config-volume istiod-ca-cert ingressgatewaysdsudspath podinfo ingressgateway-certs]: timed out waiting for the condition
        Warning  FailedMount  59m                   kubelet            Unable to attach or mount volumes: unmounted volumes=[istiod-ca-cert], unattached volumes=[podinfo ingressgateway-certs ingressgateway-ca-certs istio-ingressgateway-service-account-token-gfxc8 istio-envoy config-volume istiod-ca-cert ingressgatewaysdsudspath]: timed out waiting for the condition
        Warning  FailedMount  56m (x3 over 63m)     kubelet            Unable to attach or mount volumes: unmounted volumes=[istiod-ca-cert], unattached volumes=[istio-ingressgateway-service-account-token-gfxc8 istio-envoy config-volume istiod-ca-cert ingressgatewaysdsudspath podinfo ingressgateway-certs ingressgateway-ca-certs]: timed out waiting for the condition
        Warning  FailedMount  38m (x2 over 45m)     kubelet            Unable to attach or mount volumes: unmounted volumes=[istiod-ca-cert], unattached volumes=[istio-envoy config-volume istiod-ca-cert ingressgatewaysdsudspath podinfo ingressgateway-certs ingressgateway-ca-certs istio-ingressgateway-service-account-token-gfxc8]: timed out waiting for the condition
        Warning  FailedMount  18m (x4 over 77m)     kubelet            Unable to attach or mount volumes: unmounted volumes=[istiod-ca-cert], unattached volumes=[istiod-ca-cert ingressgatewaysdsudspath podinfo ingressgateway-certs ingressgateway-ca-certs istio-ingressgateway-service-account-token-gfxc8 istio-envoy config-volume]: timed out waiting for the condition
        Warning  FailedMount  9m7s (x8 over 74m)    kubelet            Unable to attach or mount volumes: unmounted volumes=[istiod-ca-cert], unattached volumes=[ingressgateway-certs ingressgateway-ca-certs istio-ingressgateway-service-account-token-gfxc8 istio-envoy config-volume istiod-ca-cert ingressgatewaysdsudspath podinfo]: timed out waiting for the condition
        Warning  FailedMount  3m54s (x45 over 79m)  kubelet            MountVolume.SetUp failed for volume "istiod-ca-cert" : configmap "istio-ca-root-cert" not found

      Can this also be solved by adding nodes?

        HOTTIN Your configmap wasn't created successfully. What does the installer log show?
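        For reference, a minimal check of the configmap named in the FailedMount event:

        # istiod normally creates istio-ca-root-cert in each namespace
        kubectl -n istio-system get configmap istio-ca-root-cert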

          yuswift

          localhost                  : ok=32   changed=25   unreachable=0    failed=0    skipped=14   rescued=0    ignored=0
          
          Start installing monitoring
          Start installing multicluster
          Start installing openpitrix
          Start installing network
          Start installing devops
          Start installing servicemesh
          **************************************************
          Waiting for all tasks to be completed ...
          task network status is successful  (1/6)
          task multicluster status is successful  (2/6)
          task openpitrix status is successful  (3/6)
          task servicemesh status is successful  (4/6)
          task devops status is successful  (5/6)
          task monitoring status is successful  (6/6)
          **************************************************

          kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
          Is this the command that outputs that log? How should I view the installer log?

            HOTTIN It looks like istio was installed successfully, but the configmap is missing. Did you delete it by accident?

              5 days later

              yuswift I never did any deleting. I just enabled istio in the 3.0 configuration, then upgraded to 3.1, and after that istio wouldn't start. I'm now trying to add a node, but I've run into problems with that too...

              kubectl -n istio-system rollout restart deploy istiod-1-6-10
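              If the restart fixes it, that is because istiod re-creates the istio-ca-root-cert configmap in each namespace on startup; a minimal way to verify afterwards (a sketch, assuming that 1.6 behavior):

              kubectl get configmap -A | grep istio-ca-root-cert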

              3 months later

              kubectl logs jaeger-collector-84477ffd9c-gtkc6 -n=istio-system

              {"level":"fatal","ts":1628753305.450968,"caller":"collector/main.go:70","msg":"Failed to init storage factory","error":"failed to create primary Elasticsearch client: health check timeout: no Elasticsearch node available","stacktrace":"main.main.func1\n\tgithub.com/jaegertracing/jaeger@/cmd/collector/main.go:70\ngithub.com/spf13/cobra.(*Command).execute\n\tgithub.com/spf13/cobra@v0.0.3/command.go:762\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\tgithub.com/spf13/cobra@v0.0.3/command.go:852\ngithub.com/spf13/cobra.(*Command).Execute\n\tgithub.com/spf13/cobra@v0.0.3/command.go:800\nmain.main\n\tgithub.com/jaegertracing/jaeger@/cmd/collector/main.go:126\nruntime.main\n\truntime/proc.go:203"}
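              That fatal error means the collector cannot reach its Elasticsearch backend at all. A minimal first check, assuming KubeSphere's default logging namespace kubesphere-logging-system:

              # Is the Elasticsearch backing jaeger actually running?
              kubectl -n kubesphere-logging-system get pods
              # Is there a service the collector could resolve?
              kubectl -n kubesphere-logging-system get svc | grep -i elasticsearch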

                1 month later