The installation runs into a problem:

TASK [common : Kubesphere | Deploying redis] ***********************************
fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/helm upgrade --install ks-redis /etc/kubesphere/redis-ha -f /etc/kubesphere/custom-values-redis.yaml --set fullnameOverride=redis-ha --namespace kubesphere-system\n", "delta": "0:00:00.418073", "end": "2020-05-17 01:27:08.678074", "msg": "non-zero return code", "rc": 1, "start": "2020-05-17 01:27:08.260001", "stderr": "Error: Could not get apiVersions from Kubernetes: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request", "stderr_lines": ["Error: Could not get apiVersions from Kubernetes: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request"], "stdout": "Release \"ks-redis\" does not exist. Installing it now.", "stdout_lines": ["Release \"ks-redis\" does not exist. Installing it now."]}

PLAY RECAP *********************************************************************
localhost : ok=25 changed=20 unreachable=0 failed=1 skipped=56 rescued=0 ignored=6

Error: Could not get apiVersions from Kubernetes: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request

The metrics-server in your cluster is not healthy; search the forum or the web for related issues.
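
A quick way to confirm this (a sketch; assuming metrics-server runs in kube-system as in a default KubeSphere install):

kubectl get apiservice v1beta1.metrics.k8s.io   # should show AVAILABLE=True
kubectl top nodes                               # should return node metrics if metrics-server is healthy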

/usr/local/bin/helm upgrade --install ks-redis /etc/kubesphere/redis-ha -f /etc/kubesphere/custom-values-redis.yaml --set fullnameOverride=redis-ha --namespace kubesphere-system
Release "ks-redis" does not exist. Installing it now.
Error: Could not get apiVersions from Kubernetes: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
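
For comparison, the chart can also be rendered locally; helm template does not go through the cluster's API discovery, so if this succeeds the problem is on the cluster side rather than in the chart (a sketch, using the same values file):

/usr/local/bin/helm template ks-redis /etc/kubesphere/redis-ha \
  -f /etc/kubesphere/custom-values-redis.yaml \
  --set fullnameOverride=redis-ha \
  --namespace kubesphere-system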

 cat /etc/kubesphere/custom-values-redis.yaml
## Configure resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
image:
  repository: redis
  tag: 5.0.5-alpine
  pullPolicy: IfNotPresent
## replicas number for each component
replicas: 3

## Kubernetes priorityClass name for the redis-ha-server pod
# priorityClassName: ""

## Custom labels for the redis pod
labels: {}

## Pods Service Account
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
serviceAccount:
  ## Specifies whether a ServiceAccount should be created
  ##
  create: true
  ## The name of the ServiceAccount to use.
  ## If not set and create is true, a name is generated using the redis-ha.fullname template
  # name:

## Enables a HA Proxy for better LoadBalancing / Sentinel Master support. Automatically proxies to Redis master.
## Recommend for externally exposed Redis clusters.
## ref: https://cbonte.github.io/haproxy-dconv/1.9/intro.html
haproxy:
  enabled: true
  # Enable if you want a dedicated port in haproxy for redis-slaves
  readOnly:
    enabled: false
    port: 6380
  replicas: 3

  image:
    repository: haproxy
    tag: 2.0.4
    pullPolicy: IfNotPresent
  annotations: {}
  resources: {}
  ## Kubernetes priorityClass name for the haproxy pod
  # priorityClassName: ""
  ## Service type for HAProxy
  ##
  service:
    type: ClusterIP
    loadBalancerIP:
    annotations: {}
  serviceAccount:
    create: true
  ## Prometheus metric exporter for HAProxy.
  ##
  exporter:
    image:
      repository: quay.io/prometheus/haproxy-exporter
      tag: v0.9.0
    enabled: false
    port: 9101
  init:
    resources: {}
  timeout:
    connect: 10s
    server: 360s
    client: 360s
  securityContext:
    runAsUser: 1000
    fsGroup: 1000
    runAsNonRoot: true


## Role Based Access
## Ref: https://kubernetes.io/docs/admin/authorization/rbac/
##
rbac:
  create: true

sysctlImage:
  enabled: false
  command: []
  registry: docker.io
  repository: bitnami/minideb
  tag: latest
  pullPolicy: Always
  mountHostSys: false

## Use an alternate scheduler, e.g. "stork".
## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
##
# schedulerName:

## Redis specific configuration options
redis:
  port: 6379
  masterGroupName: mymaster
  config:
    ## Additional redis conf options can be added below
    ## For all available options see http://download.redis.io/redis-stable/redis.conf
    min-replicas-to-write: 1
    min-replicas-max-lag: 5   # Value in seconds
    maxmemory: "0"       # Max memory to use for each redis instance. Default is unlimited.
    maxmemory-policy: "volatile-lru"  # Max memory policy to use for each redis instance. Default is volatile-lru.
    # Determines if scheduled RDB backups are created. Default is false.
    # Please note that local (on-disk) RDBs will still be created when re-syncing with a new slave. The only way to prevent this is to enable diskless replication.
    save: "900 1"
    # When enabled, directly sends the RDB over the wire to slaves, without using the disk as intermediate storage. Default is false.
    repl-diskless-sync: "yes"
    rdbcompression: "yes"
    rdbchecksum: "yes"


  ## Custom redis.conf files used to override default settings. If this file is
  ## specified then the redis.config above will be ignored.
  # customConfig: |-
      # Define configuration here

  resources: {}
  #  requests:
  #    memory: 200Mi
  #    cpu: 100m
  #  limits:
  #    memory: 700Mi

## Sentinel specific configuration options
sentinel:
  port: 26379
  quorum: 2
  config:
    ## Additional sentinel conf options can be added below. Only options that
    ## are expressed in the format similar to 'sentinel xxx mymaster xxx' will
    ## be properly templated.
    ## For available options see http://download.redis.io/redis-stable/sentinel.conf
    down-after-milliseconds: 10000
    ## Failover timeout value in milliseconds
    failover-timeout: 180000
    parallel-syncs: 5

  ## Custom sentinel.conf files used to override default settings. If this file is
  ## specified then the sentinel.config above will be ignored.
  # customConfig: |-
      # Define configuration here

  resources: {}
  #  requests:
  #    memory: 200Mi
  #    cpu: 100m
  #  limits:
  #    memory: 200Mi

securityContext:
  runAsUser: 1000
  fsGroup: 1000
  runAsNonRoot: true

## Node labels, affinity, and tolerations for pod assignment
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#taints-and-tolerations-beta-feature
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
nodeSelector: {}

## Whether the Redis server pods should be forced to run on separate nodes.
## This is accomplished by setting their AntiAffinity with requiredDuringSchedulingIgnoredDuringExecution as opposed to preferred.
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#inter-pod-affinity-and-anti-affinity-beta-feature
##
hardAntiAffinity: true

## Additional affinities to add to the Redis server pods.
##
## Example:
##   nodeAffinity:
##     preferredDuringSchedulingIgnoredDuringExecution:
##       - weight: 50
##         preference:
##           matchExpressions:
##             - key: spot
##               operator: NotIn
##               values:
##                 - "true"
##
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
##
additionalAffinities:
  nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          preference:
            matchExpressions:
              - key: node-role.kubernetes.io/master
                operator: In
                values:
                  - ""

## Override all other affinity settings for the Redis server pods with a string.
affinity: |

# Prometheus exporter specific configuration options
exporter:
  enabled: false
  image: oliver006/redis_exporter
  tag: v0.31.0
  pullPolicy: IfNotPresent

  # prometheus port & scrape path
  port: 9121
  scrapePath: /metrics

  # cpu/memory resource limits/requests
  resources: {}

  # Additional args for redis exporter
  extraArgs: {}

podDisruptionBudget: {}
  # maxUnavailable: 1
  # minAvailable: 1

## Configures redis with AUTH (requirepass & masterauth conf params)
auth: false
# redisPassword:

## Use existing secret containing key `authKey` (ignores redisPassword)
# existingSecret:

## Defines the key holding the redis password in existing secret.
authKey: auth

persistentVolume:
  enabled: true
  ## redis-ha data Persistent Volume Storage Class
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
  ##   GKE, AWS & OpenStack)
  ##
  accessModes:
    - ReadWriteOnce
  size: 2Gi
  annotations: {}
init:
  resources: {}

# To use a hostPath for data, set persistentVolume.enabled to false
# and define hostPath.path.
# Warning: this might overwrite existing folders on the host system!
hostPath:
  ## path is evaluated as template so placeholders are replaced
  # path: "/data/redis"

  # if chown is true, an init-container with root permissions is launched to
  # change the owner of the hostPath folder to the user defined in the
  # security context
  chown: true

This has nothing to do with the chart. The unhealthy metrics-server in the cluster is what prevents Helm from installing the chart.
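
To see why the aggregated metrics API is reported as unavailable, one option (a sketch) is to inspect the APIService object; the condition Reason in the output, for example FailedDiscoveryCheck or MissingEndpoints, indicates whether the kube-apiserver can reach the metrics-server Service:

kubectl describe apiservice v1beta1.metrics.k8s.io
# check the Status/Conditions section for Available=False and its Reason/Message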

    tscswcn You should start with Cauchy's suggestion: "The metrics-server in your cluster is not healthy; search the forum or the web for related issues."

    Well, but the metrics-server pod looks fine: kube-system metrics-server-66444bf745-xlnhc 1/1 Running 0 60m

    [root@node-10-120-13-236 ]# kubectl logs metrics-server-66444bf745-xlnhc -n kube-system
    I0517 01:26:20.353922 1 serving.go:273] Generated self-signed cert (apiserver.local.config/certificates/apiserver.crt, apiserver.local.config/certificates/apiserver.key)
    [restful] 2020/05/17 01:26:20 log.go:33: [restful/swagger] listing is available at https://:443/swaggerapi
    [restful] 2020/05/17 01:26:20 log.go:33: [restful/swagger] https://:443/swaggerui/ is mapped to folder /swagger-ui/
    I0517 01:26:20.737399 1 serve.go:96] Serving securely on [::]:443
    E0517 01:26:25.877720 1 reststorage.go:144] unable to fetch pod metrics for pod istio-system/istio-policy-7f96d58b8c-d7zpr: no metrics known for pod
    E0517 01:26:25.907071 1 reststorage.go:144] unable to fetch pod metrics for pod istio-system/istio-telemetry-84bcc7f74d-b8dzt: no metrics known for pod
    E0517 01:26:25.918687 1 reststorage.go:144] unable to fetch pod metrics for pod istio-system/istio-pilot-5f6b8fcc77-6wwpv: no metrics known for pod

    Going into the pod:

    [root@node-10-120-13-236 ]# kubectl exec -it metrics-server-66444bf745-xlnhc -n kube-system sh
    / # ps aux | grep metric
    1 root 0:05 /metrics-server --logtostderr --kubelet-insecure-tls --kubelet-preferred-address-types=InternalIP
    I found that --kubelet-insecure-tls is already set.
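
    Since the flag is already set, a next step (a sketch; assuming the Service is named metrics-server in kube-system) would be to check whether the APIService backend is actually reachable through the kube-apiserver:

    # does the metrics-server Service have endpoints?
    kubectl -n kube-system get endpoints metrics-server

    # can the aggregated API be reached via the kube-apiserver?
    kubectl get --raw /apis/metrics.k8s.io/v1beta1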

    10 days later

    Why is this related to the metrics server? What I mean is that my cluster's resource usage is fairly low, and the metrics server has not reported anything abnormal.
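
The connection is the API discovery step in the error message ("Could not get apiVersions from Kubernetes"): Helm enumerates the registered API groups before installing a release, and if the v1beta1.metrics.k8s.io APIService is registered but its backend does not answer, that discovery call fails even though the metrics-server Pod shows Running and resource usage looks normal. A common workaround (a sketch; assuming the default k8s-app=metrics-server label) is to restart the pod and watch the APIService become Available again:

kubectl -n kube-system delete pod -l k8s-app=metrics-server
kubectl get apiservice v1beta1.metrics.k8s.io -w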