When creating a deployment issue, please follow the template below. The more information you provide, the easier it is to get a timely answer. Admins may close issues that do not follow the template.
Make sure your post is clearly formatted and readable; use markdown code block syntax for code.
If you spend only one minute creating an issue, you cannot expect others to spend half an hour answering it.

Operating system information
e.g., virtual machine, Ubuntu 22.04.1 LTS, 16C/32G

KubeSphere version information
e.g., v4.1.2. Offline installation. Installed on an existing K8s cluster.

What is the problem
The logging extension of KubeSphere 4.1.2 cannot view member cluster logs. It looks like a WhizardTelemetry platform service configuration issue: the member clusters' OpenSearch cannot be queried.

The WhizardTelemetry platform service configuration (host) is as follows:

global:
  ## Global image registry to use if it needs to be overridden for some specific use cases (e.g. local registries, custom images, ...)
  ##
  imageRegistry: ""

  ## Reference to one or more secrets to be used when pulling images
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
  ##
  imagePullSecrets: []
  # - name: "image-pull-secret"
  # or
  # - "image-pull-secret"
  nodeSelector: {}

whizard-telemetry:
  config:
    monitoring:
      enabled: true
      kind: 0
      endpoint: http://prometheus-k8s.kubesphere-monitoring-system.svc:9090
    notification:
      endpoint: http://notification-manager-svc.kubesphere-monitoring-system.svc:19093
    events:
      enable: true
      servers:
        - elasticsearch:
            cluster:
            - wh-member
            - bj-member
            - host
            endpoints:
              - https://opensearch-cluster-data.kubesphere-logging-system:9200
            version: opensearchv2
            indexPrefix: "{{ .cluster }}-events"
            timestring: "%Y.%m.%d"
            basicAuth: true
            username: admin
            password: admin
    logging:
      enable: true
      servers:
        - elasticsearch:
            cluster:
            - wh-member
            - bj-member
            - host
            endpoints:
              - https://opensearch-cluster-data.kubesphere-logging-system:9200
            version: opensearchv2
            indexPrefix: "{{ .cluster }}-{{ .kubernetes.namespace_name }}-logs"
            timestring: "%Y.%m.%d"
            basicAuth: true
            username: admin
            password: admin


  apiserver:
    image:
      repository: kubesphere/whizard-telemetry-apiserver
      pullPolicy: IfNotPresent
      # Overrides the image tag whose default is the chart appVersion.
      tag: "v1.2.2"

    nodeSelector: {}

    tolerations: []

    affinity: {}

WhizardTelemetry logging service configuration:

global:
  imageRegistry: ""
  nodeSelector: {}
  imagePullSecrets: []
  clusterInfo: {}
logsidecar-injector:
  enabled: true
  sidecar:
    sidecarType: vector
  resources:
    limits:
      cpu: 100m
      memory: 100Mi
    requests:
      cpu: 10m
      memory: 10Mi
  configReloader:
    resources:
      limits:
        cpu: 100m
        memory: 100Mi
      requests:
        cpu: 10m
        memory: 10Mi
  affinity: {}
  tolerations: []
  nodeSelector: {}

vector-logging:
  calico:
    enabled: true
    logPath:
    - "/var/log/calico/cni/cni*.log"

  filter:
    extraLabelSelector: "app.kubernetes.io/name!=kube-events-exporter"
    extraNamespaceLabelSelector: ""
    # When includeNamespaces and excludeNamespaces are set at the same time, only excludeNamespaces will take effect.
    includeNamespaces: []
    excludeNamespaces: []

  sinks:
    loki:
      # Create loki sink or not
      enabled: false
      # Configurations for the loki sink; more info at https://vector.dev/docs/reference/configuration/sinks/loki/
      # Usually users needn't change the following loki sink config; the default sinks in secret "kubesphere-logging-system/vector-sinks" created by the WhizardTelemetry Data Pipeline extension will be used.
      metadata:
#        endpoint: http://<loki-gateway-ip>:<loki-gateway-port>
#        path: /loki/api/v1/push
#        encoding:
#          codec: json
        tenant_id: whizard-logs-ks
#        out_of_order_action: accept
#        remove_timestamp: false
#        batch:
#          max_bytes: 10000000
#          timeout_secs: 5
#        buffer:
#          max_events: 10000
#        request:
#          retry_attempts: 10
      labels:
        - cluster="{{ .cluster }}"
        - node="{{ .kubernetes.node_name }}"
        - workspace="{{ .kubernetes.workspace }}"
        - namespace="{{ .kubernetes.namespace_name }}"
        - pod="{{ .kubernetes.pod_name }}"
        - container="{{ .kubernetes.container_name }}"
    opensearch:
      # Create opensearch sink or not
      enabled: true
      # The index to store the logs; the final name will be {{ prefix }}-{{ timestring }}
      index:
        # The prefix of index, supports template syntax.
        prefix: "{{ .cluster }}-{{ .kubernetes.namespace_name }}-logs"
        # Timestring is parsed from strftime patterns, like %Y.%m.%d. Used to distribute logs into different indexes according to time.
        timestring: "%Y.%m.%d"
      # Configurations for the opensearch sink; more info at https://vector.dev/docs/reference/configuration/sinks/elasticsearch/
      # Usually users needn't change the following OpenSearch sink config; the default sinks in secret "kubesphere-logging-system/vector-sinks" created by the WhizardTelemetry Data Pipeline extension will be used.
  #    metadata:
  #      api_version: v8
  #      auth:
  #        strategy: basic
  #        user: admin
  #        password: admin
  #      batch:
  #        timeout_secs: 5
  #      buffer:
  #        max_events: 10000
  #      endpoints:
  #        - https://opensearch-cluster-data.kubesphere-logging-system:9200
  #      tls:
  #        verify_certificate: false

Log collection is enabled in the member cluster projects.

saowu changed the title to "KubeSphere 4.1.2 logging extension cannot view member cluster logs"

The `cluster` field in this configuration indicates which cluster's data the configured OpenSearch instance stores. If each cluster runs its own OpenSearch, you should split this into multiple server entries instead of listing all clusters under a single entry.
Also, you must use each cluster's externally exposed address; the in-cluster svc address is not reachable from the host cluster.
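The suggestion above can be sketched as one `elasticsearch` server entry per OpenSearch backend, each listing only the cluster(s) whose data that backend stores and pointing at an address reachable from the host cluster. This is a minimal illustration; the member-cluster endpoint below is a placeholder for an exposed (e.g. NodePort) address, not a real value:

```yaml
logging:
  enable: true
  servers:
    # Host cluster: its own OpenSearch is reachable via the in-cluster svc.
    - elasticsearch:
        cluster:
        - host
        endpoints:
          - https://opensearch-cluster-data.kubesphere-logging-system:9200
        version: opensearchv2
        indexPrefix: "{{ .cluster }}-{{ .kubernetes.namespace_name }}-logs"
        timestring: "%Y.%m.%d"
        basicAuth: true
        username: admin
        password: admin
    # Each member cluster: a separate entry using its externally exposed address.
    - elasticsearch:
        cluster:
        - wh-member
        endpoints:
          - https://<member-node-ip>:<nodeport>  # placeholder
        version: opensearchv2
        indexPrefix: "{{ .cluster }}-{{ .kubernetes.namespace_name }}-logs"
        timestring: "%Y.%m.%d"
        basicAuth: true
        username: admin
        password: admin
```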

    NullFox

    Tried that, still not working:

    global:
      ## Global image registry to use if it needs to be overridden for some specific use cases (e.g. local registries, custom images, ...)
      ##
      imageRegistry: ""
    
      ## Reference to one or more secrets to be used when pulling images
      ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
      ##
      imagePullSecrets: []
      # - name: "image-pull-secret"
      # or
      # - "image-pull-secret"
      nodeSelector: {}
    
    whizard-telemetry:
      config:
        monitoring:
          enabled: true
          kind: 0
          endpoint: http://prometheus-k8s.kubesphere-monitoring-system.svc:9090
        notification:
          endpoint: http://notification-manager-svc.kubesphere-monitoring-system.svc:19093
        events:
          enable: true
          servers:
            - elasticsearch:
                cluster:
                - host
                endpoints:
                  - https://opensearch-cluster-data.kubesphere-logging-system:9200
                version: opensearchv2
                indexPrefix: "{{ .cluster }}-events"
                timestring: "%Y.%m.%d"
                basicAuth: true
                username: admin
                password: admin
            - elasticsearch:
                cluster:
                - wh-member
                endpoints:
                  - https://192.168.217.35:30920
                version: opensearchv2
                indexPrefix: "{{ .cluster }}-events"
                timestring: "%Y.%m.%d"
                basicAuth: true
                username: admin
                password: admin
            - elasticsearch:
                cluster:
                - bj-member
                endpoints:
                  - https://10.21.3.7:30920
                version: opensearchv2
                indexPrefix: "{{ .cluster }}-events"
                timestring: "%Y.%m.%d"
                basicAuth: true
                username: admin
                password: admin
        logging:
          enable: true
          servers:
            - elasticsearch:
                cluster:
                - host
                endpoints:
                  - https://opensearch-cluster-data.kubesphere-logging-system:9200
                version: opensearchv2
                indexPrefix: "{{ .cluster }}-{{ .kubernetes.namespace_name }}-logs"
                timestring: "%Y.%m.%d"
                basicAuth: true
                username: admin
                password: admin
            - elasticsearch:
                cluster:
                - wh-member
                endpoints:
                  - https://192.168.217.35:30920
                version: opensearchv2
                indexPrefix: "{{ .cluster }}-{{ .kubernetes.namespace_name }}-logs"
                timestring: "%Y.%m.%d"
                basicAuth: true
                username: admin
                password: admin
            - elasticsearch:
                cluster:
                - bj-member
                endpoints:
                  - https://10.21.3.7:30920
                version: opensearchv2
                indexPrefix: "{{ .cluster }}-{{ .kubernetes.namespace_name }}-logs"
                timestring: "%Y.%m.%d"
                basicAuth: true
                username: admin
                password: admin
    
    
      apiserver:
        image:
          repository: kubesphere/whizard-telemetry-apiserver
          pullPolicy: IfNotPresent
          # Overrides the image tag whose default is the chart appVersion.
          tag: "v1.2.2"
    
        nodeSelector: {}
    
        tolerations: []
    
        affinity: {}

      saowu

      Accessed via NodePort.

      NullFox It works now. I was just too impatient: after applying the change I couldn't see the logs right away, so I kept switching to other configurations. Too hasty on my part.

      Sorry for the trouble, and thank you very much!