A few readers have recently asked how to route logs to Kafka topics based on namespace. This post answers that question in one place. The steps below have been tested; if you run into problems, leave a comment under the article.

1. Modify the filter component:
Lift the nested namespace field out of the record so it can later be used as a topic key:

kubectl edit filters.logging.kubesphere.io -n kubesphere-logging-system kubernetes
----
spec:
  filters:
    - kubernetes:
        annotations: false
        kubeCAFile: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        kubeTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
        kubeURL: https://kubernetes.default.svc:443
        labels: false
    - nest:
        addPrefix: kubernetes_
        nestedUnder: kubernetes
        operation: lift
    - modify:
        rules:
          - remove: stream
          - remove: kubernetes_pod_id
          - remove: kubernetes_host
          - remove: kubernetes_container_hash
          - copy:  # added here: copy the namespace into a top-level field
              kubernetes_namespace_name: namespace
    - nest:
        nestUnder: kubernetes
        operation: nest
        removePrefix: kubernetes_
        wildcard:
          - kubernetes_*
  match: kube.*
----
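The net effect of this filter chain can be sketched in Python. This is an illustration only, not Fluent Bit's actual implementation; the sample record and its values are made up, while the field names follow the config above:

```python
# Sketch of what the lift -> modify(copy) -> nest chain does to one record.
# Field names mirror the filter config; the sample record is hypothetical.

def apply_filters(record: dict) -> dict:
    # nest (operation: lift): pull kubernetes.* up to the top level
    # with a kubernetes_ prefix
    nested = record.pop("kubernetes", {})
    for key, value in nested.items():
        record[f"kubernetes_{key}"] = value

    # modify: drop unwanted fields, then copy the namespace to a top-level key
    for key in ("stream", "kubernetes_pod_id", "kubernetes_host",
                "kubernetes_container_hash"):
        record.pop(key, None)
    record["namespace"] = record["kubernetes_namespace_name"]

    # nest (operation: nest): fold kubernetes_* back under "kubernetes",
    # stripping the prefix
    kube = {k[len("kubernetes_"):]: record.pop(k)
            for k in list(record) if k.startswith("kubernetes_")}
    record["kubernetes"] = kube
    return record

sample = {
    "log": "hello",
    "stream": "stdout",
    "kubernetes": {"namespace_name": "kubesphere-system", "pod_id": "abc"},
}
out = apply_filters(sample)
print(out["namespace"])  # -> kubesphere-system, now a top-level key
```

The point of the added `copy` rule is that last line: after the chain runs, `namespace` exists at the top level of the record, which is what the Kafka output's `topicKey` will match against.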

2. Add an Output
Save the following as stdout.yaml. Replace brokers with your actual broker address, and add each namespace whose logs you want to collect to topics, separated by ','. For example, to collect logs from kubesphere-system, set topics to 'fluent-bit,kubesphere-system'.

apiVersion: logging.kubesphere.io/v1alpha2
kind: Output
metadata:
  annotations:
    kubesphere.io/creator: admin
  labels:
    logging.kubesphere.io/component: logging
    logging.kubesphere.io/enabled: "true"
  name: kafka-logging
  namespace: kubesphere-logging-system
spec:
  kafka:
    brokers: x.x.x.x:xx
    topicKey: namespace
    topics: fluent-bit
  match: kube.*

Run:

kubectl apply -f stdout.yaml
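How topicKey routing behaves: the value of the record field named by topicKey selects the topic, but only if that topic appears in the topics list; otherwise the record falls back to the first topic in the list. A minimal Python sketch of this rule (the namespace values are illustrative):

```python
# Simplified model of Fluent Bit's kafka output topic selection:
# the record field named by topic_key picks the topic if it is listed
# in topics; otherwise the first topic is used as the default.

def pick_topic(record: dict, topic_key: str, topics: str) -> str:
    allowed = topics.split(",")
    candidate = record.get(topic_key)
    return candidate if candidate in allowed else allowed[0]

# A record from kubesphere-system goes to its own topic...
print(pick_topic({"namespace": "kubesphere-system"},
                 "namespace", "fluent-bit,kubesphere-system"))
# ...while one from an unlisted namespace falls back to fluent-bit.
print(pick_topic({"namespace": "default"},
                 "namespace", "fluent-bit,kubesphere-system"))
```

This is why every namespace you want split into its own topic must be listed in topics; anything not listed lands in the default fluent-bit topic.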

PS: After deployment, you can add more namespaces at any time by editing the Output:

kubectl edit outputs.logging.kubesphere.io -n kubesphere-logging-system kafka-logging
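For example, to also collect logs from the default namespace (the namespace names here are illustrative), the spec would become:

```
spec:
  kafka:
    brokers: x.x.x.x:xx
    topicKey: namespace
    topics: fluent-bit,kubesphere-system,default
  match: kube.*
```

Note that the target topics must already exist on the brokers unless your Kafka cluster has topic auto-creation enabled.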