Feynman, the log file can't be uploaded here. The Fluent Bit logs show nothing that looks like an error; fluent.log has some output that I'm not sure indicates an error.
Log contents:
level=info msg="Fluentd started"
2022-04-19 07:18:57 +0000 [info]: parsing config file is succeeded path="/fluentd/etc/fluent.conf"
2022-04-19 07:18:57 +0000 [info]: gem 'fluent-plugin-aws-elasticsearch-service' version '2.4.1'
2022-04-19 07:18:57 +0000 [info]: gem 'fluent-plugin-dedot_filter' version '1.0.0'
2022-04-19 07:18:57 +0000 [info]: gem 'fluent-plugin-detect-exceptions' version '0.0.14'
2022-04-19 07:18:57 +0000 [info]: gem 'fluent-plugin-elasticsearch' version '5.2.1'
2022-04-19 07:18:57 +0000 [info]: gem 'fluent-plugin-kafka' version '0.17.5'
2022-04-19 07:18:57 +0000 [info]: gem 'fluent-plugin-label-router' version '0.2.10'
2022-04-19 07:18:57 +0000 [info]: gem 'fluent-plugin-multi-format-parser' version '1.0.0'
2022-04-19 07:18:57 +0000 [info]: gem 'fluent-plugin-oss' version '0.0.2'
2022-04-19 07:18:57 +0000 [info]: gem 'fluent-plugin-record-modifier' version '2.1.0'
2022-04-19 07:18:57 +0000 [info]: gem 'fluent-plugin-rewrite-tag-filter' version '2.4.0'
2022-04-19 07:18:57 +0000 [info]: gem 'fluent-plugin-s3' version '1.6.1'
2022-04-19 07:18:57 +0000 [info]: gem 'fluent-plugin-sumologic_output' version '1.7.3'
2022-04-19 07:18:57 +0000 [info]: gem 'fluentd' version '1.14.4'
2022-04-19 07:18:57 +0000 [info]: [ClusterFluentdConfig-cluster-cluster-fluentd-config-kafka::cluster::clusteroutput::cluster-fluentd-output-kafka-0] brokers has been set: ["my-cluster-kafka-bootstrap.my-cluster-kafka-0.svc:9091", "my-cluster-kafka-bootstrap.my-cluster-kafka-1.svc:9092", "my-cluster-kafka-bootstrap.my-cluster-kafka-2.svc:9093"]
2022-04-19 07:18:57 +0000 [warn]: [ClusterFluentdConfig-cluster-cluster-fluentd-config-kafka::cluster::clusteroutput::cluster-fluentd-output-kafka-0] Use 'topic' field of event record for topic but no fallback. Recommend to set default_topic or set 'tag' in buffer chunk keys like <buffer topic,tag>
2022-04-19 07:18:58 +0000 [info]: using configuration file: <ROOT>
<system>
rpc_endpoint "127.0.0.1:24444"
log_level info
workers 1
</system>
<source>
@type forward
bind "0.0.0.0"
port 24224
</source>
<match **>
@id main
@type label_router
<route>
@label "@0943890ac248552151615ab88ecb5e43"
<match>
namespaces agcloud-dev,default,kube-system
</match>
</route>
</match>
<label @0943890ac248552151615ab88ecb5e43>
<filter **>
@id ClusterFluentdConfig-cluster-cluster-fluentd-config-kafka::cluster::clusterfilter::cluster-fluentd-filter-k8s-0
@type record_transformer
enable_ruby true
<record>
kubernetes_ns ${record["kubernetes"]["namespace_name"]}
</record>
</filter>
<match **>
@id ClusterFluentdConfig-cluster-cluster-fluentd-config-kafka::cluster::clusteroutput::cluster-fluentd-output-kafka-0
@type kafka2
brokers my-cluster-kafka-bootstrap.my-cluster-kafka-0.svc:9091,my-cluster-kafka-bootstrap.my-cluster-kafka-1.svc:9092,my-cluster-kafka-bootstrap.my-cluster-kafka-2.svc:9093
topic_key "kubernetes_ns"
use_event_time true
<format>
@type "json"
</format>
</match>
</label>
<match **>
@type null
@id main-no-output
</match>
<label @FLUENT_LOG>
<match fluent.*>
@type null
@id main-fluentd-log
</match>
</label>
</ROOT>
2022-04-19 07:18:58 +0000 [info]: starting fluentd-1.14.4 pid=11 ruby="2.7.5"
2022-04-19 07:18:58 +0000 [info]: spawn command to main: cmdline=["/usr/bin/ruby", "-Eascii-8bit:ascii-8bit", "/usr/bin/fluentd", "-c", "/fluentd/etc/fluent.conf", "-p", "/fluentd/plugins", "--under-supervisor"]
2022-04-19 07:18:58 +0000 [info]: adding filter in @0943890ac248552151615ab88ecb5e43 pattern="**" type="record_transformer"
2022-04-19 07:18:58 +0000 [info]: adding match in @0943890ac248552151615ab88ecb5e43 pattern="**" type="kafka2"
2022-04-19 07:18:58 +0000 [info]: #0 [ClusterFluentdConfig-cluster-cluster-fluentd-config-kafka::cluster::clusteroutput::cluster-fluentd-output-kafka-0] brokers has been set: ["my-cluster-kafka-bootstrap.my-cluster-kafka-0.svc:9091", "my-cluster-kafka-bootstrap.my-cluster-kafka-1.svc:9092", "my-cluster-kafka-bootstrap.my-cluster-kafka-2.svc:9093"]
2022-04-19 07:18:58 +0000 [warn]: #0 [ClusterFluentdConfig-cluster-cluster-fluentd-config-kafka::cluster::clusteroutput::cluster-fluentd-output-kafka-0] Use 'topic' field of event record for topic but no fallback. Recommend to set default_topic or set 'tag' in buffer chunk keys like <buffer topic,tag>
2022-04-19 07:18:58 +0000 [info]: adding match in @FLUENT_LOG pattern="fluent.*" type="null"
2022-04-19 07:18:58 +0000 [info]: adding match pattern="**" type="label_router"
2022-04-19 07:18:58 +0000 [info]: adding match pattern="**" type="null"
2022-04-19 07:18:58 +0000 [info]: adding source type="forward"
2022-04-19 07:18:58 +0000 [info]: #0 starting fluentd worker pid=20 ppid=11 worker=0
2022-04-19 07:18:58 +0000 [info]: #0 [ClusterFluentdConfig-cluster-cluster-fluentd-config-kafka::cluster::clusteroutput::cluster-fluentd-output-kafka-0] initialized kafka producer: fluentd
2022-04-19 07:18:58 +0000 [info]: #0 listening port port=24224 bind="0.0.0.0"
2022-04-19 07:18:58 +0000 [info]: #0 fluentd worker is now running worker=0
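
The only non-info entries in that output are the two kafka2 [warn] lines about the topic having no fallback, which look harmless but noisy. If they need addressing, the message itself suggests either adding default_topic or putting the topic in the buffer chunk keys. Below is a minimal sketch against the rendered clusteroutput above; the topic name logs-default is only a placeholder, and I haven't verified how this is exposed through the fluent-operator ClusterOutput CRD as opposed to raw Fluentd config:

```
<match **>
  @type kafka2
  brokers my-cluster-kafka-bootstrap.my-cluster-kafka-0.svc:9091,my-cluster-kafka-bootstrap.my-cluster-kafka-1.svc:9092,my-cluster-kafka-bootstrap.my-cluster-kafka-2.svc:9093
  # Topic is taken from the kubernetes_ns field added by the record_transformer filter.
  topic_key kubernetes_ns
  # Placeholder fallback topic, used when a record has no kubernetes_ns field;
  # this is the fallback the [warn] message asks for.
  default_topic logs-default
  use_event_time true
  <format>
    @type json
  </format>
</match>
```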