• Monitoring & Logging
  • prometheus-k8s on KS v3.3.0 keeps restarting automatically and intermittently

We are running AWS EKS with self-managed nodes, and the StorageClass is backed by EFS (NAS) storage. A few days after the cluster was set up, we noticed that monitoring data was intermittently missing, and only then realized that the prometheus-k8s StatefulSet kept restarting for no obvious reason. It is not exceeding its resource limits, and the logs do not show anything particularly unusual. The pod's startup log is shown below.
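
For reference, the restart count and the related Kubernetes events can be confirmed with something like the following (a minimal sketch; the pod and namespace names assume the KubeSphere defaults):

  # restart count and current status of the Prometheus pod
  kubectl -n kubesphere-monitoring-system get pod prometheus-k8s-0
  # recent events for the pod (probe failures, kills, etc. show up here)
  kubectl -n kubesphere-monitoring-system describe pod prometheus-k8s-0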

 ts=2022-09-18T01:29:24.804Z caller=main.go:516 level=info msg="Starting Prometheus" version="(version=2.34.0, branch=HEAD, revision=881111fec4332c33094a6fb2680c71fffc427275)"
 ts=2022-09-18T01:29:24.804Z caller=main.go:521 level=info build_context="(go=go1.17.8, user=root@121ad7ea5487, date=20220315-15:18:00)"
 ts=2022-09-18T01:29:24.804Z caller=main.go:522 level=info host_details="(Linux 5.4.209-116.363.amzn2.x86_64 #1 SMP Wed Aug 10 21:19:18 UTC 2022 x86_64 prometheus-k8s-0 (none))"
 ts=2022-09-18T01:29:24.804Z caller=main.go:523 level=info fd_limits="(soft=1048576, hard=1048576)"
 ts=2022-09-18T01:29:24.804Z caller=main.go:524 level=info vm_limits="(soft=unlimited, hard=unlimited)"
 ts=2022-09-18T01:29:24.832Z caller=query_logger.go:79 level=info component=activeQueryTracker msg="These queries didn't finish in prometheus' last run:" queries="[estamp_sec\":1663464549}estamp_sec\":1663464549]"
 ts=2022-09-18T01:29:26.207Z caller=web.go:540 level=info component=web msg="Start listening for connections" address=0.0.0.0:9090
 ts=2022-09-18T01:29:26.208Z caller=main.go:937 level=info msg="Starting TSDB ..."
 ts=2022-09-18T01:29:26.209Z caller=tls_config.go:231 level=info component=web msg="TLS is disabled." http2=false
 ts=2022-09-18T01:29:31.577Z caller=repair.go:57 level=info component=tsdb msg="Found healthy block" mint=1663022181407 maxt=1663027200000 ulid=01GCT7TYA8SCSXK0QS8412WQF1
 ts=2022-09-18T01:29:31.578Z caller=repair.go:57 level=info component=tsdb msg="Found healthy block" mint=1663027200004 maxt=1663034400000 ulid=01GCTCM6KCPF864TMZY78KG8CD
 ts=2022-09-18T01:29:31.579Z caller=repair.go:57 level=info component=tsdb msg="Found healthy block" mint=1663034400039 maxt=1663041600000 ulid=01GCTKFXVDMYB75MANGBGQZPYX
 ts=2022-09-18T01:29:31.580Z caller=repair.go:57 level=info component=tsdb msg="Found healthy block" mint=1663041600096 maxt=1663048800000 ulid=01GCTTBN3BQPNGG1E8SFVAFDJS
 ts=2022-09-18T01:29:31.581Z caller=repair.go:57 level=info component=tsdb msg="Found healthy block" mint=1663048800020 maxt=1663056000000 ulid=01GCV17CBM36F0CTY8R5NS3FE6
 ts=2022-09-18T01:29:31.582Z caller=repair.go:57 level=info component=tsdb msg="Found healthy block" mint=1663056000007 maxt=1663063200000 ulid=01GCV833KMSTY2EH5DKAY1NNCX
 ts=2022-09-18T01:29:31.583Z caller=repair.go:57 level=info component=tsdb msg="Found healthy block" mint=1663063200000 maxt=1663070400000 ulid=01GCVEYTVKGDZRB6PAEJXQQE3R
 ts=2022-09-18T01:29:31.584Z caller=repair.go:57 level=info component=tsdb msg="Found healthy block" mint=1663092000000 maxt=1663099200000 ulid=01GCWADQVJYGHN486ECWN75MB4
 ts=2022-09-18T01:29:31.585Z caller=repair.go:57 level=info component=tsdb msg="Found healthy block" mint=1663099200000 maxt=1663106400000 ulid=01GCWHDX46SQ61YHEXK8M8EM77
 ts=2022-09-18T01:29:31.586Z caller=repair.go:57 level=info component=tsdb msg="Found healthy block" mint=1663070400073 maxt=1663092000000 ulid=01GCWHKZ47Z0CAB7BD29BPW7S3
 ts=2022-09-18T01:29:31.587Z caller=repair.go:57 level=info component=tsdb msg="Found healthy block" mint=1663106400105 maxt=1663113600000 ulid=01GCWR56BMBG9FDG42G4STQ5B6
 ts=2022-09-18T01:29:31.588Z caller=repair.go:57 level=info component=tsdb msg="Found healthy block" mint=1663113603784 maxt=1663120800000 ulid=01GCWZ0XYVRCAN5GQ8MQ22KPP3
 ts=2022-09-18T01:29:31.589Z caller=repair.go:57 level=info component=tsdb msg="Found healthy block" mint=1663120800000 maxt=1663128000000 ulid=01GCX5WMVJAY4CV3FV6JCJESGC
 ts=2022-09-18T01:29:31.590Z caller=repair.go:57 level=info component=tsdb msg="Found healthy block" mint=1663128000066 maxt=1663135200000 ulid=01GCXCRC3NY0CPP754PN2Z1MYX
 ts=2022-09-18T01:29:31.590Z caller=repair.go:57 level=info component=tsdb msg="Found healthy block" mint=1663135200231 maxt=1663142400000 ulid=01GCXKM3BMD6QEKAZR551QAHZR
 ts=2022-09-18T01:29:31.591Z caller=repair.go:57 level=info component=tsdb msg="Found healthy block" mint=1663142400059 maxt=1663149600000 ulid=01GCXTFTKPQHHPSR2Z35M49QQA
 ts=2022-09-18T01:29:31.592Z caller=repair.go:57 level=info component=tsdb msg="Found healthy block" mint=1663149600000 maxt=1663156800000 ulid=01GCY1BHVNY56RVCT6D3BKNQVA
 ts=2022-09-18T01:29:31.593Z caller=repair.go:57 level=info component=tsdb msg="Found healthy block" mint=1663178400016 maxt=1663185600000 ulid=01GCYWTEVMRGE3Y463CNWWYMQ1
 ts=2022-09-18T01:29:31.594Z caller=repair.go:57 level=info component=tsdb msg="Found healthy block" mint=1663156800000 maxt=1663178400000 ulid=01GCYX0QR5XM62X98M8MWTQRKJ
 ts=2022-09-18T01:29:31.595Z caller=repair.go:57 level=info component=tsdb msg="Found healthy block" mint=1663185600043 maxt=1663192800000 ulid=01GCZ3P43GP69YJEKRTERT3KEW
 ts=2022-09-18T01:29:31.596Z caller=repair.go:57 level=info component=tsdb msg="Found healthy block" mint=1663192800000 maxt=1663200000000 ulid=01GCZAZ9GQG1JVMQ9XXRY4GHHJ
 ts=2022-09-18T01:29:31.597Z caller=repair.go:57 level=info component=tsdb msg="Found healthy block" mint=1663200000716 maxt=1663207200000 ulid=01GCZHDJK79WYCTY8HCNPJ26F6
 ts=2022-09-18T01:29:31.598Z caller=repair.go:57 level=info component=tsdb msg="Found healthy block" mint=1663207201750 maxt=1663214400000 ulid=01GCZR9AAFHHPZZSTMD55KBN6S
 ts=2022-09-18T01:29:31.599Z caller=repair.go:57 level=info component=tsdb msg="Found healthy block" mint=1663214400000 maxt=1663221600000 ulid=01GCZZGVG5QXPE1F5PF3JZQ4D1
 ts=2022-09-18T01:29:31.600Z caller=repair.go:57 level=info component=tsdb msg="Found healthy block" mint=1663221600000 maxt=1663228800000 ulid=01GD09F7J2RQETZ91RFMPX1J4N
 ts=2022-09-18T01:29:31.601Z caller=repair.go:57 level=info component=tsdb msg="Found healthy block" mint=1663228800000 maxt=1663236000000 ulid=01GD0D9PNNPNDQHNG83D1GKFM4
 ts=2022-09-18T01:29:31.602Z caller=repair.go:57 level=info component=tsdb msg="Found healthy block" mint=1663236025228 maxt=1663243200000 ulid=01GD0MWEFX9C79GEX0Y9E1SBV9
 ts=2022-09-18T01:29:31.602Z caller=repair.go:57 level=info component=tsdb msg="Found healthy block" mint=1663243200000 maxt=1663250400000 ulid=01GD0TXFZNSYA3B20JZZ3R30ZA
 ts=2022-09-18T01:29:31.603Z caller=repair.go:57 level=info component=tsdb msg="Found healthy block" mint=1663250400000 maxt=1663257600000 ulid=01GD121T4P0V04X2SHGFGRWSAJ
 ts=2022-09-18T01:29:31.604Z caller=repair.go:57 level=info component=tsdb msg="Found healthy block" mint=1663257600000 maxt=1663264800000 ulid=01GD1AKMK5Y1G2WHN0AG0R11TF
 ts=2022-09-18T01:29:31.605Z caller=repair.go:57 level=info component=tsdb msg="Found healthy block" mint=1663264800000 maxt=1663272000000 ulid=01GD1FDHNQS1BBNGRAEXCF31EQ
 ts=2022-09-18T01:29:31.606Z caller=repair.go:57 level=info component=tsdb msg="Found healthy block" mint=1663272000000 maxt=1663279200000 ulid=01GD1QT5TMP16WAQKKFQX678GN
 ts=2022-09-18T01:29:31.607Z caller=repair.go:57 level=info component=tsdb msg="Found healthy block" mint=1663279200000 maxt=1663286400000 ulid=01GD21YPV5RX1VEEKNR9WBZBNA
 ts=2022-09-18T01:29:31.608Z caller=repair.go:57 level=info component=tsdb msg="Found healthy block" mint=1663286400000 maxt=1663293600000 ulid=01GD23T9K7F5TZYH38KMWNZF91
 ts=2022-09-18T01:29:31.609Z caller=repair.go:57 level=info component=tsdb msg="Found healthy block" mint=1663293600000 maxt=1663300800000 ulid=01GD2AP0VFZ9TPS7PC018DGFJC
 ts=2022-09-18T01:29:31.610Z caller=repair.go:57 level=info component=tsdb msg="Found healthy block" mint=1663300800575 maxt=1663308000000 ulid=01GD2HHR3MJ0A4FGX2AK2CF3DT
 ts=2022-09-18T01:29:31.611Z caller=repair.go:57 level=info component=tsdb msg="Found healthy block" mint=1663308000000 maxt=1663315200000 ulid=01GD2RDFBF6KA3EYSHRRVCH21F
 ts=2022-09-18T01:29:31.612Z caller=repair.go:57 level=info component=tsdb msg="Found healthy block" mint=1663315200000 maxt=1663322400000 ulid=01GD2ZY5Z5ZGZDW9G89ZABEJR9
 ts=2022-09-18T01:29:31.613Z caller=repair.go:57 level=info component=tsdb msg="Found healthy block" mint=1663322400000 maxt=1663329600000 ulid=01GD36BRG07N84Q64YF7CRCB8Y
 ts=2022-09-18T01:29:31.614Z caller=repair.go:57 level=info component=tsdb msg="Found healthy block" mint=1663329601751 maxt=1663336800000 ulid=01GD3D6RF23W5DQTJAM1FTF04T
 ts=2022-09-18T01:29:31.614Z caller=repair.go:57 level=info component=tsdb msg="Found healthy block" mint=1663336800000 maxt=1663344000000 ulid=01GD3M2DTDZ1VAA9JQ7HYCQ9C5
 ts=2022-09-18T01:29:31.615Z caller=repair.go:57 level=info component=tsdb msg="Found healthy block" mint=1663344000175 maxt=1663351200000 ulid=01GD3V5FEDPT9Z9EQASRYRKD92
 ts=2022-09-18T01:29:31.616Z caller=repair.go:57 level=info component=tsdb msg="Found healthy block" mint=1663351200000 maxt=1663358400000 ulid=01GD4223BYBQEY1XP0ABN98KX7
 ts=2022-09-18T01:29:31.617Z caller=repair.go:57 level=info component=tsdb msg="Found healthy block" mint=1663358400000 maxt=1663365600000 ulid=01GD48YRHDS1Y7Z59V6JBPQ4MV
 ts=2022-09-18T01:29:31.618Z caller=repair.go:57 level=info component=tsdb msg="Found healthy block" mint=1663365600000 maxt=1663372800000 ulid=01GD4FB9B8C9R3P6TCDXW0GTG8
 ts=2022-09-18T01:29:31.619Z caller=repair.go:57 level=info component=tsdb msg="Found healthy block" mint=1663372800000 maxt=1663380000000 ulid=01GD4PDV811WHZPAZGJT2ZSDCV
 ts=2022-09-18T01:29:31.620Z caller=repair.go:57 level=info component=tsdb msg="Found healthy block" mint=1663380000000 maxt=1663387200000 ulid=01GD4XKTS3A94Q5TTQ0552X0PJ
 ts=2022-09-18T01:29:31.621Z caller=repair.go:57 level=info component=tsdb msg="Found healthy block" mint=1663387200000 maxt=1663394400000 ulid=01GD5F5RCNZFZKXFSMV8M8JBKC
 ts=2022-09-18T01:29:31.622Z caller=repair.go:57 level=info component=tsdb msg="Found healthy block" mint=1663394400000 maxt=1663401600000 ulid=01GD5K3829B2M81N4S3C858C30
 ts=2022-09-18T01:29:31.622Z caller=repair.go:57 level=info component=tsdb msg="Found healthy block" mint=1663401600052 maxt=1663408800000 ulid=01GD5KMQFDKZFH7QHXG5ZN193M
 ts=2022-09-18T01:29:31.623Z caller=repair.go:57 level=info component=tsdb msg="Found healthy block" mint=1663408800000 maxt=1663416000000 ulid=01GD5RHMVAYAYJRQBH504FN0EW
 ts=2022-09-18T01:29:31.624Z caller=repair.go:57 level=info component=tsdb msg="Found healthy block" mint=1663416000216 maxt=1663423200000 ulid=01GD5ZY8BKVKC6AWR96VD53PYN
 ts=2022-09-18T01:29:31.625Z caller=repair.go:57 level=info component=tsdb msg="Found healthy block" mint=1663423200000 maxt=1663430400000 ulid=01GD66WTXVDRFS0V6CY9R99RJV
 ts=2022-09-18T01:29:31.626Z caller=repair.go:57 level=info component=tsdb msg="Found healthy block" mint=1663430400000 maxt=1663437600000 ulid=01GD6DA1X6ZH1P44ZG6PSD79VS
 ts=2022-09-18T01:29:31.627Z caller=repair.go:57 level=info component=tsdb msg="Found healthy block" mint=1663437600000 maxt=1663444800000 ulid=01GD6NMC5D6BWX96GXHA9VHMG5
 ts=2022-09-18T01:29:31.628Z caller=repair.go:57 level=info component=tsdb msg="Found healthy block" mint=1663444800000 maxt=1663452000000 ulid=01GD6V2PXRSBMYY447CFDVH52C
 ts=2022-09-18T01:29:31.629Z caller=repair.go:57 level=info component=tsdb msg="Found healthy block" mint=1663452000000 maxt=1663459200000 ulid=01GD72X5XD5KDS95J03FXRZ4VM
 ts=2022-09-18T01:29:41.678Z caller=head.go:493 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any"
 ts=2022-09-18T01:29:41.701Z caller=head.go:536 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=22.395017ms
 ts=2022-09-18T01:29:41.701Z caller=head.go:542 level=info component=tsdb msg="Replaying WAL, this may take a while"
 ts=2022-09-18T01:30:56.167Z caller=head_wal.go:337 level=warn component=tsdb msg="Unknown series references" samples=11 exemplars=0
 ts=2022-09-18T01:30:56.167Z caller=head.go:578 level=info component=tsdb msg="WAL checkpoint loaded"
 ts=2022-09-18T01:31:01.533Z caller=head.go:613 level=info component=tsdb msg="WAL segment loaded" segment=260 maxSegment=265
 ts=2022-09-18T01:31:01.565Z caller=head.go:613 level=info component=tsdb msg="WAL segment loaded" segment=261 maxSegment=265
 ts=2022-09-18T01:31:01.583Z caller=head.go:613 level=info component=tsdb msg="WAL segment loaded" segment=262 maxSegment=265
 ts=2022-09-18T01:31:01.659Z caller=head.go:613 level=info component=tsdb msg="WAL segment loaded" segment=263 maxSegment=265
 ts=2022-09-18T01:31:01.956Z caller=head.go:613 level=info component=tsdb msg="WAL segment loaded" segment=264 maxSegment=265
 ts=2022-09-18T01:31:05.757Z caller=head.go:613 level=info component=tsdb msg="WAL segment loaded" segment=265 maxSegment=265
 ts=2022-09-18T01:31:05.757Z caller=head.go:619 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=1m14.465894872s wal_replay_duration=9.590616671s total_replay_duration=1m24.078973732s
 ts=2022-09-18T01:31:05.930Z caller=main.go:956 level=warn fs_type=NFS_SUPER_MAGIC msg="This filesystem is not supported and may lead to data corruption and data loss. Please carefully read https://prometheus.io/docs/prometheus/latest/storage/ to learn more about supported filesystems."
 ts=2022-09-18T01:31:05.930Z caller=main.go:961 level=info msg="TSDB started"
 ts=2022-09-18T01:31:05.930Z caller=main.go:1142 level=info msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml
 ts=2022-09-18T01:31:05.937Z caller=kubernetes.go:313 level=info component="discovery manager scrape" discovery=kubernetes msg="Using pod service account via in-cluster config"
 ts=2022-09-18T01:31:05.938Z caller=kubernetes.go:313 level=info component="discovery manager scrape" discovery=kubernetes msg="Using pod service account via in-cluster config"
 ts=2022-09-18T01:31:05.938Z caller=kubernetes.go:313 level=info component="discovery manager scrape" discovery=kubernetes msg="Using pod service account via in-cluster config"
 ts=2022-09-18T01:31:05.938Z caller=kubernetes.go:313 level=info component="discovery manager scrape" discovery=kubernetes msg="Using pod service account via in-cluster config"
 ts=2022-09-18T01:31:05.939Z caller=kubernetes.go:313 level=info component="discovery manager scrape" discovery=kubernetes msg="Using pod service account via in-cluster config"
 ts=2022-09-18T01:31:05.939Z caller=kubernetes.go:313 level=info component="discovery manager scrape" discovery=kubernetes msg="Using pod service account via in-cluster config"
 ts=2022-09-18T01:31:05.940Z caller=kubernetes.go:313 level=info component="discovery manager scrape" discovery=kubernetes msg="Using pod service account via in-cluster config"
 ts=2022-09-18T01:31:05.940Z caller=kubernetes.go:313 level=info component="discovery manager scrape" discovery=kubernetes msg="Using pod service account via in-cluster config"
 ts=2022-09-18T01:31:05.940Z caller=kubernetes.go:313 level=info component="discovery manager notify" discovery=kubernetes msg="Using pod service account via in-cluster config"
 ts=2022-09-18T01:31:05.976Z caller=main.go:1179 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml totalDuration=46.492603ms db_storage=1.59µs remote_storage=1.85µs web_handler=740ns query_engine=1.13µs scrape=194.445µs scrape_sd=3.425352ms notify=16.27µs notify_sd=608.217µs rules=35.318174ms tracing=5.73µs
 ts=2022-09-18T01:31:05.976Z caller=main.go:910 level=info msg="Server is ready to receive web requests."

    elasticsearch-logging-data

    This component keeps restarting as well, which is really strange.

    [2022-09-19T01:47:14,561][INFO ][o.e.i.s.IndexShard       ] [elasticsearch-logging-data-0] [ks-logstash-log-2022.09.14][4] primary-replica resync completed with 0 operations
     [2022-09-19T01:47:14,561][INFO ][o.e.i.s.IndexShard       ] [elasticsearch-logging-data-0] [ks-logstash-log-2022.09.14][0] primary-replica resync completed with 0 operations
     [2022-09-19T01:47:23,923][WARN ][o.e.d.z.ZenDiscovery     ] [elasticsearch-logging-data-0] dropping pending state [[uuid[W6TBbeNwTnitb9zfg8mIhg], v[432], m[NcQs9r_VQR-p4mvsTIZzGw]]]. more than [25] pending states.
     [2022-09-19T01:47:39,547][WARN ][o.e.a.b.TransportShardBulkAction] [elasticsearch-logging-data-0] [[logstash-jaeger-span-2022-09-19][1]] failed to perform indices:data/write/bulk[s] on replica [logstash-jaeger-span-2022-09-19][1], node[7XRBYCbmR_aLK9gNVpEB_w], [R], s[STARTED], a[id=hXlhh-suRoOnuABT5fl2Vg]
     org.elasticsearch.transport.NodeNotConnectedException: [elasticsearch-logging-data-1][172.31.68.236:9300] Node not connected
     	at org.elasticsearch.transport.ConnectionManager.getConnection(ConnectionManager.java:151) ~[elasticsearch-6.8.22.jar:6.8.22]
     	at org.elasticsearch.transport.TransportService.getConnection(TransportService.java:576) ~[elasticsearch-6.8.22.jar:6.8.22]
     	at org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:531) [elasticsearch-6.8.22.jar:6.8.22]
     	at org.elasticsearch.action.support.replication.TransportReplicationAction.sendReplicaRequest(TransportReplicationAction.java:1222) [elasticsearch-6.8.22.jar:6.8.22]
     	at org.elasticsearch.action.support.replication.TransportReplicationAction$ReplicasProxy.performOn(TransportReplicationAction.java:1184) [elasticsearch-6.8.22.jar:6.8.22]
     	at org.elasticsearch.action.support.replication.ReplicationOperation.performOnReplica(ReplicationOperation.java:165) [elasticsearch-6.8.22.jar:6.8.22]
     	at org.elasticsearch.action.support.replication.ReplicationOperation.performOnReplicas(ReplicationOperation.java:152) [elasticsearch-6.8.22.jar:6.8.22]
     	at org.elasticsearch.action.support.replication.ReplicationOperation.execute(ReplicationOperation.java:126) [elasticsearch-6.8.22.jar:6.8.22]
     	at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction.runWithPrimaryShardReference(TransportReplicationAction.java:433) [elasticsearch-6.8.22.jar:6.8.22]
     	at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction.lambda$doRun$0(TransportReplicationAction.java:374) [elasticsearch-6.8.22.jar:6.8.22]
     	at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:62) [elasticsearch-6.8.22.jar:6.8.22]
     	at org.elasticsearch.index.shard.IndexShard.lambda$wrapPrimaryOperationPermitListener$14(IndexShard.java:2592) [elasticsearch-6.8.22.jar:6.8.22]
     	at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:62) [elasticsearch-6.8.22.jar:6.8.22]
     	at org.elasticsearch.index.shard.IndexShardOperationPermits.acquire(IndexShardOperationPermits.java:273) [elasticsearch-6.8.22.jar:6.8.22]
     	at org.elasticsearch.index.shard.IndexShardOperationPermits.acquire(IndexShardOperationPermits.java:240) [elasticsearch-6.8.22.jar:6.8.22]
     	at org.elasticsearch.index.shard.IndexShard.acquirePrimaryOperationPermit(IndexShard.java:2567) [elasticsearch-6.8.22.jar:6.8.22]
     	at org.elasticsearch.action.support.replication.TransportReplicationAction.acquirePrimaryOperationPermit(TransportReplicationAction.java:996) [elasticsearch-6.8.22.jar:6.8.22]
     	at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction.doRun(TransportReplicationAction.java:370) [elasticsearch-6.8.22.jar:6.8.22]
     	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-6.8.22.jar:6.8.22]
     	at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryOperationTransportHandler.messageReceived(TransportReplicationAction.java:325) [elasticsearch-6.8.22.jar:6.8.22]
     	at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryOperationTransportHandler.messageReceived(TransportReplicationAction.java:312) [elasticsearch-6.8.22.jar:6.8.22]
     	at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:66) [elasticsearch-6.8.22.jar:6.8.22]
     	at org.elasticsearch.transport.TransportService$7.doRun(TransportService.java:704) [elasticsearch-6.8.22.jar:6.8.22]
     	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:778) [elasticsearch-6.8.22.jar:6.8.22]
     	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-6.8.22.jar:6.8.22]
     	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) [?:?]
     	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630) [?:?]
     	at java.lang.Thread.run(Thread.java:832) [?:?]
     [2022-09-19T01:47:39,547][WARN ][o.e.a.b.TransportShardBulkAction] [elasticsearch-logging-data-0] [[ks-logstash-log-2022.09.19][2]] failed to perform indices:data/write/bulk[s] on replica [ks-logstash-log-2022.09.19][2], node[7XRBYCbmR_aLK9gNVpEB_w], [R], s[STARTED], a[id=ucTI93u4QVqg1PUypWHCfA]
     org.elasticsearch.transport.NodeNotConnectedException: [elasticsearch-logging-data-1][172.31.68.236:9300] Node not connected
     	at org.elasticsearch.transport.ConnectionManager.getConnection(ConnectionManager.java:151) ~[elasticsearch-6.8.22.jar:6.8.22]
     	at org.elasticsearch.transport.TransportService.getConnection(TransportService.java:576) ~[elasticsearch-6.8.22.jar:6.8.22]
     	at org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:531) [elasticsearch-6.8.22.jar:6.8.22]
     	at org.elasticsearch.action.support.replication.TransportReplicationAction.sendReplicaRequest(TransportReplicationAction.java:1222) [elasticsearch-6.8.22.jar:6.8.22]
     	at org.elasticsearch.action.support.replication.TransportReplicationAction$ReplicasProxy.performOn(TransportReplicationAction.java:1184) [elasticsearch-6.8.22.jar:6.8.22]
     	at org.elasticsearch.action.support.replication.ReplicationOperation.performOnReplica(ReplicationOperation.java:165) [elasticsearch-6.8.22.jar:6.8.22]
     	at org.elasticsearch.action.support.replication.ReplicationOperation.performOnReplicas(ReplicationOperation.java:152) [elasticsearch-6.8.22.jar:6.8.22]
     	at org.elasticsearch.action.support.replication.ReplicationOperation.execute(ReplicationOperation.java:126) [elasticsearch-6.8.22.jar:6.8.22]
     	at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction.runWithPrimaryShardReference(TransportReplicationAction.java:433) [elasticsearch-6.8.22.jar:6.8.22]
     	at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction.lambda$doRun$0(TransportReplicationAction.java:374) [elasticsearch-6.8.22.jar:6.8.22]
     	at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:62) [elasticsearch-6.8.22.jar:6.8.22]
     	at org.elasticsearch.index.shard.IndexShard.lambda$wrapPrimaryOperationPermitListener$14(IndexShard.java:2592) [elasticsearch-6.8.22.jar:6.8.22]
     	at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:62) [elasticsearch-6.8.22.jar:6.8.22]
     	at org.elasticsearch.index.shard.IndexShardOperationPermits.acquire(IndexShardOperationPermits.java:273) [elasticsearch-6.8.22.jar:6.8.22]
     	at org.elasticsearch.index.shard.IndexShardOperationPermits.acquire(IndexShardOperationPermits.java:240) [elasticsearch-6.8.22.jar:6.8.22]
     	at org.elasticsearch.index.shard.IndexShard.acquirePrimaryOperationPermit(IndexShard.java:2567) [elasticsearch-6.8.22.jar:6.8.22]
     	at org.elasticsearch.action.support.replication.TransportReplicationAction.acquirePrimaryOperationPermit(TransportReplicationAction.java:996) [elasticsearch-6.8.22.jar:6.8.22]
     	at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction.doRun(TransportReplicationAction.java:370) [elasticsearch-6.8.22.jar:6.8.22]
     	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-6.8.22.jar:6.8.22]
     	at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryOperationTransportHandler.messageReceived(TransportReplicationAction.java:325) [elasticsearch-6.8.22.jar:6.8.22]
     	at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryOperationTransportHandler.messageReceived(TransportReplicationAction.java:312) [elasticsearch-6.8.22.jar:6.8.22]
     	at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:66) [elasticsearch-6.8.22.jar:6.8.22]
     	at org.elasticsearch.transport.TransportService$7.doRun(TransportService.java:704) [elasticsearch-6.8.22.jar:6.8.22]
     	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:778) [elasticsearch-6.8.22.jar:6.8.22]
     	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-6.8.22.jar:6.8.22]
     	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) [?:?]
     	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630) [?:?]
     	at java.lang.Thread.run(Thread.java:832) [?:?]

    koalawangyang

    You can check the exit logs of prometheus-k8s or elasticsearch-logging-data to investigate further:

    kubectl logs -n kubesphere-monitoring-system prometheus-k8s-0 -c prometheus -p

      frezes

      Thanks for the reply. After running that command, it returned the output below. It looks like the process suddenly received a "Received SIGTERM, exiting gracefully…" shutdown signal and then restarted. Could this be because it hit the resource limit?

      ts=2022-09-19T07:10:11.598Z caller=main.go:516 level=info msg="Starting Prometheus" version="(version=2.34.0, branch=HEAD, revision=881111fec4332c33094a6fb2680c71fffc427275)"
      ts=2022-09-19T07:10:11.598Z caller=main.go:521 level=info build_context="(go=go1.17.8, user=root@121ad7ea5487, date=20220315-15:18:00)"
      ts=2022-09-19T07:10:11.598Z caller=main.go:522 level=info host_details="(Linux 5.4.209-116.363.amzn2.x86_64 #1 SMP Wed Aug 10 21:19:18 UTC 2022 x86_64 prometheus-k8s-0 (none))"
      ts=2022-09-19T07:10:11.598Z caller=main.go:523 level=info fd_limits="(soft=1048576, hard=1048576)"
      ts=2022-09-19T07:10:11.598Z caller=main.go:524 level=info vm_limits="(soft=unlimited, hard=unlimited)"
      ts=2022-09-19T07:10:17.337Z caller=web.go:540 level=info component=web msg="Start listening for connections" address=0.0.0.0:9090
      ts=2022-09-19T07:10:17.338Z caller=main.go:937 level=info msg="Starting TSDB ..."
      ts=2022-09-19T07:10:17.339Z caller=tls_config.go:231 level=info component=web msg="TLS is disabled." http2=false
      ts=2022-09-19T07:10:17.345Z caller=repair.go:57 level=info component=tsdb msg="Found healthy block" mint=1663557782897 maxt=1663560000000 ulid=01GDA6M688YJB2SZW17YV42VR9
      ts=2022-09-19T07:10:20.851Z caller=head.go:493 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any"
      ts=2022-09-19T07:10:20.927Z caller=head.go:536 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=75.723739ms
      ts=2022-09-19T07:10:20.927Z caller=head.go:542 level=info component=tsdb msg="Replaying WAL, this may take a while"
      ts=2022-09-19T07:10:26.115Z caller=head.go:613 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=9
      ts=2022-09-19T07:10:27.665Z caller=head.go:613 level=info component=tsdb msg="WAL segment loaded" segment=1 maxSegment=9
      ts=2022-09-19T07:10:28.102Z caller=head.go:613 level=info component=tsdb msg="WAL segment loaded" segment=2 maxSegment=9
      ts=2022-09-19T07:10:28.418Z caller=head.go:613 level=info component=tsdb msg="WAL segment loaded" segment=3 maxSegment=9
      ts=2022-09-19T07:10:28.691Z caller=head.go:613 level=info component=tsdb msg="WAL segment loaded" segment=4 maxSegment=9
      ts=2022-09-19T07:10:28.702Z caller=head.go:613 level=info component=tsdb msg="WAL segment loaded" segment=5 maxSegment=9
      ts=2022-09-19T07:10:28.706Z caller=head.go:613 level=info component=tsdb msg="WAL segment loaded" segment=6 maxSegment=9
      ts=2022-09-19T07:10:28.715Z caller=head.go:613 level=info component=tsdb msg="WAL segment loaded" segment=7 maxSegment=9
      ts=2022-09-19T07:10:28.790Z caller=head.go:613 level=info component=tsdb msg="WAL segment loaded" segment=8 maxSegment=9
      ts=2022-09-19T07:10:28.792Z caller=head.go:613 level=info component=tsdb msg="WAL segment loaded" segment=9 maxSegment=9
      ts=2022-09-19T07:10:28.792Z caller=head.go:619 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=2.339816ms wal_replay_duration=7.862657848s total_replay_duration=7.940790025s
      ts=2022-09-19T07:10:28.935Z caller=main.go:956 level=warn fs_type=NFS_SUPER_MAGIC msg="This filesystem is not supported and may lead to data corruption and data loss. Please carefully read https://prometheus.io/docs/prometheus/latest/storage/ to learn more about supported filesystems."
      ts=2022-09-19T07:10:28.935Z caller=main.go:961 level=info msg="TSDB started"
      ts=2022-09-19T07:10:28.935Z caller=main.go:1142 level=info msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml
      ts=2022-09-19T07:10:28.942Z caller=kubernetes.go:313 level=info component="discovery manager scrape" discovery=kubernetes msg="Using pod service account via in-cluster config"
      ts=2022-09-19T07:10:28.943Z caller=kubernetes.go:313 level=info component="discovery manager scrape" discovery=kubernetes msg="Using pod service account via in-cluster config"
      ts=2022-09-19T07:10:28.943Z caller=kubernetes.go:313 level=info component="discovery manager scrape" discovery=kubernetes msg="Using pod service account via in-cluster config"
      ts=2022-09-19T07:10:28.943Z caller=kubernetes.go:313 level=info component="discovery manager scrape" discovery=kubernetes msg="Using pod service account via in-cluster config"
      ts=2022-09-19T07:10:28.944Z caller=kubernetes.go:313 level=info component="discovery manager scrape" discovery=kubernetes msg="Using pod service account via in-cluster config"
      ts=2022-09-19T07:10:28.944Z caller=kubernetes.go:313 level=info component="discovery manager scrape" discovery=kubernetes msg="Using pod service account via in-cluster config"
      ts=2022-09-19T07:10:28.945Z caller=kubernetes.go:313 level=info component="discovery manager scrape" discovery=kubernetes msg="Using pod service account via in-cluster config"
      ts=2022-09-19T07:10:28.945Z caller=kubernetes.go:313 level=info component="discovery manager scrape" discovery=kubernetes msg="Using pod service account via in-cluster config"
      ts=2022-09-19T07:10:28.946Z caller=kubernetes.go:313 level=info component="discovery manager notify" discovery=kubernetes msg="Using pod service account via in-cluster config"
      ts=2022-09-19T07:10:28.983Z caller=main.go:1179 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml totalDuration=47.809688ms db_storage=1.1µs remote_storage=1.56µs web_handler=800ns query_engine=1.13µs scrape=186.416µs scrape_sd=3.349854ms notify=21.081µs notify_sd=588.296µs rules=37.154109ms tracing=5.58µs
      ts=2022-09-19T07:10:28.983Z caller=main.go:910 level=info msg="Server is ready to receive web requests."
      ts=2022-09-19T07:13:14.165Z caller=compact.go:519 level=info component=tsdb msg="write block" mint=1663560000000 maxt=1663567200000 ulid=01GDA9B1W61WZ1TY3GRGPJ6M9E duration=2m44.911485767s
      ts=2022-09-19T07:14:06.482Z caller=head.go:840 level=info component=tsdb msg="Head GC completed" duration=1.402996698s
      ts=2022-09-19T07:14:07.828Z caller=checkpoint.go:98 level=info component=tsdb msg="Creating checkpoint" from_segment=0 to_segment=5 mint=1663567200000
      ts=2022-09-19T07:16:48.126Z caller=main.go:776 level=warn msg="Received SIGTERM, exiting gracefully..."
      ts=2022-09-19T07:16:48.126Z caller=main.go:799 level=info msg="Stopping scrape discovery manager..."
      ts=2022-09-19T07:16:48.126Z caller=main.go:813 level=info msg="Stopping notify discovery manager..."
      ts=2022-09-19T07:16:48.126Z caller=main.go:835 level=info msg="Stopping scrape manager..."
      ts=2022-09-19T07:16:48.126Z caller=manager.go:610 level=warn component="rule manager" group=kubernetes-system-controller-manager msg="Evaluating rule failed" rule="alert: KubeControllerManagerDown\nexpr: absent(up{job=\"kube-controller-manager\"} == 1)\nfor: 15m\nlabels:\n  severity: critical\nannotations:\n  description: KubeControllerManager has disappeared from Prometheus target discovery.\n  runbook_url: https://runbooks.prometheus-operator.dev/runbooks/kubernetes/kubecontrollermanagerdown\n  summary: Target disappeared from Prometheus target discovery.\n" err="query timed out in expression evaluation"
      ts=2022-09-19T07:16:48.127Z caller=main.go:795 level=info msg="Scrape discovery manager stopped"
      ts=2022-09-19T07:16:48.129Z caller=manager.go:610 level=warn component="rule manager" group=prometheus.rules msg="Evaluating rule failed" rule="record: prometheus:up:sum\nexpr: sum(up{job=\"prometheus-k8s\",namespace=\"kubesphere-monitoring-system\"} == 1)\n" err="query timed out in query execution"
      ts=2022-09-19T07:16:48.128Z caller=manager.go:610 level=warn component="rule manager" group=kubernetes-apps msg="Evaluating rule failed" rule="alert: KubePodCrashLooping\nexpr: max_over_time(kube_pod_container_status_waiting_reason{job=\"kube-state-metrics\",reason=\"CrashLoopBackOff\"}[5m])\n  >= 1\nfor: 15m\nlabels:\n  severity: warning\nannotations:\n  description: 'Pod {{ $labels.namespace }}/{{ $labels.pod }} ({{ $labels.container\n    }}) is in waiting state (reason: \"CrashLoopBackOff\").'\n  runbook_url: https://runbooks.prometheus-operator.dev/runbooks/kubernetes/kubepodcrashlooping\n  summary: Pod is crash looping.\n" err="query timed out in query execution"
      ts=2022-09-19T07:16:48.129Z caller=manager.go:610 level=warn component="rule manager" group=node.rules msg="Evaluating rule failed" rule="record: node_cpu_used_seconds_total\nexpr: sum by(cpu, instance, job, namespace, pod) (node_cpu_seconds_total{job=\"node-exporter\",mode=~\"user|nice|system|iowait|irq|softirq\"})\n" err="query timed out in query execution"
      ts=2022-09-19T07:16:48.129Z caller=manager.go:610 level=warn component="rule manager" group=kube-apiserver-burnrate.rules msg="Evaluating rule failed" rule="record: apiserver_request:burnrate1d\nexpr: ((sum by(cluster) (rate(apiserver_request_duration_seconds_count{job=\"apiserver\",verb=~\"LIST|GET\"}[1d]))\n  - ((sum by(cluster) (rate(apiserver_request_duration_seconds_bucket{job=\"apiserver\",le=\"1\",scope=~\"resource|\",verb=~\"LIST|GET\"}[1d]))\n  or vector(0)) + sum by(cluster) (rate(apiserver_request_duration_seconds_bucket{job=\"apiserver\",le=\"5\",scope=\"namespace\",verb=~\"LIST|GET\"}[1d]))\n  + sum by(cluster) (rate(apiserver_request_duration_seconds_bucket{job=\"apiserver\",le=\"30\",scope=\"cluster\",verb=~\"LIST|GET\"}[1d]))))\n  + sum by(cluster) (rate(apiserver_request_total{code=~\"5..\",job=\"apiserver\",verb=~\"LIST|GET\"}[1d])))\n  / sum by(cluster) (rate(apiserver_request_total{job=\"apiserver\",verb=~\"LIST|GET\"}[1d]))\nlabels:\n  verb: read\n" err="query timed out in expression evaluation"
      ts=2022-09-19T07:16:48.130Z caller=klog.go:116 level=error component=k8s_client_runtime func=ErrorDepth msg="pkg/mod/k8s.io/client-go@v0.22.7/tools/cache/reflector.go:167: Failed to watch *v1.Service: Get \"https://10.10.0.1:443/api/v1/namespaces/default/services?allowWatchBookmarks=true&resourceVersion=6729612&timeout=8m41s&timeoutSeconds=521&watch=true\": context canceled"
      ts=2022-09-19T07:16:48.130Z caller=main.go:809 level=info msg="Notify discovery manager stopped"
      ts=2022-09-19T07:16:48.635Z caller=manager.go:946 level=info component="rule manager" msg="Stopping rule manager..."
      ts=2022-09-19T07:16:48.635Z caller=main.go:829 level=info msg="Scrape manager stopped"
      ts=2022-09-19T07:16:48.692Z caller=manager.go:956 level=info component="rule manager" msg="Rule manager stopped"
      ts=2022-09-19T07:17:20.371Z caller=head.go:1009 level=info component=tsdb msg="WAL checkpoint complete" first=0 last=5 duration=3m13.858733378s
      ts=2022-09-19T07:17:20.408Z caller=notifier.go:600 level=info component=notifier msg="Stopping notification manager..."
      ts=2022-09-19T07:17:20.408Z caller=main.go:1068 level=info msg="Notifier manager stopped"
      ts=2022-09-19T07:17:20.408Z caller=main.go:1080 level=info msg="See you next time!"

      But the resource limit of prometheus-k8s-0 is 4 CPUs and 16 GiB of memory, so it shouldn't be possible to exceed it.
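
      One way to double-check whether the limit was actually hit is to look at the container's last termination state; a memory kill would typically show up there as OOMKilled. A rough sketch (same pod/namespace names as above):

        kubectl -n kubesphere-monitoring-system get pod prometheus-k8s-0 \
          -o jsonpath='{.status.containerStatuses[?(@.name=="prometheus")].lastState.terminated}'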

        koalawangyang
        "Received SIGTERM, exiting gracefully…" only means that a SIGTERM signal was received; the signal may come from an OOM kill, a failed liveness probe, or anything else that actively sends it.

        You can narrow the cause down by elimination, combined with the Kubernetes events. For example, watch the memory consumption of the prometheus-k8s-0 Pod to determine whether it is a resource problem, and correlate the Kubernetes events and the kubelet logs from the time of the restart to pinpoint the cause.
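
        A rough sketch of those checks (assuming metrics-server is installed so kubectl top works; names as above):

          # live CPU/memory usage of the pod, per container
          kubectl -n kubesphere-monitoring-system top pod prometheus-k8s-0 --containers
          # events related to this pod: OOM kills, probe failures, etc.
          kubectl -n kubesphere-monitoring-system get events \
            --sort-by=.lastTimestamp --field-selector involvedObject.name=prometheus-k8s-0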

          frezes Thanks for the reply. The ES service has been fairly stable today, so I pulled more information out of ES:

          First, the Prometheus restarts are caused by a failed liveness probe.

          The probe checks this address:

          Readiness probe failed: Get "http://172.31.88.195:9090/-/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers)

          I exec'd into the Pod and tested directly: port 8080 is reachable, but 9090 really is not.
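
          The in-Pod test looked roughly like this (a sketch, assuming the busybox wget shipped in the Prometheus image); both requests time out, which matches the probe failure:

            kubectl -n kubesphere-monitoring-system exec prometheus-k8s-0 -c prometheus -- \
              wget -qO- -T 3 http://localhost:9090/-/ready
            kubectl -n kubesphere-monitoring-system exec prometheus-k8s-0 -c prometheus -- \
              wget -qO- -T 3 http://localhost:9090/-/healthy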

          ts=2022-09-20T05:52:37.901Z caller=query_logger.go:79 level=info component=activeQueryTracker msg="These queries didn't finish in prometheus' last run:" queries="[{\"query\":\"round(sum by (namespace, pod) (irate(container_cpu_usage_seconds_total{job=\\\"kubelet\\\", pod!=\\\"\\\", image!=\\\"\\\"}[5m])) * on (namespace, pod) group_left(owner_kind, owner_name) kube_pod_owner{} * on (namespace, pod) group_left(node) kube_pod_info{pod=~\\\"prometheus-k8s-0$\\\", namespace=\\\"kubesphere-monitoring-system\\\"}, 0.001)\",\"timestamp_sec\":1663653153}]"
           ts=2022-09-20T05:52:39.368Z caller=web.go:540 level=info component=web msg="Start listening for connections" address=0.0.0.0:9090
           ts=2022-09-20T05:52:39.368Z caller=main.go:937 level=info msg="Starting TSDB ..."

          Even though the log says 9090 is already listening, in practice it is unreachable.

          At this point I'm stuck.

          Why would port 9090 be unreachable?

            koalawangyang

            Check the storage you are using. Prometheus itself is robust enough; a port that cannot be served is most likely caused by some external condition, such as the volume being full, excessive write latency on the volume, or something similar.
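
            For example, check free space and run a crude synchronous-write test on the data volume (a sketch; the operator mounts the TSDB at /prometheus by default, and busybox df/dd are assumed to be available in the image):

              # free space and usage on the Prometheus data volume
              kubectl -n kubesphere-monitoring-system exec prometheus-k8s-0 -c prometheus -- df -h /prometheus
              # conv=fsync forces the data onto the volume before dd exits; on healthy storage this returns almost instantly
              kubectl -n kubesphere-monitoring-system exec prometheus-k8s-0 -c prometheus -- \
                dd if=/dev/zero of=/prometheus/dd-latency-test bs=1k count=500 conv=fsync
              # remove the test file afterwards
              kubectl -n kubesphere-monitoring-system exec prometheus-k8s-0 -c prometheus -- rm /prometheus/dd-latency-test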

              frezes Thanks for the reply.

              You're right. I also suspect this is related to the AWS EFS storage; it feels extremely slow and is probably behind these problems. I'm trying to switch to a different storage class now. Thanks a lot.

              frezes

              I switched the built-in KS ES configuration to an external ES on AWS. The monitoring on the ES side shows writes coming in, but when I query logs from the KS console, it reports the error below.

              I've tried configuring ES with both https and http; neither works. What could the problem be?
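
              In case it helps narrow things down, the external ES endpoint can be tested from inside the cluster with a throwaway curl pod (the host and credentials below are placeholders):

                kubectl run es-check --rm -it --restart=Never --image=curlimages/curl -- \
                  curl -sk -u '<user>:<password>' 'https://<external-es-host>:9200/_cluster/health?pretty'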