
Troubleshooting filebeat failing to write data to a kafka topic

Source: https://chegva.com/5953.html


The big data team needed to migrate historical log files that had been written to disk on two CVM instances over to the big data cluster, a little over 600 GB in total. Since the data volume wasn't large, the plan was simply to have filebeat ship the logs into kafka for them to consume. Along the way I hit a problem where the logs could not be written to the kafka topic, even though test data generated by a script on the same machine could be written to kafka through filebeat just fine, so I'm noting it down here. The errors looked like this:

Feb 28 16:48:57 chegva_centos filebeat[26009]: 2024-02-28T16:48:57.355+0800        DEBUG        [input]        log/input.go:222        Start next scan        {"input_id": "9d00bad2-0141-4f38-ab9f-39b9dd20e536"}
Feb 28 16:48:57 chegva_centos filebeat[26009]: 2024-02-28T16:48:57.355+0800        DEBUG        [input]        log/input.go:472        Check file for harvesting: /root/chegva.com.2024022816-16.log        {"input_id": "9d00bad2-0141-4f38-ab9f-39b9dd20e536"}
Feb 28 16:48:57 chegva_centos filebeat[26009]: 2024-02-28T16:48:57.355+0800        DEBUG        [input]        log/input.go:570        Update existing file for harvesting: /root/chegva.com.2024022816-16.log, offset: 736        {"input_id": "9d00bad2-0141-4f38-ab9f-39b9dd20e536", "source": "/root/chegva.com.2024022816-16.log", "state_id": "native::475592-64769", "finished": false, "os_id": "475592-64769", "old_source": "/root/chegva.com.2024022816-16.log", "old_finished": false, "old_os_id": "475592-64769"}
Feb 28 16:48:57 chegva_centos filebeat[26009]: 2024-02-28T16:48:57.355+0800        DEBUG        [input]        log/input.go:623        Harvester for file is still running: /root/chegva.com.2024022816-16.log        {"input_id": "9d00bad2-0141-4f38-ab9f-39b9dd20e536", "source": "/root/chegva.com.2024022816-16.log", "state_id": "native::475592-64769", "finished": false, "os_id": "475592-64769", "old_source": "/root/chegva.com.2024022816-16.log", "old_finished": false, "old_os_id": "475592-64769"}
Feb 28 16:48:57 chegva_centos filebeat[26009]: 2024-02-28T16:48:57.355+0800        DEBUG        [input]        log/input.go:286        input states cleaned up. Before: 1, After: 1, Pending: 0        {"input_id": "9d00bad2-0141-4f38-ab9f-39b9dd20e536"}
Feb 28 16:49:01 chegva_centos filebeat[26009]: 2024-02-28T16:49:01.396+0800        DEBUG        [kafka]        kafka/client.go:371        finished kafka batch
Feb 28 16:49:01 chegva_centos filebeat[26009]: 2024-02-28T16:49:01.396+0800        DEBUG        [kafka]        kafka/client.go:385        Kafka publish failed with: kafka: client has run out of available brokers to talk to (Is your cluster reachable?)
Feb 28 16:49:01 chegva_centos filebeat[26009]: 2024-02-28T16:49:01.396+0800        INFO        [publisher]        pipeline/retry.go:219        retryer: send unwait signal to consumer
Feb 28 16:49:01 chegva_centos filebeat[26009]: 2024-02-28T16:49:01.396+0800        INFO        [publisher]        pipeline/retry.go:223          done
Feb 28 16:49:02 chegva_centos filebeat[26009]: 2024-02-28T16:49:02.151+0800        ERROR        [kafka]        kafka/client.go:317        Kafka (topic=TOPIC_TEST_LOG): kafka: client has run out of available brokers to talk to (Is your cluster reachable?)
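
For reference, debug output like the above can be turned on in filebeat.yml. A minimal sketch, assuming the selector names simply match the bracketed component tags ([input], [kafka], [publisher]) seen in the log lines:

logging.level: debug
logging.selectors: ["input", "kafka", "publisher"]   # limit debug output to these components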

1. Dropping event: no topic could be selected
This error means filebeat could not resolve a kafka topic for the event; fixing the topic setting in the filebeat config file is all that's needed, as sketched below.
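
A minimal sketch of how the topic is selected in output.kafka; the broker address is a placeholder, and the commented-out dynamic topic line is only there to illustrate how an event can end up with no resolvable topic:

output.kafka:
  hosts: ["10.0.0.1:9092"]     # placeholder broker address
  topic: "TOPIC_TEST_LOG"      # static topic: every event goes here
  # A format-string topic such as the one below is resolved per event; if the
  # referenced field is missing and there is no fallback, the event is dropped
  # with "no topic could be selected":
  # topic: '%{[fields.log_topic]}'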

2. kafka/client.go:317  Kafka (topic=TOPIC_TEST_LOG): kafka: client has run out of available brokers to talk to (Is your cluster reachable?)
At first I assumed this was a version mismatch between filebeat and kafka, so I deliberately upgraded filebeat from 7.14 to 7.17, but the error persisted.
It was finally resolved by setting the kafka version in output.kafka: version: 0.10.2.1
Tencent Cloud kafka defaults to version 1.1.1 when no version is specified; in other environments I had written to kafka just fine without setting version in filebeat, but this instance runs a kafka version lower than 1.1.1, which is what tripped me up.
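
A minimal sketch of the corrected output.kafka section; the broker address is a placeholder, while the topic name and version come from this case:

output.kafka:
  hosts: ["10.0.0.1:9092"]   # placeholder broker address
  topic: "TOPIC_TEST_LOG"
  version: "0.10.2.1"        # tell filebeat the broker's actual kafka version
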
Official documentation: https://www.elastic.co/guide/en/beats/filebeat/7.17/kafka-output.html#_version_3
version
Kafka version filebeat is assumed to run against. Defaults to 1.0.0.
Valid values are all kafka releases in between 0.8.2.0 and 2.0.0.
See Compatibility for information on supported versions.
