
Solving the Kafka "Failed to send messages after 3 tries" Error

Created: 2016-08-16

Recently, while using Flume to push data to Kafka, the error Kafka "Failed to send messages after 3 tries" kept appearing after the agent had been running for a while. The error output looks like this:

ERROR [SinkRunner-PollingRunner-DefaultSinkProcessor] (org.apache.flume.sink.kafka.KafkaSink.process:139)  - Failed to publish events
kafka.common.FailedToSendMessageException: Failed to send messages after 3 tries.
        at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:90)
        at kafka.producer.Producer.send(Producer.scala:76)
        at kafka.javaapi.producer.Producer.send(Producer.scala:42)
        at org.apache.flume.sink.kafka.KafkaSink.process(KafkaSink.java:129)
        at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
        at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
        at java.lang.Thread.run(Thread.java:744)
ERROR [SinkRunner-PollingRunner-DefaultSinkProcessor] (org.apache.flume.SinkRunner$PollingRunner.run:160)  - Unable to deliver event. Exception follows.
org.apache.flume.EventDeliveryException: Failed to publish events
        at org.apache.flume.sink.kafka.KafkaSink.process(KafkaSink.java:150)
        at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
        at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
        at java.lang.Thread.run(Thread.java:744)
Caused by: kafka.common.FailedToSendMessageException: Failed to send messages after 3 tries.
        at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:90)
        at kafka.producer.Producer.send(Producer.scala:76)
        at kafka.javaapi.producer.Producer.send(Producer.scala:42)
        at org.apache.flume.sink.kafka.KafkaSink.process(KafkaSink.java:129)
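
For context, the events were being delivered with Flume's stock KafkaSink. A typical Flume 1.6-era sink definition looks roughly like the sketch below; the agent/sink/channel names a1/k1/c1, the topic, and the broker list are placeholders, not taken from this setup:

        a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
        a1.sinks.k1.topic = flume-events
        a1.sinks.k1.brokerList = kafka-broker:9092
        # acknowledgement level for produce requests (1 = leader only)
        a1.sinks.k1.requiredAcks = 1
        # how many events to batch into one produce request
        a1.sinks.k1.batchSize = 100
        a1.sinks.k1.channel = c1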

Looking further through the logs turned up the following warning:

WARN  [SinkRunner-PollingRunner-DefaultSinkProcessor] (kafka.utils.Logging$class.warn:83)  - Produce request with correlation id 107141954 failed due to [log-error,17]: kafka.common.MessageSizeTooLargeException

This MessageSizeTooLargeException is what actually prevented the data from being sent: a message was larger than the broker was willing to accept. Adjusting the Kafka size limits solves the problem. The relevant settings are:

- Consumer side: fetch.message.max.bytes - this determines the largest size of a message that can be fetched by the consumer.
- Broker side: replica.fetch.max.bytes - this allows the replicas in the brokers to send messages within the cluster and makes sure the messages are replicated correctly. If this is too small, the message will never be replicated, and therefore the consumer will never see it, because the message will never be committed (fully replicated).
- Broker side: message.max.bytes - this is the largest size of a message that the broker can receive from a producer.
- Broker side (per topic): max.message.bytes - this is the largest size of a message the broker will allow to be appended to the topic. This size is validated pre-compression. (Defaults to the broker's message.max.bytes.)
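
As a minimal sketch of the fix, assuming a 10 MB cap is enough for the oversized messages (the value, the topic name, and the ZooKeeper address below are illustrative, not from the original setup):

        # server.properties on every broker: raise the per-message cap to 10 MB
        message.max.bytes=10485760
        # must be >= message.max.bytes, otherwise large messages are never
        # fully replicated and thus never committed
        replica.fetch.max.bytes=10485760

        # consumer configuration: must be >= message.max.bytes,
        # otherwise the consumer cannot fetch the large messages
        fetch.message.max.bytes=10485760

With the Kafka 0.8-era tooling that matches the stack traces above, a single topic's limit can also be raised without changing the broker-wide default, along these lines:

        kafka-topics.sh --zookeeper localhost:2181 --alter --topic my-topic \
            --config max.message.bytes=10485760

Note that in this Kafka generation, broker-side changes in server.properties only take effect after a broker restart.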
 
I had looked through a lot of material before finding this; some of it suggested changing the hostname or the IP, which did not apply here. The takeaway: read the logs carefully and analyze the specific problem at hand.
 

