Consuming very large messages from a Kafka topic
A business system pushes very large JSON payloads to Kafka: a single JSON file is about 6 MB, roughly 360,000 lines, nested four levels deep, and we need to receive it from Kafka and parse it.
During testing we had to produce this JSON to a test topic ourselves, and a string this large cannot be pasted into the console producer. So we wrote a small Java producer that reads the file and sends it to the topic. It threw no error, yet the message could not be consumed either.
In this situation a few Kafka parameters need to be configured; the relevant ones are listed below.
Consumer side : fetch.message.max.bytes
- this will determine the largest size of a message that can be fetched by the consumer. (This is the legacy consumer setting; the newer Java consumer uses fetch.max.bytes and max.partition.fetch.bytes instead, see below.)
Broker side : replica.fetch.max.bytes
- this will allow for the replicas in the brokers to send messages within the cluster and make sure the messages are replicated correctly. If this is too small, then the message will never be replicated, and therefore, the consumer will never see the message because the message will never be committed (fully replicated).
Broker side : message.max.bytes
- this is the largest size of the message that can be received by the broker from a producer.
Broker side (per topic) : max.message.bytes
- this is the largest size of the message the broker will allow to be appended to the topic. This size is validated pre-compression. (Defaults to broker’s message.max.bytes.)
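The broker-side and per-topic settings above can be applied roughly as follows. This is a sketch: the 10 MB limit is an illustrative value chosen to leave headroom over the 6 MB payload, and the topic name `big-json-topic` and broker address are placeholders.

```shell
# server.properties (broker side), illustrative values:
#   message.max.bytes=10485760
#   replica.fetch.max.bytes=10485760
# (replica.fetch.max.bytes must be >= message.max.bytes, or large
#  messages are accepted but never fully replicated/committed.)

# Per-topic override via the stock Kafka CLI:
kafka-configs.sh --bootstrap-server localhost:9092 --alter \
  --entity-type topics --entity-name big-json-topic \
  --add-config max.message.bytes=10485760
```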
Producer side : max.request.size
- increase this to allow the producer to send the larger message.
Consumer side : max.partition.fetch.bytes
- increase this to allow the consumer to receive larger messages.
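A minimal sketch of the producer configuration for the scenario above. Only the `Properties` construction is executable as-is; the actual send (which needs the `kafka-clients` dependency, a running broker, and a topic) is shown in comments. The broker address, file name, and the 10 MB limit are assumptions for illustration.

```java
import java.util.Properties;

public class LargeMessageProducerConfig {

    // Producer settings sized for a ~6 MB JSON payload.
    static Properties producerProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        // Must exceed the payload size, otherwise the client rejects the
        // record with RecordTooLargeException before it ever reaches a broker.
        props.put("max.request.size", "10485760"); // 10 MB, illustrative
        return props;
    }

    public static void main(String[] args) {
        Properties props = producerProps();
        System.out.println(props.getProperty("max.request.size"));
        // With kafka-clients on the classpath, the file would then be sent as:
        // String json = java.nio.file.Files.readString(
        //         java.nio.file.Path.of("big.json")); // placeholder file
        // try (var producer =
        //         new org.apache.kafka.clients.producer.KafkaProducer<String, String>(props)) {
        //     producer.send(new org.apache.kafka.clients.producer.ProducerRecord<>(
        //             "big-json-topic", json)).get(); // .get() surfaces send errors
        // }
    }
}
```

Note the `.get()` on the send future: without it, a `RecordTooLargeException` is swallowed by the async callback path, which matches the "no error, but nothing consumed" symptom described above.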
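And the matching consumer side, again as a configuration sketch with placeholder broker address and group id; the sizes mirror the producer and broker settings above.

```java
import java.util.Properties;

public class LargeMessageConsumerConfig {

    // Consumer settings able to fetch a ~6 MB message.
    static Properties consumerProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.put("group.id", "big-json-group");          // placeholder group
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        // Per-partition fetch ceiling; must be at least the largest message.
        props.put("max.partition.fetch.bytes", "10485760"); // 10 MB, illustrative
        // Whole-fetch-request ceiling in the newer Java consumer.
        props.put("fetch.max.bytes", "10485760");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(consumerProps().getProperty("max.partition.fetch.bytes"));
    }
}
```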