[2023-02-08 19:56:35,993] WARN [SocketServer listenerType=ZK_BROKER, nodeId=0] Unexpected error from /10.138.0.12 (channelId=10.32.2.15:9092-10.138.0.12:37806-76); closing connection (org.apache.kafka.common.network.Selector)
org.apache.kafka.common.network.InvalidReceiveException: Invalid receive (size = 1347375956 larger than 104857600)
at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:105)
at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452)
at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402)
at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576)
at org.apache.kafka.common.network.Selector.poll(Selector.java:481)
at kafka.network.Processor.poll(SocketServer.scala:1055)
at kafka.network.Processor.run(SocketServer.scala:959)
at java.base/java.lang.Thread.run(Thread.java:829)
This is what I received when I connected prometheus-kafka-adapter to Prometheus. I modified the max receive size of Kafka to be larger than 1347375956, but I'm still getting the same error. Any advice is welcome!
Hi @jerryum, that's a big message. I think it has nothing to do with prometheus-kafka-adapter, but rather with the Kafka configuration itself. Could you try increasing message.max.bytes on your Kafka brokers? Is that what you changed?
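For reference, the 104857600 in the error above is the broker's socket.request.max.bytes default, while message.max.bytes caps a single record batch. A rough sketch of the broker-side overrides (server.properties; the values here are placeholders, tune them for your environment):

```properties
# Hypothetical broker overrides -- example values only.
# socket.request.max.bytes is the limit (default 104857600) that appears in the error above.
socket.request.max.bytes=209715200
# message.max.bytes caps the size of a single record batch the broker will accept.
message.max.bytes=10485760
# Keep replica.fetch.max.bytes >= message.max.bytes so followers can replicate large batches.
replica.fetch.max.bytes=10485760
```

If you raise message.max.bytes, consumers usually need fetch.max.bytes / max.partition.fetch.bytes raised to match as well.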
Yes, that's what I did... I couldn't find a solution, so I forked the repo and modified the adapter to write to two different Kafka topics, split by exporter, to reduce the message size. The pod metrics are too numerous, so I separated them: one topic for the pod metrics and another topic for the rest of the metrics.
I faced a similar issue with Spark writes. I believe you may need to adjust the producer properties, specifically max.request.size. Please take a look at this resource: How to Send Large Messages in Apache Kafka.
You might need to change the producer configuration in the adapter code or tweak some of its settings. I'll update you once I find the necessary changes.
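For illustration only, here is a minimal Java-client sketch of the max.request.size override mentioned above (the adapter itself is not a Java client, so this is not its actual code; the broker address and topic name are placeholders):

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;

public class LargeMessageProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Placeholder broker address.
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        // max.request.size bounds a single produce request (default 1 MB).
        // The broker's message.max.bytes must allow at least this much,
        // or the broker will reject the batch as too large.
        props.put(ProducerConfig.MAX_REQUEST_SIZE_CONFIG, 10 * 1024 * 1024);

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Placeholder topic and payload.
            producer.send(new ProducerRecord<>("prometheus_metrics", "large-payload"));
        }
    }
}
```

Raising the producer limit alone isn't enough; the broker-side limits discussed earlier have to allow the same size.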