Too many OFFSET_OUT_OF_RANGE errors #149
I've redacted Buffer data, and here's a debug trace. It is also available as a gist: https://gist.github.com/paambaati/7ee798fc2af5c8bc397299bdb07b8b33#file-kafkajs-log
Sure, let's debug this.

We have been using KafkaJS with topics of around 50 partitions; this is the first time I've seen a 400-partition topic, but it should work just fine. I've set up a local environment with 5 nodes and 1 topic with 400 partitions, and I can consume just fine. Do you know if this is a compacted topic? I think it is; can you confirm?
It is, yes.
It is not, actually; the topic uses the default cleanup policy.
I did, yes. I still see the same issue.

@tulios Also, a quick question about this: if the topic is pretty huge in size (say ~4 GB, with summed offsets around 1.6B), is that a possible bottleneck? What exactly happens when applying the default offset? I also tried the same thing with …
@paambaati can you post the value of the fetched offsets for your group? Can you also try two things: the code in master, and a different group id?

Edit:

```javascript
await admin.fetchOffsets({ groupId, topic })
```
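When the group has no committed offset for a partition, `fetchOffsets` reports that partition's offset as `-1`. A quick way to spot such partitions, sketched against a hardcoded response shaped like KafkaJS's (the values below are invented for illustration, not output from this cluster):

```javascript
// Hypothetical response in the shape fetchOffsets returns: an array of
// { partition, offset } where offset is a string, '-1' when the group
// has no committed offset for that partition.
const fetched = [
  { partition: 0, offset: '1024' },
  { partition: 1, offset: '-1' },
  { partition: 2, offset: '2048' },
]

// Partitions the consumer would have to resolve via the default offset
const uncommitted = fetched
  .filter(({ offset }) => offset === '-1')
  .map(({ partition }) => partition)

console.log(uncommitted) // [ 1 ]
```

Comparing the remaining committed offsets against the partition watermarks (e.g. from `admin.fetchTopicOffsets`) shows whether any of them fall outside the broker's retained range.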
@tulios Some more observations after more runs. Which parameters should I tune to reduce these failures?
@tulios
@tulios More info!
I don't see this problem anymore, but all reads now throw an error from the LZ4 codec.
A different group ID seems to fix the issue! The original group ID must've been stale because I was doing a lot of local test runs. Is this expected? I do have a SIGINT handler that calls …
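A graceful-shutdown handler along those lines might look like the sketch below. The consumer here is a stub standing in for a real KafkaJS consumer (which would come from `kafka.consumer({ groupId })` and be cleaned up with its async `disconnect()`); everything else is illustrative:

```javascript
// Stub standing in for a real KafkaJS consumer.
const consumer = {
  disconnected: false,
  async disconnect() {
    this.disconnected = true
  },
}

async function shutdown() {
  // Leaving the group cleanly lets the broker rebalance promptly and
  // keeps the committed offsets consistent for the next run.
  await consumer.disconnect()
  console.log('consumer disconnected:', consumer.disconnected)
}

// Run shutdown once on Ctrl-C; process.once avoids double-disconnects.
process.once('SIGINT', shutdown)

// Trigger it here for demonstration: emitting 'SIGINT' only invokes the
// listener, it does not terminate the process.
process.emit('SIGINT')
```

A handler like this only helps if the process actually receives SIGINT; hard kills or crashes leave the group to time out on the broker side instead.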
Nice, at least you can move on. You can reset the offsets of the consumer group and see if it works:

```javascript
await admin.resetOffsets({ groupId, topic }) // latest by default
// await admin.resetOffsets({ groupId, topic, earliest: true })
```

On the LZ4 error: it's probably not handling the new RecordBatch format of Kafka v0.11. I can take a look.
@tulios Thanks! Do you have any guesses about why those warnings show up? My guess is there's some event emitter that is being set up inside a loop.
@paambaati there are a couple of fixes in master for this issue; you can use master while disabling the v0.11 APIs (because of the LZ4 problem):

```javascript
new Kafka({
  // ...
  allowExperimentalV011: false,
})
```

If you are producing in parallel, you can also consider using sendBatch:

```javascript
await producer.sendBatch({
  topicMessages: <TopicMessages[]>,
  acks: <Number>,
  timeout: <Number>,
  compression: <CompressionTypes>,
})
```

Feel free to re-open this issue if you need.
For reference, here is the issue about the LZ4 problem.
I have a 5-broker Kafka cluster with ~100 topics, and I'm using KafkaJS to subscribe to 2 topics (one with 400 partitions and one with 10). I'm always stuck at

> The requested offset is not within the range of offsets maintained by the server

errors. These errors occur in an endless loop, and I'm not able to consume any messages. I've also tried resetting the consumer offsets between runs, and also the fromBeginning option. I know this is very little information, but can you help me understand what circumstances can cause this error? I can provide more info about my setup and code.
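On the broker side, OFFSET_OUT_OF_RANGE means a fetch asked for an offset outside the partition's retained range (between the log start offset and the high watermark), which happens e.g. when retention or compaction has deleted data past a stale committed offset. A minimal model of how a consumer then picks a restart offset; the function name and parameter names are illustrative, and the mapping of the KafkaJS `fromBeginning` option onto "earliest" is the documented behavior when no valid committed offset exists:

```javascript
// Decide where a consumer should resume for one partition.
// committed: last committed offset, or null if the group has none.
// logStart / highWatermark: the broker's valid range for the partition.
// fromBeginning: mirrors the KafkaJS subscribe option.
function resolveStartOffset({ committed, logStart, highWatermark, fromBeginning }) {
  const valid =
    committed !== null && committed >= logStart && committed <= highWatermark
  if (valid) return committed
  // Out of range (or never committed): fall back to the default offset,
  // i.e. earliest when fromBeginning is set, latest otherwise.
  return fromBeginning ? logStart : highWatermark
}

// Stale offset 500 on a partition whose retained range is [1000, 5000]:
console.log(resolveStartOffset({ committed: 500, logStart: 1000, highWatermark: 5000, fromBeginning: true }))  // 1000
console.log(resolveStartOffset({ committed: 500, logStart: 1000, highWatermark: 5000, fromBeginning: false })) // 5000
// A committed offset inside the range is used as-is:
console.log(resolveStartOffset({ committed: 1500, logStart: 1000, highWatermark: 5000, fromBeginning: false })) // 1500
```

This is why a stale group ID can loop on this error: its committed offsets stay below the log start offset, and if the client keeps retrying the committed offset instead of resetting, every fetch is rejected again.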