
[BUG] Cannot consume all messages if the topic has many partitions (>1000) #353

Closed
dockerzhang opened this issue Jan 26, 2021 · 0 comments · Fixed by #354

@dockerzhang
Contributor

Describe the bug
Cannot consume all messages if the topic has many partitions (>1000).

This issue may be related to #318.

To Reproduce
Steps to reproduce the behavior:

  1. Create a topic with 1000 partitions.
  2. Produce messages to the topic.
  3. Consume the messages using a consumer group.
  4. Describe the consumer group with the Kafka client and observe the error (see the sketch after this list).
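
A rough sketch of steps 1 and 4 using the Kafka `AdminClient`, assuming a broker reachable at `localhost:9092`; the topic name and group id below are placeholders, not values from this report:

```java
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

import java.util.Collections;
import java.util.Map;
import java.util.Properties;

public class ReproLag {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Placeholder address; point this at the broker's Kafka listener.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // Step 1: create a topic with 1000 partitions (replication factor 1).
            admin.createTopics(Collections.singletonList(
                    new NewTopic("many-partitions-topic", 1000, (short) 1)))
                 .all().get();

            // Steps 2 and 3 (producing and consuming with a group) are omitted here.

            // Step 4: list the group's committed offsets, which is what
            // `kafka-consumer-groups.sh --describe` reads to compute LAG.
            Map<TopicPartition, OffsetAndMetadata> committed = admin
                    .listConsumerGroupOffsets("my-group")
                    .partitionsToOffsetAndMetadata()
                    .get();
            committed.forEach((tp, om) -> System.out.println(tp + " -> " + om.offset()));
        }
    }
}
```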

Expected behavior
LAG is 0 for all partitions.

Screenshots
(two screenshots attached)

BewareMyPower pushed a commit that referenced this issue Jan 27, 2021
Fixes #353

1. `responseData` is a `LinkedHashMap`, whose implementation is not synchronized, so `responseValues` should not be processed in parallel.
2. The task scheduled via `getExecutor` may not run fast enough when the topic has many partitions.
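
A minimal sketch of the hazard in point 1, assuming the response map is filled in while iterating the partitions; the names `partitions` and `lookupOffset` are hypothetical stand-ins, not the actual KoP code:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class ParallelMapPitfall {
    public static void main(String[] args) {
        List<Integer> partitions = IntStream.range(0, 1000).boxed().collect(Collectors.toList());

        // Unsafe: LinkedHashMap is not thread-safe, so concurrent put() calls from a
        // parallel stream can corrupt its internal links or silently drop entries,
        // which is how some partitions can go missing from the response.
        Map<Integer, Long> unsafe = new LinkedHashMap<>();
        // partitions.parallelStream().forEach(p -> unsafe.put(p, lookupOffset(p))); // DON'T

        // Safe: mutate the LinkedHashMap from a single thread only.
        Map<Integer, Long> safe = new LinkedHashMap<>();
        partitions.forEach(p -> safe.put(p, lookupOffset(p)));

        System.out.println("entries collected: " + safe.size());
    }

    // Hypothetical stand-in for the per-partition work done while building the response.
    private static long lookupOffset(int partition) {
        return partition * 10L;
    }
}
```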