Return 1 or 0 instead of -1 when detecting an invalid offset in Kafka #2621
Conversation
/run-e2e kafka*
Hi @RamCohen, only people with write permission can trigger this.
/run-e2e kafka*
Does this make sense @kedacore/keda-core-contributors?
Looking good, thanks! Could you please open documentation PR?
@bpinske @PaulLiang1 PTAL
Looks good.
I wonder whether we can extend the e2e test to cover this scenario? I think that creating a topic without any messages should cause the invalid offset problem, right?
@RamCohen do you think you can contribute this?
I'll have a look, but I'm not sure where we have landed with the proposed solution vis-a-vis having a configuration parameter for returning either 0 or 1, and what its default would be.
Thanks for this pull request! We are having problems in production due to this issue :) Do we have an estimate for when the new version will be released?
Hi @rrmcclymont,
Thanks @JorTurFer, glad to see that we will include this fix in the following release :)
Force-pushed b1042f0 to 35b592a
Added tests
/run-e2e kafka*
/run-e2e kafka*
@RamCohen could you please open a docs PR for this?
LGTM,
@RamCohen so the last bit is the Changelog.
When getting the consumer offset of a Kafka topic for which no offset was committed yet, the scaler returns -1 instead of 0, which causes scaling to the maximum number of replicas. Also fixed some typos and used interim variables for error strings. Fixes kedacore#2612 Signed-off-by: Ram Cohen <ram.cohen@gmail.com>
…ffset Signed-off-by: Ram Cohen <ram.cohen@gmail.com>
Signed-off-by: Ram Cohen <ram.cohen@gmail.com>
Signed-off-by: Ram Cohen <ram.cohen@gmail.com>
Update to Strimzi operator v0.23.0 and Kafka 2.6.0 in order to work properly on Kubernetes 1.21 and up, due to the deprecation of the beta CRD API. Also refactor common deploy and status checks to use internal methods. Signed-off-by: Ram Cohen <ram.cohen@gmail.com>
Signed-off-by: Ram Cohen <ram.cohen@gmail.com>
Signed-off-by: Ram Cohen <ram.cohen@gmail.com>
Signed-off-by: Ram Cohen <ram.cohen@gmail.com>
Force-pushed 34713b8 to e9b8e01
LGTM, thanks a lot @RamCohen!
/run-e2e kafka*
@zroubalik What is the expected behaviour now? We want to scale consumers to 0 when there are no messages in the topic and scale consumers back up if there are any new messages. Apologies if I did not understand or if I am missing something here. Thanks.
@akshay201 We faced the same problem as you. I can share the main approaches we considered. I'm also curious to hear from others on how they've worked around this.
During exploration we wondered if it was possible to manually commit offsets from the consumer's Kafka client, but I believe the brokers rejected the commit since it was trivial (committing offset 0 for all partitions). We never really determined whether this was a product of the Kafka client we were using, a broker setting, or totally unavoidable. So, that could be something quick to try as well.
@pnorth1 It seems like the issue is resolved, judging from the documentation of the latest version. Did you try the new version? The update is there from version 2.x+. If anyone else tried the same, I would like to hear whether it worked.
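For anyone trying this out: the behavior is controlled by the scaleToZeroOnInvalidOffset parameter that this PR introduces (see the PR description). A hypothetical ScaledObject trigger configuration using it might look like the sketch below; the deployment, broker address, group, and topic names are placeholders, not values from this thread.

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: kafka-scaledobject
spec:
  scaleTargetRef:
    name: my-consumer          # hypothetical deployment name
  triggers:
    - type: kafka
      metadata:
        bootstrapServers: kafka:9092   # placeholder broker address
        consumerGroup: my-group        # placeholder consumer group
        topic: my-topic                # placeholder topic
        lagThreshold: "5"
        # When no offset has been committed yet, treat the lag as 0
        # (scale to zero) instead of the default 1 (keep one replica).
        scaleToZeroOnInvalidOffset: "true"
```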
When getting the consumer offset of a Kafka topic for which no offset
was committed yet, the scaler returns with -1 instead of 0, which
causes to scale to the maximum number of replicas.
This fix changes the behavior to return either '1' (the default) or '0' in that case, depending on a new scaler parameter scaleToZeroOnInvalidOffset, which defaults to 'false'.
Also fixed some typos and used interim variables for error strings.
Fixes #2612 #2033
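The fixed lag calculation described above can be sketched as follows. This is a simplified illustration, not KEDA's actual scaler code: the function name, signature, and the invalidOffset sentinel are assumptions made for the example.

```go
package main

import "fmt"

// invalidOffset mirrors the sentinel value a broker reports when a
// consumer group has not yet committed an offset for a partition.
const invalidOffset int64 = -1

// getLagForPartition sketches the behavior after the fix: when no offset
// was committed yet, report a lag of 1 by default (so one consumer is
// started), or 0 when scaleToZeroOnInvalidOffset is set, instead of the
// old -1 that inflated the lag and scaled to the maximum replica count.
func getLagForPartition(committedOffset, latestOffset int64, scaleToZeroOnInvalidOffset bool) int64 {
	if committedOffset == invalidOffset {
		if scaleToZeroOnInvalidOffset {
			return 0
		}
		return 1
	}
	return latestOffset - committedOffset
}

func main() {
	fmt.Println(getLagForPartition(invalidOffset, 100, false)) // default: lag of 1
	fmt.Println(getLagForPartition(invalidOffset, 100, true))  // opt-in: lag of 0
	fmt.Println(getLagForPartition(40, 100, false))            // normal case: 60
}
```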