Kafka scaler not scaling to zero when offset is not properly initialized #2033
Comments
@raffis it seems the offset is not properly committed in the topic. Please try to fix that on the Kafka side (or try a new topic) and scaling should work.
Yes, I am aware of that, but I expected it to still scale down since there is no offset yet.
Same issue here; is it better to scale to 0 in this case?
Same problem here. Has anybody solved this? In this case, if you scale the deployment to 0, the KEDA Operator will scale it back to 1.
I see; my only concern is that we might break existing behavior if we make this change. I am curious whether there could be a use case where we would want to keep the current behaviour in this scenario. @messense @grassiale @matzew @lionelvillard thoughts?
For our use cases, the lag is the only thing that really matters.
What I would expect from KEDA is:
This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 7 days if no further activity occurs. Thank you for your contributions.
@bpinske @PaulLiang1 opinion on this?
Okay, after reading through this and reminding myself how it works: there might be people who depend on the existing behaviour, but it's going to be a small group with a misconfiguration that works by accident.

If people solely depend on the Kafka lag and want to guarantee that at least 1 pod is always available, regardless of the lag being 0, then they should be setting minReplicas=1. They should not be relying on this particular quirk where 0 gets interpreted as a valid metric, preventing them from scaling to 0 even if they set minReplicas=0. I'd suggest a highlighted note in the changelog would be sufficient for this. I personally don't think this is a behaviour worth preserving when people can simply set minReplicas appropriately for what they really want.

I have similar use cases with SQS where, if there are simply no messages to process, it should just scale to zero; I basically treat it like AWS Lambda. I'd expect Kafka to have the same behaviour. @pierDipi has the right idea.
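As a concrete illustration of the minReplicas suggestion above, a minimal KEDA ScaledObject might look like the sketch below. All names (my-consumer, my-group, my-topic, the broker address) are placeholders, not taken from this issue; note that the actual ScaledObject field is spelled minReplicaCount, even though the discussion uses minReplicas.

```yaml
# Hypothetical sketch: guarantee one always-on pod regardless of Kafka lag.
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: kafka-consumer-scaler   # placeholder name
spec:
  scaleTargetRef:
    name: my-consumer           # placeholder Deployment name
  minReplicaCount: 1            # keeps 1 pod even when lag is 0
  triggers:
    - type: kafka
      metadata:
        bootstrapServers: kafka:9092   # placeholder broker address
        consumerGroup: my-group        # placeholder consumer group
        topic: my-topic                # placeholder topic
        lagThreshold: "10"
```

Setting minReplicaCount: 1 expresses "always keep one pod" explicitly, instead of relying on the accidental behaviour where an uninitialized offset prevents scale-to-zero.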
OK cool, anybody willing to implement this for the upcoming release?
This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 7 days if no further activity occurs. Thank you for your contributions.
This issue has been automatically closed due to inactivity.
I think this should not be closed, as the issue is still ongoing and affecting a big group of users, including myself :)
@gcaracuel the following PR #2621 will be included in the next release. It should fix this issue. Is there anything you are missing there?
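If the fix referenced above landed as an opt-in trigger parameter, usage might look like the sketch below. This is an assumption based on the PR discussion: the parameter name scaleToZeroOnInvalidOffset and all other names (broker address, group, topic) are illustrative and should be checked against the KEDA version you run.

```yaml
# Hypothetical sketch: opt into scale-to-zero when no offset is committed yet.
triggers:
  - type: kafka
    metadata:
      bootstrapServers: kafka:9092   # placeholder broker address
      consumerGroup: my-group        # placeholder consumer group
      topic: my-topic                # placeholder topic
      lagThreshold: "10"
      # Assumed parameter: when the consumer group has no committed offset,
      # treat the lag as zero so the workload can scale down to 0.
      scaleToZeroOnInvalidOffset: "true"
```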
Report
How is scaling to zero supposed to work? I'd like to scale to 0 if a topic has no messages at all, or if all messages have been consumed so far.
However, it always scales to 1.
Expected Behavior
Scale deployment to 0.
Actual Behavior
Scaled to 1 replica.
Steps to Reproduce the Problem
Logs from KEDA operator
KEDA Version
2.4.0
Kubernetes Version
1.18
Platform
Amazon Web Services
Scaler Details
Kafka
Anything else?
No response