KEDA integrate with readiness and liveness probes #237
The hope here is that if a deployment is unhealthy for any reason, autoscaling can be configured so that scaling out stops. There is likely related work needed for Azure Functions specifically to make sure it exposes liveness and readiness probe endpoints to drive this for KEDA scenarios with functions.
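For reference, a minimal sketch of the pieces involved, assuming KEDA v2 syntax and a made-up image, port, /healthz path, and Service Bus queue (none of these come from this issue). The probes are plain Kubernetes Deployment configuration; the ScaledObject today only drives replica counts from its triggers and does not look at probe status, which is the gap discussed here:

```yaml
# Hypothetical Deployment with liveness/readiness probes.
# Image name, port, and /healthz path are illustrative assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: order-processor
spec:
  replicas: 1
  selector:
    matchLabels:
      app: order-processor
  template:
    metadata:
      labels:
        app: order-processor
    spec:
      containers:
        - name: processor
          image: example.azurecr.io/order-processor:latest   # assumed image
          ports:
            - containerPort: 8080
          readinessProbe:        # pod is removed from Service endpoints when this fails
            httpGet:
              path: /healthz
              port: 8080
            periodSeconds: 10
          livenessProbe:         # container is restarted when this fails
            httpGet:
              path: /healthz
              port: 8080
            initialDelaySeconds: 15
            periodSeconds: 20
---
# The KEDA ScaledObject only references the Deployment and its trigger;
# it does not currently take probe status into account.
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: order-processor-scaler
spec:
  scaleTargetRef:
    name: order-processor
  triggers:
    - type: azure-servicebus      # example trigger; queue name and env var are assumptions
      metadata:
        queueName: orders
        connectionFromEnv: SERVICEBUS_CONNECTION
```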
@Aarthisk do you want to keep this in the Current milestone?
@asavaritayal Asavari, yes. I am investigating this right now.
Update from standup: this is a bit more work than we expected; @Aarthisk to chat with Fabio about whether (and to what extent) this is feasible. Needs discussion on whether it is a blocker. We should probably just check that this works for any container that's in a broken state, and figure out the Functions runtime in the future.
I believe the only work here is on the Azure Functions runtime to correctly expose liveness and readiness endpoints. Other deployments that already expose them should just work today. Keeping this open to track the work on that team, but it is not KEDA specific.
Is there another GitHub issue for that?
Created one here: Azure/azure-functions-host#5259
This might be an interesting feature for v2: not Functions related, but generally any deployment with liveness probes.
Bringing this back up: should we do anything here? Kubernetes will just scale the workload for us through KEDA, but it's up to the workload to implement health checks. If we start monitoring these as well, I think it might go a bit far, no?
Not sure if my use case is totally related to this issue, but I'll let you decide. I have an environment where we ingest logs from external entities and store them in a database. We have another service running that reads the logs and fires alerts based on the data in the DB. Some of the alerts are triggered by a stop in related events. If any part of the system becomes non-functional, events stop getting into the database and all alerts that look for a cessation of events trigger at that time. We have several components in the system; if any of them goes down, the whole pipeline stops. The components are an input proxy that sends data to a Kafka broker, and Logstash, which consumes the Kafka stream and sends it to Elasticsearch. We need a way to scale the alerting deployment to 0 if the liveness probes fail for any of the other deployments. Any insight appreciated :)
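Not a definitive answer, but one way to approximate this with later KEDA 2.x releases is the kubernetes-workload scaler, which scales a target based on how many pods match a label selector. A sketch, with hypothetical names (alerting, app=log-ingest-pipeline); note it reacts to pod counts, not probe status:

```yaml
# Hypothetical ScaledObject for the alerting deployment; it can scale to 0
# when no pods matching the upstream selector are present.
# Names and labels are illustrative, not from this issue.
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: alerting-scaler
spec:
  scaleTargetRef:
    name: alerting            # the deployment that fires "no events" alerts
  minReplicaCount: 0          # allow scale-to-zero when the upstream pipeline is gone
  triggers:
    - type: kubernetes-workload
      metadata:
        podSelector: 'app=log-ingest-pipeline'   # pods of the upstream components
        value: '1'
```

Caveat: a liveness failure normally restarts the container rather than removing the pod, so a workload-count trigger like this only covers cases where the upstream pods are actually gone or scaled down, not merely unhealthy.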
I think it's somewhat related: if we were plugging into the Kubernetes events for a workload, we could identify this. What do you think @zroubalik? It sounds like an interesting case, though, and a related one is scaling workloads down when a downstream service is having issues.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 7 days if no further activity occurs. Thank you for your contributions.
This issue has been automatically closed due to inactivity. |
Bringing this back up because I have a use case that might make sense here.
If a container continues consuming messages but is failing/throwing exceptions, should KEDA continue scaling these containers?