
KEDA integrate with readiness and liveness probes #237

Closed · jeffhollan opened this issue May 28, 2019 · 14 comments

Labels: P2, stale

Comments

@jeffhollan (Member) commented May 28, 2019

If a container continues consuming messages but is failing or throwing exceptions, should KEDA continue scaling these containers?

jeffhollan added this to the Current milestone May 28, 2019
@jeffhollan (Member, Author)

The hope here is that if a deployment is unhealthy for any reason, autoscaling will be configured in a way that scaling out stops. There is likely related work needed on the Azure Functions side specifically, to make sure the runtime exposes liveness and readiness probe endpoints that can drive this for KEDA scenarios with functions.
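
A minimal sketch of what that could look like on the Deployment side, assuming the container exposes hypothetical `/healthz` and `/ready` HTTP endpoints (the endpoint paths, port, and image are illustrative, not something the Functions runtime provides today):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: queue-consumer                 # illustrative name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: queue-consumer
  template:
    metadata:
      labels:
        app: queue-consumer
    spec:
      containers:
        - name: consumer
          image: example.azurecr.io/queue-consumer:latest   # placeholder image
          ports:
            - containerPort: 8080
          # Restart the container if it stops responding entirely.
          livenessProbe:
            httpGet:
              path: /healthz           # hypothetical health endpoint
              port: 8080
            initialDelaySeconds: 10
            periodSeconds: 15
          # Mark the pod not ready while it is failing / throwing exceptions.
          readinessProbe:
            httpGet:
              path: /ready             # hypothetical readiness endpoint
              port: 8080
            periodSeconds: 5
            failureThreshold: 3
```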

@asavaritayal

@Aarthisk do you want to keep this in the Current milestone?

@Aarthisk (Contributor)

@asavaritayal Asavari, yes. I am investigating this right now.

@jeffhollan (Member, Author)

Update from standup: this is a bit more work than we expected. @Aarthisk to chat with Fabio about whether this is feasible, or what is feasible. Needs discussion on whether this is a blocker.

We should probably just check that this works for any container that's in a broken state; we can figure out the Functions runtime in the future.

@jeffhollan (Member, Author)

I believe the only work here is on the Azure Functions runtime, to correctly expose liveness and readiness endpoints. Other deployments that already expose them should just work today. Keeping this open to track the work on that team, but it is not KEDA specific.
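
For reference, a rough KEDA v2-style sketch of pairing such a Deployment with a ScaledObject; the probes stay on the Deployment, KEDA only drives the replica count. The names, queue, and trigger metadata are illustrative:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: queue-consumer-scaler          # illustrative name
spec:
  scaleTargetRef:
    name: queue-consumer               # the Deployment carrying the probes
  minReplicaCount: 0
  maxReplicaCount: 10
  triggers:
    - type: azure-queue
      metadata:
        queueName: orders              # illustrative queue
        queueLength: "5"               # target messages per replica
        connectionFromEnv: AzureWebJobsStorage   # env var holding the storage connection string
```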

jeffhollan added the P2 label Nov 17, 2019
@tomkerkhove (Member)

Is there another GitHub issue for that?

@jeffhollan (Member, Author)

Created one here: Azure/azure-functions-host#5259

tomkerkhove added the integration:azure-functions label Nov 19, 2019
@zroubalik (Member)

This might be an interesting feature for v2: not Functions related, but generally for any deployment with liveness probes.

zroubalik removed the integration:azure-functions label Apr 8, 2020
@tomkerkhove (Member)

Bringing this back up: should we do anything here?

Kubernetes will just scale the workload for us by using KEDA, but it's up to the workload to implement health checks. If we start monitoring these as well, I think it might go a bit far, no?

@Oliver-Sellwood

Not sure if my use case is totally related to this issue but I'll let you decide.

I have an environment where we ingest logs from external entities and store them in a database. We have another service running that reads the logs and fires alerts based on the data in the db. Some of the alerts are triggered by a stop in related events. If any part of the system becomes non-functional then events stop getting into the database and all alerts that look for a cessation of events trigger at that time.

We have several components in the system; if any of them goes down, the whole pipeline stops. The components are an input proxy that sends data to a Kafka broker, and Logstash, which consumes the Kafka stream and sends it to Elasticsearch. We need a way to scale the alerting deployment to 0 if the liveness probes fail for any of the other deployments.

Any insight appreciated :)
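
One possible workaround (sketched below, not a built-in KEDA liveness integration) is to scale the alerting deployment on the observed availability of an upstream deployment rather than on its liveness probe directly. This assumes kube-state-metrics and a Prometheus server are available and uses KEDA's prometheus scaler; names, addresses, and the exact metadata fields (which vary by KEDA version) are illustrative:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: alerting-scaler                          # illustrative name
spec:
  scaleTargetRef:
    name: alerting                               # the alerting Deployment
  minReplicaCount: 0                             # allow scale-to-zero when upstream is down
  maxReplicaCount: 1
  triggers:
    - type: prometheus
      metadata:
        serverAddress: http://prometheus.monitoring.svc:9090   # assumed Prometheus address
        # kube-state-metrics exposes available replicas per deployment; when
        # logstash has 0 available replicas this query returns 0 and KEDA
        # deactivates (scales to zero) the alerting deployment.
        query: kube_deployment_status_replicas_available{deployment="logstash"}
        threshold: "1"
```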

@tomkerkhove (Member)

I think it's somewhat related; if we were plugging into the Kubernetes events for a workload, we could identify it. What do you think @zroubalik?

Sounds like an interesting case though, and another one is that we could scale workloads down if the downstream service is having issues.


stale bot commented Oct 13, 2021

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 7 days if no further activity occurs. Thank you for your contributions.

stale bot added the stale label Oct 13, 2021

stale bot commented Oct 20, 2021

This issue has been automatically closed due to inactivity.

stale bot closed this as completed Oct 20, 2021
preflightsiren pushed a commit to preflightsiren/keda that referenced this issue Nov 7, 2021
@kennyparsons

Bringing this back up because I have a use case that might make sense here.
I have implemented a deployment with health checks and readiness probes. Basically, I have the pod marked as not ready while it is in use, which prevents the pod from getting more than 1 job to run.
This is reflected when I do `kubectl get deployments -l app=runner` and it shows 1/1 ready. As soon as it gets a job, the pod starts returning 5xx and Kubernetes now shows 0/1 ready.
Where KEDA comes in is to always make sure there is 1 ready pod for the deployment. It would be fantastic to be able to use the readiness probe, as it's native and well documented.
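
For context, a minimal sketch of the readiness-probe pattern described above, assuming the runner exposes a hypothetical `/ready` endpoint that returns 5xx while a job is running (paths, port, and image are illustrative):

```yaml
# Container-spec fragment from the runner Deployment (illustrative).
containers:
  - name: runner
    image: example.azurecr.io/runner:latest   # placeholder image
    ports:
      - containerPort: 8080
    readinessProbe:
      httpGet:
        path: /ready            # hypothetical: returns 5xx while a job is running
        port: 8080
      periodSeconds: 2
      failureThreshold: 1       # mark the pod not ready as soon as it picks up a job
```

As far as I know, KEDA's minReplicaCount today guarantees a number of replicas rather than a number of ready replicas, so "always 1 ready pod" would be new behavior.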
