Getting the error: error when patching "obj.yaml": Timeout: request did not complete within requested timeout - context deadline exceeded #5700
Comments
@ghostx31 Have you figured out what might be causing this issue? We observe similar behavior and, to be honest, it is not clear to me which component throws the timeout error.
Hello,

> Removing the SO, and adding it again solves the issue, but it is not convenient to delete and add when we want to make a change 🙁
This is the part that I don't understand. I am assuming that the timeout is returned by the Kubernetes API server, but what is the root cause? Is it the API server itself, or something on the KEDA side?
Any clues? 🙂
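One way to narrow down which component returns the timeout (a sketch, not from the thread; `so.yaml` stands in for the real manifest): kubectl's `-v` verbosity flag prints every HTTP round-trip it makes, so the exact request that stalls, and the server's response body, become visible.

```sh
# Print the API requests kubectl issues; at -v=8 the request URLs,
# response codes, and response bodies are logged, so the PATCH call
# that hits "context deadline exceeded" can be identified directly.
kubectl apply -f so.yaml -v=8
```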
Yeah, it's not a solution at all if you have to delete it all the time. After removing it, are you still unable to modify it? I mean, you deleted it and it worked, so can you update it now, or still not?
Even after deleting and adding it again, I am not able to update it. The same error is returned 🙂
Hello @AleksanderBrzozowski Deleting the SO and then re-syncing it from Argo seems to solve it for us, but this is a bit of a hassle and not really a solution, since we need to delete and re-sync it every time we need to make a change.
@ghostx31 Yeah, so we have the same situation and are trying to find the root cause of this. Any clues as to what might be causing it? 🙂
Could you share the ScaledObject that produces conflicts?
Yeah, here it is:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: my-service
  namespace: my-namespace
spec:
  maxReplicaCount: 60
  minReplicaCount: 2
  pollingInterval: 10
  scaleTargetRef:
    name: my-service
  triggers:
    - metadata:
        metricName: RPS
        query: sum(rate(istio_requests_total{destination_workload_namespace="my-namespace",destination_workload="my-service",reporter="destination"}[1m]))
        serverAddress: http://prometheus-svc.prometheus-ns:9090
        threshold: "500"
      type: prometheus
    - metadata:
        metricName: Latency
        metricType: Value
        query: histogram_quantile(0.99, sum(rate(istio_request_duration_milliseconds_bucket{kubernetes_namespace="my-namespace",app="my-service",reporter="destination"}[1m])) by (le))
        serverAddress: http://prometheus-svc.prometheus-ns:9090
        threshold: "50"
      type: prometheus
```
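A quick way to exercise the same failing path without persisting anything (a sketch, not from the thread; it assumes the manifest above is saved as `so.yaml`, a hypothetical filename): a server-side dry run sends the object through the API server, including any admission webhooks, and then discards it.

```sh
# Full server-side validation (admission webhooks included), nothing persisted.
# If this also times out, the failure is in the admission path rather than
# in anything that happens after the object is stored.
kubectl apply --dry-run=server -f so.yaml
```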
Sorry for the delay, I've been quite busy these weeks. Returning to your case, could you be having any issue with the webhooks? Thinking about this: the control plane calls all the admission webhooks registered in the cluster (if they are registered for that resource). KEDA has its own admission webhook for validating the ScaledObject; do you see any error in it? You can try disabling the admission webhook temporarily by just removing the `ValidatingWebhookConfiguration`.
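A minimal sketch of that workaround, assuming a standard KEDA install where the webhook is registered cluster-wide (the resource name `keda-admission` is typical for the Helm chart but may differ in your install):

```sh
# Find KEDA's webhook registration (the name varies by install method).
kubectl get validatingwebhookconfigurations | grep -i keda

# Back it up, delete it temporarily, and re-apply the backup to restore it.
kubectl get validatingwebhookconfiguration keda-admission -o yaml > keda-webhook-backup.yaml
kubectl delete validatingwebhookconfiguration keda-admission
kubectl apply -f so.yaml                  # retry the patch with the webhook disabled
kubectl apply -f keda-webhook-backup.yaml # restore the webhook afterwards
```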
No worries 🙂
Yeah, we are aware of the webhook; we should try disabling it to see if it helps. What does the webhook do under the hood?
Basically, it makes a few calls to the control plane to get some extra info, like other HPAs and the workload manifest, in order to validate the ScaledObject (preventing collisions between HPAs, wrong CPU/memory config, etc.).
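Since the API server waits synchronously on those webhook calls, the registration's `timeoutSeconds` and `failurePolicy` determine how a slow webhook surfaces to clients; inspecting them is a reasonable next step (a sketch, with the same assumed resource name as above):

```sh
# Show each webhook's name, its timeout budget, and what happens when it fails.
kubectl get validatingwebhookconfiguration keda-admission \
  -o jsonpath='{range .webhooks[*]}{.name}{"\t"}{.timeoutSeconds}{"\t"}{.failurePolicy}{"\n"}{end}'
```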
This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 7 days if no further activity occurs. Thank you for your contributions.
This issue has been automatically closed due to inactivity. |
@JorTurFer can we please re-open this one?
I experienced this too, but it was intermittent, basically 1 out of many deploys. It is annoying when it happens.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 7 days if no further activity occurs. Thank you for your contributions.
Report
We manage our deployments using ArgoCD. We recently upgraded KEDA from 2.6 to 2.11.2.
We have this issue on only two specific apps after the upgrade. The exact error message is:

`error when patching "obj.yaml": Timeout: request did not complete within requested timeout - context deadline exceeded`

This issue occurs both when syncing the ScaledObject from ArgoCD and when applying it with kubectl directly. We have 44 ScaledObjects in this application, of which ~40 are synced. We get this error when trying to sync this specific application.
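One low-risk check (a sketch, not something the reporter tried; `obj.yaml` is the manifest named in the error): the "requested timeout" in the message is the per-request timeout the client passes to the API server, and kubectl exposes it via the global `--request-timeout` flag. If a larger budget makes the patch succeed, something in the request path (e.g. an admission webhook) is slow rather than hung.

```sh
# Retry the failing patch with a larger per-request timeout.
kubectl apply -f obj.yaml --request-timeout=2m
```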
Our environment:
KEDA version: v2.11.2
GKE version: 1.25.16-gke.1460000
I found another issue that resembles this one, but felt we should open a new issue due to the difference in environment: #5487
Expected Behavior
The ScaledObject should sync without issues.
Actual Behavior
The ScaledObject fails to sync, and the patch times out with the error above.
Steps to Reproduce the Problem
KEDA Version
2.11.2
Kubernetes Version
< 1.26
Platform
Google Cloud
Scaler Details
External scaler - prometheus