500 response with CronJob in repo #868
When we put a CronJob resource in our git repo, fluxd started returning 500s to release requests for another resource, citing only `the server could not find the requested resource`.

NB this was running Kubernetes 1.7, which doesn't support CronJob; it failed to sync that resource, while otherwise succeeding.

Comments
That particular error message comes from the kubernetes client, grep tells me.
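As a minimal sketch of where that error can arise (assuming a client-go version contemporary with this issue, i.e. the pre-`context` method signatures): asking a typed clientset for a group/version the server doesn't serve produces exactly this message.

```go
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load ~/.kube/config the same way kubectl does.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Against a cluster that doesn't serve batch/v1beta1 (e.g. Kubernetes 1.7),
	// this prints: the server could not find the requested resource
	_, err = clientset.BatchV1beta1().CronJobs("default").List(metav1.ListOptions{})
	fmt.Println(err)
}
```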
On our dev (where we have the problem):
and in minikube (where I can't reproduce the problem):
The only possibly relevant difference is …
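One way to pin down the difference is to ask each cluster what it actually serves. A sketch using client-go's discovery client (`describeCluster` is a hypothetical helper, just to illustrate the comparison):

```go
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
)

// describeCluster prints the server version and whether each batch API
// version relevant to CronJob is served; run it against dev and minikube.
func describeCluster(cs kubernetes.Interface) {
	disco := cs.Discovery()

	if v, err := disco.ServerVersion(); err == nil {
		fmt.Println("server version:", v.GitVersion)
	}
	for _, gv := range []string{"batch/v2alpha1", "batch/v1beta1"} {
		_, err := disco.ServerResourcesForGroupVersion(gv)
		fmt.Printf("%s served: %v\n", gv, err == nil)
	}
}
```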
Except: https://github.com/weaveworks/flux/blob/master/cluster/kubernetes/resourcekinds.go#L225
There appear to be two distinct problems here: (1) fluxd can't handle the CronJob resource on this cluster, and (2) a failure on that one resource breaks operations on unrelated resources.
(2) is the more serious issue, by far - no problem with an individual resource should affect operations on others.
Yes and no. For some operations we want to know whether we can correctly apply a change, and that usually means looking at everything to make sure there are no duplicates, or whatever. In this case, I think there's a spot where we're over-estimating the information we need in order to proceed. It's OK to parse all the files to make sure there's a coherently-defined set of controllers (https://github.com/weaveworks/flux/blob/master/release/context.go#L68); less OK is asking for all the resources in question from kubernetes, then filtering down to the single one we care about (a few lines on).
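To illustrate the pattern (a hypothetical sketch, not the actual flux code, using the same pre-`context` client-go signatures as above): fetch only the controller named in the release instead of listing everything and filtering client-side, so an unrelated broken resource kind never enters the picture.

```go
package main

import (
	extv1beta1 "k8s.io/api/extensions/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// Over-fetching: list every Deployment, then keep the one we care about.
// Any failure anywhere in the broad query fails the whole release.
func controllerByListing(cs kubernetes.Interface, ns, name string) (*extv1beta1.Deployment, error) {
	list, err := cs.ExtensionsV1beta1().Deployments(ns).List(metav1.ListOptions{})
	if err != nil {
		return nil, err
	}
	for i := range list.Items {
		if list.Items[i].Name == name {
			return &list.Items[i], nil
		}
	}
	return nil, nil
}

// Targeted: ask only for the single resource the release actually touches.
func controllerByName(cs kubernetes.Interface, ns, name string) (*extv1beta1.Deployment, error) {
	return cs.ExtensionsV1beta1().Deployments(ns).Get(name, metav1.GetOptions{})
}
```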
#869 addresses this problem by making the release process only ask the cluster about services that are explicitly included in the release. It won't help if the problematic resource is specifically included, or implicitly included with …
As for the specific problem of dealing with CronJobs: our code uses the `batch/v1beta1` API, which the Kubernetes 1.7 server doesn't serve. I don't know how one is supposed to move between versions -- either using …
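If discovery is the route, one possibility (an assumption on my part, not necessarily what flux does) is to probe for the newer version and fall back to the older one:

```go
package main

import "k8s.io/client-go/kubernetes"

// preferredCronJobVersion returns the first batch group/version the server
// actually serves, preferring the newer one.
func preferredCronJobVersion(cs kubernetes.Interface) (string, bool) {
	for _, gv := range []string{"batch/v1beta1", "batch/v2alpha1"} {
		if _, err := cs.Discovery().ServerResourcesForGroupVersion(gv); err == nil {
			return gv, true
		}
	}
	return "", false
}
```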
This bit is addressed by #875 (yes, it does respond with resources given either of the two apiVersions).
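The shape of such a fix might look like this (a hypothetical sketch, not the actual change in #875): register the same handling under both apiVersions, so a manifest parses the same way whichever one it declares.

```go
package main

// handler stands in for whatever behaviour flux attaches to a resource kind.
type handler struct{ kind string }

// cronJob is shared by both apiVersions, so a manifest declaring either
// batch/v2alpha1 or batch/v1beta1 resolves to the same handling.
var cronJob = handler{kind: "CronJob"}

var kindHandlers = map[string]handler{
	"batch/v2alpha1/CronJob": cronJob,
	"batch/v1beta1/CronJob":  cronJob,
}
```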