feat: Retry pending nodes #2385
Conversation
This looks really good and I think people will really appreciate the ability to resume under these circumstances.
Can I make one ask? The controller code is complex and critical; any change to it can introduce bugs. Could you please take a look at writing some unit tests or an e2e test?
Codecov Report

```
@@           Coverage Diff            @@
##           master     #2385   +/-   ##
=========================================
  Coverage        ?    13.11%
=========================================
  Files           ?        71
  Lines           ?     25302
  Branches        ?         0
=========================================
  Hits            ?      3319
  Misses          ?     21545
  Partials        ?       438
```

Continue to review the full report at Codecov.
test/e2e/functional_test.go (Outdated)

```go
@@ -20,6 +21,11 @@ type FunctionalSuite struct {
	fixtures.E2ESuite
}

func (s *FunctionalSuite) TearDownSuite() {
	s.E2ESuite.DeleteResources(fixtures.Label)
```
huh, should not need this, how odd
This is brilliant. I'd like @simster7 to have the opportunity to take a look before merging.
LGTM. Just one question
Please hold off on merging this for a bit; I want to take another look.
@simster7, how is it going?
My main concern is that the location where we attempt to recreate the pod doesn't record the recreation as a "retry". This could be a problem if you want to limit the number of times pod creation is retried: as it stands, the only way to limit this would be by setting a `backoff.maxDuration`, but we could also want a `retryStrategy.limit`.

I would suggest refactoring the code so that an attempt to recreate the pod counts as a full retry. This would mean taking the code out of this larger `if` and letting these lines in the `else` branch run.

I think a natural place to retry creating the pod is in the `execute{Container, Resource, Script}` function itself. You seem to attempt to do that here, which I think might be a better approach. What do you think?
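For context, the two bounding mechanisms mentioned in this comment sit side by side under a template's `retryStrategy`. A minimal sketch, assuming the field names from the Argo workflow spec (values are illustrative only):

```yaml
# Sketch only: retries can be bounded by count (limit) or by total elapsed
# time (backoff.maxDuration). The discussion above is about whether
# recreating a deleted/pending pod should also count against limit.
retryStrategy:
  limit: 3                # maximum number of retries
  backoff:
    duration: "15s"       # delay before the first retry
    factor: 2             # multiplier applied to the delay on each retry
    maxDuration: "5m"     # give up once this much time has passed
```

As the comment notes, the recreation path is not recorded as a retry, so as things stand only `backoff.maxDuration` bounds it, while `limit` applies to pods that actually ran and failed.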
@simster7, these two use cases are different: retrying a failed pod vs. retrying the submission of a pod, so this isn't related to retryStrategy.limit; the semantics are different. Suppose we had another parameter under retryStrategy to limit the number of resubmissions. Now imagine you have two pods, both of which require the full namespace resources. There would be no way to tell when the first one completes, making it impossible to limit the number of resubmissions upfront. So it's not a refactoring per se, but rather completely different functionality, and this is not what we want to have.
> These two use cases are different: retrying a failed pod vs. retrying the submission of a pod, so it isn't related to retryStrategy.limit; the semantics are different.
I see your point. I have two comments/questions on it:
1. We use the term "Error" to denote a state where a pod fails because of anything other than its own code (e.g., the pod is deleted from the cluster, the pod image can't be pulled due to network issues, etc.). I would argue that failing to schedule a pod because of resource quotas could be considered an "Error". Do we want to consider marking pods that failed to schedule due to a resource quota as an "Error"? This would cause them to be retried naturally using the existing retry mechanism.
2. Regardless of whether we mark these pods as an "Error", I don't think we should allow unbounded recreation of pods in the "Pending" state. Perhaps a solution would be to add a `retryStrategy.recreationLimit` (or some better name) that makes the distinction you're looking for. As of now I can see some problems with this suggestion.
I am leaning towards going with approach 1: I think it's cleaner and it fits with the existing mechanisms and user expectations. But I want to hear your opinions on this, as you seem better informed about this use case than me.
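To make option 2 concrete, a hypothetical sketch is shown below; `recreationLimit` is only the placeholder name floated in the comment above, not an actual Argo field:

```yaml
# Hypothetical sketch of option 2. recreationLimit is NOT a real field;
# it is the placeholder name suggested in the comment above.
retryStrategy:
  limit: 3             # bounds retries of pods that ran and failed
  recreationLimit: 5   # would bound resubmissions of pods that never started
```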
@simster7, to me the difference is substantial: in one case the pod exists (=consumes resources), and in the other the pod doesn't exist (=no resources allocated). I remember our cluster once went down because a retryStrategy had unlimited retries and there was an error in the pod itself, so I definitely do not want to retry resubmissions the same way I retry failed pods.

To make it perfect we would need a specific error, like QuotaError, and this requires understanding the error returned by the Kubernetes API in more detail (=not how it is right now); I sort of implemented it by treating the forbidden error separately. Lumping together submission errors (forbidden) and all other errors would make this unusable for our case (we have namespaces with limits by default for all our teams). Please help me understand why you want to limit the number of resubmissions caused by quota errors.

So the approach suggested is not a perfect solution, but rather a practical compromise, as this issue is really a show-stopper for us (and it seems not only for us).
@simster7, wrt script tasks, here is the definition: https://github.com/argoproj/argo/blob/master/examples/retry-script.yaml
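For readers without the link handy, the referenced example is roughly along these lines (a sketch, not a verbatim copy of retry-script.yaml):

```yaml
# Sketch of a script template with a retryStrategy; the script fails
# randomly so the retry mechanism has something to do.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: retry-script-
spec:
  entrypoint: retry-script
  templates:
  - name: retry-script
    retryStrategy:
      limit: 10              # retry the script up to 10 times
    script:
      image: python:alpine3.6
      command: [python]
      source: |
        import random
        import sys
        # exits non-zero about two thirds of the time
        sys.exit(random.choice([0, 1, 1]))
```

With a retryStrategy set, a script that exits non-zero is retried as a new child node until the limit is reached.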
@simster7, I do. I'd suggest having a feature toggle for now (I'm thinking a workflow label/annotation?) and making it the default in future releases (because, tbh, this is what I would expect from a pod orchestrator). What do you think?
Let's do it then! Perhaps as a workflow label. Once this feature toggle is set and #2385 (comment) is addressed, we can merge this in.
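A sketch of what such a toggle could look like on a workflow; the label key below is purely illustrative, as the actual name was settled later in the PR:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: retry-pending-
  labels:
    # Illustrative key only; the real label name was decided in this PR.
    workflows.argoproj.io/resubmit-pending-pods: "true"
spec:
  entrypoint: main
  templates:
  - name: main
    container:
      image: alpine:3.11
      command: [echo, hello]
```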
Three smaller comments, then we should be good to go. Thanks for all the hard work!
Please see below.
Hey @jamhed, I just made three changes. Could you please take a look at the PR now and let me know of any comments you may have? If you OK this, we'll merge it in.
@simster7, it was there for a reason :) I'm quite certain it won't work this way for templates with retryStrategy.limit = 1.
@jamhed and I chatted offline and he gave this the green light