[YUNIKORN-2976] Handle multiple required node allocations per node #1001
Conversation
If an allocation requires a specific node, the scheduler should not consider any other node. We should allow multiple allocations that require the same node to reserve the node at the same time. A required node allocation must be placed on the node before anything else. If other reservations that do not require the node have been made on that node, remove them. Make sure that the releases are tracked correctly in the partition.

After the repeat count removal, reservations can be simplified:
- track reservations using the allocation key
- remove the composite key setup
- remove the collection listener call on reserve or unreserve of a node

Clean up testing of the node collection.
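To illustrate the reservation model described above, here is a minimal sketch. The `Reservation`/`Node` types and the `Reserve` method are assumptions for illustration only, not the actual yunikorn-core code:

```go
// Illustrative sketch: reservations tracked per node, keyed by allocation key
// (no composite key). A required-node ask evicts existing reservations that do
// not require this node; the caller is expected to track those releases.
package example

type Reservation struct {
	AllocationKey string
	RequiredNode  bool
}

type Node struct {
	NodeID       string
	reservations map[string]*Reservation // keyed by allocation key
}

func NewNode(id string) *Node {
	return &Node{NodeID: id, reservations: make(map[string]*Reservation)}
}

// Reserve adds a reservation and returns the reservations that were released
// to make room for a required-node ask, so the partition can record them.
func (n *Node) Reserve(r *Reservation) []*Reservation {
	var released []*Reservation
	if r.RequiredNode {
		for key, existing := range n.reservations {
			if !existing.RequiredNode {
				released = append(released, existing)
				delete(n.reservations, key)
			}
		}
	}
	n.reservations[r.AllocationKey] = r
	return released
}
```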
Codecov Report
Attention: Patch coverage is

Additional details and impacted files

@@            Coverage Diff             @@
##           master    #1001      +/-   ##
==========================================
+ Coverage   81.56%   81.99%   +0.43%
==========================================
  Files          97       97
  Lines       15646    15621      -25
==========================================
+ Hits        12761    12809      +48
+ Misses       2601     2535      -66
+ Partials      284      277       -7

☔ View full report in Codecov by Sentry.
Wow, nice change :)
First, a general observation: there seem to be too many uncovered paths in application.go. I'm sure some of those already existed before, but this change touches a critical part of the allocation cycle, so I'd be happy to see those codecov reports go away, like:
#L883 - L887
#L944 - L947
#L1106 - L1120
#L1143
#L1241 (this looks weird, because the flow continues)
#L1317 (looks weird too)
#L1362
Perhaps they're covered indirectly from partition, but it's hard to reason about it.
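Branches like these are usually easiest to close with a small table-driven unit test directly against the application. Purely as an illustration, not the actual missing tests; the `checkAllocation` helper below is a placeholder, not a real function in application.go:

```go
// Hypothetical table-driven test sketch; checkAllocation stands in for one of
// the uncovered error paths in application.go.
package example

import (
	"errors"
	"testing"
)

func checkAllocation(nodeID string) error {
	if nodeID == "" {
		return errors.New("empty node id")
	}
	return nil
}

func TestCheckAllocationErrorPaths(t *testing.T) {
	tests := []struct {
		name    string
		nodeID  string
		wantErr bool
	}{
		{"empty node id hits the error branch", "", true},
		{"valid node id takes the happy path", "node-1", false},
	}
	for _, tc := range tests {
		t.Run(tc.name, func(t *testing.T) {
			err := checkAllocation(tc.nodeID)
			if (err != nil) != tc.wantErr {
				t.Fatalf("checkAllocation(%q) error = %v, wantErr %v", tc.nodeID, err, tc.wantErr)
			}
		})
	}
}
```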
Some more comments after round #1.
I've installed this patch on a cluster where I've spun up a kwok-based autoscaler. I encountered a scenario where a monitoring pod (a daemonset member) wouldn't schedule on a full node by triggering preemption. The logs now ping-pong back and forth between these two messages:
Events for the pod:
I'm using
Update: Manually killing a pod to make room doesn't make the logs go away, but it does schedule the pod. This would seem to indicate that the reservations aren't being cleaned up properly.
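"Cleaned up properly" would mean that an unreserve removes the entry on both the node and the application. A rough sketch under that assumption; all names here are illustrative, not the real partition code:

```go
// Illustrative only: both sides of a reservation must be cleared on unreserve,
// otherwise a stale node-side entry keeps the ask "reserved" and the scheduler
// keeps cycling through the same log messages.
package example

type nodeState struct {
	reservations map[string]string // allocation key -> application ID
}

type appState struct {
	reservations map[string]string // allocation key -> node ID
}

func unReserve(app *appState, node *nodeState, allocKey string) {
	delete(node.reservations, allocKey)
	delete(app.reservations, allocKey)
}
```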
Sorry for the late response:
One function was not updated to allow multiple daemon set pods to reserve multiple nodes. A test case for that was also missing from the unit tests.
- Added more unit tests
- removed the function getAllocationReservation and its calls
- removed the function IsAllocationReserved and its calls
- rewrote IsReservedOnNode to NodeReservedForAsk and made the caller check the nodeID (see the sketch after this list)
- fixed the logic around a previously reserved allocation in tryRequiredNode
- put a safeguard into the partition to handle the broken case mentioned in the review
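A sketch of what the reworked check might look like. The function names come from the list above, but the signatures and fields are assumptions rather than the real yunikorn-core API:

```go
// Illustrative sketch: instead of a yes/no IsReservedOnNode(nodeID, allocKey),
// the application returns the node the ask is reserved on (empty if none) and
// the caller compares it against the node it is trying to allocate on.
package example

type reservation struct {
	nodeID string
}

type Application struct {
	reservations map[string]*reservation // keyed by allocation key
}

// NodeReservedForAsk returns the nodeID the ask is reserved on, or "" if the
// ask has no reservation.
func (sa *Application) NodeReservedForAsk(allocKey string) string {
	if res, ok := sa.reservations[allocKey]; ok {
		return res.nodeID
	}
	return ""
}

// canAllocateOn shows the caller-side check: allowed when the ask is not
// reserved at all, or when it is reserved on this exact node.
func canAllocateOn(sa *Application, allocKey, nodeID string) bool {
	reserved := sa.NodeReservedForAsk(allocKey)
	return reserved == "" || reserved == nodeID
}
```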
Filed two follow-up JIRAs to add unit tests for tryNodesNoReserve and tryPlaceholderAllocate on the application.
+1 LGTM. I've verified the updated patch on a cluster with a kwok autoscaler, and have been unable to reproduce the original issues. This looks good.
[YUNIKORN-2976] Handle multiple required node allocations per node (#1001)

Closes: #1001
Signed-off-by: Craig Condit <ccondit@apache.org>

[YUNIKORN-2976] Handle multiple required node allocations per node (apache#1001)

Closes: apache#1001
Signed-off-by: Craig Condit <ccondit@apache.org>
(cherry picked from commit b02e15e)
What is this PR for?
If an allocation requires a specific node, the scheduler should not consider any other node. We should allow multiple allocations that require the same node to reserve the node at the same time. A required node allocation must be placed on the node before anything else. If other reservations that do not require the node have been made on that node, remove them. Make sure that the releases are tracked correctly in the partition.
After the repeat count removal, reservations can be simplified:
- track reservations using the allocation key
- remove the composite key setup
- remove the collection listener call on reserve or unreserve of a node

Clean up testing of the node collection.
What type of PR is it?
Todos
What is the Jira issue?
YUNIKORN-2976
How should this be tested?
Unit tests are updated. A new e2e test should be added.