Update nodeenv plugin and DS controller to not process openshift.io/node-selector if scheduler.alpha.kubernetes.io/node-selector is set on pods namespace #21033
Conversation
```go
// If scheduler.alpha.kubernetes.io/node-selector is set on the pod,
// do not process the pod further.
if len(namespace.ObjectMeta.Annotations) > 0 {
```
Is there any way this could break an existing cluster?
Yes, I think so, in the case where both nodeenv and podnodeselector are in use, which means both `openshift.io/node-selector` and `scheduler.alpha.kubernetes.io/node-selector` are configured. For example, say a pod has `k1=v1`, `openshift.io/node-selector` is `k2=v2`, and `scheduler.alpha.kubernetes.io/node-selector` is `k3=v3`. Previously the result would be all three (`k1=v1`, `k2=v2`, `k3=v3`), but with this PR the result would be (`k1=v1`, `k3=v3`). Though I am not sure how frequently nodeenv and podnodeselector are used together.
As far as I remember, the initial design intent was to allow processing both the upstream (`scheduler.alpha.kubernetes.io/node-selector`) and downstream (`openshift.io/node-selector`) annotations at the same time, so this PR breaks that. But I don't see any other choice so far, given the conformance tests and the default OpenShift configuration.
Also, I noticed that I need to modify vendor/k8s.io/kubernetes/pkg/controller/daemon/patch_nodeselector.go too, to keep the DS controller in sync with this change, because the patched DS controller takes the same approach, i.e. processing both node-selector annotations at the same time. I am still testing my changes to vendor/k8s.io/kubernetes/pkg/controller/daemon/patch_nodeselector.go.
I am curious: have we not been running the kube conformance tests regularly? How did we catch this breakage only recently?
109c9ae to 7723ca8
I have updated this PR, and so far I have tested it with the upstream DS conformance tests together with the upstream PR (kubernetes/kubernetes#68793), and the DS tests are passing. If anyone wants to double-check, that would be helpful too.
```go
selector, err := labels.Parse(originNodeSelector)
if err == nil {
	if !selector.Matches(labels.Set(node.Labels)) {
		kubeNodeSelector, ok := ns.Annotations["scheduler.alpha.kubernetes.io/node-selector"]
```
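For context on the snippet above: `labels.Parse` turns a string like `"region=east,zone=a"` into a selector, and `Matches` checks it against a node's label set. A minimal stdlib-only approximation for the pure-equality subset (the real k8s.io/apimachinery code also handles `!=`, `in`, `notin`, and set-based requirements; `matchesEquality` is a made-up name):

```go
package main

import (
	"fmt"
	"strings"
)

// matchesEquality reports whether every "k=v" requirement in selector is
// satisfied by the given label set. It approximates only the equality
// subset of what labels.Parse + Selector.Matches support upstream.
func matchesEquality(selector string, labelSet map[string]string) bool {
	if selector == "" {
		return true // an empty selector matches everything
	}
	for _, req := range strings.Split(selector, ",") {
		kv := strings.SplitN(strings.TrimSpace(req), "=", 2)
		if len(kv) != 2 || labelSet[kv[0]] != kv[1] {
			return false
		}
	}
	return true
}

func main() {
	nodeLabels := map[string]string{"region": "east", "zone": "a"}
	fmt.Println(matchesEquality("region=east", nodeLabels)) // true
	fmt.Println(matchesEquality("region=west", nodeLabels)) // false
}
```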
this should be in a carry commit.
…notation key scheduler.alpha.kubernetes.io/node-selector in nodeenv admission plugin.
7723ca8 to b4679a4
@derekwaynecarr updated. PTAL.
Looking into test failures.
openshift-io/node-selector if scheduler.alpha.kubernetes.io/node-selector is set.
b4679a4 to f2d0786
Fixed unit tests and updated the PR.
The current failures don't look related to this PR, so restarting those tests.
/test end_to_end
/test integration
/approve
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: aveshagarwal, derekwaynecarr. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
/cherrypick release-3.11
@derekwaynecarr: once the present PR merges, I will cherry-pick it on top of release-3.11 in a new PR and assign it to you. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/retest
/retest Please review the full test history for this PR and help us cut down flakes.
@derekwaynecarr: #21033 failed to apply on top of branch "release-3.11":
In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Do I need to create the patch manually for 3.11 now?
@aveshagarwal yes
@smarterclayton @sjenning
Ref: https://bugzilla.redhat.com/show_bug.cgi?id=1628998
I am still testing it though.