
add nodeselector field to builds #11144

Closed
bparees wants to merge 2 commits from the build_nodeselector branch

Conversation

bparees (Contributor) commented Sep 28, 2016

fixes #9971

bparees (Contributor, Author) commented Sep 28, 2016

@openshift/api-review before i go any further w/ this, can i get an api-review?

deads2k (Contributor) commented Sep 28, 2016

> @openshift/api-review before i go any further w/ this, can i get an api-review?

Would I be correct in assuming that you'll expect to add tolerations at some point as well?

The name of the type implies it affects creates. Does it also apply to updates? Is the name immutable?

bparees (Contributor, Author) commented Sep 28, 2016

> Would I be correct in assuming that you'll expect to add tolerations at some point as well?

yes, though that may not be part of the build definition itself since it likely would not be under user control; it might be something we directly apply to build pods.

> The name of the type implies it affects creates. Does it also apply to updates? Is the name immutable?

the name+behavior is the same as what pod templates use for node selection. i don't follow your questions...

the build pod nodeselector field will be populated w/ whatever value exists on the buildconfig or build at the time the build is started.
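
For illustration only, a minimal sketch of that copy-at-start behavior; the struct and field names below are stand-ins, not the actual API types touched by this PR:

```go
package main

import "fmt"

// Illustrative stand-ins for the real build/pod API types; names are assumptions.
type BuildSpec struct {
	// NodeSelector is whatever selector the build (or its buildconfig) carries.
	NodeSelector map[string]string
}

type PodSpec struct {
	NodeSelector map[string]string
}

// newBuildPodSpec sketches the controller behavior described above: the pod's
// nodeSelector is whatever the build carries at the time the build is started.
func newBuildPodSpec(build BuildSpec) PodSpec {
	selector := make(map[string]string, len(build.NodeSelector))
	for k, v := range build.NodeSelector {
		selector[k] = v
	}
	return PodSpec{NodeSelector: selector}
}

func main() {
	b := BuildSpec{NodeSelector: map[string]string{"region": "build-farm"}}
	fmt.Println(newBuildPodSpec(b).NodeSelector) // map[region:build-farm]
}
```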

deads2k (Contributor) commented Sep 28, 2016

> the build pod nodeselector field will be populated w/ whatever value exists on the buildconfig or build at the time the build is started.

Seems like a cluster-admin would expect that if he says: "build pods will have node selector A", no user would be able to control the node selector, so allowing it to be set per-buildconfig may not be appropriate except as an additional constraint.

bparees (Contributor, Author) commented Sep 28, 2016

> Seems like a cluster-admin would expect that if he says: "build pods will have node selector A", no user would be able to control the node selector, so allowing it to be set per-buildconfig may not be appropriate except as an additional constraint.

if they set the admission controller override value, that's what they'll get, right? they'll get the selector labels defined by the override settings, plus any additional selectors they explicitly define on their buildconfig. The only way around that would be a user waiting for their build pod to be created and then updating the nodeselector value on the build pod (if that's even possible; if it is, perhaps we can enforce a constraint that prevents it). And there are valid use cases for letting users pick the nodeselector for their build (consider a cluster that has multiple build farms, some for memory-bound builds, some for cpu-bound builds).

my understanding from discussions w/ @smarterclayton is that basically we recognize users can make apps pretend to be builds and builds pretend to be apps, so pretending that nodeselectors provide any real guarantees against subversive users doesn't make sense.

deads2k (Contributor) commented Sep 29, 2016

> my understanding from discussions w/ @smarterclayton is that basically we recognize users can make apps pretend to be builds and builds pretend to be apps, so pretending that nodeselectors provide any real guarantees against subversive users doesn't make sense.

You can't stop a user from creating a pod and pretending it's a build, but you can prevent the build controller from creating a pod and pretending it's not a build.

And-ing the condition would work. If you're doing that unconditionally, this isn't defaulting, it's forcing.

bparees (Contributor, Author) commented Sep 29, 2016

> You can't stop a user from creating a pod and pretending it's a build, but you can prevent the build controller from creating a pod and pretending it's not a build.

which is what the override admission controller does.

> And-ing the condition would work. If you're doing that unconditionally, this isn't defaulting, it's forcing.

the overrider applies all the keys explicitly. the defaulter applies the keys only if the buildconfig itself hasn't already defined them.
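
A rough sketch of that overrider vs. defaulter distinction, assuming both operate on plain label maps (illustrative only, not the actual plugin code in this PR):

```go
package main

import "fmt"

// applyOverrides forces every admin-configured key onto the pod's selector,
// replacing anything the buildconfig set for that key.
func applyOverrides(podSelector, overrides map[string]string) {
	for k, v := range overrides {
		podSelector[k] = v
	}
}

// applyDefaults fills in only the keys the buildconfig did not define itself.
func applyDefaults(podSelector, defaults map[string]string) {
	for k, v := range defaults {
		if _, ok := podSelector[k]; !ok {
			podSelector[k] = v
		}
	}
}

func main() {
	sel := map[string]string{"zone": "user-choice"}
	applyDefaults(sel, map[string]string{"zone": "default-zone", "tier": "builds"})
	applyOverrides(sel, map[string]string{"node": "build-farm"})
	fmt.Println(sel) // map[node:build-farm tier:builds zone:user-choice]
}
```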

deads2k (Contributor) commented Sep 29, 2016

Knowing that there are two, the idea seems fine.

Applying this to build pods (or pods you recognize as build pods) means that this will be hard to compose on top of a stock kube. This seems like it could be applied to Builds themselves, which would allow the same effect (build controller makes the build pods according to the build), but setting the values on builds would allow this admission plugin to live cleanly in origin.

openshift-bot added the needs-rebase label Sep 29, 2016
bparees closed this Sep 29, 2016
bparees reopened this Sep 29, 2016
bparees (Contributor, Author) commented Sep 29, 2016

[test]

openshift-bot removed the needs-rebase label Sep 29, 2016
bparees (Contributor, Author) commented Sep 29, 2016

> Applying this to build pods (or pods you recognize as build pods) means that this will be hard to compose on top of a stock kube. This seems like it could be applied to Builds themselves, which would allow the same effect (build controller makes the build pods according to the build), but setting the values on builds would allow this admission plugin to live cleanly in origin.

So i do think we want to refactor these admission controllers in general to operate on builds instead of pods, but there are a few things preventing us from doing that today:

  1. we want to keep users from changing these values (especially the override value). users can't edit build pods, but they can edit builds, so if we set the value on the build instead of the pod, they could change it after the build was created but before the pod got created.

  2. we're trying to preserve the build as indicating the "intent" the user specified.

  3. if the default value changes and someone does a "rebuild" from an existing build (which in implementation means we clone the existing build object into a new build object), we want to apply the new default value, which wouldn't happen if the build object contained the default values from the previous build.

The answer to all of these in the future is for us to get cleaner about separating "spec" and "status" fields in our builds, setting the override/default values on the status section rather than the spec section, and then ensuring that users can't update the status fields, only the spec fields.

But for now, i think we will just follow the patterns we've laid out for build admission controllers, which is that we modify the pod directly to control behavior. I think the refactor would be part of a v2 api; there are a lot of other things about the current build spec/status api that we need to address with that.

wdyt?

bparees force-pushed the build_nodeselector branch 4 times, most recently from 0805f61 to fc83a44 on September 30, 2016
openshift-bot added the needs-rebase label Oct 1, 2016
bparees (Contributor, Author) commented Oct 10, 2016

flake #9548

bparees force-pushed the build_nodeselector branch 2 times, most recently from c8338ed to 537542b on October 10, 2016
bparees (Contributor, Author) commented Oct 10, 2016

@openshift/api-review can i get a sign off?

bparees force-pushed the build_nodeselector branch from 537542b to 000a45e on October 10, 2016
openshift-bot removed the needs-rebase label Oct 10, 2016
```diff
@@ -41,10 +41,11 @@ type buildOverrides struct {
 // NewBuildOverrides returns an admission control for builds that overrides
 // settings on builds
 func NewBuildOverrides(overridesConfig *overridesapi.BuildOverridesConfig) admission.Interface {
-	return &buildOverrides{
+	a := &buildOverrides{
```

Contributor

why this change?

Contributor Author

i think it's leftover from some debugging, i'll remove it.

deads2k (Contributor) commented Oct 11, 2016

Your admission plugin runs after PodNodeConstraints and OriginPodNodeEnvironment. I'd feel a lot better about this change if you ran before those. Is there a reason you must come after? If so, I think we'll need to consider splitting your admission plugin.

bparees force-pushed the build_nodeselector branch from 000a45e to af36a77 on October 11, 2016
deads2k (Contributor) commented Oct 11, 2016

I'm fine with the new fields, but this allows any clever project-editor to bypass the node selection criteria and get any pod he wants scheduled to a different location by bypassing the node selector admission plugins. I don't think that sort of "escalate me" behavior was intended. If it was, some reasoning about the repercussions of allowing any user to escape their node selector would be helpful.

bparees force-pushed the build_nodeselector branch from af36a77 to 3573064 on October 11, 2016
bparees (Contributor, Author) commented Oct 11, 2016

ok @smarterclayton we've reached the point where we need you to weigh in:

if we run the build admission controllers last (such that they can override nodeselector values from the project nodeselector admission control), @deads2k's concern is that an admin does the following:

  1. define a project nodeselector to allow user bob to run root pods but only on some jailed node FOO
  2. define a buildoverride nodeselector to force all build pods onto build farm nodes BAR

then bob creates a pod that looks like a build but runs as root/etc. That pod gets scheduled onto the build farm nodes but has scary privileges.

my concern with running the build admission controllers first is:

  1. admin defines a project nodeselector to force all apps onto a set of nodes FOO (node=FOO)
  2. admin defines a build admission controller to force all builds onto build farm nodes BAR (node=BAR)

Users will get a "nodeselector conflict" when the project nodeselector admission control tries to set the nodeselector to node=FOO when it's already set to node=BAR.

Using a distinct set of keys (app=FOO, build=BAR) doesn't help because now even if you label your app nodes as "app=FOO" and your build nodes as "app=FOO,build=BAR", apps can get scheduled onto the build farm nodes.
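
To make the conflict concrete, here is a hedged sketch of the kind of per-key check that would reject the pod in the two-admin-settings scenario above; the real project nodeselector admission logic may differ in detail:

```go
package main

import "fmt"

// conflict reports whether the project's selector and the selector already on
// the pod disagree on any key -- the situation that would make the project
// nodeselector admission reject a build pod whose selector was set first.
func conflict(projectSelector, podSelector map[string]string) error {
	for k, projectVal := range projectSelector {
		if podVal, ok := podSelector[k]; ok && podVal != projectVal {
			return fmt.Errorf("node selector conflict on %q: project wants %q, pod already has %q", k, projectVal, podVal)
		}
	}
	return nil
}

func main() {
	project := map[string]string{"node": "FOO"} // project nodeselector
	pod := map[string]string{"node": "BAR"}     // set earlier by the build override
	fmt.Println(conflict(project, pod))
}
```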

So do we want an impossible to use feature or an insecure feature? :)

deads2k (Contributor) commented Oct 11, 2016

Also, once admission is split into mutation and authorization stages, this admission plugin would be impossible to have in the chain and in conflict with the pod node selector authorization stage of admission. If we allow it to bypass the pod node selector admission plugin now, in 1.5 or 1.6 we'll have to choose one admission plugin to remove from the chain.

bparees force-pushed the build_nodeselector branch from 3573064 to f548db0 on October 11, 2016
deads2k (Contributor) commented Oct 11, 2016

> Using a distinct set of keys (app=FOO, build=BAR) doesn't help because now even if you label your app nodes as "app=FOO" and your build nodes as "app=FOO,build=BAR", apps can get scheduled onto the build farm nodes.

A taint/toleration with control over who can use a toleration (an authorization check we can allow for the buildcontroller) would prevent apps from being scheduled to build nodes.

bparees (Contributor, Author) commented Oct 11, 2016

> A taint/toleration with control over who can use a toleration (an authorization check we can allow for the buildcontroller) would prevent apps from being scheduled to build nodes.

except that:

  1. we don't have taints/tolerations yet, so this won't be usable yet
  2. it still means doing weird node labeling shenanigans (I have to label my build nodes w/ the app label to make it work)

smarterclayton (Contributor) commented Oct 12, 2016 via email

bparees (Contributor, Author) commented Oct 12, 2016

discussed with @smarterclayton, the selection was "implement the useless version of the feature"

but also a bonus prize of "and rip out all the admission controllers while you're at it", so..i'll be back.

smarterclayton (Contributor) commented via email

With the ability to set a default toleration on builds so that admins can steer apps away.

bparees (Contributor, Author) commented Oct 12, 2016

> With the ability to set a default toleration on builds so that admins can steer apps away.

not in 3.4, no.

smarterclayton (Contributor) commented via email

Discussion today is that without that this feature isn't complete enough. Apps landing on build nodes automatically isn't enough of a solution.

bparees (Contributor, Author) commented Oct 13, 2016

> Discussion today is that without that this feature isn't complete enough. Apps landing on build nodes automatically isn't enough of a solution.

then we should punt on doing this until we can do it right, which is not 3.4.

openshift-bot added the needs-rebase label Oct 13, 2016
bparees (Contributor, Author) commented Oct 14, 2016

@smarterclayton @liggitt ok here's my compromise proposal:

in addition to default/override nodeselectors, we'll allow default/override annotations that will be applied to the pod. Admins can use that to set the toleration annotation as an override/default annotation on build pods.

Acceptable? It avoids tying us directly to the current taints/toleration implementation while providing a semi-useful general feature.
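
For illustration, a minimal sketch of applying such an admin-configured override annotation to a build pod; the toleration annotation key and JSON value shown are assumptions about the alpha mechanism, not something this PR specifies:

```go
package main

import "fmt"

func main() {
	// Admin-configured override annotations for build pods. The toleration
	// annotation key and value are assumptions for illustration only.
	override := map[string]string{
		"scheduler.alpha.kubernetes.io/tolerations": `[{"key":"builds-only","operator":"Exists"}]`,
	}

	podAnnotations := map[string]string{} // annotations on the build pod being admitted
	for k, v := range override {
		podAnnotations[k] = v // override annotations are applied unconditionally
	}
	fmt.Println(podAnnotations)
}
```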

openshift-bot (Contributor)

Evaluated for origin test up to 73c8473

openshift-bot (Contributor)

continuous-integration/openshift-jenkins/test Running (https://ci.openshift.redhat.com/jenkins/job/test_pr_origin/10075/) (Base Commit: 2142bdb)

openshift-bot (Contributor)

Origin Action Required: Pull request cannot be automatically merged, please rebase your branch from latest HEAD and push again

smarterclayton (Contributor) commented via email

I still don't understand what you mean by the "default taints/toleration implementation"? You mean the structs that are alpha?

I'm fine with adding default override annotations.

bparees (Contributor, Author) commented Oct 15, 2016

> I still don't understand what you mean by the "default taints/toleration implementation"? You mean the structs that are alpha?

since i never said those words, i don't know either? what structs?

The only aspect of taints/tolerations i know about/care about is that you specify tolerations by putting an annotation on your pod:

  1. the annotation name includes "alpha" in it
  2. @liggitt implied that the annotation mechanism would go away in favor of a first class field at some point

With those things in mind, I don't want to expose build override/defaulting logic that is explicitly tied to an alpha annotation. I'm ok w/ tying it to a generic annotation mechanism and letting users use it to set an alpha annotation if they so choose.

if/when toleration becomes a first class field, we'll have to revisit the override/defaulter to add another field (tolerations) that admins can explicitly provide an override for.
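
For context, a hedged sketch of how the overrides config shape could grow along these lines; the field layout below is an assumption, and the Tolerations field is purely hypothetical:

```go
// Sketch only: a possible evolution of the overrides config shape.
package overridesapi

// BuildOverridesConfig field names here are assumptions for illustration;
// Tolerations would only make sense once tolerations become a first-class
// pod field rather than an alpha annotation.
type BuildOverridesConfig struct {
	// NodeSelector keys/values forced onto every build pod.
	NodeSelector map[string]string
	// Annotations forced onto every build pod (e.g. a toleration annotation).
	Annotations map[string]string
	// Tolerations []Toleration // hypothetical future addition
}
```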

bparees (Contributor, Author) commented Oct 15, 2016

abandoning this in favor of #11380 due to merge conflict mess.

bparees closed this Oct 15, 2016
bparees deleted the build_nodeselector branch October 24, 2016
Labels: needs-api-review, needs-rebase

Successfully merging this pull request may close these issues: Proposal: Build isolation

5 participants