design-proposal: Feature configurables #316
base: main

Conversation
# Design

If a developer wants to make a feature configurable, he needs to do so by adding new fields to the KubeVirt CR under `spec.configuration`.

> **NOTE:** The inclusion of these new KubeVirt API fields should be carefully considered and justified. The feature configurables should be avoided as much as possible.
Correct. There should be very good reasons to complicate our API with new fields.
I think that this proposal would benefit if it included concrete examples of features that really require it. In fact, I'd love to see a comprehensive list of all GAed features that need a cluster-wide configuration.
I tried my best to add a list.
I've created two separate lists of feature gates:
- Those that can be tuned in HCO; in the downstream documentation we ask the user to switch them to get a specific feature. These should have a configurable.
- Those that are always enabled in HCO by default, for which IMHO we should either remove the feature gate entirely or create a configurable.
Could you please confirm if the features listed here are GA'd?
> Could you please confirm if the features listed here are GA'd?

I cannot, but we must know the answer, even if it means reaching out to every developer that introduced each feature gate.
Perfect. I agree, maybe we can create an issue for each feature gate and ping the last 2 or 3 contributors of each.
Makes sense. I think that contacting them should be part of this proposal, as they are the stakeholders who would need to do the work prescribed here.
Force-pushed from `84420a7` to `7e4b3db` (compare)
- AlignCPUs

This is the current list of GA'd features present in KubeVirt/KubeVirt which are still using feature gates and are always
enabled by [HCO](https://github.com/kubevirt/hyperconverged-cluster-operator/blob/main/controllers/operands/kubevirt.go#L125-L142):
Does this mean that you recommend that the feature flag is dropped and the feature is enabled on all clusters?
I've assumed that those features are GA since in downstream they are enabled in a hardcoded way, i.e., the user can't disable them by any means. Therefore, IMHO, they should drop the feature flag.
Those are Kubevirt FeatureGates that are always set by HCO as part of its opinionated deployment.
Those are not exposed to the cluster admin by HCO and they cannot be enabled/disabled.
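To make this concrete, here is a rough sketch of the KubeVirt CR as rendered by HCO. The field path `spec.configuration.developerConfiguration.featureGates` is the real KubeVirt API field, but the gate entries shown are only illustrative examples; the authoritative hardcoded list lives in the linked HCO source:

```yaml
# Sketch: how HCO's opinionated deployment pins feature gates in the
# KubeVirt CR it manages. These are not exposed to the cluster admin.
apiVersion: kubevirt.io/v1
kind: KubeVirt
metadata:
  name: kubevirt-kubevirt-hyperconverged
spec:
  configuration:
    developerConfiguration:
      featureGates:      # always-on list rendered by HCO
      - DataVolumes      # example entries only; see the HCO source
      - LiveMigration    # linked above for the actual list
      - CPUManager
```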
Yes. Correct. Thanks for the double check.
Pending state.
- Feature status checks should only be performed during the scheduling process, not at runtime. Therefore, the feature
  status changes in the KubeVirt CR should not affect running VMIs. Moreover, the VMI should still be able to live
  migrate, preserving its original feature state.
This is a very difficult requirement. If a feature is disabled, I don't expect things like new nodes to support it (e.g. they would not have the needed daemonset running on them) and I would not expect the VMI to migrate to them.
If a feature is disabled, VMs that use it may die. I don't think that we should actively kill them, but we should not promise anything about them. I think that we should just alert that a VM is using a feature that is no longer supported by the cluster.
> If a feature is disabled, I don't expect things like new nodes to support it (e.g. they would not have the needed daemonset running on them) and I would not expect the VMI to migrate to them.

Ack.

> I don't think that we should actively kill them,

Agree.

> but we should not promise anything about them
> If a feature is disabled, VMs that use it may die.

I do not agree. What reason do you see that we should not promise the same as we promised at the time the VM was started?

> I think that we should just alert that VM is using a feature that is no longer supported by the cluster.

Let's check if this is implementable using a reasonable amount of effort at runtime.
> > but we should not promise anything about them
> > If a feature is disabled, VMs that use it may die.
>
> I do not agree. What reason do you see that we should not promise the same as we promised at the time the VM was started?

The moment that I, the cluster-admin, decided to turn off the feature, I already broke my "promise". For example, a daily VM-bound job would fail to start. Or a VM with `runStrategy=Always` would fail to start if the node crashes. I see no reason to add (and no way to promise) a special case for when a node is turned off and VMs are migrated away. When I disable a feature (say, because I realize that downwardMetrics exposes too much information to the guest), I want VMs to stop using it.

Another way to think of this is with eventual consistency in mind. Assume that a VM started just as the admin disabled the feature. We should not promise eternal life to the VM if it sneaked in a microsecond earlier. If the feature is disabled, then eventually, a VM using it should not be running.

> I think that we should just alert that VM is using a feature that is no longer supported by the cluster.
> Let's check if this is implementable using a reasonable amount of effort at runtime.

Sure. I hope it would not be hard to add a condition reporting that the VM uses a feature in state A but the cluster has it in state B.
  status changes in the KubeVirt CR should not affect running VMIs. Moreover, the VMI should still be able to live
  migrate, preserving its original feature state.
- Optionally, it could enable the possibility to reject the KubeVirt CR change request if running VMIs are using the
  feature in a given state. However, by default the request should be accepted.
This option cannot be implemented safely, as it is raceful. Besides, it gives a single VM of a single user the ability to block the cluster-admin from making a change to their cluster.
I suggested this use case and agree that as written it can easily race if a creation request from a user comes in while the admin is attempting to disable a given feature.
That said I think that at a high level the use case is still valid and that some cluster admins aren't going to want to disable a feature cluster wide that's being used by running VMs.
> That said I think that at a high level the use case is still valid and that some cluster admins aren't going to want to disable a feature cluster wide that's being used by running VMs.

Right. I think that cluster admins would want to know if the functionality that they are disabling is currently being used. They may also want to know by whom (as not all VMs are created equal). But a mere VM owner should not be in the position to block the cluster admin from expressing an intent that a feature is to be reconfigured.
I'm just a bit concerned about the time and resources needed to fetch all VMIs using the feature.
> That said I think that at a high level the use case is still valid and that some cluster admins aren't going to want to disable a feature cluster wide that's being used by running VMs.

> Right. I think that cluster admins would want to know if the functionality that they are disabling is currently being used. They may also want to know by whom (as not all VMs are created equal). But a mere VM owner should not be in the position to block the cluster admin from expressing an intent that a feature is to be reconfigured.
+1
I'm not sure we can make a simplistic rule here, it sounds possible that in some cases it's useful to change configuration for running VMs and in others it isn't. Perhaps it's useful to start off with more concrete examples and have a discussion regarding them.
Correct. And by "warning" I mean "raise an alert, event, or condition" on any affected VMs.

Ah I see, so it should be the VM itself that triggers the alert, event, warning, etc. Did I understand correctly?

Good. But we should not have a non-default attempt to block modification of the spec. It cannot be done safely (it is raceful), so we should not try.

Works for me.
> it should be the VM itself that triggers the alert, event, warning, etc.

As a VM owner, I see value in knowing that my VM is using a feature that is being removed from the cluster. So yes, a VM condition would be useful.
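As a sketch of what such a condition could look like on the VM status (the condition type and reason names here are hypothetical, not an existing KubeVirt API):

```yaml
# Hypothetical VM status condition flagging a feature/cluster mismatch.
status:
  conditions:
  - type: FeatureConfigurationMismatch   # hypothetical type name
    status: "True"
    reason: FeatureDisabledInCluster     # hypothetical reason
    message: "VM uses downwardMetrics, which is now disabled in the KubeVirt CR"
```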
All right. Would that also work for the cluster admin changing configurations? I.e., as a cluster admin, I'm changing configurables and want to know if there's any affected VM. Or should the cluster admin be careful enough to run a command like the one you posted before, `kubectl get vm -A -o yaml|grep someRequestedFeature`, to know it in advance?
> should the cluster admin be careful enough to run a ...
> `kubectl get vm -A -o yaml|grep someRequestedFeature`
> to know it in advance?

I don't see any other way. Note that this is also raceful. Someone may create a VM with `someRequestedFeature` a microsecond later. That's the nature of our platform - the cluster admin cannot "know" at any point. They can only declare what is desired, and eventually the cluster would get there.
All right. Thanks for the input. I will drop the sentence or replace it with something like: "the KubeVirt CR can't be rejected; the cluster admin is responsible for knowing in advance if the feature configurable change will affect running VMIs".
Force-pushed from `7e4b3db` to `3104058` (compare)
  status changes in the KubeVirt CR should not affect running VMIs. Moreover, the VMI should still be able to live
  migrate, preserving its original feature state.
- Optionally, it could enable the possibility to reject the KubeVirt CR change request if running VMIs are using the
  feature in a given state. However, by default the request should be accepted.
I suggested this use case and agree that as written it can easily race if a creation request from a user comes in while the admin is attempting to disable a given feature.
That said I think that at a high level the use case is still valid and that some cluster admins aren't going to want to disable a feature cluster wide that's being used by running VMs.
fields, this change is acceptable, but it should be marked as a breaking change and documented. Moreover, all feature
gates should be evaluated to determine if they need to be dropped and transitioned to configurables.

## About implementing the checking logic in the VM controller
VMI controller?
My intention here is to reflect that we can implement some logic in the VM controller itself to get early feedback, i.e., before starting the VM, about the feature status in the VM conditions.
It may be worth (if possible; I'm not entirely sure yet if it is possible at all, independent of the location) to still consolidate runtime checks on the VMI level and propagate from the VMI status to the VM status.
Force-pushed from `3104058` to `fe68749` (compare)
PoC of design proposal [feature configurables](kubevirt/community#316) using the downward metrics feature as an example. It deprecates the `DownwardMetrics` feature gate in favor of a configurable spec `spec.configuration.downwardMetrics: {}`. Signed-off-by: Javier Cano Cano <jcanocan@redhat.com>
```yaml
spec:
  certificateRotateStrategy: {}
  configuration:
```
How would this look for the downward metrics? How would I enable or disable it?
It would be enabled with:

```yaml
apiVersion: kubevirt.io/v1
kind: KubeVirt
[...]
spec:
  certificateRotateStrategy: {}
  configuration:
    downwardMetrics: {}
[...]
```

And disabled by removing `spec.configuration.downwardMetrics: {}`.

You can find a working PoC in: kubevirt/kubevirt#12650
What is the benefit of having the `configuration` subelement over

```yaml
apiVersion: kubevirt.io/v1
kind: KubeVirt
[...]
spec:
  certificateRotateStrategy: {}
  downwardMetrics: {}
[...]
```

?

And how about an even simpler `downwardMetrics: true` or `downwardMetrics: false`, with `false` as the default?
Using an empty struct with the meaning of `true` and its absence with the implicit meaning of `false` sounds a bit odd to me.

I think that in general we should aim to have such kind of configuration values always explicitly represented in our APIs, rather than asserting that "unspecified fields get the default behavior".

Something like `featureXxxx: true/false` will give us the ability to always explicitly show the expected configuration, while still letting us choose whether we want to default to `true` or `false` via the default property in the OpenAPI v3 schema, see: https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#defaulting
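For illustration, OpenAPI v3 defaulting in a CRD schema looks roughly like this (a generic sketch; `downwardMetrics` as a boolean field is just an example, not the actual KubeVirt schema):

```yaml
# Excerpt of a CRD structural schema. The API server applies "default"
# on admission, so the persisted object always shows the effective value.
openAPIV3Schema:
  type: object
  properties:
    spec:
      type: object
      properties:
        downwardMetrics:    # example field only
          type: boolean
          default: false    # materialized server-side when unset
```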
Just to throw in other possibilities which may be worth considering: values which transport potentially more meaning are possible as well, instead of true/false:

```yaml
spec:
  featureGates:
    DownwardMetrics: Enabled
    FeatureB: Enabled
    FeatureC: Disabled
```
> The empty struct here means "use an emptyDir volume source"

Sorry if my question was unclear. I'm asking if in the history of `emptyDir` it was ever defined as an empty dictionary with no field at all?

Hey @dankenigsberg!
Sorry for not being clear.
Yes indeed. As can be seen here, `emptyDir` used to be an empty struct. Only later was the `Medium` field added, and later still the `Size` field was added as well.
> Looks like there is an agreement on not placing new configurables under `spec.configuration` but in `spec`. Adjusted the document to reflect this.

In theory I support that.
But as said above, we already have fields living under `configuration` and we need to mind backward compatibility.
Here are some options that come to mind:

- Duplicate configuration fields into `.spec`, deprecate the ones under `.spec.configuration`, and basically keep them duplicated until we advance the Kubevirt CR to v2.
- Document that in the future we want to move these fields, but keep them as-is for now.

IMHO, you might consider splitting these two efforts into two distinct PRs: this PR to determine the feature toggles policy, and a different PR to plan deprecating `.spec.configuration`.
I agree. My point is: new feature configurable specs like `downwardMetrics` should be placed in `spec` in the Kubevirt CR, e.g.,

```yaml
spec:
  downwardMetrics: {}
```

Regarding existing specs under `.spec.configuration`, do you think we should reflect this somehow in this design document?

@jcanocan can you follow up on the empty-map-means-true discussion?

Sure. IMHO, the arguments provided by @iholder101 and @0xFelix sound good enough.
> new feature configurable specs like downwardMetrics, should be placed in spec in the Kubevirt CR, e.g.,
> Regarding existing specs under .spec.configuration, do you think we should reflect this somehow in this design document?

IMO it would be easier to discuss this in a different proposal
All right. Thanks for the clarification.
Pending state.
- Feature status checks should only be performed during the scheduling process, not at runtime. Therefore, the feature
  status changes in the KubeVirt CR should not affect running VMIs. Moreover, the VMI should still be able to live
  migrate, preserving its original feature state.
> > but we should not promise anything about them
> > If a feature is disabled, VMs that use it may die.
>
> I do not agree. What reason do you see that we should not promise the same as we promised at the time the VM was started?

The moment that I, the cluster-admin, decided to turn off the feature, I already broke my "promise". For example, a daily VM-bound job would fail to start. Or a VM with `runStrategy=Always` would fail to start if the node crashes. I see no reason to add (and no way to promise) a special case for when a node is turned off and VMs are migrated away. When I disable a feature (say, because I realize that downwardMetrics exposes too much information to the guest), I want VMs to stop using it.

Another way to think of this is with eventual consistency in mind. Assume that a VM started just as the admin disabled the feature. We should not promise eternal life to the VM if it sneaked in a microsecond earlier. If the feature is disabled, then eventually, a VM using it should not be running.

> I think that we should just alert that VM is using a feature that is no longer supported by the cluster.
> Let's check if this is implementable using a reasonable amount of effort at runtime.

Sure. I hope it would not be hard to add a condition reporting that the VM uses a feature in state A but the cluster has it in state B.
Force-pushed from `fe68749` to `99c7e4d` (compare)
Great proposal!
requiring a feature in a state different from what was configured in the KubeVirt CR, or what should happen if the
configuration of a feature in use is changed. (see matrix below).

## Goals
I think it would be great if the goals could be extended to "I want to have a clear understanding which features are enabled and disabled".

One take which we did lately in another project was expressing feature gates like this:

Spec:

```yaml
spec:
  featureGates:
    DownwardMetrics: Enabled
    FeatureB: Enabled
    FeatureC: Disabled
```

Status:

```yaml
status:
  featureGates:
    DownwardMetrics: Enabled  # explicitly enabled
    FeatureB: Enabled         # explicitly enabled
    FeatureD: Enabled         # enabled by default
    FeatureC: Disabled        # explicitly disabled
    FeatureE: Disabled        # disabled by default
```

The status section makes it eventually very clear and discoverable what's effectively enabled.

The example here is specifically about feature gates and not about toggling features after they GAed, but I think a similar approach may make sense to make it visible what's effectively enabled at the end. Especially if we add a second layer of configurables independent of feature gates, it may be even more valuable, since it gets harder to understand what's truly enabled.
> I think it would be great if the goals could be extended to "I want to have a clear understanding which features are enabled and disabled".

I agree.

> The status section makes it eventually very clear and discoverable what's effectively enabled.

I'm not sure. In the case where a user tries to enable a feature which is still under a feature gate, and this feature gate is disabled, the system should not allow them to enable the feature using the configurable.
@rmohr just wanted to point out that feature gates are usually booleans, i.e. either enabled or disabled, while feature configuration might be more complex than that including more tunables that aren't necessarily booleans.
That being said, I like the approach of having the status outline what's effectively configured.
Regarding the status: I've included this in the goals, along with an example of how it might look.
```yaml
spec:
  certificateRotateStrategy: {}
  configuration:
```
Just to throw in other possibilities which may be worth considering: values which transport potentially more meaning are possible as well, instead of true/false:

```yaml
spec:
  featureGates:
    DownwardMetrics: Enabled
    FeatureB: Enabled
    FeatureC: Disabled
```
- If the feature is set to state A in the KubeVirt CR and the VMI is requesting the feature in state B, the VMIs must
  stay in Pending state. The VMI status should be updated, showing a status message, highlighting the reason(s) for the
  Pending state.
- Feature status checks should only be performed during the VMI reconciliation process, not at runtime. Therefore, the
We should probably go a little bit more into detail here. There can be cases where feature enablement has no API visibility on the VMI and may only happen at virt-handler side, or where virt-operator has to redeploy components in different configurations. If you want to do this in the virt-controller reconciliation stage, some things may need additional hints in the vmi status to kind of snapshot the current configuration. Just to ensure we think about such cases before an agreement is found.
+1
fields, this change is acceptable, but it should be marked as a breaking change and documented. Moreover, all feature
gates should be evaluated to determine if they need to be dropped and transitioned to configurables.

## About implementing the checking logic in the VM controller
It may be worth (if possible; I'm not entirely sure yet if it is possible at all, independent of the location) to still consolidate runtime checks on the VMI level and propagate from the VMI status to the VM status.
policy, features reaching General Availability (GA) need to drop their use of feature gates. This applies also to
configurable features that we may want to disable.

## Motivation
Is the idea of the proposal to outline a general way of handling kubevirt configurables, or is it specifically for VMI features?
This proposal targets VM/VMI features, i.e., every configurable that needs to be taken into account when handling VMs/VMIs.
Force-pushed from `99c7e4d` to `342c91e` (compare)
Thank you for this proposal, this is a very important topic. Happy to see this discussion!
requiring a feature in a state different from what was configured in the KubeVirt CR, or what should happen if the
configuration of a feature in use is changed. (see matrix below).

## Goals
@rmohr just wanted to point out that feature gates are usually booleans, i.e. either enabled or disabled, while feature configuration might be more complex than that including more tunables that aren't necessarily booleans.
That being said, I like the approach of having the status outline what's effectively configured.
This is the current list of GA'd features present in KubeVirt/KubeVirt which are still using feature gates and are shown as
configurables in [HCO](https://github.com/kubevirt/hyperconverged-cluster-operator/blob/main/controllers/operands/kubevirt.go#L166-L174):
While these lists are valuable for discussion's sake, I'm not sure they belong to the proposal itself. WDYT about moving these into the PR description?
I agree. Done.
I find the concrete list very helpful to understand what this proposal is about. I would prefer to understand what we plan for each of them.
Okay. I've reintroduced the FG list.
Thanks. I'd like to understand how the proposal is going to affect many/all of them.
# Design

In order to make a feature configurable, it must be done by adding new fields to the KubeVirt CR under
`spec.configuration`.

> **NOTE:** The inclusion of these new KubeVirt API fields should be carefully considered and justified. The feature
> configurables should be avoided as much as possible.
Apart from the list of feature gates, this is the only thing written under the "Design" section, although many questions are left unanswered, like the questions you've listed under "Goals".
Here I'd expect the document to generally outline the approach we're going to commit to.
I've extended the section to include more details and to make the Goals section a bit easier to follow. Hope it is better now.
spec:
  certificateRotateStrategy: {}
  configuration:
I'm with @0xFelix on this one.
Using booleans as APIs is, in general, a smell since it's not extensible, and this is especially true when designing APIs for feature configuration.
Would you suggest an example that would be awkward? If one day we find out that we need to control the "color" of downwardMetrics, I would consider this quite readable:
```yaml
apiVersion: kubevirt.io/v1
kind: KubeVirt
[...]
spec:
  certificateRotateStrategy: {}
  downwardMetrics: true
  downwardMetricsColor: purple
[...]
```
@dankenigsberg To me this is much less readable, but that's pretty fine when you have only a single unstructured configuration. Please consider the following example (which is, obviously, entirely made up):
```yaml
kind: KubeVirt
[...]
spec:
  downwardMetrics:
    hostInformation:
    - MACAddress
    - IP
    - ListOfUsers
    style:
      color: White
      textSize: 17
[...]
```
I think you can agree that it looks much better than the following (the order is messed up on purpose; bear in mind that YAML fields are usually sorted in alphabetical order):
```yaml
kind: KubeVirt
[...]
spec:
  downwardMetrics: true
  downwardMetricsColor: purple
  downwardMetricsIncludeHostIP: true
  downwardMetricsTextSize: 17
  downwardMetricsIncludeHostMacAddress: true
  downwardMetricsIncludeHostListOfUsers: true
[...]
```
My point is: this is unscalable and will quickly turn into a complete mess. We should take advantage of the fact that YAML/JSON already supports structured data.
In addition, it's common in k8s to `omitempty` and leave only whatever is relevant / enabled, so I'm not sure how it's odd.
- If the feature is set to state A in the KubeVirt CR and the VMI is requesting the feature in state B, the VMIs must
  stay in Pending state. The VMI status should be updated, showing a status message, highlighting the reason(s) for the
  Pending state.
- Feature status checks should only be performed during the VMI reconciliation process, not at runtime. Therefore, the
+1
- Get a clear understanding about the feature status.
- Establish how the features status swapping should work.
- Describe how the system should react in these scenarios:
  - A feature in KubeVirt is set to state A and a VMI requests the feature to be in state B.
You assume that all of the features are granular to the VM/VMI level, but I don't think it's necessarily true. Things like `UseEmulation`, `MinimumClusterTSCFrequency`, `ImageRegistry` and so on are cluster-wide by nature and don't even have APIs at the VM/VMI level.
I've extended the goal definition to target only those features that expose a VM/VMI API.
Why? I think that we should eliminate all feature gates.
Why? I think that we should eliminate all feature gates.
Sorry, I'm not sure what do you mean. Could you please elaborate?
Sorry, I'm not sure what do you mean. Could you please elaborate?
We have many feature gates that do not comply with https://github.com/kubevirt/community/blob/main/design-proposals/feature-lifecycle.md . Some of them expose a VM/VMI API, some do not. But imo that's not the main issue. All feature gates must comply. They all must either graduate or be reverted, and those that require a configurable in the KubeVirt CR must define it - not only those with VM/VMI APIs.
I see. I've included the goal "- Graduate features status swapping from feature gates to configurables." with that intention. Maybe it's not clear enough. WDYT of changing it to "Drop feature gates that do not require a configurable and graduate feature gates to feature configurables that require them."?
I would avoid the term "status swapping"; we are talking here about configuring or specifying how a feature behaves. It is not about the current status or about swapping two things. How about:

> Graduate features by dropping their gates and (optionally) adding spec options for them
Sounds good!
All right, I will replace any occurrence of "status swapping".
Many thanks for the suggestion. Much appreciated.
  stay in Pending state. The VMI status should be updated, showing a status message, highlighting the reason(s) for the
  Pending state.
- Feature status checks should only be performed during the VMI reconciliation process, not at runtime. Therefore, the
  feature status changes in the KubeVirt CR should not affect running VMIs. Moreover, the VMI should still be able to
What about things like `EvictionStrategy`?
Are we sure that we don't want to ever impact running VMs? To me it sounds valuable from an admin's perspective to change the eviction strategy before an upgrade for example.
Maybe we can allow impacting running VMs if the change does not require rebooting the VM. WDYT?
I don't like to give this promise.
Rebooting VMs is not fun, we should try to avoid it. But if, as a cluster-admin, I no longer want `downwardMetrics`, I would like to be able to express this - even if eventually this means that all VMs must be restarted.
Typically, any change to kubevirt cr is going to affect how currently-running or future VMs are going to behave. We can warn the admin, but not block them.
Should we allow live migration in the first place if, for instance, the feature is disabled and the VMI is requesting it?
Should we allow live migration in the first place if, for instance, the feature is disabled and the VMI is requesting it?
I would expect migration to fail if the destination host does not support the requested feature. Otherwise, we should not block migration (e.g to a host that still has this feature for some reason)
Understood. I've updated the text, trying to better reflect what we have discussed here.
  status changes in the KubeVirt CR should not affect running VMIs. Moreover, the VMI should still be able to live
  migrate, preserving its original feature state.
- Optionally, it could enable the possibility to reject the KubeVirt CR change request if running VMIs are using the
  feature in a given state. However, by default the request should be accepted.
That said I think that at a high level the use case is still valid and that some cluster admins aren't going to want to disable a feature cluster wide that's being used by running VMs.
Right. I think that cluster admins would want to know if the functionality that they are disabling is currently being used. They may also want to know by whom (as not all VMs are created equal). But a mere VM owner should not be in the position to block the cluster admin from expressing an intent that a feature is to be reconfigured.
+1
I'm not sure we can make a simplistic rule here, it sounds possible that in some cases it's useful to change configuration for running VMs and in others it isn't. Perhaps it's useful to start off with more concrete examples and have a discussion regarding them.
- Get a clear understanding about the features status.
- Establish how the features status swapping should work.
I find the terminology here confusing. Is "status" the request of the cluster admin that can be "swapped"? Typically it is the actual condition reported by the cluster. Maybe you can explain the goal here in terms of a user story? What does the cluster admin want to do/understand?
I've adjusted the goal. The idea is to reflect how the features can be tuned, e.g., enabled or disabled.
I still find the term "feature configuration status" confusing. "config" or "spec" is something that a user controls. "status" is how the system is known to be working. I don't know what putting them together means.
Maybe "state" would be more accurate?
Updated. Just kept "state" in those situations where I believe is clear what are we referring to. Please, take a look and if it is still confusing, I will change it.
  status changes in the KubeVirt CR should not affect running VMIs. Moreover, the VMI should still be able to live
  migrate, preserving its original feature state.
- Optionally, it could enable the possibility to reject the KubeVirt CR change request if running VMIs are using the
  feature in a given state. However, by default the request should be accepted.
I'm just a bit concerned about the time and resources needed to fetch all VMIs using the feature.
Would you elaborate, possibly with a specific example? Would it be much harder than `kubectl get vm -A -o yaml | grep someRequestedFeature`?
+1 for considering several concrete examples.
- Get a clear understanding about the feature status.
- Establish how the features status swapping should work.
- Describe how the system should react in these scenarios:
  - A feature in KubeVirt is set to state A and a VMI requests the feature to be in state B.
Why? I think that we should eliminate all feature gates.
apiVersion: kubevirt.io/v1
kind: KubeVirt
[...]
status:
this is too abstract for me. How would this look like for existing features?
For instance, let's suppose that we have a `downwardMetrics` and a `maxAllowedCPUsPerVM` feature. The `maxAllowedCPUsPerVM` controls the maximum amount of CPUs that a given VM can get. We enable `downwardMetrics` and we want to restrict any given VM to just 2 CPUs. The KubeVirt CR will look like:

```yaml
apiVersion: kubevirt.io/v1
kind: KubeVirt
[...]
spec:
  downwardMetrics: {} # this means "enabled"
  maxCPUsPerVM: 2
[...]
status:
  featureStatus:
    downwardMetrics:
      status: Enabled
    maxCPUsPerVM:
      status: 2 CPUs
```

WDYT?
I don't see the value of `featureStatus` here. Would it ever be different from the relevant `spec` elements?
No, it shouldn't differ. This idea was proposed here: #316 (comment)
In that context it made sense, as the default of a feature was not clear. If we use the empty dictionary to mean `enabled` (which I find very hard to consume), the default is always `disabled`.
All right. I will drop it.
Dropped.
spec:
  certificateRotateStrategy: {}
  configuration:
@jcanocan can you follow up on empty-map-means-true discussion?
- Get a clear understanding about the features status.
- Establish how the features status swapping should work.
- Describe how the system should react in these scenarios:
  - A feature in KubeVirt is set to state A and a VMI requests the feature to be in state B.
Sorry, I'm not sure what do you mean. Could you please elaborate?
We have many feature gates that do not comply with https://github.com/kubevirt/community/blob/main/design-proposals/feature-lifecycle.md . Some of them expose a VM/VMI API, some does not. But imo that's not the main issue. All feature gates must comply. They all must either graduate or be reverted, and those that require a configurable in kubevirt cr must define it - not only those with VM/VMI APIs.
[APPROVALNOTIFIER] This PR is NOT APPROVED
In order to adjust the scope of this design proposal, the alerting concern has been dropped, as well as the configurable checks for features that do not expose VMI API fields. Those concerns will be addressed in a follow-up design proposal.
The checking in the VM controller could be added to let the user know if a VM has requested a feature configuration which
is different from what it is specified in the KubeVirt CR. This will provide an early feedback to the user before starting
the VM. However, it should not prevent the user to start the VM, the VMI controller should take care of checks preventing
I don't understand the English here, e.g., what is the "early feedback" you refer to.
Can you rephrase the paragraph to ensure that:
- The controller must not start a VM that requires a feature that is not available in the cluster.
- The fact that a VM cannot start despite its owner asking to do so must be reported by the controller in the `status` field of the VM.
Done.
Let me clarify what I wanted to express with "early feedback". Let's suppose that `feature-b` is disabled cluster-wide and the user creates a VM object requesting this feature. The VM is not requested to start; just the VM object is created. The idea is that the VM controller updates the `status` field, reflecting that this VM won't start because it is requesting a feature that is not available.
I'm not sure if this would be helpful for users or would just create more noise.
Ah. You would like to inform a stopped VM that it won't be able to generate a VMI. This can be quite expensive (as it requires you to track all stopped VMs for no immediate reason). I would not make this a requirement.
AFAIK, it is not that expensive. It's a check in the VM controller reconciliation loop. What is true is that if you have a stopped VM and disable the feature, the reconciliation loop is not triggered immediately; it takes some time. I will make this optional.
Done. Could you please add the lgtm again if you are fine with the change? Thanks.
Do I understand correctly that if a VM is unable to start, because it is requesting a non-valid configuration, a VMI that remains in pending state is still created? Wouldn't it be better to not create a VMI if we know early that it won't be able to start?
The downward metrics feature exposes some metrics about the host node where the VMI is running to the guest. This
information may be considered sensitive information.
If there is no mechanism to disable the feature, any VMI could request the metrics and inspect information that, in some
cases, the admin would like to hide, creating a potential security issue.
"Need to know principle" :)
Added this.
configB: string
[...]
```
Please note that if the feature spec field is not present, the feature is assumed to be completely disabled.
Not true for common-instancetypes deployment, which is enabled by default (the opposite).
Dropped. IMO, this could be left up to the developers to decide.
## Update/Rollback Compatibility

The feature configurables should not affect forward or backward compatibility once the feature GA. A given feature,
after 3 releases in Beta, all feature gates must be dropped. Those features that need a configurable should define it ahead
A given feature, after 3 releases in Beta, all feature gates must be dropped.
Do we want to mention this here? I was under the impression this proposal is not about feature gates?
A given feature, after 3 releases in Beta, all feature gates must be dropped.
Do we want to mention this here? I was under the impression this proposal is not about features gates?
I think it is important to mention. Some old features have been using gates to configure whether they are off or on. This has to change - features that cannot be always on must add a configurable so that we can graduate the feature and drop the feature gate.
ACK
No, it should not create the VMI object. Only if the VMI object is created directly will it remain in Pending state.
/lgtm
Thanks! Time to get this merged.
## Update/Rollback Compatibility

The feature configurables should not affect forward or backward compatibility once the feature GA. A given feature,
after 3 releases in Beta, all feature gates must be dropped. Those features that need a configurable should define it ahead
ACK
In one sense it is better to fail the creation of the VMI (fewer redundant objects in the system); but in another sense it is worse, because this would force the VM controller to replicate logic that is already in the VMI controller. I prefer the cleaner design, with a clear separation of responsibilities.
Could you please expand on how the "clear separation of responsibilities" would look?
The sync loop of the VMI controller is responsible for checking whether the VMI can start on the cluster. It must have this code for people defining their VMI without a VM, or changing cluster config while starting a VM. Making another component (such as the VM controller) know about this logic mixes the responsibilities. Architecturally speaking, it is better for each component to do one thing and do it well. So let us have the VM controller create the VMI object, which would float aimlessly until the VM is stopped or the feature is enabled. The VM controller should still bubble up the reason for the failure in the `status` field of the VM.
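The "single place of responsibility" argument can be sketched in Go. All types and names here are hypothetical stand-ins (the real controller works on `virtv1.VirtualMachineInstance` and the cluster config); the point is that feature availability is validated only in the VMI sync loop, which leaves the VMI in Pending with a status message rather than rejecting or deleting it:

```go
package main

import "fmt"

// Hypothetical stand-ins for the real KubeVirt objects.
type ClusterConfig struct {
	EnabledFeatures map[string]bool
}

type VMI struct {
	Name              string
	RequestedFeatures []string
	Phase             string
	StatusMessage     string
}

// syncFeatureChecks validates feature availability during VMI reconciliation.
// A VMI requesting a disabled feature stays Pending (it is never deleted)
// until the feature is enabled or the owner removes the request.
func syncFeatureChecks(vmi *VMI, cfg ClusterConfig) {
	for _, f := range vmi.RequestedFeatures {
		if !cfg.EnabledFeatures[f] {
			vmi.Phase = "Pending"
			vmi.StatusMessage = fmt.Sprintf("feature %q is not enabled in the KubeVirt CR", f)
			return
		}
	}
	vmi.Phase = "Scheduling"
	vmi.StatusMessage = ""
}

func main() {
	cfg := ClusterConfig{EnabledFeatures: map[string]bool{"downwardMetrics": false}}
	vmi := &VMI{Name: "vm-a", RequestedFeatures: []string{"downwardMetrics"}}
	syncFeatureChecks(vmi, cfg)
	fmt.Println(vmi.Phase, "-", vmi.StatusMessage)
}
```

With this split, the VM controller only creates the VMI and mirrors the failure reason into the VM's status; it never needs its own copy of the availability check.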
This design document states how features that require a mechanism to change their state, e.g., enabled/disabled, should be implemented in KubeVirt. Signed-off-by: Javier Cano Cano <jcanocan@redhat.com>
Got it. Thanks for the clarifications.
What this PR does / why we need it:
This design document states how features that require a mechanism to change their state, e.g., enabled/disabled, should be implemented in KubeVirt.
Special notes for your reviewer:
This is the current list of GA'd features present in KubeVirt/KubeVirt which are still using feature gates and are shown as
configurables in HCO:
This is the current list of GA'd features present in KubeVirt/KubeVirt which are still using feature gates and are always
enabled by HCO:
Please note that only feature gates included in KubeVirt/KubeVirt are listed here.
Checklist
This checklist is not enforced, but it's a reminder of items that could be relevant to every PR.
Approvers are expected to review this list.
Release note: