Spec ModifyVolume for QoS kep-3751 #544
Conversation
/assign @bswartz
spec.md
Outdated
| Exceeds capabilities | 3 INVALID_ARGUMENT | Indicates that the CO has specified capabilities not supported by the volume. | Caller MAY verify volume capabilities by calling ValidateVolumeCapabilities and retry with matching capabilities. |
| Volume does not exist | 5 NOT_FOUND | Indicates that a volume corresponding to the specified `volume_id` does not exist. | Caller MUST verify that the `volume_id` is correct and that the volume is accessible and has not been deleted before retrying with exponential back off. |
| Volume in use | 9 FAILED_PRECONDITION | Indicates that the volume corresponding to the specified `volume_id` could not be updated because it is currently published on a node but the plugin does not have ONLINE expansion capability. | Caller SHOULD ensure that the volume is not published and retry with exponential back off. |
| Unsupported `capacity_range` | 11 OUT_OF_RANGE | Indicates that the capacity range is not allowed by the Plugin. More human-readable information MAY be provided in the gRPC `status.message` field. | Caller MUST fix the capacity range before retrying. |
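The recovery-behavior column above can be read as a small decision table for the CO. As a rough illustration (not part of the spec; the function name and return strings are hypothetical), a CO-side sketch in Python might classify the gRPC codes like this:

```python
# Illustrative sketch: mapping the gRPC status codes from the error table
# to the recovery behavior the table prescribes for the caller (CO).
# The numeric values follow the gRPC convention cited in the spec.
INVALID_ARGUMENT = 3
NOT_FOUND = 5
FAILED_PRECONDITION = 9
OUT_OF_RANGE = 11

def retry_strategy(code):
    """Return a (hypothetical) recovery action for a given status code."""
    if code == INVALID_ARGUMENT:
        # Verify capabilities via ValidateVolumeCapabilities, then retry.
        return "validate-capabilities-then-retry"
    if code in (NOT_FOUND, FAILED_PRECONDITION):
        # Verify the precondition (volume exists / is unpublished),
        # then retry with exponential back off.
        return "verify-precondition-then-backoff"
    if code == OUT_OF_RANGE:
        # The request itself must be fixed; blind retries will not help.
        return "fix-request-no-retry"
    return "unknown"
```

This is only a reading aid for the table; a real CO would use its gRPC library's status-code constants.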
We should add `Unsupported iops` and `Unsupported throughput`. Possibly also add these new fields to ValidateVolumeCapabilities.
Currently `capacity_range` is not listed in ValidateVolumeCapabilities. I am not sure if I should add these to the ValidateVolumeCapabilities API, but I will update this table.
Oh, I think you meant adding it to message VolumeCapability; will add. Thanks!
spec.md
Outdated
|-----------|-----------|-------------|-------------------|
| Exceeds capabilities | 3 INVALID_ARGUMENT | Indicates that the CO has specified capabilities not supported by the volume. | Caller MAY verify volume capabilities by calling ValidateVolumeCapabilities and retry with matching capabilities. |
| Volume does not exist | 5 NOT_FOUND | Indicates that a volume corresponding to the specified `volume_id` does not exist. | Caller MUST verify that the `volume_id` is correct and that the volume is accessible and has not been deleted before retrying with exponential back off. |
| Volume in use | 9 FAILED_PRECONDITION | Indicates that the volume corresponding to the specified `volume_id` could not be updated because it is currently published on a node but the plugin does not have ONLINE expansion capability. | Caller SHOULD ensure that the volume is not published and retry with exponential back off. |
| Volume in use | 9 FAILED_PRECONDITION | Indicates that the volume corresponding to the specified `volume_id` could not be updated because it is currently published on a node but the plugin does not have ONLINE expansion capability. | Caller SHOULD ensure that volume is not published and retry with exponential back off. |
| Volume in use | 9 FAILED_PRECONDITION | Indicates that the volume corresponding to the specified `volume_id` could not be updated because it is currently published on a node but the plugin does not have ONLINE modify capability. | Caller SHOULD ensure that volume is not published and retry with exponential back off. |
My instinct would be to solve this "mutable parameters" problem more generally vs first-classing specific tunings:

```protobuf
message CreateVolumeRequest {
  ...
  // Tunings are like parameters but may be modified over time.
  // They MUST be constant over time for replays of idempotent RPCs.
  // To change tunings, see e.g. ModifyVolume.
  // OPTIONAL.
  map<string,string> tunings = X;
  ...
}
```

One downside to this, of course, is that a parameter and a tuning could be specified with the same name, which would be very confusing. Perhaps that should be disallowed (which is rather easy to implement). Another consequence is that a plugin allowing something like "iops" as either a parameter or a tuning now needs to remember which way it was specified: mutable, or not? And this is not very friendly for stateless plugins (or even stateful plugins that rely on a fixed backend metadata schema that doesn't accommodate additional tweaks). So maybe this isn't the best idea either, but it feels a bit closer to the end goal.
The spec doesn't really have a place to capture use cases, but our intent is to first-class the concept of volume performance so that workloads have a portable definition they can rely on across different storage vendors. Operators today have to encode specific storage system APIs in order to tune their performance, which is not scalable across the myriad of CSI drivers today.
Thanks for the use case. BTW, volume expansion is a kind of "modify volume" API, and it got complicated fairly quickly. What leads us to believe that tuning performance parameters will be less tricky than expansion?
--
James DeFelice
A lot of the complication for capacity expansion was because of the coordination across Controller and Node operations. I do not believe volume performance has a similar issue, but you're right that it does bring other complications.
For those that didn't attend, this PR was the subject of discussion during our last CSI meeting. I believe the major decision we have to make is whether to surface QoS parameters as first-class fields (which will require us to pick exactly which fields and develop definitions that are acceptable) or whether to keep QoS parameters as opaque parameters, and then try to develop a mechanism to support modifying opaque parameters.

I personally lean towards the opaque parameter strategy, because it avoids one of the hardest problems, which is coming up with a set of field definitions that is: (1) powerful enough to actually solve the problems, (2) flexible enough to accommodate several different vendor implementations, and (3) simple enough to not be a burden to developers and users. I acknowledge that there are significant challenges and drawbacks to the opaque parameter strategy, but the benefit, if we can get it right, is that it would support many other use cases beyond QoS.

To convince me that the first-class field approach is workable, I'd like to invite some more vendors to the discussion and see if we can come up with mutually acceptable definitions of throughput and IOPS. I'd like to see the 3 major cloud vendors (Google/Amazon/Microsoft) and some of the larger storage companies (NetApp/EMC/Pure) agree to something. I'm happy to try to facilitate those meetings. Once we go forward with a first-class field implementation it will be hard to change, so we should try hard to get something good enough the first time around. I'm also open to @jdef's alternative.
IMO portability and standard APIs are one of the major reasons Kubernetes and CSI have become so successful. Performance tuning is something that is ubiquitously supported across storage systems, so I think it is worth the effort to see if we can come up with an agreeable standard. Besides portable configuration, it can also enable observability, scheduling, and auto-scaling use cases in the future.

+1 on getting more vendors participating in the discussions. @sunnylovestiramisu has already involved multiple cloud providers. Perhaps we should start a separate doc to collect all the different data points, broadcast this discussion to the mailing list to get folks to add their information to the doc, and then set up another meeting where we can discuss further. @sunnylovestiramisu can you start that doc, and then we can get @bswartz to help with broadcasting and facilitating further discussion?
I'd be happy to participate in this representing Microsoft. We strongly prefer making iops and throughput first class PVC fields alongside capacity. We feel that this is the best approach because:
CSI Spec for ModifyVolume doc lists two concerns with first class fields:
Has the decision been taken to go with the class-based approach? If not, how can I help? I feel that the class-based approach would not be a good fit for Azure, and that although this is not our preference, we should investigate alternatives, perhaps similar to awslabs/volume-modifier-for-k8s (github.com). While annotations wouldn't be portable across SPs, neither are the opaque params in classes, and unsupported annotations would simply be ignored. Quotas would need to be enforced by the SP instead. Happy to discuss further, and please correct me if I've misunderstood anything.
For which IO patterns does Azure guarantee a minimum throughput? For arbitrary chunk sizes, or just for e.g. "writes of 8KB blocks"? With or without buffering? I very much doubt that throughput can be the same regardless of the chunk size, so if this becomes a first-class field, the definition of that field would have to include how to measure throughput. Otherwise it's pretty meaningless and therefore not useful. Just my two cents as an interested bystander...
Azure Ultra Disk and Premium SSD v2 have 4k sector sizes by default, and while they allow dynamic configuration of iops/throughput, guarantees for specific IO patterns are not published. I appreciate that each vendor or disk type may have slightly different behaviour, but does the k8s definition need to be that precise? Users understand that
Above you said "a request for iops/throughput should be the guaranteed minimum", but now it sounds more like "you can request something here, but it depends on the vendor what that actually means, and it might not even be guaranteed". Doesn't that defeat the point of having this as a first-class field that users can then rely upon across vendors?
The intent was to see if a mutually acceptable definition of throughput and IOPS could be reached. I suggested guaranteed minimum as this matches the existing semantics of capacity requests. IMO there is much more value to the user if we could reach an agreement and avoid non-portable opaque fields.
Did you run "make"? That should generate changes in csi.proto and csi.pb.go, and those should be submitted as well. See example here: #539
spec.md
Outdated
```
@@ -843,6 +848,10 @@ message CreateVolumeRequest {
  // VOLUME_ACCESSIBILITY_CONSTRAINTS plugin capability, the SP MAY
  // choose where the provisioned volume is accessible from.
  TopologyRequirement accessibility_requirements = 7;

  // This field is OPTIONAL. This allows the CO to specify the
  // volume performance class parameters to apply.
```
I agree on not using K8s-specific terms, but we need specific and clear verbiage in the CreateVolume text that explains what this new field is for, and how it should be used, BOTH for the CO and the SP.
I am particularly worried about the potential for overlap between keys in the parameters and the mutable_parameters list. For example:
- What should happen if a CO specifies a key for a mutable parameter in the existing parameters field?
- What should happen if a CO specifies a key for an immutable parameter in the mutable parameters field?
- What should happen if a CO specifies a key in both maps?
- What should happen if there are different values associated with the two keys?
- Is it valid for an SP to support the same key having different meanings in the different maps?
What I'm getting at is: how is having separate maps for existing parameters and mutable parameters functionally different from putting them in the same map? To be clear: I think they should be separate, because I think it gives us a better shot at detecting and reporting user errors, but we should think through what those errors are likely to look like, and how they will be caught and reported, and what kind of guidance will minimize the likelihood of problems.
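One concrete way the separate-maps design helps with error detection, as raised above, is that the SP can reject overlapping keys outright. A minimal Python sketch under stated assumptions (the function name and error wording are hypothetical, not from the spec):

```python
# Hypothetical SP-side check: reject CreateVolume requests where the same
# key appears in both the immutable `parameters` map and the
# `mutable_parameters` map, which is the first ambiguity listed above.
def validate_no_overlap(parameters, mutable_parameters):
    """Raise ValueError if any key is present in both maps."""
    overlap = sorted(set(parameters) & set(mutable_parameters))
    if overlap:
        raise ValueError(
            "keys present in both parameters and mutable_parameters: "
            + ", ".join(overlap))
```

With a single combined map this class of user error would be silently unrepresentable rather than detectable, which is the trade-off being weighed in the comment above.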
```
  // parameter. Absent keys that were previously assigned a value MUST
  // be interpreted as a deletion of the key from the set of mutable
  // parameters.
  map<string, string> mutable_parameters = 3;
```
> Absent keys that were previously assigned a value MUST be interpreted as a deletion of the key from the set of mutable parameters.

Does this mean that calling ControllerModifyVolume without specifying anything in mutable_parameters means the user is trying to reset these mutable_parameters to whatever default values the driver/provider supports? What if the storage provider does not support going back to default values?

It also feels like this would generally mean the driver should keep track of each mutable_parameter it supports and reset them if the keys are unspecified. I am not sure if this will be possible for every driver.
This wording was aimed mostly at the CO. I wanted to make it clear that if there are 2 mutable parameters "foo" and "bar" which were previously set, and the CO wants to modify "foo", it cannot just send a modify with the new value for "foo" and omit "bar"; it needs to send the whole updated map of mutable parameters.
The goal here is to make it clear that the CO must implement a simple reconciler loop that always passes the whole map any time the map changes, and not something more complex where the CO computes a diff of what changed and sends only the diff.
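The "simple reconciler loop" described above can be sketched in a few lines of Python (the request shape and helper name are hypothetical illustrations, not spec-defined):

```python
# Sketch of the intended CO behavior: always send the COMPLETE desired
# map of mutable parameters, never a computed diff against prior state.
def build_modify_request(volume_id, desired_mutable_parameters):
    # No diffing against previous state: the whole desired map goes out.
    return {
        "volume_id": volume_id,
        "mutable_parameters": dict(desired_mutable_parameters),
    }

# Changing only "foo" still requires sending "bar" as well.
# (Previous state was foo=100, bar=enabled; only foo changes.)
desired = {"foo": "200", "bar": "enabled"}
request = build_modify_request("vol-1", desired)
```

The point of the design is that the CO never needs to know the previous state; it only reconciles toward the current desired map.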
From the k8s perspective, since values in ControllerModifyVolume will be coming from the respective VAC class, this means that if a k8s admin wants a volume to be associated with a new VAC class, they must specify all parameters from the previous VAC class, or the CO should compute the parameters of ControllerModifyVolume by using parameters from both the old and new VAC class.

Another problem I have is with the wording "MUST be interpreted as a deletion of the key". I don't know how drivers will interpret this and what they are supposed to do.
If you have a better recommended wording, I'm flexible here. I just recognized that with the previous spec wording, if you wanted to change two different values, you might have thought that you could do it as two separate modify calls, each with one key in the map. I wanted to make it clear that the CO is supposed to send everything it knows each time, and not attempt to modify parameters one at a time.

I agree that it's not obvious what it means to delete/unset a value, and mandating that the SP do so is not my intent. My point was that the CO should not omit values that it knows but does not intend to modify.
> From the k8s perspective, since values in ControllerModifyVolume will be coming from the respective VAC class, this means that if a k8s admin wants a volume to be associated with a new VAC class, they must specify all parameters from the previous VAC class, or the CO should compute the parameters of ControllerModifyVolume by using parameters from both the old and new VAC class.
My intent is that if the VAC changes, the reconciler would just send the parameters for the new VAC. The contents of the old VAC would be irrelevant, so neither the CO nor the SP would need to know them in order for modify to complete successfully.
I still think that there needs to be a way to unset previously-set mutable keys, which could be interpreted by the SP as reverting to some global default (defined elsewhere) or disabling some option (which was previously enabled).
I was thinking of adding a boolean flag to accomplish this if needed, but I think a cleaner approach might be to add a new RPC call, something like ControllerResetVolumeModification, which does what you say. This RPC, I imagine, would take no map-like parameters and would reset mutable parameters to some kind of global default.

But for now, IMO, I am not sure we need it, and we can always come back and add it in the future. This keeps the ControllerModifyVolume RPC call relatively simple.
So the things you suggest would also work, but they seem more complex. What's the downside of requiring the CO to pass all of the mutable options to each modify call and allowing the SP to interpret absent keys how it likes?
The point I'm trying to make here is that the CO implementation can be very simple and this detail doesn't matter (to the CO) because all the options are opaque anyways. The one thing I don't want to allow is updating a subset of the options in a single call, because I don't believe there is any benefit to allowing that.
With my Kubernetes hat on, I think it should not matter what VAC was set on a PVC before; only the current VAC is important.

In other words, if a volume is created with VAC bronze, then modified to silver and then to gold, it should have the same QoS parameters as if it had been created with gold right away, regardless of whether any parameter was or was not dropped from the VACs. Otherwise it will be very difficult to see what QoS parameters a volume actually has. And the current proposal here allows this behavior with little to no code on the CO side.

Yes, it puts more responsibility on cluster admins; they must account for a missing parameter being reset to its default, or even always fill in all parameters in all VACs.
One idea: we could somehow clarify that ControllerModifyVolume should result in the same volume that would be created by ControllerCreateVolume with the same parameters as during the original ControllerCreateVolume plus the new mutable_parameters. IMO that would make it very clear to the SP what to do; on the other hand, SPs may need to store the original parameters somewhere (in volume_context?) so they can return to them when a corresponding key goes missing from mutable_parameters.
That may not work for some SPs. For example, with GCP's hyperdisk-extreme you can specify just the size of the disk, and a default IOPS will be set for that disk even if you did not specify iops in mutable_parameters. But later you can actually modify the disk to have higher or lower IOPS, as long as it is within the allowed range.
```
  // parsing and validating these parameters. COs will treat these
  // as opaque. Plugins SHOULD treat these as if they take precedence
  // over the parameters field.
  map<string, string> mutable_parameters = 8 [(alpha_field) = true];
```
Can there be a CSI driver which can't apply all mutable_parameters during volume creation but can only apply some of them via a separate ControllerModifyVolume RPC call?

In terms of integration with the CO, this would mean that specifying problematic mutable_parameters during volume creation could cause PV provisioning to fail. I am fine if we choose not to support this use case, but I wonder if it is worth calling out, so that something like the csi-sanity tests could validate a driver for CSI spec compliance.
I don't think so. It's legal to call Modify right after Create, so an SP could just implement Create with mutable params as a create without mutable params followed immediately by a Modify (internally).
Requiring the CO to supply the mutable params at create time gives the SP a chance to avoid any expensive modifications by creating the volume correctly from the start. In theory we could have omitted the param, but including it allows optimizations that wouldn't otherwise be possible. I see absolutely no downside from the SP perspective.
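The create-then-modify equivalence described above can be sketched as follows. This is a hypothetical illustration of SP-internal structure (the `backend_create`/`backend_modify` helpers are assumptions, not spec APIs):

```python
# Sketch: an SP that cannot apply mutable_parameters atomically at create
# time may implement CreateVolume internally as a plain create followed
# by the same code path ControllerModifyVolume would take.
def create_volume(name, parameters, mutable_parameters,
                  backend_create, backend_modify):
    # Create the volume with only the immutable parameters.
    volume_id = backend_create(name, parameters)
    if mutable_parameters:
        # Then apply the mutable parameters as an internal modify.
        backend_modify(volume_id, mutable_parameters)
    return volume_id
```

Since calling Modify right after Create is legal anyway, this internal decomposition is behaviorally indistinguishable to the CO; passing mutable params at create time merely lets an SP skip the second step when it can provision correctly up front.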
csi.proto
Outdated
```
  // COs SHALL NOT provide any values in mutable_parameters if the
  // capability is not enabled.
```
You have to mention the capability by name here.
```
  // This field SHALL NOT be specified unless the SP has the
  // MODIFY_VOLUME plugin capability.
```
csi.proto
Outdated
```
  // Plugin specific parameters to apply, passed in as opaque key-value
  // pairs. This field is OPTIONAL. The Plugin is responsible for
  // parsing and validating these parameters. COs will treat these
  // as opaque. COs MUST specify the intended value of every mutable
  // parameter. Absent keys that were previously assigned a value MUST
  // be interpreted as a deletion of the key from the set of mutable
  // parameters.
```
```
  // Plugin specific parameters to apply, passed in as opaque key-value
  // pairs. This field is REQUIRED. The Plugin is responsible for
  // parsing and validating these parameters. COs will treat these
  // as opaque. The CO SHOULD specify the intended values of all mutable
  // parameters it intends to modify. SPs MUST NOT modify volumes based
  // on the absence of keys, only keys that are specified should result in
  // modifications to the volume.
```
We talked about how keys mapped to empty strings could be used to delete mutable_parameter entries. I don't see language describing that. Did thinking shift? If not, perhaps @bswartz can provide a sentence around it
I feel nervous about declaring a field's content to be opaque, and then turning around and saying that the empty string means something specific. If a vendor wanted, they could define the empty string to have some other particular meaning, and it should not matter to either CSI or the CO.

I mainly wanted to ensure that it was possible to clear a value through the CSI interface, if a vendor wants to rely on that ability for any purpose, such as disabling a previously enabled feature. As long as CSI commits to transmitting keys with empty string values from the CO to the SP like any other opaque string value (and we believe it already does), I think nothing additional is required.

Do you think we need to spell out that sending a key with an empty string value and omitting the key will lead to different results? I thought it was clear, but we can add a sentence saying that. I just don't want to mandate how the SP should interpret the key-with-empty-value case.
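The key-with-empty-value vs. absent-key distinction being debated above is easy to state concretely. A tiny Python illustration (using plain dicts as a stand-in for the proto map field):

```python
# A key carrying an empty string is still present in the map and is
# transmitted like any other opaque value; an omitted key simply is not
# in the map at all. The SP can distinguish the two cases.
with_empty_value = {"iops": ""}   # key sent, value is the empty string
key_omitted = {}                  # key not sent at all

key_is_present = "iops" in with_empty_value
key_is_absent = "iops" not in key_omitted
empty_value = with_empty_value.get("iops")
```

What the SP does with an empty value remains vendor-defined under the opaque-parameters position above; the only guarantee needed from CSI is that the two cases arrive differently.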
It feels very strange to NOT have a standard, explicit way to "drop" a mutable parameter entry. If there's a backing "parameter" with the same key, then "drop" could also be just setting the mutable KVP to match the non-mutable entry. But that falls flat where there's no parameter entry to override. A generic "undo" or "rollback" capability (from a CO's perspective) also becomes more difficult without an explicit spec with respect to deletion.

I agree with your comment re: opaqueness. A (verbose) alternative would be a companion field `repeated string delete_mutable_parameters`. And then, perhaps, renaming "mutable_parameters" of the modify-volume call to "upsert_mutable_parameters", which would yield a completely clear intent on the part of the CO: change the things in the "upsert" field and delete the things in the "delete" field (and it's invalid for the upserted keys to overlap with the deleted items). Or maybe for the modify RPC there's an embedded message type that models the intent more cleanly?

```protobuf
message ControllerModifyVolumeRequest {
  ...
  message ParameterChanges {
    map<string,string> upserted = 1;
    repeated string deleted = 2;
  }
  ParameterChanges mutable_parameter_changes = N;
  ...
}
```

Unrelated to the above, but something that occurs to me as I review this PR again... GetCapacityRequest has "parameters" but not "mutable_parameters", which seems like an oversight.
How does "undo" or "rollback" work in general? Is it another UpdateVolume call to the SP? Under what condition do we roll back? I was thinking most SPs fail the UpdateVolume call if the request cannot be satisfied and the volume stays as it was; then what do we roll back to?
Logical op that a CO could offer, not an SP function.
The tricky thing is that the rollback at the Kubernetes level is via changing back to the original VAC. If we add this rollback logic to the CSI spec, we are giving users two different ways to roll back. And what happens if they conflict?
So I don't like the idea of an explicit "unset/delete" parameter, because using it would require the CO to compute the diff between the old state and the new state, and that's the thing we've been trying to avoid. I want the CO to be able to reconcile mutable parameters with no knowledge of what the previous state was.

We also need to leave the door open for parameters managed outside CSI, because that's how these things are done today, and we don't want to break that for backwards compatibility. This new feature should allow COs to modify values, but it should also allow COs to continue to ignore values managed outside CSI if the administrator chooses.

Regarding GetCapacity -- we've already said that mutable parameters should be nondisruptively changeable, which suggests that whatever capacity the SP has must be compatible with all the mutable options. It would not make sense for the mutable parameters to have any impact on capacity then.
Can we get approval by Wednesday, August 2nd to unblock some implementation tasks? @msau42 suggested we move forward with implementation, and since this is still an alpha API, we can make modifications before beta.
/lgtm
> Regarding GetCapacity -- we've already said that mutable parameters should be nondisruptively changeable, which suggests that whatever capacity the SP has must be compatible with all the mutable options. It would not make sense for the mutable parameters to have any impact on capacity then.

It may make sense to include this sentiment in the GetCapacity docs then. Because someone, at some point, is going to ask for this, and it will be easier to answer that question if we write it down.
On Tue, Aug 1, 2023, Niels de Vos commented on the ControllerModifyVolume errors section in spec.md:

> It is probably imaginable that a change in parameters requires re-staging the volume to apply the parameters to a new connection (think of a hostname change of an NFS server). It may not be required immediately, but the change of the parameters only gets into effect once the volume has been disconnected and newly connected. A FAILED_PRECONDITION is not sufficient in that case, I think. Maybe the ControllerModifyVolumeResponse could include an attribute to signal this?
@jdef How about we add one more sentence after the GetCapacity description?
A couple of small nits, otherwise LGTM.
Address those, and I will go ahead and approve/merge this PR.
We can track @jdef's remaining feedback in the issue you opened (to be fixed before we move this feature to beta/GA).
This RPC allows the CO to change mutable key attributes of a volume.

This operation MUST be idempotent.
The new mutable parameters in ControllerModifyVolume can be different from the existing mutable parameters.
What does this mean?
This means that, for example, if the volume's existing mutable parameters have a field iops set to 500, the new mutable parameters in ControllerModifyVolume can set iops to something different, like 1000.
/lgtm
/approve
Since most of the big issues have been addressed, and the remaining issues are smaller wording-related changes, I am going to go ahead and merge this change. We can track the remaining issues in #549 and ensure they are addressed before this feature moves to beta or GA.
What type of PR is this?
What this PR does / why we need it:
Spec change for kep-3751 kubernetes/enhancements#3780
Which issue(s) this PR fixes:
Fixes #
Special notes for your reviewer:
Track additional comments we need to address before going Beta: #549
Does this PR introduce an API-breaking change?: