The ontap drivers should have more intelligent volume placement #64
Comments
To be clear, most of these would apply to all ONTAP drivers. The Trident scheduler is definitely a rich target environment.
I will +1 this request on behalf of many interactions I've had and add some additional items. I am often asked for several capabilities around the storage pool selection logic:
Well, if we talk about placement logic, I'll add a few more items to consider (based on actual customer requirements):
Combined with the points already mentioned by others, and weighted and sorted according to the specific requirements the customer has. In the end, we'd either need a customizable rule engine in Trident, or a flexible mechanism for attaching other tools. If we decide to go with the latter, I'd vote for WFA in addition to the AppDM and NSLM already mentioned, since (today) it is the only tool flexible enough to build a custom solution that takes all of the above items into consideration.
+1 for this, as I'm hitting issues when using Trident and Cloud Manager together: it's still not picking the correct pool, which is automatically generated for me.
+1, same request: we have a 12-node cluster, so the point about preferring the node which holds the LIF for the SVM matters to us (in order to avoid indirect data access and get the best performance).
Also +1. Another idea: the limitAggregateUsage parameter looks at the global usage of the aggregate, not only at the capacity managed by Trident. If the aggregate is shared among different workloads, which is often the case, this parameter should be a bit more precise. An example:
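A hypothetical illustration of the point (the numbers below are invented; they simply mirror what a global-usage check such as limitAggregateUsage would see on a shared aggregate):

```go
// Hypothetical numbers only: a shared 100 TiB aggregate where other workloads
// already consume 75 TiB and Trident has provisioned just 2 TiB. A global
// limitAggregateUsage-style threshold of 80% then leaves Trident almost no
// headroom, even though Trident's own footprint is tiny.
package main

import "fmt"

func main() {
	const (
		aggregateTiB = 100.0 // total aggregate capacity
		otherTiB     = 75.0  // used by workloads outside Trident's control
		tridentTiB   = 2.0   // actually provisioned through Trident
		limitPct     = 80.0  // a limitAggregateUsage-style threshold
	)

	globalUsedPct := 100 * (otherTiB + tridentTiB) / aggregateTiB
	headroomTiB := aggregateTiB*limitPct/100 - (otherTiB + tridentTiB)

	fmt.Printf("global usage: %.0f%% (Trident's share only %.0f%%)\n",
		globalUsedPct, 100*tridentTiB/aggregateTiB)
	fmt.Printf("remaining headroom before the %.0f%% limit: %.0f TiB\n",
		limitPct, headroomTiB)
	// Only ~3 TiB remain for new PVCs, because the check counts the 75 TiB
	// consumed by other workloads against the same limit.
}
```

Accounting only for the capacity Trident itself has provisioned (or offering a separate limit for it) would avoid penalizing Trident for space it does not manage.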
The placement of FlexVols by the ontap-nas-economy driver needs to be more intelligent. The following suggestions are made as a starting point, based on customer feedback from a large-scale environment:
1. Allow Trident to be configured so that it only continues to provision qtrees in a FlexVol until the underlying aggregate is X percent full or Y percent oversubscribed.
2. If multiple aggregates are defined, prefer whichever aggregate has the most free space and is the least oversubscribed (see the sketch below).
While #1 and #2 are more important in the short term, at some point it would also be desirable to have provisioning take node headroom into account.
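As a rough sketch of what suggestions 1 and 2 could look like together; the Aggregate type, its fields, and the threshold parameters are invented for illustration and are not part of Trident's actual code:

```go
// Sketch only: a possible aggregate-selection policy combining the two
// suggestions above. All types, fields, and thresholds here are hypothetical.
package main

import (
	"errors"
	"fmt"
	"sort"
)

// Aggregate is a made-up view of an ONTAP aggregate for this example.
type Aggregate struct {
	Name           string
	SizeBytes      int64 // total capacity
	UsedBytes      int64 // used by all workloads, not just Trident
	CommittedBytes int64 // total provisioned (possibly thin) volume sizes
}

func (a Aggregate) usedPct() float64 {
	return 100 * float64(a.UsedBytes) / float64(a.SizeBytes)
}

func (a Aggregate) oversubPct() float64 {
	return 100 * float64(a.CommittedBytes) / float64(a.SizeBytes)
}

// chooseAggregate skips aggregates that are more than maxUsedPct full or more
// than maxOversubPct oversubscribed (suggestion 1), then prefers the remaining
// aggregate with the most free space, breaking ties by the least
// oversubscription (suggestion 2).
func chooseAggregate(aggrs []Aggregate, maxUsedPct, maxOversubPct float64) (Aggregate, error) {
	var candidates []Aggregate
	for _, a := range aggrs {
		if a.usedPct() <= maxUsedPct && a.oversubPct() <= maxOversubPct {
			candidates = append(candidates, a)
		}
	}
	if len(candidates) == 0 {
		return Aggregate{}, errors.New("no aggregate satisfies the placement thresholds")
	}
	sort.Slice(candidates, func(i, j int) bool {
		freeI := candidates[i].SizeBytes - candidates[i].UsedBytes
		freeJ := candidates[j].SizeBytes - candidates[j].UsedBytes
		if freeI != freeJ {
			return freeI > freeJ // most free space first
		}
		return candidates[i].oversubPct() < candidates[j].oversubPct()
	})
	return candidates[0], nil
}

func main() {
	aggrs := []Aggregate{
		{Name: "aggr1", SizeBytes: 100 << 40, UsedBytes: 70 << 40, CommittedBytes: 150 << 40},
		{Name: "aggr2", SizeBytes: 100 << 40, UsedBytes: 40 << 40, CommittedBytes: 90 << 40},
	}
	if a, err := chooseAggregate(aggrs, 80, 120); err == nil {
		fmt.Println("place the next FlexVol on", a.Name) // aggr1 is over-committed, so aggr2 wins
	}
}
```

In this example, aggr1 is excluded because it is 150% committed, and aggr2 wins on free space; the same ranking could run on each provisioning request before a new FlexVol or qtree is created.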