FSx driver does not support using different persistent volumes on the same FSx instance #151
Comments
I've been working on some improvements to fix this. I'm working on the CLA process now, but wanted to start a dialog on the fix and ask whether this is the kind of change the team would accept.
Would it solve the StatefulSet use case? I am not too familiar with StatefulSets; is it possible to create a PVC template where the volumeHandle is the same but the volumeAttributes differ per pod in the set?
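For context, a minimal sketch (not from this thread; the StorageClass name and size are assumed): a volumeClaimTemplates entry only describes the claim itself, while volumeHandle and volumeAttributes are fields of the PersistentVolume, so the template alone cannot vary them per pod.

```yaml
# Sketch only (names and sizes are assumed): the volumeClaimTemplates section
# of a StatefulSet. It can only describe the claim itself (class, access mode,
# size). volumeHandle and volumeAttributes are fields of the PersistentVolume,
# so they cannot be varied per pod from here; each pod's claim would have to
# bind to its own pre-created PV.
volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteMany"]
      storageClassName: fsx-sc   # assumed StorageClass backed by fsx.csi.aws.com
      resources:
        requests:
          storage: 1200Gi
```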
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Add my vote. We were looking at switching from EFS to FSx for Lustre for some workloads for which EFS is completely unsuitable, but having to allocate a separate FSx filesystem with a minimum of 1.2 TiB for each PV is prohibitively expensive.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-contributor-experience at kubernetes/community.
Rotten issues close after 30d of inactivity. Send feedback to sig-contributor-experience at kubernetes/community.
@fejta-bot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Any updates on this?
/kind bug
What happened?
The current FSx driver does not support mounting different persistent volumes on the same FSx instance. This blocks us from reusing the same FSx instance across different persistent volumes.
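To make the limitation concrete, here is a hedged sketch (the filesystem ID, DNS name, and mountname are placeholders, laid out like the driver's static-provisioning example): two PersistentVolumes that reference the same FSx for Lustre filesystem. As reported here, both end up exposing the same filesystem root rather than distinct directories.

```yaml
# Illustrative only; IDs and names are placeholders.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: app-a-data
spec:
  capacity:
    storage: 1200Gi
  accessModes: ["ReadWriteMany"]
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: fsx.csi.aws.com
    volumeHandle: fs-0123456789abcdef0
    volumeAttributes:
      dnsName: fs-0123456789abcdef0.fsx.us-east-1.amazonaws.com
      mountname: fsx
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: app-b-data
spec:
  capacity:
    storage: 1200Gi
  accessModes: ["ReadWriteMany"]
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: fsx.csi.aws.com
    volumeHandle: fs-0123456789abcdef0   # same filesystem as app-a-data
    volumeAttributes:
      dnsName: fs-0123456789abcdef0.fsx.us-east-1.amazonaws.com
      mountname: fsx
```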
Similar issues with EFS in the past:
kubernetes-sigs/aws-efs-csi-driver#100
kubernetes-sigs/aws-efs-csi-driver#105
kubernetes-sigs/aws-efs-csi-driver#145
kubernetes-sigs/aws-efs-csi-driver#167
kubernetes/kubernetes#91556
What you expected to happen?
A new FSx persistent volume should create a dedicated folder under the "/" path, named after the persistent volume. Each StatefulSet pod's PV would then be mounted on its own folder in the root tree of the FSx instance.
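A workaround sketch, not the driver change requested above (the names, image, and the fsx-claim PVC are assumed): share a single FSx-backed PVC across the whole StatefulSet and use subPathExpr so each pod writes into its own directory under the filesystem root.

```yaml
# Workaround sketch only; names are placeholders, fsx-claim is an assumed PVC
# bound to an FSx for Lustre PV.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: app
spec:
  serviceName: app
  replicas: 3
  selector:
    matchLabels: {app: app}
  template:
    metadata:
      labels: {app: app}
    spec:
      containers:
        - name: app
          image: busybox
          command: ["sleep", "infinity"]
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
          volumeMounts:
            - name: shared-fsx
              mountPath: /data
              subPathExpr: $(POD_NAME)   # /data in app-0 maps to <fsx root>/app-0
      volumes:
        - name: shared-fsx
          persistentVolumeClaim:
            claimName: fsx-claim         # assumed PVC bound to the FSx PV
```

This relies on the core Kubernetes subPathExpr field rather than on the FSx driver, so every PV on the filesystem still sees the same root; the pods merely write into different directories under it.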
How to reproduce it (as minimally and precisely as possible)?
Run df -kh within the container hosting the FSx mount point and observe that different persistent volumes share the same mount point.
Anything else we need to know?:
Environment
Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.6", GitCommit:"d32e40e20d167e103faf894261614c5b45c44198", GitTreeState:"clean", BuildDate:"2020-05-20T13:16:24Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"17+", GitVersion:"v1.17.9-eks-4c6976", GitCommit:"4c6976793196d70bc5cd29d56ce5440c9473648e", GitTreeState:"clean", BuildDate:"2020-07-17T18:46:04Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Driver version:
{
"driverVersion": "v0.4.0-dirty",
"gitCommit": "4cca48b4b005e50685a9117bec75d76014b8ad72",
"buildDate": "2020-06-28T06:40:22Z",
"goVersion": "go1.13.4",
"compiler": "gc",
"platform": "linux/amd64"
}