
FSx driver does not support using different Persistence volumes on the same FSx instance #151

Closed
aviorma opened this issue Aug 12, 2020 · 8 comments
Labels
kind/bug: Categorizes issue or PR as related to a bug.
lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments


aviorma commented Aug 12, 2020

/kind bug

What happened?
The current FSx CSI driver does not support mounting different persistent volumes on the same FSx instance.
This blocks us from reusing a single FSx instance across multiple persistent volumes.
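
For context, a minimal sketch of the kind of setup involved (the filesystem ID, DNS name, and mountname below are placeholders): two statically provisioned PVs that reference the same FSx for Lustre filesystem. With the current driver both mount the root of the filesystem, so the volumes are not isolated from each other.

```yaml
# Two static PVs backed by the same FSx for Lustre filesystem (placeholder IDs).
apiVersion: v1
kind: PersistentVolume
metadata:
  name: fsx-pv-a
spec:
  capacity:
    storage: 1200Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: fsx.csi.aws.com
    volumeHandle: fs-0123456789abcdef0
    volumeAttributes:
      dnsName: fs-0123456789abcdef0.fsx.us-east-1.amazonaws.com
      mountname: fsx
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: fsx-pv-b
spec:
  capacity:
    storage: 1200Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: fsx.csi.aws.com
    volumeHandle: fs-0123456789abcdef0   # same filesystem as fsx-pv-a
    volumeAttributes:
      dnsName: fs-0123456789abcdef0.fsx.us-east-1.amazonaws.com
      mountname: fsx                     # both PVs mount the filesystem root
```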

Similar issues were reported for EFS in the past:

kubernetes-sigs/aws-efs-csi-driver#100
kubernetes-sigs/aws-efs-csi-driver#105
kubernetes-sigs/aws-efs-csi-driver#145
kubernetes-sigs/aws-efs-csi-driver#167
kubernetes/kubernetes#91556

What did you expect to happen?

  1. A new FSx persistent volume should create a dedicated folder under the "/" path, named after the persistent volume.

  2. Each StatefulSet pod's PV should be mounted on its own folder under the root of the FSx instance (see the sketch below).
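
To make the expectation concrete, here is a sketch of what such a PV could look like if the driver honored a per-volume subdirectory. The subPath attribute below is hypothetical; the driver does not support anything like it today, which is exactly the gap this issue describes.

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: fsx-pv-app-0
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: fsx.csi.aws.com
    volumeHandle: fs-0123456789abcdef0
    volumeAttributes:
      dnsName: fs-0123456789abcdef0.fsx.us-east-1.amazonaws.com
      mountname: fsx
      # Hypothetical attribute (not supported by the driver today):
      # mount only this subdirectory, e.g. a folder named after the PV.
      subPath: /fsx-pv-app-0
```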

How to reproduce it (as minimally and precisely as possible)?

  1. Install the latest FSx driver on EKS.
  2. Create different persistent volumes from the FSx storage class.
  3. Run df -kh inside a container that mounts the FSx volumes and observe that it reports the same mount point for different persistent volumes (a reproduction sketch follows below).

Anything else we need to know?:
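
A reproduction sketch, reusing the two placeholder PVs from the first example (fsx-pv-a and fsx-pv-b): bind a PVC to each and mount both in one pod, then run df -kh inside the container; both mounts report the same Lustre filesystem and root.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fsx-claim-a
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  volumeName: fsx-pv-a          # static binding to the PV above
  resources:
    requests:
      storage: 1200Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fsx-claim-b
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  volumeName: fsx-pv-b
  resources:
    requests:
      storage: 1200Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: fsx-repro
spec:
  containers:
    - name: app
      image: amazonlinux:2
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: data-a
          mountPath: /data-a
        - name: data-b
          mountPath: /data-b
  volumes:
    - name: data-a
      persistentVolumeClaim:
        claimName: fsx-claim-a
    - name: data-b
      persistentVolumeClaim:
        claimName: fsx-claim-b
# kubectl exec fsx-repro -- df -kh
# /data-a and /data-b are backed by the same Lustre mount, not separate volumes.
```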

Environment

  • Kubernetes version (use kubectl version):
    Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.6", GitCommit:"d32e40e20d167e103faf894261614c5b45c44198", GitTreeState:"clean", BuildDate:"2020-05-20T13:16:24Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
    Server Version: version.Info{Major:"1", Minor:"17+", GitVersion:"v1.17.9-eks-4c6976", GitCommit:"4c6976793196d70bc5cd29d56ce5440c9473648e", GitTreeState:"clean", BuildDate:"2020-07-17T18:46:04Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}

  • Driver version:

{
"driverVersion": "v0.4.0-dirty",
"gitCommit": "4cca48b4b005e50685a9117bec75d76014b8ad72",
"buildDate": "2020-06-28T06:40:22Z",
"goVersion": "go1.13.4",
"compiler": "gc",
"platform": "linux/amd64"
}

k8s-ci-robot added the kind/bug label on Aug 12, 2020
aviorma changed the title from "Mounting subpaths does not work" to "FSx driver does not support using different Persistence volumes on the same FSx instance" on Aug 12, 2020
aviorma changed the title to "FSx driver does not support using mounting different Persistence volumes on the same FSx instance" on Aug 13, 2020
aviorma changed the title back to "FSx driver does not support using different Persistence volumes on the same FSx instance" on Aug 13, 2020

scopej commented Oct 27, 2020

I've been working on some improvements to fix this.
I just submitted #162.

I'm working on the CLA process now, but wanted to start a dialog on the fix and whether this is the kind of change the team would accept.

wongma7 (Contributor) commented Oct 27, 2020

Would it solve the StatefulSet use case? I am not too familiar with StatefulSets; is it possible to create a PVC template where the volumeHandle is the same but the volumeAttributes differ per pet in the set?
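
(For illustration only: volumeClaimTemplates only stamp out PVCs, and volumeAttributes live on the PV rather than the PVC, so per-pet attributes would need pre-created static PVs along these lines. The subPath attribute is hypothetical and the names are placeholders; this is not something the driver supports today.)

```yaml
# One pre-created PV per pet; same volumeHandle, different (hypothetical) subPath.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: data-web-0
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  claimRef:            # pre-bind to the PVC the StatefulSet creates for pod web-0
    namespace: default
    name: data-web-0
  csi:
    driver: fsx.csi.aws.com
    volumeHandle: fs-0123456789abcdef0
    volumeAttributes:
      dnsName: fs-0123456789abcdef0.fsx.us-east-1.amazonaws.com
      mountname: fsx
      subPath: /web-0  # hypothetical attribute
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: data-web-1
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  claimRef:
    namespace: default
    name: data-web-1
  csi:
    driver: fsx.csi.aws.com
    volumeHandle: fs-0123456789abcdef0
    volumeAttributes:
      dnsName: fs-0123456789abcdef0.fsx.us-east-1.amazonaws.com
      mountname: fsx
      subPath: /web-1  # hypothetical attribute
```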

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

k8s-ci-robot added the lifecycle/stale label on Jan 25, 2021

svella commented Jan 29, 2021

Add my vote. We were looking at switching from EFS to FSx for Lustre for some workloads for which EFS is completely unsuitable, but having to allocate a separate FSx filesystem with a minimum of 1.2 TiB for each PV is prohibitively expensive.

@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten

k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Feb 28, 2021
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-contributor-experience at kubernetes/community.
/close

k8s-ci-robot (Contributor) commented:

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-contributor-experience at kubernetes/community.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@AshishisAws

Any updates on this?
