
Ability to mount the same PV across multiple workspaces (one workspace running at a time) #15652

Closed
davidwindell opened this issue Jan 10, 2020 · 28 comments
Labels
engine/devworkspace: Issues related to Che configured to use the devworkspace controller as workspace engine.
help wanted: Community, we are fully engaged on other issues. Feel free to take this one. We'll help you!
kind/enhancement: A feature request - must adhere to the feature request template.
lifecycle/stale: Denotes an issue or PR has remained open with no activity and has become stale.
severity/P1: Has a major impact to usage or development of the system.

Comments

@davidwindell
Contributor

Is your enhancement related to a problem? Please describe.

We would like to be able to attach a persistent volume (in our case an EFS volume) that is shared across multiple workspaces in Che. This would allow us to share static assets stored on an NFS drive with all workspaces of the same project. We work on large web projects where GBs of media need to be mounted into the workspace.

Describe the solution you'd like

It would be great to be able to define this in the devfile, for example:

  volumes:
     - claimName: pvc-name
       containerPath: "/home/user/media"

Describe alternatives you've considered

We thought of using the Kubernetes custom resource section of the devfile but this doesn't seem to work.
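
For context, a devfile v1 kubernetes component referencing a PVC would look roughly like this (a sketch only; the alias, claim name and size are placeholders, and as noted above this approach didn't seem to work for us):

  components:
    - type: kubernetes
      alias: shared-media-pvc
      referenceContent: |
        apiVersion: v1
        kind: PersistentVolumeClaim
        metadata:
          name: pvc-name
        spec:
          accessModes:
            - ReadWriteMany
          resources:
            requests:
              storage: 50Gi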

@davidwindell davidwindell added the kind/enhancement A feature request - must adhere to the feature request template. label Jan 10, 2020
@davidwindell
Contributor Author

davidwindell commented Feb 19, 2020

In summary, it would be amazing if we could just attach an existing Kubernetes PersistentVolumeClaim to all workspaces, specifying the subpath in the volume to use in the devfile.

This would make a huge difference to our developers' performance, as we currently rely on database-backed media storage to share it amongst workspaces.

@skabashnyuk
Contributor

@davidwindell that sounds like an interesting use case. Would you like to propose a patch?

@sleshchenko
Member

I wonder if a user may want to reconfigure the volume source for all components, like editor/plugins. If so, which format would be better:

  1. Add an ability to configure/override volumes for all components:
components:
- id: eclipse/che-theia/7.10.0
  volumes: #Here are overrides for https://github.com/eclipse/che-plugin-registry/blob/master/v3/plugins/eclipse/che-theia/7.8.0/meta.yaml#L59
    - name: projects
      containerPath: /projects
      subfolder: /${workspaceName}/projects
      claimName: my-claim #probably must be the same across devfile for all `projects` volumes
    - name: plugins
      containerPath: /plugins
      subfolder: /${workspaceName}/plugins
      claimName: my-claim
- id: redhat/java/latest
  volumes:
    - name: projects
      containerPath: /projects
      subfolder: /${workspaceName}/projects
    - name: plugins
      containerPath: /plugins
      subfolder: /${workspaceName}/plugins
- type: dockerimage
  volumes:
    - name: projects
      containerPath: /projects
      subfolder: /${workspaceName}/projects
  2. Or, as an alternative, it could be a separate section to tune the volume configuration, like:
components:
- id: eclipse/che-theia/7.10.0
  type: cheEditor
- id: redhat/java/latest
  type: chePlugin
- type: dockerimage
  volumes:
    - name: projects
      containerPath: /projects
volumes: #We know which volumes are used in our workspace and tune that here for all components
  - name: projects
    pvcSource:
      pvcName: projects
      subfolder: /${workspaceName}/projects

@davidwindell
Contributor Author

I don't have the Java skills to contribute a patch, but I like either option as long as it's possible to add an additional volume (beyond the workspace project volume - we don't want to share that).

@Rucadi

Rucadi commented Feb 27, 2020

I'm also interested in this issue. In my case, I want to use vendor software already installed on the machine, which is several GB in size. I want to share it with the workspace, which contains additional configuration and software included in a Docker image.

I was also interested specifically in an NFS PVC. I hope this gets attention.

@kfox1111

kfox1111 commented May 3, 2020

Is there a way to associate UIDs with Keycloak users so that each user's container runs with the appropriate NFS permissions?

@davidwindell
Contributor Author

@skabashnyuk any chance of getting this one a priority?

@amisevsk amisevsk added the status/need-triage An issue that needs to be prioritized by the curator responsible for the triage. See https://github. label May 19, 2020
@svkr2k

svkr2k commented May 20, 2020

Thank you, @davidwindell, @skabashnyuk, @amisevsk for adding the request.
I too have a similar requirement - there are several GBs of files to be shared between all workspace containers.

Kindly let us know the priority of this feature request.
I'm not familiar with PV and PVC concepts, but if someone can help me, I shall try to implement the changes needed.

@davidwindell
Contributor Author

My suggestion would be that the admin pre-creates the PV and PVC; then all that would be required is something like this, targeting the PVC name:

  volumes:
     - claimName: pvc-name
       containerPath: "/home/user/media"
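
For illustration, the pre-created objects might look roughly like this (an NFS-backed PV and a matching claim; the server, paths, names and sizes are placeholders):

  apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: shared-media-pv
  spec:
    capacity:
      storage: 50Gi
    accessModes:
      - ReadWriteMany
    nfs:
      server: nfs.example.com
      path: /exports/media
  ---
  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: pvc-name
  spec:
    accessModes:
      - ReadWriteMany
    storageClassName: ""
    volumeName: shared-media-pv
    resources:
      requests:
        storage: 50Gi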

@amisevsk
Contributor

I would also include an accessMode field in this case, in case the requirement is e.g. sharing a ROX (ReadOnlyMany) volume.
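
For example (accessMode here is a hypothetical devfile field, not part of the current spec):

  volumes:
     - claimName: pvc-name
       containerPath: "/home/user/media"
       accessMode: ReadOnlyMany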

@ibuziuk
Member

ibuziuk commented May 25, 2020

@l0rd adding the devex team for setting the right priority. At the moment the issue is open for devs, and contributions are most welcome cc: @davidwindell @Rucadi

@ibuziuk ibuziuk added the status/open-for-dev An issue has had its specification reviewed and confirmed. Waiting for an engineer to take it. label May 25, 2020
@skabashnyuk skabashnyuk added severity/P1 Has a major impact to usage or development of the system. and removed status/need-triage An issue that needs to be prioritized by the curator responsible for the triage. See https://github. labels May 26, 2020
@svkr2k

svkr2k commented Jun 8, 2020

Thank you, @ibuziuk, @skabashnyuk for the updates and for changing the priority to P1.
I'm revisiting this after a long time.
Are any of the experts working on it currently?

(I had a look at the codebase, but didn't have any clue on where to make the changes :-))

@ibuziuk ibuziuk added help wanted Community, we are fully engaged on other issues. Feel free to take this one. We'll help you! and removed status/open-for-dev An issue has had its specification reviewed and confirmed. Waiting for an engineer to take it. labels Jun 8, 2020
@svkr2k

svkr2k commented Jun 9, 2020

I agree "NFS" is a good choice, any ideas on using "Local volumes" instead?
If there are multiple pods trying to access the huge file system in NFS, is there a possibility for file access speed slowing down ?
Will "local volume" help in that scenario ?

@che-bot
Contributor

che-bot commented Jan 4, 2021

Issues go stale after 180 days of inactivity. lifecycle/stale issues rot after an additional 7 days of inactivity and eventually close.

Mark the issue as fresh with /remove-lifecycle stale in a new comment.

If this issue is safe to close now please do so.

Moderators: Add lifecycle/frozen label to avoid stale mode.

@che-bot che-bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 4, 2021
@davidwindell
Contributor Author

/remove-lifecycle stale

@che-bot che-bot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 4, 2021
@che-bot
Contributor

che-bot commented Jul 19, 2021

Issues go stale after 180 days of inactivity. lifecycle/stale issues rot after an additional 7 days of inactivity and eventually close.

Mark the issue as fresh with /remove-lifecycle stale in a new comment.

If this issue is safe to close now please do so.

Moderators: Add lifecycle/frozen label to avoid stale mode.

@che-bot che-bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 19, 2021
@l0rd l0rd removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 20, 2021
@l0rd
Contributor

l0rd commented Jul 20, 2021

To test with a devfile v2, which allows specifying volumes as components: https://docs.devfile.io/devfile/2.1.0/user-guide/api-reference.html
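
For example, a minimal devfile v2 sketch with a volume component mounted into a container (the image, names and paths below are only placeholders):

  schemaVersion: 2.1.0
  metadata:
    name: shared-media-test
  components:
    - name: media
      volume:
        size: 10Gi
    - name: tools
      container:
        image: registry.example.com/dev-tooling:latest
        volumeMounts:
          - name: media
            path: /home/user/media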

@l0rd l0rd changed the title Ability to attach external/shared PV to workspaces Ability to mount the same PV across multiple workspaces (one workspace running at a time) Jul 20, 2021
@amisevsk
Contributor

/remove-lifecycle stale

@amisevsk
Contributor

amisevsk commented Jul 20, 2021

To test with a devfile v2, which allows specifying volumes as components

This is still outside the devfile 2.0 spec -- volume components still get subpathed into the common PVC. I had created an issue devfile/api#374 to address this use case, but it got delayed out of the initial 2.0 release.

@l0rd
Contributor

l0rd commented Jul 20, 2021

@davidwindell I have changed the title of this issue because I was initially confused: it's about reusing the same PV again and again on developers' workspaces.

In general I think that we should avoid specifying the name of a PVC in a devfile because, if we do that, it won't be portable anymore. I think that injecting PVs using labels and annotations, as we do for secrets, would be a cleaner approach.

@amisevsk
Contributor

I think that injecting PVs using labels and annotations, as we do for secrets, would be a cleaner approach.

In that case the devfile would still not be portable, if e.g. the project you're building expects a specific volume to be mounted.

@l0rd
Contributor

l0rd commented Jul 20, 2021

In that case the devfile would still not be portable, if e.g. the project you're building expects a specific volume to be mounted.

That's a good point :-)

@davidwindell
Contributor Author

At the end of the day, portability doesn't matter when Che is being used internally and all developers are crying out for shared storage.

@l0rd
Contributor

l0rd commented Jul 22, 2021

@davidwindell in both cases your problem will be solved.

The modification at the devfile spec level may be harder to get accepted and implemented. Today the devfile is used by Che but also by other projects, and we are really careful about what we introduce in the spec.

And I am still in favor of PV labelling rather than an explicit reference to a PVC in the devfile spec. PV labelling is like dependency injection: you do not need to hardcode the PVC name in the workspace definition; it will be resolved at runtime.

@amisevsk
Contributor

I've created issue devfile/devworkspace-operator#503 on the DevWorkspace Operator side for further discussion. It's not specific to Che, but when Che switches to DWO as the workspace engine, this feature would come to Che as well.

The way I'd see it implemented (not having spent a lot of time thinking about it yet) would be:

  1. Create a PVC and apply label controller.devfile.io/mount-to-devworkspace: "true" to it (and additional annotations to control mount path, permissions, etc)
  2. Create a workspace in Che using the Devfile 2.0 / DevWorkspace Operator engine
  3. DWO automatically mounts the PVC from step 1 to the DevWorkspace at the specified path.
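
As a rough sketch, the PVC from step 1 might look like the following (the mount-to-devworkspace label is the one proposed above; the annotation names are only assumptions to illustrate the idea and would be settled in devfile/devworkspace-operator#503):

  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: shared-media
    labels:
      controller.devfile.io/mount-to-devworkspace: "true"
    annotations:
      # hypothetical annotations controlling mount path and permissions
      controller.devfile.io/mount-path: /home/user/media
      controller.devfile.io/read-only: "true"
  spec:
    accessModes:
      - ReadWriteMany
    resources:
      requests:
        storage: 50Gi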

@davidwindell WDYT? Does this fit your use cases?

@davidwindell
Contributor Author

I think that would work really well!

@skabashnyuk skabashnyuk added engine/devworkspace Issues related to Che configured to use the devworkspace controller as workspace engine. and removed area/che-server labels Aug 17, 2021
@che-bot
Contributor

che-bot commented Feb 13, 2022

Issues go stale after 180 days of inactivity. lifecycle/stale issues rot after an additional 7 days of inactivity and eventually close.

Mark the issue as fresh with /remove-lifecycle stale in a new comment.

If this issue is safe to close now please do so.

Moderators: Add lifecycle/frozen label to avoid stale mode.

@che-bot che-bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 13, 2022
@che-bot che-bot closed this as completed Feb 20, 2022
@l0rd
Contributor

l0rd commented Feb 20, 2022

This is fixed by devfile/devworkspace-operator#544
