Add workspaces support for the config-artifact-pvc configmap #2330
/kind feature
I am trying to understand / get a clear use case here.
Here it sounds like the use case is for handling cached dependencies/artifacts. Or is the use case for using a workspace to pass files between tasks without a PVC?
Here it sounds like a use case requiring both a GCS bucket and a PVC. The use case could also be to use a GCS bucket in the pipeline: have a task that syncs data from the bucket into an existing workspace in the task/pipeline, and another task for uploading data, as sketched below.
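For illustration only, a rough sketch of what such a bucket-syncing Task could look like with a plain workspace; the task name, image, and bucket parameter are hypothetical and not part of any existing proposal:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: gcs-download   # hypothetical example task
spec:
  params:
    - name: bucket
      type: string
      description: GCS bucket to copy files from
  workspaces:
    - name: output
      description: Workspace that receives the downloaded files
  steps:
    - name: sync
      image: gcr.io/google.com/cloudsdktool/cloud-sdk:slim
      script: |
        # Copy everything from the bucket into the shared workspace.
        gsutil -m rsync -r gs://$(params.bucket) $(workspaces.output.path)
```

An uploading Task would be the mirror image, rsyncing from the workspace path back to the bucket.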
Ah, apologies, I see where the confusion comes from. The intended use case is passing files between Tasks in a Pipeline. PipelineResources have this existing behaviour: an output resource from one Task can be linked as an input to a later Task, and its files are copied between them through the storage configured by config-artifact-pvc. This issue exists to discuss whether we provide support in Workspaces for that existing mechanism.
On seeing this workspace the reconciler could run the same code path as it would were it "linking" PipelineResources, and copy files through that medium.
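For context, here is a rough sketch of the existing PipelineResource "linking" that triggers the artifact-storage copy today; the pipeline, task, and resource names are illustrative:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: example-pipeline
spec:
  resources:
    - name: source-repo
      type: git
  tasks:
    - name: build
      taskRef:
        name: build-task
      resources:
        inputs:
          - name: source
            resource: source-repo
        outputs:
          - name: source
            resource: source-repo
    - name: test
      taskRef:
        name: test-task
      resources:
        inputs:
          - name: source
            resource: source-repo
            # `from` is what makes the reconciler copy the files from the
            # `build` task's output through the artifact PVC or bucket.
            from:
              - build
```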
Issues go stale after 90d of inactivity. /lifecycle stale Send feedback to tektoncd/plumbing.
Stale issues rot after 30d of inactivity. /lifecycle rotten Send feedback to tektoncd/plumbing.
Rotten issues close after 30d of inactivity. /close Send feedback to tektoncd/plumbing.
@tekton-robot: Closing this issue.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/remove-lifecycle rotten
@vdemeester: Reopened this issue.
Issues go stale after 90d of inactivity. /lifecycle stale Send feedback to tektoncd/plumbing.
/remove-lifecycle stale
Issues go stale after 90d of inactivity. /lifecycle stale Send feedback to tektoncd/plumbing.
Stale issues rot after 30d of inactivity. /lifecycle rotten Send feedback to tektoncd/plumbing.
I'm going to let this one rot. We've got several designs for inter-Task storage in progress at the moment which don't rely on the legacy PipelineResources artifact configuration.
/close |
@sbwsg: Closing this issue.
Expected Behavior
PipelineResources currently use a PVC (or bucket) configured via the config-artifact-pvc configmap to transparently store data that is shared between tasks. This relies on an internal package, artifact_storage, which silently mounts the PVC or syncs to a GCS bucket in order to transfer files from one task to the next. We'd like to replicate similar behaviour, but make it available for the more general-purpose Workspaces implementation that Pipelines now has.
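The configmap itself only needs a couple of keys; something along these lines, with the values shown here being examples only:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-artifact-pvc
  namespace: tekton-pipelines
data:
  # Size of the PVC the controller creates for inter-task artifact copies.
  size: 5Gi
  # Optional storage class for that PVC.
  storageClassName: standard
```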
This issue is for discussion of whether we want to support this same mechanism for controlling storage with Workspaces. The simplest UX might be for workspaces to automatically support the configmap and "just work" when the user specifies a "legacyArtifactPVC" workspace, something like this:
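A minimal sketch of what that binding might look like, assuming a hypothetical legacyArtifactPVC field on the PipelineRun workspace binding; nothing here is implemented, and the field name just follows the proposal above:

```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: example-run
spec:
  pipelineRef:
    name: example-pipeline
  workspaces:
    - name: shared-data
      # Hypothetical: tells the controller to provision storage from the
      # config-artifact-pvc configmap instead of an explicitly named volume.
      legacyArtifactPVC: {}
```

On seeing this binding the controller could create or reuse the PVC (or bucket) described by config-artifact-pvc, mirroring what artifact_storage already does for PipelineResources.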
Actual Behavior
Workspaces don't currently know about the config-artifact-pvc configmap and each workspace needs its volume to be completely specified as part of the Run it's bound in.
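Today the binding has to name a concrete volume source explicitly, for example an existing PVC (the claim name below is illustrative):

```yaml
workspaces:
  - name: shared-data
    persistentVolumeClaim:
      claimName: my-existing-pvc
```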
Additional Info
See the Workspaces volumeClaimTemplate issue for another approach to this problem. We may want both that and this; either approach is fair game for discussion here.
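For comparison, the volumeClaimTemplate approach binds a workspace by asking the controller to create a PVC per run, roughly along these lines (names and sizes are illustrative):

```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: example-run-vct
spec:
  pipelineRef:
    name: example-pipeline
  workspaces:
    - name: shared-data
      volumeClaimTemplate:
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 1Gi
```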