Custom Workspace Bindings #3435
Comments
Another really cool use-case that I think this addresses is to express artifact publishing as a workspace. For instance: "write your library to this workspace, and it will be published to Artifactory when your build is complete." This is really useful for platform teams that might want to control where and how publishing happens. For instance, they might not want to expose credentials or signing keys to the users of the platform.
In #3467 a user is requesting NFS support in Tekton Workspace Bindings. I've tested this locally against Filestore. It works, but it's more labor-intensive than it ideally should be. There are instructions available for using a third-party provisioner, which requires extra installation steps.
Further to my previous comment, a lot of the use cases that Custom Workspace Bindings might end up providing are already covered by the CSI spec: https://github.com/container-storage-interface/spec/blob/master/spec.md It doesn't seem to me like reinventing or duplicating parts of the CSI spec in Tekton is a great way to go here.
Issues go stale after 90d of inactivity. /lifecycle stale Send feedback to tektoncd/plumbing.
I captured the problem statement from this issue in TEP-0038 but have punted on proposing an implementation or solution at the moment. /remove-lifecycle stale
Issues go stale after 90d of inactivity. /lifecycle stale Send feedback to tektoncd/plumbing.
Stale issues rot after 30d of inactivity. /lifecycle rotten Send feedback to tektoncd/plumbing.
I think the core of this might be implementable via the pipeline-in-a-pod feature that's under development. For example: composing a test task with a gcs-upload task in a pipeline could be rendered to steps in a single pod with an emptyDir volume shared between them. In conjunction with the CSI spec, existing PersistentVolumeClaim support, and custom tasks / controllers, I think there are a lot of ways already to manage custom persistence stories. I'm going to close this issue for the time being.
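For context, the composition described in that closing comment can already be expressed with ordinary Tekton workspaces today; a minimal sketch follows (the Task names `test` and `gcs-upload`, and the workspace names, are assumed for illustration):

```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: test-and-upload
spec:
  workspaces:
    - name: output        # shared scratch space for both tasks
  tasks:
    - name: run-tests
      taskRef:
        name: test        # hypothetical Task that writes results to the workspace
      workspaces:
        - name: output
          workspace: output
    - name: upload
      runAfter: ["run-tests"]
      taskRef:
        name: gcs-upload  # hypothetical Task that uploads the workspace contents
      workspaces:
        - name: source
          workspace: output
```

With pipeline-in-a-pod, both tasks would run as steps in a single Pod, so the shared workspace could be backed by a plain `emptyDir` rather than a PVC.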
Feature request
Allow flexible, configurable support for Workspace Bindings with currently unsupported volume types, provisioning styles, and storage that doesn't manifest as k8s Volumes, such as cloud storage buckets.
Use case
As a Tekton operator I want to utilize a kind of storage that Tekton doesn't currently support so that I can integrate with whatever my organization already uses for storage.
Concrete examples include:
- A directory on a cloud storage bucket (e.g. GCS)
- NFS volumes (see #3467)
- An in-house storage solution exposed via an internal API

This list is non-exhaustive. The idea is to present an interface via which an organization can support essentially any storage mechanism it wants.
Caveats
Whatever this feature ends up as should be configurable by platform owners. If a platform wants to only support a subset of custom bindings (or doesn't want to support the feature at all) then they should be able to configure it as such.
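One way that operator-level control could be surfaced is an allow-list in a ConfigMap. This is purely hypothetical; no such ConfigMap exists in Tekton, and the keys and class names below are invented:

```yaml
# Hypothetical operator configuration restricting which custom workspace
# binding classes (if any) are permitted on this cluster.
apiVersion: v1
kind: ConfigMap
metadata:
  name: custom-workspace-bindings
  namespace: tekton-pipelines
data:
  enabled: "true"
  allowed-classes: "GCSBucketRandomDirectory,CompanyXStorageProvider"
```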
Pseudo-code and hand-waving for what this might look like
Random Directory on a Bucket
Imagine you've got a GCS bucket set up that your org shares across multiple teams for their CI/CD workloads to drop data onto. The bucket's configured with a 10-day retention period and each CI/CD workload gets a random directory in the bucket. In the following example a PipelineRun author configures a workspace binding to use a randomized directory on that GCS bucket:
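A hypothetical binding might look like the following. The `custom` binding type, its `class` and `params` fields, and the bucket name are all invented for illustration; nothing like this exists in the current Tekton API:

```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: ci-run
spec:
  pipelineRef:
    name: build-and-test
  workspaces:
    - name: shared-data
      # Hypothetical custom workspace binding, resolved by a registered
      # Custom Workspace Provider rather than by Tekton itself.
      custom:
        class: GCSBucketRandomDirectory
        params:
          - name: bucket
            value: gs://example-org-ci-artifacts
```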
What happens next? There are a lot of possible approaches. Here's one option:
Prior to running the Pipeline, Tekton looks up the `GCSBucketRandomDirectory` class in its registry of Custom Workspace Providers. The Tekton controller sends a request to the HTTP server registered for the `GCSBucketRandomDirectory` class. The HTTP server responds with some Volume configuration and Steps to inject into all of that Pipeline's TaskRuns. The Volume config is `emptyDir`, so each TaskRun Pod effectively starts with an extra empty volume. `gcs-download` and `gcs-upload` Steps are injected before and after every TaskRun's Task to populate the `emptyDir` from the random bucket directory and to upload from the `emptyDir` to the random bucket directory.

In-House Storage Solution
Your company uses an in-house storage solution and exposes an API that teams use to request and release chunks of persistent storage. Your platform team is responsible for CI/CD and needs to use the in-house storage API to make storage available to your various application teams in their pipelines. You write an HTTP server that talks to the in-house storage API to provision and tear down storage for Pipelines and you expose that functionality via a Custom Workspace Binding plugin. Teams then use it like this in their PipelineRuns:
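Continuing the pseudo-code above, a team's PipelineRun might bind the workspace like this (again, the `custom` binding type, class name, and params are all invented for illustration):

```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: team-a-build
spec:
  pipelineRef:
    name: build
  workspaces:
    - name: storage
      # Hypothetical custom binding handled by the platform team's plugin.
      custom:
        class: CompanyXStorageProvider
        params:
          - name: size
            value: 10Gi
          - name: team
            value: team-a
```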
These params are then bundled up into an HTTP request and sent by the Tekton Controller to your server (registered against the `CompanyXStorageProvider` class) when the PipelineRun starts up. Your server in turn coordinates with the in-house storage API to figure out the nuts and bolts. Your server responds with whatever Volume, Step-injection, Sidecar configuration, ConfigMaps, Secrets, etc. is needed for the Tasks in the Pipeline to correctly access the storage.

One further wrinkle: your HTTP server needs to know when the PipelineRun is done with the piece of storage you exposed so you can tear it down. As part of your initial response payload you return a notification webhook endpoint that should be hit when the PipelineRun is complete, along with a token for the piece of storage that's been claimed. The PipelineRun reconciler hits that endpoint with the token to notify your server that it can now safely release that portion back to the in-house storage API.
Related Work