Make VolumeResource schedulable 📅
In order to actually schedule a pod using two volume resources, we had
to make a couple changes:
- Use a storage class that can be scheduled in a GKE regional cluster
  https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/regional-pd
- Either use the same storage class for the PVC attached automatically
  for input/output linking or don't use the PVC (chose the latter!)

This commit removes automatic PVC copying for input/output linking of
the VolumeResource: since the resource is itself a PVC, there is no need
to copy through an intermediate PVC. This makes it simpler to make a Task
using the VolumeResource schedulable, removes redundant copying, and
removes a side effect where, if a VolumeResource's output was linked to an
input, the Task with the input would see _only_ the changes made by the
output and none of the other contents of the PVC.

This commit also removes the docs on the `paths` param (i.e. "overriding
where resources are copied from"): it was implemented such that it only
works in the output -> input linking PVC case, so it can't actually be
used by users, and it will be removed entirely in tektoncd#1284.
bobcatfish committed Oct 10, 2019
1 parent 311102a commit 193ac90
Showing 13 changed files with 354 additions and 185 deletions.
10 changes: 5 additions & 5 deletions docs/install.md
@@ -124,16 +124,16 @@ or a [GCS storage bucket](https://cloud.google.com/storage/)
The PVC option can be configured using a ConfigMap with the name
`config-artifact-pvc` and the following attributes:
-- size: the size of the volume (5Gi by default)
-- storageClassName: the [storage class](https://kubernetes.io/docs/concepts/storage/storage-classes/) of the volume (default storage class by default). The possible values depend on the cluster configuration and the underlying infrastructure provider.
+- `size`: the size of the volume (5Gi by default)
+- `storageClassName`: the [storage class](https://kubernetes.io/docs/concepts/storage/storage-classes/) of the volume (default storage class by default). The possible values depend on the cluster configuration and the underlying infrastructure provider.
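Putting these attributes together, a minimal `config-artifact-pvc` ConfigMap might look like the following sketch (the namespace and the `regional-disk` class name are illustrative assumptions, not values mandated by the docs):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-artifact-pvc
  namespace: tekton-pipelines
data:
  # Size of the PVC used for artifact passing (default: 5Gi)
  size: 10Gi
  # Storage class of the PVC; must be valid for your cluster/provider
  storageClassName: regional-disk
```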
The GCS storage bucket can be configured using a ConfigMap with the name
`config-artifact-bucket` with the following attributes:
-- location: the address of the bucket (for example gs://mybucket)
-- bucket.service.account.secret.name: the name of the secret that will contain
+- `location`: the address of the bucket (for example gs://mybucket)
+- `bucket.service.account.secret.name`: the name of the secret that will contain
  the credentials for the service account with access to the bucket
-- bucket.service.account.secret.key: the key in the secret with the required
+- `bucket.service.account.secret.key`: the key in the secret with the required
  service account json.
- The bucket is recommended to be configured with a retention policy after which
files will be deleted.
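As a sketch, a `config-artifact-bucket` ConfigMap using these attributes could look like this (the bucket address, secret name, and secret key are illustrative assumptions):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-artifact-bucket
  namespace: tekton-pipelines
data:
  # Address of the GCS bucket used for artifact passing
  location: gs://mybucket
  # Secret holding the service account credentials with access to the bucket
  bucket.service.account.secret.name: gcs-creds
  # Key inside that secret containing the service account JSON
  bucket.service.account.secret.key: service_account.json
```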
80 changes: 5 additions & 75 deletions docs/resources.md
@@ -119,81 +119,6 @@ spec:
value: /workspace/go
```
### Overriding where resources are copied from
When specifying input and output `PipelineResources`, you can optionally specify
`paths` for each resource. `paths` will be used by `TaskRun` as the resource's
new source paths i.e., copy the resource from specified list of paths. `TaskRun`
expects the folder and contents to be already present in specified paths.
`paths` feature could be used to provide extra files or altered version of
existing resource before execution of steps.

Output resource includes name and reference to pipeline resource and optionally
`paths`. `paths` will be used by `TaskRun` as the resource's new destination
paths i.e., copy the resource entirely to specified paths. `TaskRun` will be
responsible for creating required directories and copying contents over. `paths`
feature could be used to inspect the results of taskrun after execution of
steps.

`paths` feature for input and output resource is heavily used to pass same
version of resources across tasks in context of pipelinerun.

In the following example, task and taskrun are defined with input resource,
output resource and step which builds war artifact. After execution of
taskrun(`volume-taskrun`), `custom` volume will have entire resource
`java-git-resource` (including the war artifact) copied to the destination path
`/custom/workspace/`.

```yaml
apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
name: volume-task
namespace: default
spec:
inputs:
resources:
- name: workspace
type: git
outputs:
resources:
- name: workspace
steps:
- name: build-war
image: objectuser/run-java-jar #https://hub.docker.com/r/objectuser/run-java-jar/
command: jar
args: ["-cvf", "projectname.war", "*"]
volumeMounts:
- name: custom-volume
mountPath: /custom
```

```yaml
apiVersion: tekton.dev/v1alpha1
kind: TaskRun
metadata:
name: volume-taskrun
namespace: default
spec:
taskRef:
name: volume-task
inputs:
resources:
- name: workspace
resourceRef:
name: java-git-resource
outputs:
resources:
- name: workspace
paths:
- /custom/workspace/
resourceRef:
name: java-git-resource
volumes:
- name: custom-volume
emptyDir: {}
```

## Resource Types
The following `PipelineResources` are currently supported:
@@ -832,6 +757,11 @@ Data,
}
```

### Volume Resource

- params
- using with regional clusters
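The stub above can be illustrated with a sketch of a VolumeResource declaration, based on the params used in the example pipeline in this commit (the resource name is an illustrative assumption):

```yaml
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: my-volume
spec:
  params:
    - name: type
      value: volume
    # Subfolder of the volume to link as input/output (optional)
    - name: path
      value: special-folder
    # Storage class for the backing PVC; on a regional GKE cluster this
    # needs to be a class that provisions regional persistent disks
    - name: storageClassName
      value: regional-disk
```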

Except as otherwise noted, the content of this page is licensed under the
[Creative Commons Attribution 4.0 License](https://creativecommons.org/licenses/by/4.0/),
and code samples are licensed under the
37 changes: 26 additions & 11 deletions examples/pipelineruns/volume-output-pipelinerun.yaml
@@ -1,3 +1,15 @@
# This example uses multiple PVCs and will be run against a regional GKE cluster.
# This means we have to make sure that the PVCs aren't created in different zones,
# and the only way to do this is to create regional PVCs.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: regional-disk
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
  replication-type: regional-pd
---
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
@@ -7,6 +19,8 @@ spec:
  params:
    - name: type
      value: volume
    - name: storageClassName
      value: regional-disk
---
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
@@ -19,8 +33,10 @@ spec:
      value: volume
    - name: path
      value: special-folder
    - name: storageClassName
      value: regional-disk
---
-# Task writes "some stuff" to a predefined path
+# Task writes data to a predefined path
apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
@@ -38,11 +54,11 @@ spec:
    - name: write-new-stuff-1
      image: ubuntu
      command: ['bash']
-      args: ['-c', 'echo some stuff1 > /workspace/output/volume1/stuff1']
+      args: ['-c', 'echo stuff1 > $(outputs.resources.volume1.path)/stuff1']
    - name: write-new-stuff-2
      image: ubuntu
      command: ['bash']
-      args: ['-c', 'echo some stuff2 > /workspace/output/volume2/stuff2']
+      args: ['-c', 'echo stuff2 > $(outputs.resources.volume2.path)/stuff2']
---
# Reads a file from a predefined path and writes as well
apiVersion: tekton.dev/v1alpha1
@@ -58,7 +74,6 @@ spec:
      - name: volume2
        type: storage
  outputs:
-    # This Task uses the same volume as an input and an output to ensure this works
    resources:
      - name: volume1
        type: storage
@@ -68,18 +83,17 @@ spec:
      command: ["/bin/bash"]
      args:
        - '-c'
-        - '[[ stuff1 == $(cat $(inputs.resources.volume1.path)/stuff1) ]]"'
+        - '[[ stuff1 == $(cat $(inputs.resources.volume1.path)/stuff1) ]]'
    - name: read2
      image: ubuntu
      command: ["/bin/bash"]
      args:
        - '-c'
-        # TODO: should fail
-        - '[[ stuff == $(cat $(inputs.resources.volume2.path)/stuff1) ]]"'
+        - '[[ stuff2 == $(cat $(inputs.resources.volume2.path)/stuff2) ]]'
    - name: write-new-stuff-3
      image: ubuntu
      command: ['bash']
-      args: ['-c', 'echo some stuff3 > /workspace/output/volume1/stuff3']
+      args: ['-c', 'echo stuff3 > $(outputs.resources.volume1.path)/stuff3']
---
# Reads a file from a predefined path and writes as well
apiVersion: tekton.dev/v1alpha1
@@ -97,13 +111,13 @@ spec:
      command: ["/bin/bash"]
      args:
        - '-c'
-        - '[[ stuff == $(cat $(inputs.resources.volume1.path)/stuff1) ]]"'
+        - '[[ stuff1 == $(cat $(inputs.resources.volume1.path)/stuff1) ]]'
    - name: read3
      image: ubuntu
      command: ["/bin/bash"]
      args:
        - '-c'
-        - '[[ stuff3 == $(cat $(inputs.resources.volume1.path)/stuff3) ]]"'
+        - '[[ stuff3 == $(cat $(inputs.resources.volume1.path)/stuff3) ]]'
---
# First task writes files to two volumes. The next task ensures these files exist
# then writes a third file to the first volume. The last Task ensures both expected
@@ -141,6 +155,7 @@ spec:
            from: [first-create-files]
        outputs:
          - name: volume1
            # This Task uses the same volume as an input and an output to ensure this works
            resource: volume1
- name: then-check
taskRef:
@@ -149,7 +164,7 @@
        inputs:
          - name: volume1
            resource: volume1
-            from: [first-create-files]
+            from: [then-check-and-write]
---
apiVersion: tekton.dev/v1alpha1
kind: PipelineRun
44 changes: 42 additions & 2 deletions pkg/apis/pipeline/v1alpha1/resource_types.go
@@ -18,6 +18,7 @@ package v1alpha1

import (
	"github.com/google/go-cmp/cmp"
	"github.com/tektoncd/pipeline/pkg/apis/pipeline"
	"golang.org/x/xerrors"
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
@@ -101,16 +102,55 @@ func (tm *InternalTaskModifier) GetVolumes() []v1.Volume {
	return tm.Volumes
}

func checkStepNotAlreadyAdded(s Step, steps []Step) error {
	for _, step := range steps {
		if s.Name == step.Name {
			return xerrors.Errorf("Step %s cannot be added again", step.Name)
		}
	}
	return nil
}

// ApplyTaskModifier applies a modifier to the task by appending and prepending steps and volumes.
-func ApplyTaskModifier(ts *TaskSpec, tm TaskModifier) {
+// If steps with the same name exist in ts an error will be returned. If identical Volumes have
+// been added, they will not be added again. If Volumes with the same name but different contents
+// have been added, an error will be returned.
+func ApplyTaskModifier(ts *TaskSpec, tm TaskModifier) error {
	steps := tm.GetStepsToPrepend()
	for _, step := range steps {
		if err := checkStepNotAlreadyAdded(step, ts.Steps); err != nil {
			return err
		}
	}
	ts.Steps = append(steps, ts.Steps...)

	steps = tm.GetStepsToAppend()
	for _, step := range steps {
		if err := checkStepNotAlreadyAdded(step, ts.Steps); err != nil {
			return err
		}
	}
	ts.Steps = append(ts.Steps, steps...)

	volumes := tm.GetVolumes()
-	ts.Volumes = append(ts.Volumes, volumes...)
	for _, volume := range volumes {
		var alreadyAdded bool
		for _, v := range ts.Volumes {
			if volume.Name == v.Name {
				// If a Volume with the same name but different contents has already been added, we can't add both
				if d := cmp.Diff(volume, v); d != "" {
					return xerrors.Errorf("Tried to add volume %s already added but with different contents", volume.Name)
				}
				// If an identical Volume has already been added, don't add it again
				alreadyAdded = true
			}
		}
		if !alreadyAdded {
			ts.Volumes = append(ts.Volumes, volume)
		}
	}

	return nil
}
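The volume-merging behavior above can be sketched as a self-contained program using simplified stand-in types (`Volume` and `mergeVolumes` are illustrative, not the real Tekton API; `reflect.DeepEqual` stands in for `cmp.Diff`): identical volumes are added only once, and a volume whose name matches an existing one but whose contents differ is an error.

```go
package main

import (
	"fmt"
	"reflect"
)

// Volume is a simplified stand-in for v1.Volume.
type Volume struct {
	Name   string
	Source string
}

// mergeVolumes mirrors ApplyTaskModifier's volume handling: identical
// volumes are skipped, new names are appended, and a same-name volume
// with different contents is rejected.
func mergeVolumes(existing, incoming []Volume) ([]Volume, error) {
	for _, volume := range incoming {
		var alreadyAdded bool
		for _, v := range existing {
			if volume.Name == v.Name {
				// Same name but different contents: we can't add both.
				if !reflect.DeepEqual(volume, v) {
					return nil, fmt.Errorf("tried to add volume %s already added but with different contents", volume.Name)
				}
				// An identical Volume has already been added; skip it.
				alreadyAdded = true
			}
		}
		if !alreadyAdded {
			existing = append(existing, volume)
		}
	}
	return existing, nil
}

func main() {
	vols := []Volume{{Name: "custom", Source: "emptyDir"}}

	// An identical volume is deduplicated, a new one is appended.
	merged, err := mergeVolumes(vols, []Volume{
		{Name: "custom", Source: "emptyDir"},
		{Name: "extra", Source: "pvc"},
	})
	fmt.Println(len(merged), err) // 2 <nil>

	// Same name, different contents: an error.
	_, err = mergeVolumes(vols, []Volume{{Name: "custom", Source: "pvc"}})
	fmt.Println(err != nil) // true
}
```

This mirrors why the real function now returns an error instead of blindly appending: a Task that ends up with two same-name volumes with different contents would be invalid.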

// PipelineResourceSetupInterface is an interface that can be implemented by objects that know
