
How to use config-artifact-bucket ConfigMap in pipeline #2319

Closed
holoGDM opened this issue Apr 1, 2020 · 8 comments
Labels: kind/documentation, kind/question, lifecycle/rotten, lifecycle/stale

holoGDM commented Apr 1, 2020

What is config-artifact-bucket for? I can't find how to use it. When I create a Task with an output of type "storage", I need to configure a separate bucket for that output in the Pipeline and PipelineRun. I thought it would pick up those default settings.

E.g.:

apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: build-installer
  namespace: installer
spec:
  workspaces:
    - name: packages
  resources:
    inputs:
      - name: source
        type: git
    outputs:
      - name: binary
        type: storage
  steps:
    - name: installer-build
      image: golang
      script: |
        #!/usr/bin/env bash
        make build-binary

And now I need to configure it in the Pipeline like this:

apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: installer-pipeline
  namespace: installer
spec:
  workspaces:
      - name: artifacts # Name of the workspace in the Pipeline
  resources:
    - name: downloader
      type: git
    - name: installer
      type: git
    - name: s3-bucket
      type: storage
  tasks:
    - name: download-dependencies
      taskRef:
        name: downloader-images-rpms
      workspaces:
        - name: packages
          workspace: artifacts
      resources:
        inputs:
          - name: source-downloader
            resource: downloader
    - name: build-installer
      taskRef:
        name: build-installer
      workspaces:
        - name: packages
          workspace: artifacts
      resources:
        inputs:
          - name: source
            resource: installer
        outputs:
          - name: binary
            resource: s3-bucket
      runAfter:
        - download-dependencies

and in the PipelineRun:

apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: installerrun
  namespace: installer
spec:
  serviceAccountName: build-bot
  pipelineRef:
    name: installer-pipeline
  resources:
    - name: downloader
      resourceRef:
        name: downloader-git
    - name: installer
      resourceRef:
        name: installer-git
    - name: s3-bucket
      resourceRef:
        name: tekton-s3
  workspaces:
    - name: artifacts
      persistentVolumeClaim:
        claimName: installer-pvc

So I need to create a separate tekton-s3 PipelineResource manually. What are the settings in config-artifact-bucket for, if Tekton does not apply them to storage outputs automatically?

/kind question

tekton-robot added the kind/question label Apr 2, 2020

ghost commented Apr 2, 2020

Hi, sorry for the confusion! I have to admit I had trouble finding documentation on this. The only thing I could find is here: https://github.com/tektoncd/pipeline/blob/master/docs/developers/README.md#how-are-resources-shared-between-tasks

To cut a long story short, PipelineResources can be used as an output from a Task in a Pipeline. Then another Pipeline Task can consume that same PipelineResource as an input, using the from field. The config-artifact-bucket configures the external storage used to transfer the PipelineResource files from one Task to another.
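
For illustration, here is a minimal sketch of that linkage (the task and resource names are hypothetical), with the second Task consuming the first Task's output via the from field:

apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: share-example
spec:
  resources:
    - name: shared-data
      type: storage
  tasks:
    - name: producer
      taskRef:
        name: producer-task
      resources:
        outputs:
          # producer-task writes its result into this PipelineResource
          - name: data
            resource: shared-data
    - name: consumer
      taskRef:
        name: consumer-task
      resources:
        inputs:
          # `from` tells Tekton to fetch the files that `producer` wrote;
          # config-artifact-bucket controls where those files are staged
          - name: data
            resource: shared-data
            from:
              - producer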

ghost added the kind/documentation label Apr 2, 2020
holoGDM (Author) commented Apr 2, 2020

Hello, thank you for the reply. I saw that, but I'm still confused. When I create an "output", I still need to add a "resourceRef" at a "higher" level, the PipelineRun, with a reference to another PipelineResource I created. By the way, it does not want to work with S3. Here is my PipelineResource:

apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: tekton-s3
  namespace: installer
spec:
  type: storage
  params:
    - name: type
      value: gcs
    - name: location
      value: s3://ceph-bkt-8d35234a-51a3-450c-8612-b20451bd455b

and here I need to reference it to satisfy the validation checks when applying:

apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: installerrun
  namespace: installer
spec:
  serviceAccountName: build-bot
  pipelineRef:
    name: installer-pipeline
  resources:
    - name: downloader
      resourceRef:
        name: downloader-git
    - name: installer
      resourceRef:
        name: installer-git
    - name: s3-bucket
      resourceRef:
        name: tekton-s3
  workspaces:
    - name: artifacts
      persistentVolumeClaim:
        claimName: installer-pvc

Without that ref, I get a message that I defined a resource and need to configure it at a higher level. Here is how I have it configured in the Task:

apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: build-installer
  namespace: installer
spec:
  workspaces:
    - name: packages
  resources:
    inputs:
      - name: source
        type: git
    outputs:
      - name: binary
        type: storage
  steps:
    - name: installer-build
      image: golang
      script: |
        #!/usr/bin/env bash
        make build-binary

Is there a difference between:

outputs:
  resources:
    ....

and

resources:
  outputs:
    ....
  inputs:

?
EDIT:

When I try to add it in the order shown in the doc you linked:

apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: downloader-images-rpms
  namespace: installer
spec:
  outputs:
    resources:
      - name: s3-workspace
        type: storage

I get:

Error from server (BadRequest): error when creating "downloader/build-downloader-task.yaml": admission webhook "webhook.pipeline.tekton.dev" denied the request: mutation failed: cannot decode incoming new object: json: unknown field "outputs"

bobcatfish (Collaborator) commented

Hey @holoGDM, it looks like you might be mixing up the storage resource and config-artifact-bucket a bit; they are actually very different things.

config-artifact-bucket is trying to solve a very specific problem: Tasks run on different nodes, so if you want to share data between them, it has to get from node to node somehow. When you link a PipelineResource output in one Task (running on one node) with a PipelineResource input in another Task (running on a different node), config-artifact-bucket lets you control how that PipelineResource data gets between the Tasks. By default Tekton will try to create a PVC for this, but config-artifact-bucket lets you use something like GCS or S3 for this instead, meaning the data is automatically uploaded and downloaded between Tasks.
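
For reference, a minimal sketch of that ConfigMap (the bucket, secret name, and key below are placeholders; check Tekton's install docs for the exact fields your version supports):

apiVersion: v1
kind: ConfigMap
metadata:
  name: config-artifact-bucket
  namespace: tekton-pipelines
data:
  # Bucket used to stage PipelineResource data between Tasks
  location: gs://my-artifact-bucket
  # Secret holding the credentials the upload/download steps should use
  bucket.service.account.secret.name: my-bucket-creds
  bucket.service.account.secret.key: service_account.json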

The storage resource, on the other hand, lets a Task explicitly indicate that it wants to upload/download.

If you are explicitly trying to upload something to S3, you can probably ignore config-artifact-bucket completely, especially if you can use workspaces to share the data between Tasks and then have a Task that uses the storage resource to do the final upload.

These are the docs on how to use output resources with v1beta1: https://github.com/tektoncd/pipeline/blob/master/docs/tasks.md#specifying-resources - it looks like outputs and resources are inverted in your example above; what you want is something like:

apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: downloader-images-rpms
  namespace: installer
spec:
  resources:
    outputs:
      - name: s3-workspace
        type: storage

Looks like https://github.com/tektoncd/pipeline/blob/master/docs/developers/README.md#how-are-resources-shared-between-tasks is out of date :(

bobcatfish added a commit to bobcatfish/pipeline that referenced this issue Apr 8, 2020
With tektoncd#1185 we resolved a lot
of confusion and inconsistency around how we nested things with
input/output, but we didn't update our developer docs :(

In tektoncd#2319 we pointed someone toward these docs and the person ended up
using the old syntax and being confused :(
tekton-robot pushed a commit that referenced this issue Apr 8, 2020
With #1185 we resolved a lot
of confusion and inconsistency around how we nested things with
input/output, but we didn't update our developer docs :(

In #2319 we pointed someone toward these docs and the person ended up
using the old syntax and being confused :(
holoGDM (Author) commented Apr 10, 2020

Thanks for the reply. As you wrote, I wanted to share artifacts between tasks. But when I use output/input between tasks, Tekton's validation forces me, when applying my YAMLs, to configure my own S3 bucket (which doesn't work either: my S3 bucket is created with Rook, and in the end I had to create a special task that pushes with the s3cmd client). Regarding the workspaces: I wanted to avoid using them and use my default S3 bucket instead, but as I wrote, I get a message that I specified outputs/inputs and need to configure a "resourceRef" for them. See my configs above.
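
(For reference, a push task along those lines might look like the sketch below; the image, bucket name, and credentials handling are placeholders, and the s3cmd configuration for a Rook/Ceph endpoint is omitted.)

apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: push-to-s3
  namespace: installer
spec:
  workspaces:
    # Workspace holding the artifacts produced by earlier Tasks
    - name: packages
  steps:
    - name: push
      image: d3fk/s3cmd  # any image that ships the s3cmd client
      script: |
        #!/usr/bin/env sh
        # Assumes s3cmd is already configured (e.g. a mounted .s3cfg
        # pointing at the Rook/Ceph endpoint with access keys).
        # $(workspaces.packages.path) is substituted by Tekton before the
        # script runs, so the shell sees a plain directory path.
        s3cmd put --recursive "$(workspaces.packages.path)/" s3://my-bucket/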

EDIT:
The config you referred to was just a test of the ordering from the link sbwsg provided. Originally my resource config ordering was OK; see my originally posted configs.

Thank you for helping.

tekton-robot (Collaborator) commented

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.

/lifecycle stale

Send feedback to tektoncd/plumbing.

tekton-robot (Collaborator) commented

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.

/lifecycle rotten

Send feedback to tektoncd/plumbing.

tekton-robot (Collaborator) commented

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

/close

Send feedback to tektoncd/plumbing.

tekton-robot added the lifecycle/rotten and lifecycle/stale labels Aug 13, 2020
tekton-robot (Collaborator) commented

@tekton-robot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

/close

Send feedback to tektoncd/plumbing.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
