Argo-events should not require workflows to be in an S3 bucket #41

Closed
shrinandj opened this issue Jun 29, 2018 · 3 comments

Labels: enhancement (New feature or request)

@shrinandj
Contributor

Is your feature request related to a problem? Please describe.
Currently, argo-events requires an artifactLocation in the Trigger. This artifactLocation has the details of the workflow to run when the trigger is fired. This is cumbersome to set up (even with minio).

Describe the solution you'd like
It would be better if there were no need for the S3 bucket. Ideally, the workflow to run could be inlined in the Sensor yaml. Alternatively, the workflow yaml could live in a config-map.

Describe alternatives you've considered
None

@shrinandj
Contributor Author

Requirements:

ArtifactLocation is a yaml map; the only key currently allowed is "s3":

artifactLocation: 
  s3: 
    accessKey: 
      key: accesskey
      name: artifacts-minio
    bucket: workflows
    endpoint: "artifacts-minio.default:9000"
    insecure: true
    key: hello-world.yaml
    secretKey: 
      key: secretkey
      name: artifacts-minio

Users should be able to specify various types of locations for the K8s resources in the trigger.

Possible options:

Inline:
It should be possible for the user to have the entire workflow yaml embedded directly in the sensor yaml itself.

artifactLocation: 
  inline: |
      apiVersion: argoproj.io/v1alpha1
      kind: Workflow
      metadata: 
        generateName: hello-world-
      spec: 
        entrypoint: whalesay
        templates: 
          - 
            container: 
              args: 
                - "hello world"
              command: 
                - cowsay
              image: "docker/whalesay:latest"
            name: whalesay

file:
Assuming the sensor-controller runs somewhere it has access to the workflow yaml file, having a file option would be useful.

Users could have the workflow yamls in a config map that is mounted at some path in the sensor-controller as a volume.

Users could have the workflow yamls in a dedicated volume that is mounted in the sensor-controller.

artifactLocation: 
  file: 
    path: /my-argo-yamls/my-workflow.yaml

url:
Argo supports submitting workflows from a url, and so does Kubernetes. It would be good if users could also specify that the artifactLocation is a url.

artifactLocation: 
  url: 
    path: https://raw.githubusercontent.com/my-argo-yamls/my-workflow.yaml

Design:

Changes to types.go

Currently, "Trigger" -> "ResourceObject" -> "ArtifactLocation".

ArtifactLocation only has one member of type S3Artifact.

// ArtifactLocation describes the location for an external artifact
type ArtifactLocation struct {
	S3 *S3Artifact `json:"s3,omitempty" protobuf:"bytes,1,opt,name=s3"`
}

Change ArtifactLocation to include three additional sources:

// ArtifactLocation describes the location for an external artifact
type ArtifactLocation struct {
	S3     *S3Artifact `json:"s3,omitempty" protobuf:"bytes,1,opt,name=s3"`
	Inline string
	File   *FileArtifact
	Url    *UrlArtifact
}

// FileArtifact describes an artifact available at a path on the sensor-controller's filesystem
type FileArtifact struct {
	Path string
}

// UrlArtifact describes an artifact that can be fetched from a url endpoint
type UrlArtifact struct {
	Endpoint string
}

The "store" package has the ArtifactReader interface.
s3.go implements the ArtifactReader for s3.

There will have to be additional implementations for Inline, File and Url: say store/inline.go, store/file.go and store/url.go, plus corresponding unit test files.

Implementing the ArtifactReader interface requires implementing the following method:

Read() ([]byte, error)
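
For reference, a minimal sketch of that interface as described above (the authoritative definition lives in the store package):

// ArtifactReader reads the raw bytes of a trigger resource from some source.
type ArtifactReader interface {
	Read() ([]byte, error)
}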

For inline:
The Read() method can simply return the content that is already embedded in the sensor spec.

For file:
The Read() method will have to read the file from the specified path and return its contents.

For url:
The Read() method will have to fetch the content from the endpoint and return its contents.
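
A rough sketch of what the three readers could look like (type names, fields and error handling here are illustrative, not final):

package store

import (
	"fmt"
	"io/ioutil"
	"net/http"
)

// InlineReader returns the manifest that is already embedded in the sensor spec.
type InlineReader struct {
	inlineArtifact string
}

func (r *InlineReader) Read() ([]byte, error) {
	return []byte(r.inlineArtifact), nil
}

// FileReader reads the manifest from a path mounted into the sensor-controller.
type FileReader struct {
	path string
}

func (r *FileReader) Read() ([]byte, error) {
	return ioutil.ReadFile(r.path)
}

// URLReader fetches the manifest over HTTP(S).
type URLReader struct {
	endpoint string
}

func (r *URLReader) Read() ([]byte, error) {
	resp, err := http.Get(r.endpoint)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("failed to fetch artifact from %s: %s", r.endpoint, resp.Status)
	}
	return ioutil.ReadAll(resp.Body)
}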


The following code is executed when a trigger is fired:

	if trigger.Resource != nil {
		creds, err := store.GetCredentials(soc.controller.kubeClientset, soc.controller.Config.Namespace, trigger.Resource.ArtifactLocation)
		if err != nil {
			return err
		}
		reader, err := store.GetArtifactReader(trigger.Resource.ArtifactLocation, creds)
		if err != nil {
			return err
		}
		uObj, err := store.FetchArtifact(reader, trigger.Resource.GroupVersionKind)
		if err != nil {
			return err
		}
		err = soc.createResourceObject(trigger.Resource, uObj)
		if err != nil {
			return err
		}
	}

• The first step of getting credentials today requires credentials to be present. This will have to become optional for inline, file and url.

• The call to store.GetArtifactReader will return the appropriate ArtifactReader object based on whether the location is S3, inline, file or url (see the sketch after this list).

• store.FetchArtifact will call reader.Read(), which should return the byte array of the k8s resource to create.

• The call to soc.createResourceObject and everything onwards remains unchanged.
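
To illustrate the second bullet, a hedged sketch of how store.GetArtifactReader could dispatch on the new fields. It continues in the same store package as the reader sketch above; Credentials and NewS3Reader are placeholders for whatever the existing s3 support already provides, not actual names.

func GetArtifactReader(loc *ArtifactLocation, creds *Credentials) (ArtifactReader, error) {
	switch {
	case loc.S3 != nil:
		// Existing path: delegate to the s3.go implementation (constructor name illustrative).
		return NewS3Reader(loc.S3, creds)
	case loc.Inline != "":
		return &InlineReader{inlineArtifact: loc.Inline}, nil
	case loc.File != nil:
		return &FileReader{path: loc.File.Path}, nil
	case loc.Url != nil:
		return &URLReader{endpoint: loc.Url.Endpoint}, nil
	default:
		return nil, fmt.Errorf("unknown artifact location type")
	}
}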


Comments/suggestions welcome!

@magaldima
Contributor

Your proposal is very thorough! I like the idea of having inline, file, and url sources in addition to the s3 ArtifactLocation as it greatly increases the flexibility of sourcing manifests.

Looking forward to this contribution!

shrinandj added a commit to shrinandj/argo-events that referenced this issue Jul 21, 2018
Artifacts can now be inline with the sensor yaml. See ./examples/inline-sensor.yaml
as an example.

Refs argoproj#41
shrinandj added a commit to shrinandj/argo-events that referenced this issue Jul 23, 2018
Artifacts can now be inline with the sensor yaml. See ./examples/inline-sensor.yaml
as an example.

Refs argoproj#41
magaldima pushed a commit that referenced this issue Jul 23, 2018
* Support inline artifacts.

Artifacts can now be inline with the sensor yaml. See ./examples/inline-sensor.yaml
as an example.

Refs #41

* Use correct protobuf offset for inline artifacts.
shrinandj added a commit to shrinandj/argo-events that referenced this issue Jul 25, 2018
With this change, sensor yamls can have a trigger in which the workflow
yaml is specified as a file path. This will enable cases where workflow
yamls are in config-maps and/or in some persistent volumes.

Refs argoproj#41
shrinandj added a commit to shrinandj/argo-events that referenced this issue Jul 26, 2018
Sensor yamls can now specify the workflow location as a url path with this change.

Refs argoproj#41
magaldima pushed a commit that referenced this issue Jul 26, 2018
With this change, sensor yamls can have a trigger in which the workflow
yaml is specified as a file path. This will enable cases where workflow
yamls are in config-maps and/or in some persistent volumes.

Refs #41
shrinandj added a commit to shrinandj/argo-events that referenced this issue Jul 26, 2018
Sensor yamls can now specify the workflow location as a url path with this change.

Refs argoproj#41
magaldima pushed a commit that referenced this issue Jul 26, 2018
Sensor yamls can now specify the workflow location as a url path with this change.

Refs #41
shrinandj added a commit to shrinandj/argo-events that referenced this issue Jul 26, 2018
Testing Done:

1. Created a sensor with verifycert explicitly set to false. It succeeded.
2. Created a sensor without verifycert. It succeeded.

Refs argoproj#41
magaldima pushed a commit that referenced this issue Jul 26, 2018
* Add verifycert as an argument to URL trigger source

Testing Done:

1. Created a sensor with verifycert explicitly set to false. It succeeded.
2. Created a sensor without verifycert. It succeeded.

Refs #41

* Remove panic() in url.go and file.go.

Return appropriate errors instead.
shrinandj added a commit to shrinandj/argo-events that referenced this issue Jul 26, 2018
Add info about inlined, file and url sources for trigger workflows in the docs as well as changelog.

Refs argoproj#41.
magaldima pushed a commit that referenced this issue Jul 26, 2018
Add info about inlined, file and url sources for trigger workflows in the docs as well as changelog.

Refs #41.
@magaldima
Contributor

Closing this as it has been fully implemented now. Thanks @shrinandj, great job here!

@magaldima added this to the v1.0.0 milestone Jul 26, 2018
juliev0 pushed a commit to juliev0/argo-events that referenced this issue Mar 29, 2022
* Support inline artifacts.

Artifacts can now be inline with the sensor yaml. See ./examples/inline-sensor.yaml
as an example.

Refs argoproj#41

* Use correct protobuf offset for inline artifacts.
juliev0 pushed a commit to juliev0/argo-events that referenced this issue Mar 29, 2022
With this change, sensor yamls can have a trigger in which the workflow
yaml is specified as a file path. This will enable cases where workflow
yamls are in config-maps and/or in some persistent volumes.

Refs argoproj#41
juliev0 pushed a commit to juliev0/argo-events that referenced this issue Mar 29, 2022
Sensor yamls can now specify the workflow location as a url path with this change.

Refs argoproj#41
juliev0 pushed a commit to juliev0/argo-events that referenced this issue Mar 29, 2022
* Add verifycert as an argument to URL trigger source

Testing Done:

1. Created a sensor with verifycert explicitly set to false. It succeeded.
2. Created a sensor without verifycert. It succeeded.

Refs argoproj#41

* Remove panic() in url.go and file.go.

Return appropriate errors instead.
juliev0 pushed a commit to juliev0/argo-events that referenced this issue Mar 29, 2022
Add info about inlined, file and url sources for trigger workflows in the docs as well as changelog.

Refs argoproj#41.