How can the workflow access local files and directories #1050

Closed
mrm1001 opened this issue Oct 18, 2018 · 8 comments
Labels
type/support User support issue - likely not a bug

Comments

@mrm1001

mrm1001 commented Oct 18, 2018

Is this a BUG REPORT or FEATURE REQUEST?:

Forgive me if this is not the right place for this. I am very new to Argo and generally infrastructure work.

What happened:

I'm trying to run a local script as one of the steps of a workflow. For example, say I have a Python file on my local machine with the logic of this example: https://github.com/argoproj/argo/blob/master/examples/scripts-python.yaml. I would like to create a container that runs that script, instead of having to inline the source code in the workflow spec.

This is what I tried:

  • Using the script template (as in the examples) only accepts inline source code, not a local file path.
  • Using artifacts to copy a local file (from the host) into the container only accepts http, git, or s3. I could not find any example with a local file path.
  • I could not figure out how to use volumes to mount a local directory.

Anything else we need to know?:
If this exists already it would be great to have an example.

@jessesuen
Member

@mrm1001 we have a raw artifact type, where the file contents are inlined into the workflow spec. See:
https://github.com/argoproj/argo/blob/master/examples/input-artifact-raw.yaml
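
For reference, a minimal sketch of that pattern (the artifact name, file contents, and image below are placeholders, not copied from the linked example). The raw data is written to the declared path before the main container runs:

apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: input-artifact-raw-
spec:
  entrypoint: print-file
  templates:
  - name: print-file
    inputs:
      artifacts:
      - name: myfile                 # placeholder artifact name
        path: /tmp/file.txt
        raw:
          data: |
            these lines are inlined in the workflow spec
            and written to /tmp/file.txt before the main container starts
    container:
      image: alpine:3.12             # placeholder image
      command: [cat]
      args: ["/tmp/file.txt"]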

@jessesuen
Member

You can do something like:

argo submit workflow.yaml -p file="`cat <path-to-local-file>`"

@rpocase

rpocase commented Jan 29, 2020

@jessesuen Is there a more complete example of your last comment? The best I could guess was that you would have an input parameter that populates an input artifact, i.e.

inputs:
  parameters:
  - name: contents
  artifacts:
  - name: fileContents
    path: /tmp/fileContents
    raw:
      data: |
        "{{inputs.parameters.contents}}"

I seem to get a parse error when trying something similar. I COULD inline the file content, but the file is rather large, and inlining it makes the workflow harder to read.

@fjammes
Contributor

fjammes commented Jan 5, 2021

Yes this feature would be very interesting for managing large files. Would you have a track please?

@alexec
Contributor

alexec commented Jan 5, 2021

Would you have a track please?

eh?

@fjammes
Contributor

fjammes commented Jan 7, 2021

I have the same problem as the one described here: #1050 (comment)
It seems that inserting the content of {{inputs.parameters.contents}} into the Argo workflow YAML file breaks its indentation and produces a parse error. Do you have any pointers for solving this indentation problem? (In my case I need to inject a local JSON file and a CA chain file into an Argo resource.)
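
One workaround, not suggested in this thread but sketched here as an option, is to avoid multi-line substitution altogether: base64-encode the file on the submitting machine so the parameter stays a single line and cannot break the YAML indentation, then decode it inside the container. The parameter name contents_b64 and the alpine image are made up for illustration:

# encode the file as a single line at submit time (GNU coreutils base64 shown; plain base64 on macOS)
argo submit workflow.yaml -p contents_b64="$(base64 -w0 < path/to/local-file)"

and in the consuming template:

    inputs:
      parameters:
      - name: contents_b64
    container:
      image: alpine:3.12
      command: [sh, -c]
      args:
      - |
        # decode the single-line parameter back into the original file
        echo "{{inputs.parameters.contents_b64}}" | base64 -d > /tmp/fileContents
        cat /tmp/fileContents

For very large files this still runs into parameter and argument size limits, so a mounted volume or a proper artifact repository remains the better fit there.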

@alexec
Contributor

alexec commented Jan 7, 2021

Could you use podSpecPatch to mount a volume containing your files?
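
A rough sketch of that idea, assuming a pre-existing PVC named my-files-pvc (the claim name, mount path, and image are placeholders). podSpecPatch takes a strategic merge patch for the pod spec as a string; whether adding the volume itself this way behaves well may depend on the Argo version, and the workflow-level volumes field is the more conventional place for it:

  templates:
  - name: main
    podSpecPatch: |
      containers:
      - name: main
        volumeMounts:
        - name: my-files
          mountPath: /mnt/files
      volumes:
      - name: my-files
        persistentVolumeClaim:
          claimName: my-files-pvc
    container:
      image: alpine:3.12
      command: [ls]
      args: ["/mnt/files"]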

@jalberti

jalberti commented Mar 2, 2021

I came across this issue while looking into a way to connect Argo to a distributed filesystem (not HDFS). I'm not sure I fully understand the artifact repository concept, so excuse me if my question makes no sense, but I'm trying to have my workflow access my files in the most direct fashion possible: by mounting the filesystem.

@alexec your comment makes me curious. Getting a volume into the pod seems like a good solution to my problem, assuming the volume can be read from and written to, but I do not see a "files" artifact repository type?

I also do not see a way to implement a custom artifact repository; is something like that possible without having to fork argo-workflows? Kubernetes has the concept of CSI drivers, and if I can access my "remote" (aka "repository") as a filesystem in multiple pods simultaneously, I think I should also be able to access it in my Argo workflow somehow, but I can't figure out how.

https://medium.com/asl19-developers/create-readwritemany-persistentvolumeclaims-on-your-kubernetes-cluster-3a8db51f98e3

Thank you for your comments.
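
For the ReadWriteMany case described above, a sketch of the conventional approach (the claim name, mount path, and image are made up): reference an existing RWX PVC in the workflow-level volumes field and add a volumeMounts entry in each template that needs it.

apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: shared-fs-
spec:
  entrypoint: read-shared
  volumes:
  - name: shared
    persistentVolumeClaim:
      claimName: shared-rwx-pvc     # hypothetical ReadWriteMany claim backed by a CSI driver
  templates:
  - name: read-shared
    container:
      image: alpine:3.12            # placeholder image
      command: [ls]
      args: ["/mnt/shared"]
      volumeMounts:
      - name: shared
        mountPath: /mnt/shared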
