How can the workflow access local files and directories #1050
Comments
@mrm1001 we have a raw artifact file, where the file needs to be inlined into the workflow spec. See:
You can do something like:
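A minimal sketch of inlining file contents via a raw artifact is shown below; the template name, path, and contents are illustrative and not taken from this thread:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: raw-artifact-
spec:
  entrypoint: print-file
  templates:
  - name: print-file
    inputs:
      artifacts:
      # The artifact contents are written to `path` before the container starts.
      - name: fileContents
        path: /tmp/fileContents
        raw:
          data: |
            this text is inlined into the workflow spec
            and materialized as a file inside the pod
    container:
      image: alpine:3.12
      command: [cat, /tmp/fileContents]
```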
@jessesuen Is there a more complete example of your last comment? The best I could guess was that you would have an input parameter that populates an input artifact, i.e.:

    inputs:
      parameters:
      - name: contents
      artifacts:
      - name: fileContents
        path: /tmp/fileContents
        raw:
          data: |
            "{{inputs.parameters.contents}}"

I seem to get a parse error trying to do something similar. I COULD inline the file content, but the file is rather large and makes reading the workflow harder.
Yes, this feature would be very interesting for managing large files. Do you have any leads, please?
eh?
I have the same problem as the one described here: #1050 (comment)
Could you use …
I came across this issue while looking into a way to connect Argo to a distributed filesystem (not HDFS). I'm not sure I understand the artifact repository concept fully, so excuse me if my question makes no sense, but I'm trying to have my workflow connect to my files in the most direct fashion: by mounting the filesystem.

@alexec your comment makes me curious. Getting a volume into the pod seems like a good solution to my problem, assuming the volume can be read from and written to, but I do not see a "files" repository type. I also do not see a way to implement a custom artifact repository; is something like that possible without having to fork argo-workflows?

K8S has the concept of CSI drivers. If I can access my "remote" aka "repository" as a filesystem in multiple K8S pods simultaneously, I think I should also be able to access it in my Argo workflow somehow, but I can't figure out how. Thank you for your comments.
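As a concrete illustration of the volume route (not an answer from the thread): an Argo Workflow template can mount any Kubernetes volume, so a PersistentVolumeClaim backed by a CSI driver can be mounted directly into a step. The claim name, mount path, and image below are assumptions:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: mounted-fs-
spec:
  entrypoint: list-files
  # Any Kubernetes volume source works here, e.g. a PVC backed by a CSI driver.
  volumes:
  - name: shared-fs
    persistentVolumeClaim:
      claimName: my-distributed-fs   # hypothetical existing claim
  templates:
  - name: list-files
    container:
      image: alpine:3.12
      command: [ls, -la, /mnt/shared]
      volumeMounts:
      - name: shared-fs
        mountPath: /mnt/shared
```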
Is this a BUG REPORT or FEATURE REQUEST?:
Forgive me if this is not the right place for this. I am very new to Argo and to infrastructure work in general.
What happened:
I'm trying to run a local script as one of the steps of the workflow. For example, say I had a Python file on my local machine with the logic of this example: https://github.com/argoproj/argo/blob/master/examples/scripts-python.yaml. I would like to create a container which runs that script instead of having to write the source code inline in the workflow spec.
This is what I tried:
- The `script` tag (like in the examples) only accepts source code, not a local filepath.
- `http`, `git`, or `s3`. I could not find any example with a local filepath.
- `volumes` to mount a local directory.

What you expected to happen:
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
If this exists already it would be great to have an example.
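For illustration only (not part of the original report), one common way to get a local script into a step without inlining it is to load the script into a ConfigMap and mount that as a volume; the names and paths below are hypothetical:

```yaml
# Assumes the script was first loaded into a ConfigMap, e.g.:
#   kubectl create configmap my-script --from-file=script.py
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: run-local-script-
spec:
  entrypoint: run-script
  volumes:
  - name: script-volume
    configMap:
      name: my-script           # hypothetical ConfigMap holding script.py
  templates:
  - name: run-script
    container:
      image: python:alpine3.6
      command: [python, /scripts/script.py]
      volumeMounts:
      - name: script-volume
        mountPath: /scripts
```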
Environment:
Other debugging information (if applicable):