Container sequences #2551
Would that work for: "Avoids the need to pass output parameters around"?
Yes! That's the main advantage.
The main advantage of this feature would be to avoid passing artifacts between different tasks in a Workflow via an external provider, when the intermediary artifacts can be discarded after use. To achieve this, we would make use of ephemeral containers in K8s. The idea is that the controller would create and remove ephemeral containers in a single pod, allowing them all to use the same filesystem.

I envision something like a steps template:

```yaml
- name: sequence
  sequence:
  - - name: create-artifact
      template: gen-data
  - - name: consume-artifact
      template: process-data
- name: gen-data
  container:
    ...
  outputs:
    artifacts:
      file: ...
- name: process-data
  inputs:
    artifacts:
      file: ...
  container:
    ...
```

Ideally, users would simply be able to rename `steps` to `sequence`.

NOTE: This feature is still only an idea: we're about to start creating a PoC to see just how viable it is. Nothing is set in stone (not even the name).
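For concreteness, here is a rough sketch of what the pod might look like once the controller injects the second step as an ephemeral container. All names, images, and the volume layout below are hypothetical illustrations rather than part of the proposal, and ephemeral containers cannot be declared at pod creation; they are added later through a dedicated pod subresource.

```yaml
# Hypothetical pod state after the controller injects the "consume-artifact"
# step; every name and image here is illustrative only.
apiVersion: v1
kind: Pod
metadata:
  name: sequence-pod
spec:
  volumes:
  - name: workdir              # shared filesystem between the steps
    emptyDir: {}
  containers:
  - name: create-artifact      # first step, part of the original pod spec
    image: gen-data:latest
    volumeMounts:
    - name: workdir
      mountPath: /work
  ephemeralContainers:         # injected by the controller after step 1 finishes
  - name: consume-artifact
    image: process-data:latest
    volumeMounts:
    - name: workdir
      mountPath: /work
```

Because both steps mount the same `emptyDir`, the intermediary artifact never has to leave the pod.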
Seems like a great idea and very useful. Just a couple of thoughts:
For Argo on production clusters, it might be a capability not exercised for a while. That said, the benefits might outweigh the risks for certain use-cases.
You are very much correct @ddseapy. We are definitely treating this as an experimental feature.
An update on this: given some limitations K8s places on this feature (mainly that individual ephemeral containers in a Pod cannot be replaced or modified; the only supported operation is replacing the entire list of ephemeral containers), we don't think this feature as described is currently feasible. However, I'll investigate whether we can take advantage of ephemeral containers for other purposes, such as a streamlined "Retry" node that performs its retries on the same Pod, avoiding the need to create new Pods and download artifacts every time.
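To illustrate the limitation, here is a hedged sketch (all names are hypothetical) of the body sent to the pod's `ephemeralcontainers` subresource as it existed at the time of this discussion. The request carries the complete list, so a controller cannot patch or delete a single entry, which is exactly what the sequencing design needed.

```yaml
# Hypothetical subresource body: the whole list is replaced in one
# operation, so individual entries cannot be modified or removed.
apiVersion: v1
kind: EphemeralContainers
metadata:
  name: sequence-pod
ephemeralContainers:
- name: create-artifact
  image: gen-data:latest
- name: consume-artifact
  image: process-data:latest
```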
@simster7 could you please close this issue, since the feature is not possible, and open a new issue for "in-place retries", so that each issue's 👍 count reflects its actual popularity?
@simster7 bump! |
Closing this as it is currently infeasible. Related: #3475
Sequenced Containers the Tekton Way

Similar to how Tekton does it: https://github.com/tektoncd/pipeline/tree/master/cmd/entrypoint

How this works: each step container's real command is wrapped by an entrypoint binary. The binary waits for a file written by the previous step before running the real command, and writes its own file when done, so containers that all start at pod creation still execute in sequence.
How could Workflows use this?

Simpler and more powerful executor: As the binary runs in the same process namespace as the sub-process, it can easily copy inputs and capture outputs without any of the magic that container runtime executors need to use. This also removes the need for a wait container, which would reduce costs. See #4186

Many steps within a pod: This model would allow many steps to run within a single pod. However, it has scaling issues: we could not run a 1000-step workflow like this. Because each container must be spun up just to wait, there will be many cases where we're consuming resources but doing no useful work. See #2551

There are some really interesting challenges around how pods report status back for the workflow under this model. We'd need to multiplex it, so we might want to address it at the same time as #3961.
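The entrypoint idea above can be sketched in a few lines. This is a minimal, hedged simulation of the pattern, not Tekton's or Argo's actual code: each step's real command is wrapped so that it blocks on a sentinel file from the previous step and writes its own sentinel on success. Here, two "steps" share a temp directory standing in for a shared volume, and all file and step names are illustrative.

```python
import os
import subprocess
import sys
import tempfile
import time


def wait_for(sentinel: str, poll: float = 0.05) -> None:
    """Block until the previous step signals completion by creating `sentinel`."""
    while not os.path.exists(sentinel):
        time.sleep(poll)


def run_step(cmd: list, wait_file, done_file: str) -> int:
    """Entrypoint wrapper: wait for the previous step, run the real command,
    then signal the next step (only on success)."""
    if wait_file:
        wait_for(wait_file)
    rc = subprocess.run(cmd).returncode
    if rc == 0:
        open(done_file, "w").close()
    return rc


# Two "containers" sharing a volume (here: a temp dir standing in for it).
shared = tempfile.mkdtemp()
first_done = os.path.join(shared, "gen-data.done")
second_done = os.path.join(shared, "process-data.done")

run_step([sys.executable, "-c", "print('generating')"], None, first_done)
run_step([sys.executable, "-c", "print('processing')"], first_done, second_done)
print(os.path.exists(second_done))  # → True
```

In a real pod, both wrappers would start simultaneously in their own containers; the second would simply spin on the sentinel file until the first finishes, which is also the source of the "consuming resources but doing no useful work" concern above.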
Summary
It should be possible to run multiple steps within the same pod
using ephemeral containers.

Motivation
Proposal
TODO
Message from the maintainers:
If you wish to see this enhancement implemented please add a 👍 reaction to this issue! We often sort issues this way to know what to prioritize.