RFC: Pipeline config #2255
Replies: 42 comments
-
My recommendation:

```yaml
when:
  - ...

images:
  runner: ubuntu:20
  maker: golang:latest

# All types should echo the yaml structure, with no "special"
# handling. This allows for expansion without "custom" knowledge.
steps:
  - name: run
    runs_on: runner
    commands:
      - echo ok

  - name: make
    runs_on: maker
    environment:
      A: b
      C: d
    secrets:
      A: b
      C: d
    commands:
      - make
```

Following the principles:
-
We can, but it would break all existing configs - what's the benefit? You can add it to #829 as a comment, but for now I don't see a benefit. If it would technically improve a thing or make something possible that was not possible before, it should be considered - but we should not break things just for the sake of taste - call it backwards compatibility.
-
👍🏾 I really like that idea of having just one way of doing things. Having the option to use arrays or just a single string confuses me quite often and makes things like json-schema a hell to write.
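(For illustration, a minimal sketch of the ambiguity; step names and images are made up. Today both spellings are accepted for the same field, so a json-schema has to model "string or array of strings":)

```yaml
steps:
  lint:
    image: golang:latest
    commands: make lint   # single-string form

  test:
    image: golang:latest
    commands:             # array form; with "one way only",
      - make test         # this would be the only accepted spelling
      - make vet
```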
-
It would. When we are following the yaml standard we can do the following things:
In general, the idea of a version is good since it allows one to slowly migrate to a more "readable" yaml. Following standards, IMO, is the best way to expand use. Finally, once proper yaml exists, we can in general start considering dynamic pipelines.
-
Oh, another thing: one can see an example of the over-complication caused by not following yaml standards in the Containers unmarshal:

```go
for i, n := range value.Content {
    if i%2 == 1 {
        container := Container{}
        err := n.Decode(&container)
        if err != nil {
            return err
        }
        if container.Name == "" {
            container.Name = fmt.Sprintf("%v", value.Content[i-1].Value)
        }
        c.Containers = append(c.Containers, &container)
    }
}
```

instead of

```go
containers := []*Container{}
err := value.Decode(&containers)
```

which complicates the code, causes errors, and makes things harder to manage or write.
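(For context, a rough sketch of the two YAML shapes these snippets correspond to; names are illustrative:)

```yaml
# Map-keyed form: the container/step name is the mapping key, so the
# custom UnmarshalYAML above walks value.Content in key/value pairs
# just to recover it.
steps:
  build:
    image: golang:latest
---
# Plain list form: the name is an ordinary field, so a single
# value.Decode(&containers) into a slice is enough.
steps:
  - name: build
    image: golang:latest
```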
-
To be able to have multiple parsers (for different versions) in Woodpecker I would like to make some changes to the parsing (e.g. creating a clean interface for what needs to be passed in and what the output is). I already started that a bit in #802.
-
Two points from me:
-
I should actually read the OP,
-
Things that come to mind would be:
Use-cases would be highly conditional pipelines (possibly controlled via commit message), which would allow for easy sub-builds (i.e. when developing woodpecker, only build OCI images for the agent and server with the default image, so no binaries, no docs, no alpine, ...); see the hypothetical sketch below.
Also, since v2 is a "breaking" version, I would like to try and get as much of the major refactoring / syntax changes done as possible, so we don't have to break the schema again soon just to add new features.
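(A hypothetical sketch of such a conditional sub-build; the list-style `when` and the `path` / `evaluate` filters here are illustrative assumptions, not settled syntax:)

```yaml
steps:
  - name: build-agent-image
    image: woodpeckerci/plugin-docker-buildx
    when:
      # run when agent sources changed ...
      - path: "agent/**"
      # ... or when the commit message explicitly asks for image builds
      - evaluate: 'CI_COMMIT_MESSAGE contains "[build-images]"'
```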
-
You can just have the job decide what is appropriate for it to do. The job runs actual code; recreating so much of that logic in the config is probably not necessary.
-
I'd tend to not use the relative path as a "git instance" path; if I'd use it at all I'd go with repo-relative (allowing reuse within the repo, which may get useful when running pipelines for multiple arches, or having standard prepare steps that are used in PR pipelines as well as in test and release pipelines...). Regarding templating, I'm fairly sure it would be the way to go, as you could import and use the step in multiple places, reconfiguring it as you need (changing names, just the command, the environment, ...). GitLab CI, which @genofire used as an example, handles variables completely differently, and in their context you can really achieve most things that way (though also not all).
-
If we're gonna introduce imports and templates ...
-
If we really wanna make imports secure, we would have to come up with something similar to how go modules, rust crates etc. are handled :/
-
Two questions: should templates at most contain a whole pipeline, only a workflow, or only steps?
-
Cross-referencing #1504 as another proposal for the Pipeline config... (setting up envs in one step for use in another one, that may also be a plugin)
-
What a pure base config should not become -> scriptable (turing-complete); that's what isolated environments are for ...
-
With imports I'm a bit unsure; I'd tend to go with the step as the unit of inclusion (and potential templating, which BTW is a native yaml feature). I can see use-cases where including more than a single step (potentially even a complete pipeline) could be useful though (think of a standard build pipeline that just gets a few values set, e.g. debug runs and release runs).
-
Well, if it's so generic, it can be covered by a plugin ... and/or one that generates the pipeline ... and so #1400 would cover it without adding complexity to woodpecker itself.
-
Is this a reference to #1504 or to the templating / imports discussion? If it's re #1504, I'd not call it scriptable; it's more of a way to prepare / set up an environment which would either need to be repeated often within the pipeline, or is used by a step that is not capable of setting up the environment as needed (think plugins).
-
For the import step, I'd agree that it is possible to do this in a compile step (if that gets implemented). As for yaml templating, it's a native yaml feature that would "just" need a slight refactor of the pipeline config to allow for it (it's mostly that we name the step by naming the root node instead of having a name attribute on a list item).
-
And I need to add a correction: the templating via merge keys (which I was referring to when talking about templating) is implemented in many yaml implementations, but isn't in the yaml 1.2 spec... (due to a "conflict" with one of the yaml core folks) sorry if I caused confusion... Its documentation is here: https://yaml.org/type/merge.html Also the go library used by woodpecker (gopkg.in/yaml.v3) supports it: it's implemented in decode.go (tested in decode_test.go) and is part of resolve.go.
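(For illustration, a small sketch of what merge-key templating could look like in a pipeline config; the `.go-defaults` root key and anchor are made up and not part of any existing schema:)

```yaml
# reusable step body, defined once as an anchor
.go-defaults: &go-defaults
  image: golang:latest
  environment:
    CGO_ENABLED: "0"

steps:
  - name: build
    <<: *go-defaults        # merge the shared fields ...
    commands:
      - go build ./...
  - name: test
    <<: *go-defaults        # ... overriding only what differs per step
    commands:
      - go test ./...
```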
-
ah ... that's already on the todo list -> #1192
-
ah sorry, didn't notice that one... (it would play nicely with templating though)
-
Let's collect ideas for the pipeline config format. 🚀
We should have a critical look at the current config format and propose all possible changes to the format with the following aspects in mind:
General feature requests for the pipeline (can be implemented in the current version already if possible):
- remove support for `root.pipeline.[step].commands` string values in favor of array lists -> single string is array with one item
- move `root.platform`, `root.branches`, `root.labels`, `root.runs_on`, `root.depends_on` to `root.when.xxx` to match step filters (Use `when` keyword for pipeline conditions #283)
- make `root.when` and `root.pipeline.[step].when` an array of the current format, with items using the current settings, to allow ORs (Allow multiple when filters #686, Add support for pipeline root.when conditions #770, ...) -> non breaking
- drop `root.Cache` and its functionality in favor of plugins? -> needs more discussion
- replace `group` with `needs`: if no `needs` is set the step will start directly, `needs` instead of `group` (drop step.group in favour of depends_on to create a DAG #1860, from #393); see the sketch below
Current idea:
A `version` field could be used to run pipeline parsing with some kind of sub-program for that specific version.
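(A rough sketch of how the `version` field and several of the list items above could fit together; the keys and values are illustrative only:)

```yaml
version: 2              # selects the parser / "sub-program" for this format

when:                   # array of condition sets, OR-ed together
  - branch: main
  - event: tag

steps:
  - name: build
    image: golang:latest
    commands:           # always an array, never a bare string
      - go build ./...

  - name: test
    image: golang:latest
    needs: [build]      # instead of group; no needs = start directly
    commands:
      - go test ./...
```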
Backend data format / type should not depend on Docker types. For example, we should not have `networks` as a property in steps.