RFC: Pipeline config #567
Comments
My recommendation:

```yaml
when:
  - ...
images:
  runner: ubuntu:20
  maker: golang:latest
# All types should echo the yaml structure, with no "special"
# handling. This allows for expansion without "custom" knowledge.
steps:
  - name: run
    runs_on: runner
    commands:
      - echo ok
  - name: make
    runs_on: maker
    environment:
      A: b
      C: d
    secrets:
      A: b
      C: d
    commands:
      - make
```

Following the principles:
|
We can, but it would break all existing configs - what's the benefit? You can add it to #829 as a comment, but for now I don't see a benefit. If it would technically improve a thing or make something possible that was not before, it should be considered - but just for the sake of taste we should not break things; call it backwards compatibility |
👍🏾 I really like that idea of having just one way of doing things. Having the option to use arrays or just a single string confuses me quite often and makes things like json-schema a hell to write. |
It would. When we are following the yaml standard we can do the following things:
In general, the idea of a version is good since it allows one to slowly migrate to a more "readable" yaml. Following standards, IMO, is the best way to expand use. Finally, once proper yaml exists, we can in general start considering dynamic pipelines. |
Oh, another thing: one can see an example of over-complication caused by not following yaml standards in the Containers unmarshal:

```go
// Manually walk the yaml mapping node: keys sit at even
// indices of value.Content, values at odd indices.
for i, n := range value.Content {
	if i%2 == 1 {
		container := Container{}
		err := n.Decode(&container)
		if err != nil {
			return err
		}
		// Fall back to the mapping key as the container name.
		if container.Name == "" {
			container.Name = fmt.Sprintf("%v", value.Content[i-1].Value)
		}
		c.Containers = append(c.Containers, &container)
	}
}
```

Instead of:

```go
containers := []*Container{}
err := value.Decode(&containers)
```

This complicates the code, causes errors, and makes things harder to manage and write. |
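For context, a sketch of the two YAML shapes being compared here (illustrative, not copied from the actual Woodpecker config): the map form, where the name is the mapping key and requires the custom decoder above, versus the list form, which a plain `value.Decode` handles directly.

```yaml
# map form: the container name is the mapping key (needs the custom decoder)
containers:
  build:
    image: golang:latest

---
# list form: a plain sequence with explicit name fields (decodes directly)
containers:
  - name: build
    image: golang:latest
```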
To be able to have multiple parsers (for different versions) in Woodpecker I would like to make some changes to the parsing (e.g. creating a clean interface defining what needs to be inserted and what the output is). I already started that a bit in #802. |
Two points from me:
|
|
Another thing (should I open FRs for these ideas?) would be some kind of `badge_groups`. This might be augmented with something like:

```yaml
apiVersion: woodpecker-ci.org/v2
kind: pipeline
steps:
  - name: Lint
    badge_groups:
      - name: Lint
      - name: ContainerBuild
      - name: BinaryBuild
      - name: Docs
        statusEnabled: false
  - name: ContainerBuild-debian
    badge_groups:
      - name: ContainerBuild
  - name: ContainerBuild-alpine
    badge_groups:
      - name: ContainerBuild
  - name: BinaryBuild
    badge_groups:
      - name: BinaryBuild
  - name: DocsBuild
    badge_groups:
      - name: DocsBuild
  - name: notify
    globalStatusEnabled: false
    badge_groups:
      - name: Notification
```

I've left out images etc. just to show the basic idea. This would give us 5 specific badges + the global one, where the global one ignores the notify step (we've built successfully but not notified). |
What I am missing in the list above: |
that looks like a reimplementation of workflows within a workflow :/ |
It somewhat looks like it, though the goal isn't to implement a workflow, but to collect states to allow for badges / reporting on parts of the entire workflow (think status page: linting ok, security ok, binary-build ok, docker-build failed, integration testing failed). But I know that this would be a "polish" feature, not a base thing / necessity (though collecting that info can be a bit ugly) |
Could be nice to add a way to import pipelines from external repos (similar to how Go imports packages):

```yaml
import:
  - github.com/org/repo/dir/pipeline.yaml
  - git.mycomany.com/org/repo/build.yaml
```
|
I'm unsure on this; it may be more suitable to lightly restructure the yaml and use yaml templating (which could still be an import). This would allow including (named) templates which may either be used as-is, or have single (or multiple) values overridden. The one problem I see with imports in general is the security risk associated with including "something" - we should at least make it possible to add checksums or tags, and disabling it on a per-server basis would be nice (or having an allow list for imports). Edit: just to add, this may get dangerous quickly if the import just changes to including some secret and uploading it somewhere... |
To minimize the risk, it would be nice to pin a version (a git commit ID or a git tag). In the end, you should only import pipelines you developed yourself or verified via code review (as with all software components you use). I am not sure templating is needed; for passing values, environment variables are enough on GitLab CI. @lafriks maybe also relative, like |
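A hypothetical sketch of what pinned imports might look like, assuming an `@ref` suffix for tags or commit IDs (this syntax is illustrative only, not an existing Woodpecker feature):

```yaml
import:
  # pinned to a tag
  - github.com/org/repo/dir/pipeline.yaml@v1.2.3
  # pinned to a specific commit ID
  - git.mycomany.com/org/repo/build.yaml@4f2d8c1
```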
I'd tend to not use the relative path as a "git instance" path; if I'd use it at all I'd go with repo-relative (allowing reuse within the repo, which may get useful when running pipelines for multiple arches, or having standard prepare steps that are used in PR pipelines as well as test and release...). Regarding templating, I'm fairly sure it would be the way to go, as you could import a step and use it at multiple places, reconfiguring it as you need (changing names, just the command, the environment, ...). GitLab CI, which @genofire used as an example, handles variables completely differently, and in their context you can really achieve most things that way (though also not all). |
If we're gonna introduce imports and templates ...
|
If we really want to make imports secure, we would have to come up with something similar to how Go modules, Rust crates etc. are handled :/ |
Two questions: should templates at most contain a whole pipeline, only a workflow, or only steps?
|
Cross-referencing #1504 as another proposal for the Pipeline config... (setting up envs in one step for use in another one, that may also be a plugin) |
What a pure base config should not become -> scriptable (turing-complete); that's what isolated environments are for ... |
With imports I'm a bit unsure; I'd tend to go with the step as the unit of inclusion (and potential templating, which BTW is a native yaml feature). I can see use-cases where including more than a single step (potentially even a complete pipeline) could be useful though (think of a standard build pipeline that just gets a few values set, e.g. debug runs and release runs) |
Well, if it's so generic, it can be covered by a plugin (and/or one that generates the pipeline) ... and so #1400 would cover it without adding complexity to Woodpecker itself |
Is this a reference to #1504 or to the templating / imports discussion? If it's re #1504, I'd not call it scriptable; it's more of a way to prepare / set up an environment which would either need to be repeated often within the pipeline, or is used by a step that is not capable of setting up the environment as needed (think plugins) |
For the import step, I'd agree that it is possible to do this in a compile step (if that gets implemented). As for yaml templating, it's a native yaml feature that would "just" need a slight refactor of the pipeline config to allow for it (it's mostly that we name the step by naming the root node instead of having a dedicated name field) |
And I need to add a correction: the templating via merge keys (which I was referring to when talking about templating) is implemented in many yaml implementations, but isn't in the YAML 1.2 spec (due to a "conflict" with one of the yaml core folks) - sorry if I caused confusion... Its documentation is here: https://yaml.org/type/merge.html Also the Go library used by Woodpecker (gopkg.in/yaml.v3) supports it: it's implemented in decode.go (tested in decode_test.go) and is part of resolve.go. |
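To illustrate, a minimal sketch of merge-key templating as gopkg.in/yaml.v3 resolves it; the step layout here is hypothetical, only the anchor (`&`), alias (`*`), and merge-key (`<<:`) syntax are the standard yaml mechanisms:

```yaml
# define a reusable template as an anchored mapping
.go-defaults: &go-defaults
  image: golang:1.20
  environment:
    CGO_ENABLED: "0"

steps:
  - name: build
    <<: *go-defaults   # merge the template's keys into this step
    commands:
      - go build ./...
  - name: test
    <<: *go-defaults   # reuse it; locally defined keys win over merged ones
    commands:
      - go test ./...
```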
ah ... that's already on the todo list -> #1192 |
ah sorry didn't notice that one... (it would play nicely with templating though) |
add version to workflow configs -> #1834 |
Let's collect ideas for the pipeline config format. 🚀
We should have a critical look at the current config format and propose all possible changes to the format with the following aspects in mind:
General feature requests for the pipeline (these can be implemented in the current version already, if possible):
- remove support for single string values in favor of array lists (`root.pipeline.[step].commands`) -> a single string is an array with one item
- move `root.platform`, `root.branches`, `root.labels`, `root.runs_on`, `root.depends_on` to `root.when.xxx` to match step filters (Use `when` keyword for pipeline conditions #283)
- make `root.when` and `root.pipeline.[step].when` an array of the current format, with items using the current settings, to allow ORs (Allow multiple when filters #686; Add support for pipeline root.when conditions #770, ...) -> non-breaking
- remove `root.Cache` and its functionality in favor of plugins? -> needs more discussion
- replace `group` with `needs`: if no `needs` is set the step will start directly; `needs` instead of `group` (drop step.group in favour of depends_on to create a DAG #1860) from #393 (see the sketch after this list)
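A hypothetical sketch combining the `when`-array and `needs` ideas above; the field names follow the list items, not any released config format:

```yaml
steps:
  - name: build
    image: golang:latest
    # `when` as an array: each item is one filter set, items are ORed
    when:
      - branch: main
        event: push
      - event: tag
    commands:
      - go build ./...
  - name: release
    image: plugins/github-release
    # `needs` replaces `group`: this step waits for build to finish;
    # steps without `needs` start directly
    needs:
      - build
```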
Current idea:
- A version could be used to run pipeline parsing with some kind of sub-program for that specific version.
- The backend data format / type should not depend on Docker types. For example, we should not have `networks` as a property in steps.