feat: "dependsOn" for release dependencies #715
@ilyasotkov Thx! Just curious, but how would you like helmfile to process |
@mumoshu It would be ideal if Helmfile would merge everything into a single Helmfile and then resolve dependencies. |
Interesting idea! But doesn't it defeat the purpose of |
Somewhat similar to

The feature is definitely a major one and I don't know if implementing it in a backwards-compatible manner is even possible. When running |
Thanks! Much appreciated.
Just curious, but why and when do you need this? |
Merging releases should be possible. Actually, we have a dedicated issue for that: #684 |
It could still work quite cleanly when running independent helmfiles:

```yaml
# helmfile1.yaml
releases:
- name: my-release
  dependsOn:
    group: core
  chart: ./charts/app
```

```yaml
# helmfile2.yaml
releases:
- name: my-core-app
  labels:
    group: core
  chart: ./charts/app
```

If we run only |
The idea comes from Terragrunt's

It could be useful if you have a massive |
@ilyasotkov Interesting! So you leave it to users to label releases consistently across all the involved helmfiles? If a user mistakenly added the same label to two different releases in separate helmfiles, would they have any easy way to notice that? |
Still trying to understand: how is it different from running |
If we have a massive

```yaml
releases:
- name: my-app
  labels:
    app: my-app
  dependsOn:
    app: ["storage", "ingress"]
  chart: ./charts/app
```
Edit: in the last case it should actually resolve all dependencies declared for releases matching |
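To make the label-based resolution concrete, here is a small Go sketch of one plausible reading of the selector semantics in the example above: a release is picked up as a dependency when, for each key in the `dependsOn` selector, its label value is one of the allowed values. The function and type names are illustrative assumptions, not part of Helmfile.

```go
package main

import "fmt"

// matchesDependsOn reports whether a release's labels satisfy a dependsOn
// selector of the form {key: [allowed values, ...]}: for every key in the
// selector, the release must carry that label with one of the allowed values.
func matchesDependsOn(selector map[string][]string, labels map[string]string) bool {
	for key, allowed := range selector {
		value, ok := labels[key]
		if !ok {
			return false
		}
		matched := false
		for _, v := range allowed {
			if v == value {
				matched = true
				break
			}
		}
		if !matched {
			return false
		}
	}
	return true
}

func main() {
	// dependsOn: {app: ["storage", "ingress"]} from the example above.
	selector := map[string][]string{"app": {"storage", "ingress"}}
	fmt.Println(matchesDependsOn(selector, map[string]string{"app": "storage"})) // true
	fmt.Println(matchesDependsOn(selector, map[string]string{"app": "my-app"}))  // false
}
```

Under this reading, every release whose labels match any allowed value becomes a dependency, which is exactly why an accidentally shared label would pull in an unintended release.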
I think that if a user is syncing a collection of helmfiles, it's totally the user's responsibility to make sure he knows what that collection contains. That includes knowing how each release in the collection is labelled. |
@mumoshu Regarding possible implementation, this type of functionality would almost certainly require building a DAG and topologically sorting the list of releases to deploy. Circular dependencies would lead to critical failure. |
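For illustration, here is a minimal Go sketch of the DAG building and topological sorting described above, using Kahn's algorithm and failing fast on circular dependencies. The `Release` struct and `DependsOn` field are assumptions for this example, not Helmfile's actual types.

```go
package main

import "fmt"

// Release is a simplified stand-in for a helmfile release entry;
// DependsOn holds the names of releases that must be installed first.
type Release struct {
	Name      string
	DependsOn []string
}

// topoSort orders releases so that every dependency precedes its dependents
// (Kahn's algorithm). It returns an error when a cycle is detected.
func topoSort(releases []Release) ([]Release, error) {
	byName := map[string]Release{}
	indegree := map[string]int{}
	dependents := map[string][]string{} // dependency name -> releases that need it

	for _, r := range releases {
		byName[r.Name] = r
		indegree[r.Name] = 0
	}
	for _, r := range releases {
		for _, dep := range r.DependsOn {
			if _, ok := byName[dep]; !ok {
				return nil, fmt.Errorf("release %q depends on unknown release %q", r.Name, dep)
			}
			dependents[dep] = append(dependents[dep], r.Name)
			indegree[r.Name]++
		}
	}

	// Start with releases that have no unmet dependencies.
	var queue []string
	for name, d := range indegree {
		if d == 0 {
			queue = append(queue, name)
		}
	}

	var ordered []Release
	for len(queue) > 0 {
		name := queue[0]
		queue = queue[1:]
		ordered = append(ordered, byName[name])
		for _, dependent := range dependents[name] {
			indegree[dependent]--
			if indegree[dependent] == 0 {
				queue = append(queue, dependent)
			}
		}
	}

	// If some releases were never reached, at least one cycle exists.
	if len(ordered) != len(releases) {
		return nil, fmt.Errorf("circular dependency detected among releases")
	}
	return ordered, nil
}

func main() {
	rs := []Release{
		{Name: "my-app", DependsOn: []string{"istio", "prometheus"}},
		{Name: "istio", DependsOn: []string{"prometheus"}},
		{Name: "prometheus"},
	}
	ordered, err := topoSort(rs)
	if err != nil {
		panic(err)
	}
	for _, r := range ordered {
		fmt.Println(r.Name) // prometheus, istio, my-app
	}
}
```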
I see! I'd prefer

But I agree with the necessity of |
I imagine I would get a lot of support tickets from users due to that :) But I have no better idea. |
I meant that Helmfile would give a warning that it didn't try to resolve dependencies (because no

Edit: Though Helmfile would already "know" whether the dependencies were deployed earlier (via helm diff) and could use that information to fail early.

Edit 2: That's for |
Not sure I understand: what would be the harm (besides the surprise factor)? If a user accidentally adds a label to a random application, then adds a dependsOn for that label to some other release declaration, then both the intended dependency and the random application would get deployed. That's not such a bad thing unless the random application includes some other unintended changes. And it's not all that different from accidentally adding a label in the current version of Helmfile and wondering why 2 applications were deployed instead of 1 :) |
Good point. True 😄 |
If we were to allow that behavior, I'd prefer requiring an explicit flag for it like
|
@ilyasotkov Noted. Thanks! I plan to work on this after #688 and #684 because

I'll give you an example usage of this feature based on what we've discussed so far. Let's start by creating your bases:

```yaml
- path: logging.yaml
  inheritValues: true
- path: telemetry.yaml
  inheritValues: true
- path: servicemesh.yaml
  inheritValues: true

releases:
- name: mymainapp
  dependsOn:
  - servicemesh
```

```yaml
# logging.yaml
releases:
- name: filebeat
  labels:
    group: logging
    env: prod
  values:
  - filebeat.prod.values.yaml
- name: filebeat
  labels:
    group: logging
    env: preview
```

```yaml
# telemetry.yaml
releases:
- name: prometheus
  labels:
    group: telemetry
    env: prod
- name: prometheus
  labels:
    group: telemetry
    env: preview
```

```yaml
# servicemesh.yaml
releases:
- name: istio
  labels:
    group: servicemesh
    env: prod
  dependsOn:
  - telemetry
- name: istio
  labels:
    group: servicemesh
    env: preview
  dependsOn:
  - telemetry
```

Now let's run
This isn't how the selector works today, but I'm considering extending
Alternatively, you can just put the same label on all the releases involved in a deployment scenario, so that:
|
@ilyasotkov Please see above ⬆️. I'm not sure how |
I think that's a good alternative 👍
In your example
How so? 🤔 In your example, mymainapp doesn't have
This part is super confusing to me 😕 As a new user I'd assume
are the same thing.
That part makes sense. |
@mumoshu Maybe it's best to keep it simple and have

```yaml
releases:
- name: myapp
  dependsOn:
  - release: prometheus
    namespace: telemetry
- name: prometheus
  namespace: telemetry
```

dependsOn would be allowed
Implementation details:
|
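As a rough sketch of the per-release resolution this proposal implies, each `dependsOn` entry could be resolved against an index keyed by namespace/name, failing early on references to releases that aren't declared. Names and types below are assumptions for illustration, not Helmfile's API.

```go
package main

import "fmt"

// Dependency identifies another release by name and (optional) namespace,
// mirroring the proposed `dependsOn` entry shape.
type Dependency struct {
	Release   string
	Namespace string
}

// Release is a simplified release declaration for this sketch.
type Release struct {
	Name      string
	Namespace string
	DependsOn []Dependency
}

// key builds the lookup key; an empty namespace falls back to the bare name.
func key(namespace, name string) string {
	if namespace == "" {
		return name
	}
	return namespace + "/" + name
}

// resolve checks that every dependency refers to a declared release and
// returns the dependency edges as "from -> to" key pairs.
func resolve(releases []Release) (map[string][]string, error) {
	index := map[string]bool{}
	for _, r := range releases {
		index[key(r.Namespace, r.Name)] = true
	}
	edges := map[string][]string{}
	for _, r := range releases {
		from := key(r.Namespace, r.Name)
		for _, d := range r.DependsOn {
			to := key(d.Namespace, d.Release)
			if !index[to] {
				return nil, fmt.Errorf("release %s depends on undeclared release %s", from, to)
			}
			edges[from] = append(edges[from], to)
		}
	}
	return edges, nil
}

func main() {
	releases := []Release{
		{Name: "myapp", DependsOn: []Dependency{{Release: "prometheus", Namespace: "telemetry"}}},
		{Name: "prometheus", Namespace: "telemetry"},
	}
	edges, err := resolve(releases)
	if err != nil {
		panic(err)
	}
	fmt.Println(edges) // map[myapp:[telemetry/prometheus]]
}
```

The resulting edge list is exactly what a topological sort would consume, so keeping resolution within a single helmfile keeps the graph small and easy to validate.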
That way, it would be
|
@ilyasotkov Thanks for the suggestion! It looks feasible. But on the other hand, it blurs the whole purpose of this feature to me. If it works only within a single helmfile, why do you need it? Can't you just split your helmfile into two or more and order them under

Or is it just that you want a simpler (or more declarative, perhaps?) way than that? |
That would be simpler to start. Sounds good. Also, I love the config syntax you proposed as it is concise while being extendable to support labels or other things in the future 😄
OK. Each helmfile is isolated so that would be natural.
This one is hard.
A possible upside of it would be that you can extract a dependency into a helmfile module and switch implementations by using go template in

And,
It looks great! Thanks for all the suggestions and proposals again. |
I really like purely declarative tools (like Terraform and CloudFormation) where the order of declaration doesn't matter. It's a bit more pleasant to reason about (arguably, of course).
That's exactly my proposal. |
With

Instead of thinking:
I prefer this:
|
Understood 👍
Okay. Thanks for confirming 👍 |
Ah nope, my mistake. I meant e.g.
Good catch! My mistake. I should have created two
Okay then let's omit this part :) FWIW, it was also based on my frustration that |
Sounds correct! I believe it should be |
...And the gotcha is that even though helm v2 allows isolating releases per tiller namespace, helmfile as of today doesn't allow having two or more releases with the same name across different tiller namespaces. So regarding this feature, we'd better start without
|
So here's the plan for the first version of this feature, based on @ilyasotkov's awesome proposal above:

Goal

Make helmfile even more declarative regarding release dependencies.

Enhancement to the config syntax

The configuration syntax is enhanced to accept

```yaml
releases:
- name: myapp
  dependsOn:
  - release:
      name: prometheus
- name: prometheus
  namespace: telemetry
```

From within Helm v2, a release is isolated per tiller namespace; in Helm v3, a release is isolated per namespace. But this syntax, for its first version, doesn't take that into account: helmfile as of today doesn't support two or more releases with the same name regardless of the tiller namespace, and that remains so. Please submit another feature request to enhance helmfile to accept two or more releases per (tiller) namespace, which should also address this.

Implementation details
Future plans
|
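For concreteness, here is a hedged sketch of Go structs that would unmarshal the proposed syntax; the field and type names are illustrative assumptions, not Helmfile's actual state types.

```go
package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

// ReleaseRef identifies a depended-on release, leaving room for future
// selectors (e.g. labels) alongside the name.
type ReleaseRef struct {
	Name      string `yaml:"name"`
	Namespace string `yaml:"namespace,omitempty"`
}

// Dependency wraps a reference so the syntax can later grow other kinds
// of dependencies without breaking existing configs.
type Dependency struct {
	Release ReleaseRef `yaml:"release"`
}

// Release mirrors a single entry under `releases:` for this sketch.
type Release struct {
	Name      string       `yaml:"name"`
	Namespace string       `yaml:"namespace,omitempty"`
	DependsOn []Dependency `yaml:"dependsOn,omitempty"`
}

type Config struct {
	Releases []Release `yaml:"releases"`
}

func main() {
	doc := []byte(`
releases:
- name: myapp
  dependsOn:
  - release:
      name: prometheus
- name: prometheus
  namespace: telemetry
`)
	var cfg Config
	if err := yaml.Unmarshal(doc, &cfg); err != nil {
		panic(err)
	}
	fmt.Println(cfg.Releases[0].DependsOn[0].Release.Name) // prometheus
}
```

Wrapping the reference in a `release:` object is what makes the syntax extensible: other dependency kinds can be added later without breaking existing `dependsOn` entries.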
I like that enhancement as it makes things a bit more clear and a lot more extensible 👍
Great summary of the proposal, thank you @mumoshu for your amazing work and exceptional communication ❤️ |
A possibly unavoidable limitation to this feature is that

Are you okay with that? I'd appreciate it if you could share your insights to resolve it fully. I'm thinking that we'd need:

```yaml
helmfiles:
- crds.yaml

releases:
- name: something-depends-on-crd-to-exist-in-the-cluster-before-diffing
```

Instead of the below, which would certainly fail while

```yaml
releases:
- name: crds
- name: something-depends-on-crd-to-exist-in-the-cluster-before-diffing
  dependsOn:
    release:
      name: crds
```
|
Forgot to mention you, @ilyasotkov :) |
@mumoshu I'm not currently a user of those features ( |
So here's the DAG implementation I'll be integrating into Helmfile. |
The pull request for this is live at #914. A few notes:
|
…der declaratively (#914)

Introduces DAG-aware installation/deletion ordering to Helmfile.

`needs` controls the order of the installation/deletion of the release:

```yaml
releases:
- name: somerelease
  needs:
  - [TILLER_NAMESPACE/][NAMESPACE/]anotherrelease
```

All the releases listed under `needs` are installed before (or deleted after) the release itself.

For the following example, `helmfile [sync|apply]` installs releases in this order:

1. logging
2. servicemesh
3. myapp1 and myapp2

```yaml
- name: myapp1
  chart: charts/myapp
  needs:
  - servicemesh
  - logging
- name: myapp2
  chart: charts/myapp
  needs:
  - servicemesh
  - logging
- name: servicemesh
  chart: charts/istio
  needs:
  - logging
- name: logging
  chart: charts/fluentd
```

Note that all the releases in the same group are installed concurrently. That is, myapp1 and myapp2 are installed concurrently.

On `helmfile [delete|destroy]`, deletions happen in the reverse order. That is, `myapp1` and `myapp2` are deleted first, then `servicemesh`, and finally `logging`.

Resolves #715
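To illustrate how this grouped ordering falls out of the DAG (a sketch with assumed types, not the code behind #914), releases can be layered so that each batch depends only on earlier batches; sync walks the batches forward, and delete walks them in reverse.

```go
package main

import "fmt"

// Release pairs a name with the names it needs installed first,
// mirroring the `needs` field in spirit.
type Release struct {
	Name  string
	Needs []string
}

// batches groups releases into layers: everything in layer N depends only on
// releases in layers < N, so each layer can be installed concurrently.
func batches(releases []Release) [][]string {
	depth := map[string]int{}
	byName := map[string]Release{}
	for _, r := range releases {
		byName[r.Name] = r
	}

	// computeDepth returns 0 for releases with no needs, otherwise
	// 1 + the deepest need (assumes the graph is already cycle-free).
	var computeDepth func(name string) int
	computeDepth = func(name string) int {
		if d, ok := depth[name]; ok {
			return d
		}
		d := 0
		for _, need := range byName[name].Needs {
			if nd := computeDepth(need) + 1; nd > d {
				d = nd
			}
		}
		depth[name] = d
		return d
	}

	maxDepth := 0
	for _, r := range releases {
		if d := computeDepth(r.Name); d > maxDepth {
			maxDepth = d
		}
	}
	layers := make([][]string, maxDepth+1)
	for _, r := range releases {
		layers[depth[r.Name]] = append(layers[depth[r.Name]], r.Name)
	}
	return layers
}

func main() {
	rs := []Release{
		{Name: "myapp1", Needs: []string{"servicemesh", "logging"}},
		{Name: "myapp2", Needs: []string{"servicemesh", "logging"}},
		{Name: "servicemesh", Needs: []string{"logging"}},
		{Name: "logging"},
	}
	layers := batches(rs)
	fmt.Println("sync order:  ", layers) // [[logging] [servicemesh] [myapp1 myapp2]]
	// Deletion simply walks the same layers back to front.
	for i := len(layers) - 1; i >= 0; i-- {
		fmt.Println("delete batch:", layers[i])
	}
}
```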
I know Helmfile can already handle release dependencies by ordering releases manually, via naming conventions for helmfiles or by limiting concurrency.
The ability to explicitly declare dependencies would be a major step towards making Helmfile more declarative.
My initial idea was to base it on release names like so:
It would be even nicer to base it on release labels: