Needs evaluation happens before templating is done #2048
Comments
I see the same error, but I was able to reduce it down to a case where there are slashes (/) in the kubeContext. Here is a minimum reproduction:

helmDefaults:
  kubeContext: arn:aws:eks:us-east-1:11111111:cluster/cluster-name # Note '/' in name
repositories:
- name: stable
  url: https://charts.helm.sh/stable
releases:
- name: bar
  namespace: namespace
  chart: stable/metrics-server
- name: foo
  namespace: namespace
  chart: stable/metrics-server
  needs:
  - bar
  # - namespace/bar
  # - cluster-name/namespace/bar
  # - arn:aws:eks:us-east-1:11111111:cluster/cluster-name/namespace/bar

None of the listed `needs` variants works:

$ helmfile template
Adding repo stable https://charts.helm.sh/stable
"stable" has been added to your repositories

in ./helmfile.yaml: release(s) "arn:aws:eks:us-east-1:11111111:cluster/cluster-name/namespace/foo" depend(s) on an undefined release "cluster-name/namespace/bar". Perhaps you made a typo in "needs" or forgot defining a release named "bar" with appropriate "namespace" and "kubeContext"?

Note that if I change the context to add an additional slash:

- kubeContext: arn:aws:eks:us-east-1:11111111:cluster/cluster-name
+ kubeContext: arn:aws:eks:us-east-1:11111111:cluster/clus/ter-name

this is the new error:

$ helmfile template
Adding repo stable https://charts.helm.sh/stable
"stable" has been added to your repositories

in ./helmfile.yaml: release(s) "arn:aws:eks:us-east-1:11111111:cluster/clus/ter-name/namespace/foo" depend(s) on an undefined release "ter-name/namespace/bar". Perhaps you made a typo in "needs" or forgot defining a release named "bar" with appropriate "namespace" and "kubeContext"?

See specifically |
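As the error messages above suggest, helmfile qualifies each release and each `needs` entry as a slash-delimited kubeContext/namespace/name key, so a kubeContext that itself contains `/` (like an EKS ARN) gets split at the wrong place. A minimal sketch of the fully qualified form, assuming a hypothetical context name that contains no slashes:

# Sketch only: fully qualified needs entries, assuming the identifier is
# split on "/" into kubeContext/namespace/release. "my-context" is a
# hypothetical context name without slashes, used purely for illustration.
releases:
- name: bar
  namespace: namespace
  chart: stable/metrics-server
- name: foo
  namespace: namespace
  chart: stable/metrics-server
  needs:
  - namespace/bar              # namespace/release
  - my-context/namespace/bar   # kubeContext/namespace/release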
Forgot to mention:

- kubeContext: arn:aws:eks:us-east-1:11111111:cluster/cluster-name
+ kubeContext: arn:aws:eks:us-east-1:11111111:cluster------cluster-name |
@nicholascapo Ah okay, yours seems to be a different issue that comes from the fact that helmfile doesn't support slashes in the kubeContext name. I doubt it ever worked before. |
@kuzaxak Hey. Thanks for reporting. I need to dig into my memory, but I think I completely missed how the use of a release template within `needs` should work. I doubt it ever did. |
Interesting, that is consistent with what we have observed. I don't think it was caused by your commits, they just made it noisier :-) I wasn't aware of the context name constraint; is that documented somewhere? |
@nicholascapo No, it isn't documented yet. |
Let me confirm- so |
@kuzaxak Hey. So I reread helmfile's implementation code and it turns out that it never worked. The expansion of the release template in each release is implemented in this function (line 10 in ae942c5), and it doesn't support `needs`. So what happened to you is that such templated `needs` entries were never expanded. A possible solution would be to use Go template variables instead. |
@nicholascapo Could you create another issue dedicated to additional support for slashes in kubeContext? If you could attach a complete helmfile config for reproduction, it would help anyone who wants to contribute. |
This probably helps downstream issues like roboll/helmfile#2048 (comment) easily, because then you do not need to rely on a special character (like /) within a string key for sorting nodes with hierarchical keys.
Will do. Thank you for the detailed information! |
@kuzaxak I believe you can keep using this issue if you really want |
I will try to understand how we can replace them with |
@kuzaxak Thanks. FWIW, I was thinking about one like the below example. Note that I have not tested this specific example, and I may have misunderstood your goal. Hope it helps.
|
We also need:

helmfile.d/00.yaml:

releases:
{{ $bar := "bar" }}
- name: {{ $bar }}
  namespace: foo

helmfile.d/01.yaml:

releases:
- name: baz
  namespace: quux
  needs:
  - foo/bar |
I'm also having some issues with needs. My error related to the kubeContext is a bit different: it doesn't show the content before the initial slash when my context is: |
@Cayan Hey. Yours seems to be exactly the same as #2048 (comment), which is another issue unrelated to the one explained in this thread :) |
Investigated a bit more deeply and found an interesting case: if a release has a `needs` dependency, `template` with a selector will always fail.
This is happening because |
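To make the described case concrete, here is an illustrative sketch (hypothetical release names, not the config from this thread): selecting only the dependent release leaves its dependency out of the list the DAG check runs against, so the check reports it as undefined even though it is defined in the same file.

# Illustrative sketch with hypothetical names.
releases:
- name: bar
  namespace: default
  chart: stable/metrics-server
  labels:
    pkg: infra
- name: foo
  namespace: default
  chart: stable/metrics-server
  labels:
    pkg: app
  needs:
  - default/bar

# Running something like:
#   helmfile -l pkg=app template
# selects only "foo"; if the needed release "bar" is excluded from the
# selected list before the DAG is built, the check fails with an
# "undefined release" error, even though "bar" exists above.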
@kuzaxak Thanks for your help 🙏 Let me confirm- does it fail even with |
Will check and prepare a PR. We are working on adding go-getter to the values, and it looks like we need to fix both. |
With the [latest changes][2], helmfile checks the DAG before running template. At the same time, the complete list of releases was replaced with `selectedReleasesWithNeeds` to improve rendering speed. With that replacement, by default all needed releases were excluded from the list, and as a result `withDAG` would always fail. I added a few tests to cover that logic and prevent a regression in the future. For more details please check the original [issue][1]

[1]: roboll/helmfile#2048
[2]: roboll/helmfile#2026

Signed-off-by: Vladimir Kuznichenkov <kuzaxak.tech@gmail.com>
Hi @mumoshu, would you be so kind as to review @kuzaxak's PR? If that solves the issue of needs not working in second-phase rendering, then we would be very happy! We just updated our code stack to use them, as they seemed to work, but probably only because we were in dev mode and deps were installed beforehand. Running a fresh install with needs does not work at all, since we have most definitions only ready after phase 1. |
@Morriz Hey! Which PR, to be sure? I think this turned out to be an issue in some helmfile sub-commands that didn't correctly support `needs`. And I thought we already fixed that in helmfile/helmfile#78 as a part of v0.145.0 https://github.com/helmfile/helmfile/releases/tag/v0.145.0 |
Thanks for the quick reply. I hoped to find that

bases:
- snippets/defaults.yaml
---
bases:
- snippets/env.gotmpl
---
bases:
- snippets/derived.gotmpl
---
{{ readFile "snippets/templates.gotmpl" }}
{{- $v := .Values }}
{{- $a := $v.apps }}
releases:
- name: gatekeeper
  installed: {{ $a | get "gatekeeper.enabled" }}
  namespace: gatekeeper-system
  chart: ../charts/gatekeeper
  disableValidationOnInstall: true
  labels:
    pkg: gatekeeper
  values: ...
- name: gatekeeper-artifacts
  installed: {{ $a | get "gatekeeper.enabled" }}
  needs: [gatekeeper]
  namespace: gatekeeper-system
  chart: ../charts/gatekeeper-artifacts
  labels:
    pkg: gatekeeper
  values: ...
- name: gatekeeper-constraints
  installed: {{ $a | get "gatekeeper.enabled" }}
  needs: [gatekeeper-artifacts]
  namespace: gatekeeper-system
  chart: ../charts/gatekeeper-constraints
  labels:
    pkg: gatekeeper
  values: ...

Jobs exist in each release that wait for the existence of those, but we never see anything being installed. This makes me wonder if it tries to aggregate all manifests at runtime, which would be strange, as that use case is already solved by helm's |
I thought |
I will strip everything related to post/pre, as that is not what I wanted to illustrate (I just copy/pasted the code from otomi right now). I just hope you can clarify whether the installs are made one after the other. |
How can we make sure that chart A is fully installed (i.e. its last helm post-install hook has finished) before chart B? |
I don't think it is possible right now. We faced the same issue and I didn't have time to investigate or fix it. We are using |
I thought it was a helm limitation that helm can't wait on post-install to finish |
Of course that is it. Why my brain suddenly assumed that limitation would not hold anymore I don't know. Hope? I will move the post-install hooks from A to a pre-install hook in B |
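For what it's worth, a minimal sketch of that move, assuming chart B can use a standard Helm pre-install hook Job to wait for something chart A created (all chart, resource, and image choices here are hypothetical; RBAC/service account are omitted for brevity):

# Sketch: chart B waits in a pre-install hook instead of chart A blocking in
# a post-install hook. Names are illustrative.
apiVersion: batch/v1
kind: Job
metadata:
  name: wait-for-chart-a
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  backoffLimit: 0
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: wait
        image: bitnami/kubectl:latest
        command:
        - kubectl
        - wait
        - --for=condition=available
        - deployment/chart-a
        - --timeout=300s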
I already enjoy the better management because of it, like atomic install/uninstall of a pkg. So thanks for that! |
But now that I think about it, we already use a helm chart that has a post-install job (waiting for some service). And that does block subsequent helmfile releases from installing. So your assumption is proven to not be correct. |
It is simply a limitation that a regular k8s Job is not blocking |
That is why helm created its hooks functionality: to observe its own jobs so they become blocking. I am not seeing any install activity when |
I will create a separate issue for this |
Not happening when using |
hmmmm...is there a way for now to force |
Yessss....setting |
@Morriz Thanks for figuring it out! Based on my understanding of your use-case, I believe |
Re: #2048 (comment) I was reading https://helm.sh/docs/topics/charts_hooks/ and realized that in |
It also works without a readiness probe. The helm folks built it as intended :) |
@mumoshu do you remember if the issue regarding the |
As far as I can see, after the changes from the commit were merged, our pipeline fails with the following error:
We are using a template for the release:
and the release itself:
It seems that the function that does the check was executed before the templated string from the template was replaced with the actual value.
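For readers landing here, an illustrative sketch of the kind of config that triggers this (hypothetical names, not the actual config from this report): a release template carrying a templated `needs` entry. Because the needs/DAG check runs before the release template is rendered, the check compares the raw template string against release names and never finds a match.

# Illustrative sketch (hypothetical names): a release template whose "needs"
# entry contains a deferred template expression. The DAG check appears to see
# the unrendered string rather than the final value.
templates:
  default: &default
    namespace: apps
    chart: stable/metrics-server
    needs:
    - apps/{{`{{ .Release.Name }}`}}-db

releases:
- name: backend-db
  namespace: apps
  chart: stable/metrics-server
- name: backend
  <<: *default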