
Kustomize vars not applied to Namespace resources #1713

Closed
anth0d opened this issue Oct 30, 2019 · 7 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments


anth0d commented Oct 30, 2019

You can run this to reproduce the issue. (Thanks to @tkellen for the idea on how to share repro steps easily)

Expected: $(MY_NAMESPACE) is actually substituted.

#!/usr/bin/env bash

cd "$(mktemp -d)"

cat <<EOF > kustomization.yml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
vars:
- name: MY_NAMESPACE
  objref:
    kind: ServiceAccount
    name: my-service-account
    apiVersion: v1
  fieldref:
    fieldpath: metadata.namespace
resources:
- namespace.yaml
- other.yaml
EOF

cat <<EOF > namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: \$(MY_NAMESPACE)
EOF

cat <<EOF > other.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-service-account
  namespace: some-namespace
EOF

tree
kustomize build .

Output:

2019/10/30 17:36:42 well-defined vars that were never replaced: MY_NAMESPACE
apiVersion: v1
kind: Namespace
metadata:
  name: $(MY_NAMESPACE)
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-service-account
  namespace: some-namespace

jbrette commented Oct 30, 2019

Looks to be working OK here.

anth0d commented Nov 4, 2019

That example doesn't work for me as written. There are lines in "Preparation Step Other0" (kustomization.yml) that are commented out, and the fully qualified reference $(ServiceAccount.my-service-account.metadata.namespace) does not work at all for me.

If I add a configurations: entry pointing to an additional file containing a varReference, and reference the var name MY_NAMESPACE, it works. But there is no documentation explaining why that is necessary. I would love to understand the design decision that leads to this behavior, because as a user it is confusing that I need multiple configs defined in separate files.
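The workaround described above can be sketched as an extension of the repro script. The file name var-reference.yaml and the exact varReference entry are assumptions about kustomize's transformer-configuration format, not something confirmed in this thread:

```shell
#!/usr/bin/env bash
# Hypothetical workaround sketch: add a custom varReference so kustomize
# also substitutes vars inside a Namespace's metadata.name field.
# (The file name var-reference.yaml is illustrative.)

cat <<EOF > var-reference.yaml
varReference:
- kind: Namespace
  path: metadata/name
EOF

# Point the kustomization at the extra transformer configuration:
cat <<EOF >> kustomization.yml
configurations:
- var-reference.yaml
EOF
```

With this in place, `kustomize build .` should replace `$(MY_NAMESPACE)` in namespace.yaml, at least on kustomize versions current at the time of this thread.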

jbrette commented Nov 5, 2019

Added a comment check here. Referencing one object from another is really error-prone in Kustomize; over the last week alone, #1734 and #1721 had the same root cause.

Grepping the issue list for varReference turns up many issues similar to yours:
#680
#952
#964
#976
#1190
#1250
#1251
#1268
#1295
#1367
#1390
#1540
#1553
#1592
#1710
#1713
#1721
#1733
#1734

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 3, 2020
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Mar 4, 2020
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
