
[kustomize 2.1.0] cannot read data from configMap #1295

Closed
jinchihe opened this issue Jul 2, 2019 · 22 comments
Labels
kind/bug Categorizes issue or PR as related to a bug. vars-related

Comments

@jinchihe

jinchihe commented Jul 2, 2019

I hit this problem with kustomize 2.1.0, but it works fine with kustomize 2.0.3. I'm not sure whether this was broken by kustomize 2.1.0 or whether the designed behavior changed.

Base kustomization.yaml

[root@jinchi1 local]# cat ../base/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
...
vars:
...
- fieldref:
    fieldPath: data.batchSize
  name: batchSize
  objref:
    apiVersion: v1
    kind: ConfigMap
    name: mnist-map-training
...

Local kustomization.yaml

[root@jinchi1 local]# cat kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

bases:
- ../base
.....
configMapGenerator:
- literals:
  - name=mnist-train-local
  - batchSize=100
 ....
  name: mnist-map-training

[root@jinchi1 local]# kustomize build .
Error: var '{batchSize ~G_~V_ConfigMap {data.batchSize}}' cannot be mapped to a field in the set of known resources
[root@jinchi1 local]# kustomize version
Version: {KustomizeVersion:2.1.0 GitCommit:af67c893d87c5fb8200f8a3edac7fdafd61ec0bd BuildDate:2019-06-18T22:01:59Z GoOs:linux GoArch:amd64}

But this works fine with kustomize v2.0.3. Could someone take a look? Thanks a lot!

@jinchihe
Author

jinchihe commented Jul 2, 2019

/cc @monopole @Liujingfang1
Any suggestions on this? Or could you recommend someone to take a look? Thanks a lot!

@jinchihe
Author

jinchihe commented Jul 2, 2019

I did some deeper testing: if both the vars and the configMapGenerator are in the same kustomization.yaml, it works. In other words, the vars and the configMapGenerator both need to be in base/kustomization.yaml, or both in local/kustomization.yaml. It fails if the vars are defined in base and the configMapGenerator is defined in local/kustomization.yaml.

Personally I think we should fix this, for two reasons:

  1. In most cases the var's field path is the same, so the user wants to declare the var in the base, but the value differs per environment, so the user wants to set it in the local overlay.
  2. kustomize v2.0.3 supports this, so we should keep basic compatibility.
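
For illustration, a minimal single-file sketch of a layout that does resolve, with both the configMapGenerator and the var declared together in local/kustomization.yaml (this reuses the names from the report above and assumes the vars entry is removed from base/kustomization.yaml to avoid a duplicate var; it is a workaround sketch, not the originally intended layout):

# local/kustomization.yaml (workaround sketch)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

bases:
- ../base

configMapGenerator:
- name: mnist-map-training
  literals:
  - name=mnist-train-local
  - batchSize=100

vars:
- name: batchSize
  objref:
    apiVersion: v1
    kind: ConfigMap
    name: mnist-map-training
  fieldref:
    fieldPath: data.batchSize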

@monopole
Contributor

monopole commented Jul 2, 2019

Nice report, thanks.

Any chance you could help us by writing a test demonstrating this that we can keep for regression coverage?

Like this, but focused on your particular case?

We're a little shorthanded here. :)

@jinchihe
Author

jinchihe commented Jul 2, 2019

Not sure I'll have a chance for this, since I have a lot of work piled up :-( apologies... but I will do it if I get the chance. I think we can fix this first and then write a test to demonstrate it.

@jbrette
Contributor

jbrette commented Jul 17, 2019

@jinchihe This is fixed by PR.

I did reproduce your issue here using some of the mnist CRDs.

@monopole The corresponding Go regression test is here.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 15, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Nov 14, 2019
@ryandawsonuk

ryandawsonuk commented Nov 22, 2019

I see #1316 was closed without merging. The example that led to this still seems to have the problem in the most recent versions of kustomize v2 and v3. Are we any closer to deciding whether this should be treated as a bug to be fixed or as intended behavior?

@jinchihe
Author

jinchihe commented Dec 2, 2019

The problem still exists in kustomize v3.4.0.

# /test123/kustomize version
{Version:kustomize/v3.4.0 GitCommit:2c9635967a2b1469d605a91a1d040bd27c73ca7d BuildDate:2019-11-12T05:00:57Z GoOs:linux GoArch:amd64}
# /test123/kustomize build .
Error: var '{batchSize ~G_v1_ConfigMap {data.batchSize}}' cannot be mapped to a field in the set of known resources

/remove-lifecycle rotten
/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Dec 2, 2019
@jinchihe
Author

jinchihe commented Dec 2, 2019

@jbrette @monopole Could you please help confirm and fix this? Thanks a lot!
/P1

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 1, 2020
@jinchihe
Author

jinchihe commented Mar 2, 2020

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 2, 2020
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 31, 2020
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 1, 2020
@k8s-ci-robot k8s-ci-robot added the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Jul 1, 2020
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@igorcalabria

This issue still persists in kustomize v4.2.0. If the generator and the vars are defined in the same file, it works, but if the var is in one file (the base, for example) and the generator is in another file (production), it won't work.

@a-nych

a-nych commented May 18, 2022

/reopen
/remove-lifecycle rotten

Reopening as this still persists in v4.5.2

@k8s-ci-robot
Contributor

@a-nych: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to this:

/reopen
/remove-lifecycle rotten

Reopening as this still persists in v4.5.2

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label May 18, 2022
@a-nych

a-nych commented May 18, 2022

@monopole Could you please reopen this issue?

@Kristin0

Kristin0 commented Sep 3, 2022

@monopole Could you please reopen this issue?

@tingsl409

This is an old issue and might not be relevant anymore, but I found that changing the overlay's generator behavior to merge can solve the issue:

base/kustomization.yml:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - ./deployment.yml

configMapGenerator:
- name: gg

vars:
- name: TEST
  objref:
    kind: ConfigMap
    name: gg
    apiVersion: v1
  fieldref:
    fieldpath: data.wut

overlays/local/kustomization.yml:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - ../../base

configMapGenerator:
- name: gg
  behavior: merge
  literals:
  - wut=helloWorld

At least it works in my environment.
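
The deployment.yml referenced above is not shown here; a hypothetical minimal version that consumes the var could look like the following (kustomize substitutes $(TEST) in fields such as container env values by default; the names and image are placeholders, not part of the original comment):

# base/deployment.yml (hypothetical sketch)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
spec:
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
      - name: demo
        image: busybox
        env:
        - name: WUT
          value: $(TEST)  # replaced with the generated ConfigMap's data.wut at build time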
