
Empty validation field when generating CRDs with multiple versions #349

Closed
sebgl opened this issue Oct 23, 2019 · 7 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments


sebgl commented Oct 23, 2019

I think there's a bug that prevents the trivial-versions option from working as expected.

How to reproduce:

  • set up an `apis/` package using a Kubebuilder directory structure, containing multiple versions of the same CRD with different specs:
    • apis/mycrd/v1alpha1
    • apis/mycrd/v1beta1
  • generate the CRD manifests using `crd:trivialVersions=true`
  • according to the code, the top-level `validation` field of the CRD should be filled with the schema of the version that has `storage: true`:

https://github.com/kubernetes-sigs/controller-tools/blob/master/pkg/crd/gen.go#L98-L116

  • the resulting CRD has an empty `validation` field, which I don't think should happen.

Unless we explicitly want no validation set when multiple versions are specified? (That currently happens, I think, as an undesired side effect in the code.)
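For reference, the trivial-versions output described above would look roughly like this (a hand-written sketch, not generated output; `group`/`names` and the actual schema contents are elided):

```yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
spec:
  versions:
    - name: v1alpha1
      served: true
      storage: false
    - name: v1beta1
      served: true
      storage: true
  # with trivialVersions, a single top-level validation block is expected,
  # taken from the storage version (v1beta1 here)
  validation:
    openAPIV3Schema:
      type: object
```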


sebgl commented Oct 23, 2019

Looking at the code, the `validation` and `schema` fields are mutated in `toTrivialVersions()`:
https://github.com/kubernetes-sigs/controller-tools/blob/master/pkg/crd/gen.go#L98-L116

I think we process each individual groupKind once per corresponding version, but since we then mutate the same CRD multiple times, its `validation` field becomes nil from the second iteration onward. I don't think that's intended.

I tested with two versions of the same resource: `FindKubeKinds()` returns two items (the exact same one for each version), and both are processed.
The second call to `toTrivialVersions()` then operates on a structure whose `validation` and `schema` fields have already been modified.
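The double mutation described above can be sketched with simplified stand-in types (a minimal illustration of the reported logic, not the real apiextensions structs; all names here are illustrative):

```go
package main

import "fmt"

// Version and CRD are simplified stand-ins for the apiextensions types.
type Version struct {
	Name    string
	Storage bool
	Schema  *string
}

type CRD struct {
	Validation *string
	Versions   []Version
}

// toTrivialVersions mimics the described behavior of pkg/crd/gen.go:
// copy the storage version's schema to the top-level validation field,
// then clear every per-version schema.
func toTrivialVersions(crd *CRD) {
	var schema *string
	for i := range crd.Versions {
		if crd.Versions[i].Storage {
			schema = crd.Versions[i].Schema
		}
		crd.Versions[i].Schema = nil
	}
	crd.Validation = schema
}

func main() {
	s := "openAPIV3Schema for v1beta1"
	crd := &CRD{Versions: []Version{
		{Name: "v1alpha1", Storage: false, Schema: new(string)},
		{Name: "v1beta1", Storage: true, Schema: &s},
	}}

	// First pass: validation is populated from the storage version.
	toTrivialVersions(crd)
	fmt.Println("after 1st call, validation set:", crd.Validation != nil)

	// Second pass (same CRD processed again for the other version):
	// the per-version schemas are already nil, so validation ends up nil.
	toTrivialVersions(crd)
	fmt.Println("after 2nd call, validation set:", crd.Validation != nil)
}
```

Running the sketch shows `validation` set after the first call and nil after the second, matching the empty `validation` field observed in the generated manifest.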

I have a PR with a fix almost ready.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 21, 2020
@sebgl

sebgl commented Jan 24, 2020

/remove-lifecycle stale

I think this is still an issue. #350 is pending review. #365 (comment) adds integration tests to catch this bug.

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 24, 2020
sebgl added a commit to sebgl/cloud-on-k8s that referenced this issue Jan 29, 2020
A bug in controller-tools prevents validation from appearing in the generated
CRDs if trivialVersions is set and multiple versions do not share the
exact same schema.
See kubernetes-sigs/controller-tools#349.

This commit removes the usage of `crd:trivialVersions=true` and instead
applies the CRD modifications "manually" (through Kustomize), so we end up
with the trivial-versions scheme: a single `validation` field matching the
OpenAPI schema of the currently stored version (v1).
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 23, 2020
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels May 23, 2020
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
