
Helm chart TODO list #5161

Closed
15 of 17 tasks
aledbf opened this issue Feb 24, 2020 · 17 comments
Comments

@aledbf
Member

aledbf commented Feb 24, 2020

  • Rename chart to ingress-nginx
  • add common labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  • default backend should be disabled by default
  • webhook should be enabled by default
  • script to generate existing static deployments running helm template
  • update docs to change helm repository URL: https://kubernetes.github.io/ingress-nginx
  • migrate e2e test prow job
  • migrate circle-ci lint job
  • rename git tags to be clear (chart-X.XX.XX and controller-X.XX.X)
  • Update labels to follow: https://helm.sh/docs/topics/chart_best_practices/labels/#standard-labels
  • Ensure support for Helm v2 && v3
  • Create Migration Documentation for users moving from stable/nginx-ingress
  • Check the minimum version of the controller image? (restrict to the last three versions)
  • e2e tests should use a local directory of the chart instead of repository
  • make dev-env must use local chart
  • add hostPort support for deployment
  • add support for custom --health-check-path flag
@ChiefAlexander
Contributor

  • Update labels to follow: https://helm.sh/docs/topics/chart_best_practices/labels/#standard-labels
  • Ensure support for Helm v2 && v3
  • Create Migration Documentation for users moving from stable/nginx-ingress

@aledbf
Member Author

aledbf commented Feb 25, 2020

make dev-env must use local chart
e2e tests should use a local directory of the chart instead of repository

Done. Any attempt to modify the chart is now checked in e2e before any change is merged.
Using the chart to start the local dev environment makes it possible to introduce changes and test them locally without opening a PR.

@kfox1111

"webhook should be enabled by default:"

I typically install it per tenant, as nginx-ingress is not entirely safe in a multitenant environment unless you give each tenant their own instance. (There are too many options that let one user configure the webserver in a way that is detrimental to another user.)

I'm OK with enabling the webhook by default, since I can just turn it off. But the webhook is useful cluster-wide. Could the webhook be made its own chart and conditionally enabled from the main chart? That way, those of us running multitenant environments could install the webhook globally by itself, and then install multiple instances of the main chart with the webhook turned off.

@aledbf
Member Author

aledbf commented Feb 25, 2020

I'm ok enabling the webhook by default as I can just turn it off.

Yes, you can do exactly that:

  admissionWebhooks:
    enabled: false
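For a one-off install, the same toggle can also be passed on the command line. A sketch (Helm v3 syntax; the release name, repo alias, and the exact key path are assumptions here, so verify the path against the chart's values.yaml for your version):

```shell
# Disable the validating webhook at install time; the key path assumes the
# structure of the values snippet above -- check values.yaml to confirm.
helm install my-ingress ingress-nginx/ingress-nginx \
  --set admissionWebhooks.enabled=false
```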

@aledbf
Member Author

aledbf commented Feb 25, 2020

Can the webhook be made its own chart, and conditionally enabled in the main chart?

Actually, if you have a multitenant environment, I assume you are already using different ingress.class annotations? The webhook code already checks the class and does not inspect ingresses belonging to other classes.
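For reference, class scoping at the time was done with the kubernetes.io/ingress.class annotation. A minimal sketch (names, namespace, and the class value are hypothetical; the API version is the networking.k8s.io/v1beta1 form current in early 2020):

```yaml
# Hypothetical Ingress pinned to one tenant's controller via ingress.class;
# a webhook configured for a different class would not inspect it.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app
  namespace: tenant-a
  annotations:
    kubernetes.io/ingress.class: nginx-tenant-a
spec:
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: app
              servicePort: 80
```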

@kfox1111

No. Each tenant uses the same ingress class. As far as the users are concerned, their services just go out to the same network, so they use the same annotation in all their namespaces. The fact that each tenant gets their own nginx-ingress is an implementation detail of the cluster that they don't usually even notice. Each nginx-ingress instance is set to listen to only one namespace, so there is no cross-communication. This also greatly simplifies documentation and tech support, as users only need to be told which ingress class to use generically, rather than one specific to each namespace.

@aledbf
Member Author

aledbf commented Feb 25, 2020

@kfox1111 interesting.

That use case is already supported in the webhook.

if n.cfg.Namespace != "" && ing.ObjectMeta.Namespace != n.cfg.Namespace {

This means the change in the default will not break your existing approach of providing an ingress controller per namespace.
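The behavior of the quoted check can be sketched roughly like this (the type and function names are stand-ins for illustration, not the actual ingress-nginx source):

```go
package main

import "fmt"

// Minimal stand-ins for the controller's configuration and the Ingress
// metadata; the real types live in the ingress-nginx codebase.
type config struct {
	Namespace string // empty means the controller watches all namespaces
}

type ingressMeta struct {
	Namespace string
}

// shouldValidate mirrors the quoted condition: when the controller is scoped
// to a single namespace, the webhook skips ingresses from other namespaces.
func shouldValidate(cfg config, ing ingressMeta) bool {
	if cfg.Namespace != "" && ing.Namespace != cfg.Namespace {
		return false
	}
	return true
}

func main() {
	scoped := config{Namespace: "tenant-a"}
	fmt.Println(shouldValidate(scoped, ingressMeta{Namespace: "tenant-a"})) // true
	fmt.Println(shouldValidate(scoped, ingressMeta{Namespace: "tenant-b"})) // false
	fmt.Println(shouldValidate(config{}, ingressMeta{Namespace: "tenant-b"})) // true: cluster-wide
}
```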

@aledbf
Member Author

aledbf commented Feb 25, 2020

That way those of us using it in a multitenant environment can install the webhook globally all by itself, and then install multiple main charts with the webhook turned off?

That approach will not work. The webhook is part of the ingress controller. When the webhook receives an AdmissionReview, we build a new model from the content of the informers, replacing the old ingress definition with the new one, and check that there are no errors when generating the nginx.conf file or the internal model.

@kfox1111

So the webhook is not used for object validation, but for building/caching the config inside the controller? Maybe I'm misunderstanding what you said.

Admission control registration is usually restricted to cluster admins rather than namespace admins, so it is typically not used for watching things at a per-namespace level, since those launching it usually do not have permissions for the entire cluster.

@aledbf
Member Author

aledbf commented Feb 25, 2020

So the webhook is not used for object validation

Yes, it is used for that. In the context of ingress-nginx, the most important validation is to check that the definition does not break the generation of nginx.conf or produce an invalid configuration that would prevent the reload, with the subsequent failure of the probes triggering a CrashLoopBackOff.

@kfox1111

Ah, OK. So I guess what we need to do at our site is deploy one instance with the webhook enabled but listening only to a namespace no one uses. That would let it do the validation for the entire cluster. Then we disable the webhook for each of the tenants. Thanks for the info/help.

@aledbf
Member Author

aledbf commented Apr 18, 2020

@ChiefAlexander any chance you can take a look at these tasks?

  • Ensure support for Helm v2 && v3
  • Create Migration Documentation for users moving from stable/nginx-ingress

I plan to release the next version in a week.
Thanks

@ChiefAlexander
Contributor

The Migration Documentation might take a bit. I'll make sure Helm v2 and v3 work tonight.

@ChiefAlexander
Contributor

Confirmed that installing with the latest versions of both Helm v2 and Helm v3 works as expected, using the default values on clean installs.

@aledbf
Member Author

aledbf commented Apr 26, 2020

Closing. The only remaining item is the blog post for the migration path from the stable repository.

@alexellis

It would be nice to have a much simpler way to enable host mode like we do in the arkade app

@ayamGelo

Heyyo, I come with news, maybe good.
Is this still TODO? "Create Migration Documentation for users moving from stable/nginx-ingress"

I just want to report that nginx-ingress with a NodePort service will break the upgrade to the new ingress-nginx, because Helm detects that the Service kind changed and that the existing NodePort is already in use.
I tried renaming nameOverride in the Helm values to nginx-ingress, but it had no effect.
There is no zero-downtime path.
