
WIP: Roadmap to 1.0 #2480

Closed

aledbf opened this issue May 8, 2018 · 14 comments
Labels
lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale.

Comments

@aledbf
Member

aledbf commented May 8, 2018

  • Web administration UI

  • Kubectl plugin https://gist.github.com/aledbf/db25dbc723fc7d71345a9657abfd766d

  • Installation UX (plugin)

  • Help debugging Ingress errors

  • Monitoring:

  • Minikube:

    • change from RC to Deployment
  • Autoscaling:

    • HPA with custom metrics
  • Dynamic Configuration without reload

    • Zero-downtime deployment for upstreams
  • Tutorials/guides/examples

  • Documentation - technical writer

  • GRPC

  • Caching

  • Routing by HTTP Header and Method

  • Canary Releases

  • OOTB Cert manager integration

  • Extensions (plugins)

    • Provide extensions points during the request lifecycle
    • Like Auth (for OpenID connect)
    • Enhance request
    • Using external resources (serverless functions)
  • Migrate to CRDs

  • Remove annotations

  • Migrate from Ingress/annotation using the kubectl plugin manually (yaml output) or automatically

  • Aspirational

@kfox1111

kfox1111 commented May 8, 2018

Is the web UI something different from kube-dashboard? If so, why?

Is the installation UX different from a Helm chart?

What does migrating to CRDs mean? Likewise, removing annotations?

@aledbf
Member Author

aledbf commented May 8, 2018

Is the web UI something different from kube-dashboard? If so, why?

You only get the same features as kubectl get ingress, i.e. you don't get information about the service, port, TLS, annotations, etc.

Is the installation UX different from a Helm chart?

Yes, setting up the controller should not require knowledge of Go templates or YAML files.

What does migrating to CRDs mean? Likewise, removing annotations?

Migrate the current configuration in the ConfigMap and annotations to CRDs, to provide the semantics we cannot express with annotations, where everything is a string.
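To illustrate the difference, here is a hedged sketch of the two styles: a real string-only annotation today versus a hypothetical typed custom resource (the CRD group, kind, and field names below are invented for illustration, not the actual design):

```yaml
# Today: everything is a free-form string annotation on the Ingress.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: "8m"   # just a string
---
# Hypothetical CRD equivalent: typed fields that can be validated.
apiVersion: nginx.example.com/v1alpha1   # hypothetical group/version
kind: IngressConfig                      # hypothetical kind
metadata:
  name: my-app                           # same name as the Ingress it configures
spec:
  proxy:
    bodySize: 8Mi                        # a typed quantity, not a free-form string
```

With a schema attached to the CRD, the API server can reject an invalid bodySize at apply time, which is impossible with opaque annotation strings.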

@kfox1111

kfox1111 commented May 8, 2018

So, is that a reason to create a new UI, or a reason to enhance kube-dashboard to support those things?

Helm does not require you to know Go templates or YAML files. As an op, a proliferation of ways to install things can be difficult to handle. When the installation tools are nestable, as Helm charts are, you can more easily tie all the pieces together. For example, you could make a Helm chart that contains nginx-ingress, prometheus, and grafana subcharts; then, out of the box, you get all the monitoring stuff at once, and you don't have to maintain the install code for the other parts. When someone updates the main prometheus chart with a new feature, nginx-ingress's chart gets it for free.
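The nesting described above can be sketched as an umbrella chart's dependency list (Helm 2 style requirements.yaml; the version ranges and repository URL are illustrative, not pinned recommendations):

```yaml
# requirements.yaml for a hypothetical umbrella chart
dependencies:
  - name: nginx-ingress
    version: "~0.9"          # illustrative version range
    repository: https://kubernetes-charts.storage.googleapis.com
  - name: prometheus
    version: "~7.0"          # illustrative
    repository: https://kubernetes-charts.storage.googleapis.com
  - name: grafana
    version: "~1.0"          # illustrative
    repository: https://kubernetes-charts.storage.googleapis.com
```

A single helm install of the umbrella chart then brings up the controller and its monitoring stack together, and each subchart is maintained upstream.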

Ah. So would there be an annotation left that points to the CRD for the rest of the settings? Or would the CRD somehow point at the Ingress objects it should be applied to? Are any other ingress controllers going to take this approach?

Currently there are a lot of Helm charts that allow end users to specify ingress annotations. Will they all need to be changed to create CRDs with user-specifiable content, or will annotations still be supported for everything that can be a string? I kind of like the idea that annotations would become validatable via CRDs, though. It does complicate the deployment a bit, as you need to load one and only one CRD registration before the (possibly multiple) nginx-ingress controller class instances. (I usually use 2 ingress classes: internet and private.) It would also probably require extra RBAC permissions on the end user's behalf for the CRDs.

So, I guess there are a lot of tradeoffs here... Could both annotations (for the things they can express) and CRDs be supported, so we don't trade one set of problems for another? Alternately, could we push for an ingress-controller-specific section to be added to the Ingress object instead of a CRD?

@aledbf
Member Author

aledbf commented May 8, 2018

So, is that a reason to create a new UI, or a reason to enhance kube-dashboard to support those things?

#109
kubernetes/dashboard#1832

Alternately, could we push for an ingress-controller-specific section to be added to the Ingress object instead of a CRD?

This requires a change to the Ingress spec in Kubernetes core. I don't see that happening any time soon.

Ah. So would there be an annotation left that points to the CRD for the rest of the settings? Or would the CRD somehow point at the Ingress objects it should be applied to?

For simplicity, the CRD name is the same as the Ingress's, so you don't need an annotation. Using an annotation to point to a specific CRD should also be supported.
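A hedged sketch of the two lookup modes described (the CRD group, kind, and annotation key are hypothetical placeholders, not the actual design):

```yaml
# Mode 1: implicit — the custom resource shares the Ingress's name and
# namespace, so no annotation is needed to link them.
apiVersion: nginx.example.com/v1alpha1   # hypothetical group/version
kind: IngressConfig                      # hypothetical kind
metadata:
  name: my-app                           # matches the Ingress named "my-app"
  namespace: default
---
# Mode 2: explicit — an annotation on the Ingress points at a specific
# custom resource, allowing several Ingresses to share one config object.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app
  annotations:
    nginx.example.com/config: shared-config   # hypothetical annotation key
```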

Are there any other ingress controllers going to take this approach?

This is an example https://github.com/heptio/contour/blob/master/design/ingressroute-design.md

Will they all need to be changed to create CRDs with user-specifiable content, or will annotations still be supported for everything that can be a string?

Only for a couple of releases; then we need to deprecate annotations and focus only on the CRDs.

So, I guess there are a lot of tradeoffs here....

As always :)

@yuyang0

yuyang0 commented Jul 17, 2018

+1 for Canary Releases

@chenquanzhao

@aledbf #2560 is just for canary releases. The implementation is based on nginx's split_clients directive and can achieve the release target.
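For context, split_clients hashes a request variable into weighted buckets, which is one way to send a fixed percentage of traffic to a canary backend. A minimal sketch of the idea (this lives in the http context of nginx.conf; the upstream names, endpoints, and 5% weight are illustrative, not what #2560 implements):

```nginx
# Illustrative backends for the stable and canary versions.
upstream stable { server 10.0.0.1:8080; }
upstream canary { server 10.0.0.2:8080; }

# Hash client address + user agent; ~5% of hashes land in the canary bucket.
split_clients "${remote_addr}${http_user_agent}" $upstream_variant {
    5%      canary;
    *       stable;
}

server {
    listen 80;
    location / {
        proxy_pass http://$upstream_variant;
    }
}
```

Because the split is keyed on a hash of per-client values, a given client is consistently routed to the same variant rather than flip-flopping between releases.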

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 19, 2018
@aledbf aledbf removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 3, 2018
@aledbf aledbf mentioned this issue Feb 9, 2019
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 29, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jun 28, 2019
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@kfox1111

/reopen
/remove-lifecycle rotten

@k8s-ci-robot
Contributor

@kfox1111: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to this:

/reopen
/remove-lifecycle rotten

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Jul 29, 2019
@aledbf aledbf reopened this Jul 29, 2019
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 27, 2019
@aledbf aledbf closed this as completed Oct 27, 2019