Proposal to donate Kubernetes Event-driven Autoscaling (KEDA) as Sandbox project #335

Closed
tomkerkhove opened this issue Jan 14, 2020 · 26 comments · Fixed by #383

@tomkerkhove commented Jan 14, 2020

Hi,

We'd like to propose donating Kubernetes Event-driven Autoscaling (KEDA) as a Sandbox project.

Since the new process is still in flux, I'm basing this on the current proposal template, but I have left out External Dependencies and Release methodology & mechanics, as these are still WIP and not relevant at this stage.

CC @jeffhollan @anirudhgarg @duglin
Relates to kedacore/keda#501

Proposal

Name of project: Kubernetes Event-driven Autoscaling

Description:

Deploying applications on Kubernetes has become trivial, but that's only where it begins. As a platform gains momentum, it has to survive and adapt, and autoscaling is an important part of accomplishing that.

While Kubernetes provides application autoscaling out of the box with the Horizontal Pod Autoscaler (HPA), it is sometimes too limited: it only covers resource metrics, while you often want to scale on external metrics instead. In practice, that means you have to deploy a 3rd-party metrics adapter for the system you depend on, but you can only run one adapter per cluster and you have to manage it yourself.
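
As a point of reference, the "resource metrics" path looks roughly like the sketch below: a plain HPA (autoscaling/v2beta2 at the time of writing) scaling a hypothetical order-processor deployment on CPU utilization. Scaling on anything beyond this requires an external metrics adapter.

```yaml
# Illustrative sketch only - the deployment name and thresholds are made up.
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: order-processor
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: order-processor
  minReplicas: 1          # a plain HPA cannot scale to zero
  maxReplicas: 10
  metrics:
    - type: Resource      # built-in resource metrics (CPU/memory) only
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```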

In 2019, Kubernetes Event-driven Autoscaling (KEDA) was launched by Microsoft and Red Hat to build an open, application-oriented scaling mechanism that is vendor-agnostic and acts as an external metrics aggregator, allowing you to scale from 0 to n based on a variety of external metrics from different vendors. On November 19th, 2019, KEDA v1.0 was released, making it production-ready and providing enterprise-grade security.

With Kubernetes Event-driven Autoscaling (KEDA) we aim to make application autoscaling super simple, regardless of the data source you are using, by abstracting away the metrics adapters and allowing KEDA customers to focus only on the scaling needs of their platform.

Customers describe how their workloads (deployments or jobs) should scale based on their criteria, deploy that definition as a first-class resource on Kubernetes, and that's it - Kubernetes Event-driven Autoscaling (KEDA) handles the rest; a sketch of such a resource is shown below.
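
A minimal sketch of such a resource, assuming the keda.k8s.io/v1alpha1 API that shipped with KEDA v1.0 (the deployment, queue and connection names are invented, and exact fields may differ between versions):

```yaml
apiVersion: keda.k8s.io/v1alpha1
kind: ScaledObject
metadata:
  name: order-processor-scaler
  labels:
    deploymentName: order-processor           # the v1 API also expects this label
spec:
  scaleTargetRef:
    deploymentName: order-processor           # the workload to scale
  pollingInterval: 30                         # seconds between metric checks
  minReplicaCount: 0                          # scale to zero when idle
  maxReplicaCount: 10
  triggers:
    - type: azure-queue
      metadata:
        queueName: orders
        queueLength: "5"                            # target queue length per replica
        connection: ORDERS_QUEUE_CONNECTION_STRING  # env var on the target deployment
```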

Instead of reinventing the wheel, Kubernetes Event-driven Autoscaling (KEDA) extends Kubernetes: when a workload has to scale from 0 to n instances, KEDA automatically creates a Horizontal Pod Autoscaler (HPA) for it and removes it again once there is no work left.

Kubernetes Event-driven Autoscaling (KEDA) provides 15+ scalers which allow customers to automatically scale workloads based on external systems. Our portfolio includes AWS, GCP, Microsoft Azure & Huawei Cloud, as well as other technologies such as Kafka, Prometheus, NATS and more, and new scalers can be added very easily.
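
To illustrate the scaler model with a CNCF-hosted source, a Prometheus-based trigger could look roughly like the fragment below (the server address, metric name and query are hypothetical); it slots into a ScaledObject's spec.triggers list just like the queue trigger sketched earlier:

```yaml
# Fragment of a ScaledObject spec - not a standalone manifest.
triggers:
  - type: prometheus
    metadata:
      serverAddress: http://prometheus.monitoring.svc.cluster.local:9090
      metricName: http_requests_per_second
      query: sum(rate(http_requests_total{app="order-processor"}[2m]))
      threshold: "100"    # add a replica for roughly every 100 req/s returned by the query
```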

By leveraging scale-to-zero, Kubernetes Event-driven Autoscaling (KEDA) allows customers to build resource-friendly applications by making unused resources available to other applications in the Kubernetes cluster that really need them.

Kubernetes Event-driven Autoscaling (KEDA) provides production-grade security by supporting pod identities, like Azure AD Pod Identity, to avoid secret management and allow authentication to be re-used across multiple scalers. This allows existing deployments to keep running under the same minimal permissions while KEDA scalers use higher-privileged authentication to obtain the required metrics. With this approach, developers can focus on their workload while ops manages the authentication and configuration of the autoscaling, although it can also be managed end-to-end by a full-stack team.
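
A rough sketch of how that separation can look, again assuming the keda.k8s.io/v1alpha1 API with invented names: ops defines a TriggerAuthentication that delegates to a pod identity provider, and scalers reference it instead of embedding credentials.

```yaml
apiVersion: keda.k8s.io/v1alpha1
kind: TriggerAuthentication
metadata:
  name: azure-queue-auth
spec:
  podIdentity:
    provider: azure            # e.g. Azure AD Pod Identity, managed by ops
```

```yaml
# Fragment: a ScaledObject trigger reusing the authentication above,
# so the workload itself never holds the higher-privileged credentials.
triggers:
  - type: azure-queue
    authenticationRef:
      name: azure-queue-auth
    metadata:
      queueName: orders
      queueLength: "5"
```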

Statement on alignment with CNCF mission:

Kubernetes Event-driven Autoscaling (KEDA)'s mission is to make application autoscaling simple, allowing Kubernetes users to focus on their workloads rather than the scaling infrastructure. As part of that mission, we want to support as many customers as possible by being vendor-neutral and open to scaling on any system.

The Kubernetes Event-driven Autoscaling (KEDA) project does not want to reinvent the wheel but to build on standards instead, and is complementary to Kubernetes. In addition, we support CNCF projects like Prometheus and NATS by providing scalers for them, and we package our product as a Helm chart (2.x & 3.x) which is available on Helm Hub. Lastly, we are vendor-neutral, supporting all major clouds as well as open-source technologies like Redis.

While the scaling features of Kubernetes Event-driven Autoscaling (KEDA) are important, we strongly believe in making the product itself operable as well. By using the Operator SDK we want to allow operators to efficiently manage the infrastructure necessary to run Kubernetes Event-driven Autoscaling (KEDA), and we plan to provide a better operational experience as a whole with a CLI, dashboard, and more.

Security is one of our main focuses and we will keep investing in it - this is why pod identity was a priority for 1.0, and we will continue to support more identity providers over time.

Sponsors from TOC: To Be Determined

Preferred maturity level: Sandbox

License: MIT

Source control: GitHub (https://github.com/kedacore)

Initial Committers:

Founding Maintainers:

  • Jeff Hollan (Microsoft)
  • Anirudh Garg (Microsoft)
  • Aarthi Saravanakumar (Microsoft)
  • Yaron Schneider (Microsoft)
  • Ahmed ElSayed (Microsoft)
  • Zbynek Roubalik (Red Hat)
  • Ben Browning (Red Hat)
  • Tom Kerkhove (Codit)

Additional Maintainers:

  • Shekhar Patnaik (Microsoft)
  • Zach Dunton (Smartfrog)

Infrastructure requests (CI / CNCF Cluster):

We do not have any infrastructure requests, as we already have everything set up:

  • GitHub Actions for CI/CD
  • Docker images are stored on Docker Hub
  • Helm charts are hosted on GitHub Pages

Communication Channels:

Website: https://keda.sh

Social media accounts:

Existing sponsorship: Microsoft and Red Hat

Community size:

The community around KEDA is already fairly big given that it was only introduced in 2019: it has 37 contributors from 10+ companies, including Microsoft, Red Hat, Pivotal, Readify and more, and 1.6k stars on GitHub.

The Docker images are available on Docker Hub and have already been downloaded more than 100K times, and KEDA is gaining momentum in the operator space.

@lizrice commented Jan 14, 2020

Great! Thanks @tomkerkhove.

@quinton-hoole @raravena80 please could SIG Runtime take a look and give the TOC a recommendation?

@raravena80 commented:
Thanks @lizrice, we'll discuss it in the upcoming meeting (01/16/20). FYI, we will be following this process.

@jeffhollan commented:
Thanks - and I know there have been conversations around SIGs (Apps and Runtime) and how this relates to the Serverless WG led by @duglin, but it's worth noting we did present to that group in November and had strong agreement that this fits well. We've actually had a few follow-on meetings as well. Let me know if anything is needed from me for 1/16.

@raravena80 commented:
Update: We discussed this at the 1/16 meeting; however, SIG-Runtime is just being stood up and I was the only chair present. I'd like to discuss with the other chairs and tech leads (@quinton-hoole, @dfeddema, @k82cn) what TOC recommendation template to use (if any at this point). We welcome any suggestions too.

@tomkerkhove commented:
Thanks for the update @raravena80!

@lizrice probably knows the answer to your template question.

@tomkerkhove commented:
Any feedback on this @quinton-hoole, @dfeddema, @k82cn?

@erinaboyd commented:
@raravena80 We haven't officially voted/published a template, but it will be based on this: https://docs.google.com/document/d/1x3jlFsP0Z5DGyXiU3mwPMxH5HGnoi_ZvUcpczBMGwz4/edit?usp=sharing

@raravena80 commented:
Thanks, @erinaboyd. Since it hasn't been voted/published, my take is that in the meantime for sandbox we'll use slide 2 here: https://docs.google.com/presentation/d/1xQHPHI7U2WxIUJm5zSj1MamWNr-Ru7_4nABrRkveDBU

If there are no objections from @quinton-hoole, @dfeddema, @k82cn, I'd like to invite the project to present at the SIG-Runtime meeting.

@tomkerkhove commented:
Thank you @raravena80!

We are happy to present to SIG Runtime; if I understand correctly, the next meeting is on February 6th at 8am (US Pacific)?

@jeffhollan You are definitely the best man to pitch KEDA - are you up for doing that?

@k82cn commented Jan 31, 2020

If there are no objections from @quinton-hoole, @dfeddema, @k82cn, I'd like to invite the project to present at the SIG-Runtime meeting.

It would be great to have a presentation at the SIG-Runtime meeting :)

@jeffhollan commented:
Definitely happy to do so. Can plan on joining the Feb 6 meeting. Let me know if you need anything else as well.

@raravena80 commented:
@tomkerkhove @jeffhollan actually, we already have a 20 min presentation scheduled on Feb 6th. Feb 20th works better for the presentation; all co-chairs have a meeting 30 minutes after the SIG-Runtime meeting - does that work? If so, you can add it to the agenda, with the time it's expected to take, here:
https://docs.google.com/document/d/1k7VNetgbuDNyIs_87GLQRH2W5SLgjgOhB6pDyv89MYk/
You are welcome to join the Feb 6th meeting too.

@jeffhollan commented:
Yep, that will work. I'm anxious to keep this progressing 😄 but can wait 20 more days to present. I'll join the meeting on the 6th, as it would be great to meet a few of you folks - we've been working most closely with the Serverless WG, so we're not as familiar with SIG-Runtime. I'll also plan on the 20th for now. Thanks for the help.

@tomkerkhove commented:
Thank you for having us present KEDA at the SIG-Runtime standup, @raravena80!

It was a pleasure, and we've opened a PR against the SIG-Runtime recommendation list - thanks for considering it!
cncf/tag-runtime#10

Once that's merged we'll be looking for sponsors; if anybody from the TOC is willing to sponsor us, feel free to reach out.

We've already had interest from @duglin - are you still willing to sponsor us?

@tomkerkhove commented:
SIG-Runtime is now officially supporting us, as the recommendation has been merged and is available at https://github.com/cncf/sig-runtime/blob/master/recommendations/sandbox/keda.md

Thank you @raravena80!

We are actively looking for sponsors - is anybody from the TOC interested in supporting us?

@lizrice commented Mar 2, 2020

As someone with some history in autoscaling in a previous life, I'm happy to sponsor KEDA to Sandbox.

@tomkerkhove commented:
Awesome, thank you @lizrice!

@tomkerkhove commented:
While we are working to get more TOC sponsors, I've noticed that new proposals have to use PRs instead. Is this something we have to do as well? Otherwise I'm happy to start this in parallel.

@michelleN commented Mar 5, 2020

I'm happy to be a TOC sponsor for KEDA.

cc/ @amye @lizrice

@tomkerkhove commented Mar 5, 2020 via email

@xiang90 commented Mar 9, 2020

I would like to sponsor KEDA for sandbox project.

@jeffhollan commented:
Thank you @xiang90, @michelleN, and @lizrice! Let us know what we should do for next steps now that we've secured the 3 sponsors (@amye as well, since I saw you helped with the backlog before).

@caniszczyk commented:
@jeffhollan can you create a PR for KEDA here? https://github.com/cncf/toc/tree/master/proposals/sandbox

List the confirmed TOC sponsors and we will work with you to do the asset transfer and set up the project maintainers with the CNCF servicedesk :)

@caniszczyk commented:
@jeffhollan you can email me caniszczyk@linuxfoundation.org for the asset transfer dance

@caniszczyk commented:
we will close this now that there is an official PR in: #383

@tomkerkhove commented:
Thank you so much @xiang90!
