Merge pull request #383 from jeffhollan/keda-sandbox
adding keda sandbox proposal
caniszczyk authored Mar 18, 2020
2 parents 20fb811 + 2291a24 commit 48d0353
87 changes: 87 additions & 0 deletions proposals/sandbox/keda.adoc
@@ -0,0 +1,87 @@
=== KEDA CNCF Sandbox Project Proposal

*Name of project:* KEDA - Kubernetes Event-Driven Autoscaling

*Description:*

Deploying applications on Kubernetes has become trivial, but that is only where it begins. As a platform gains momentum, it has to survive and adapt, and autoscaling is an important part of accomplishing this.

While Kubernetes provides application autoscaling out of the box with the Horizontal Pod Autoscaler (HPA), it is sometimes too limited: it only covers resource metrics, while you often want to scale on external metrics instead. In practice, that means you have to deploy a third-party metrics adapter for the system you depend on, but you can only run one per cluster and you have to manage it yourself.

In 2019, Kubernetes Event-driven Autoscaling (KEDA) was launched by Microsoft and Red Hat to build an open, application-oriented scaling mechanism that is vendor-agnostic and acts as an external metrics aggregator, allowing you to use 0-to-n scaling based on a variety of external metrics from different vendors. On November 19th, 2019, KEDA v1.0 was released, making it production-ready and providing enterprise-grade security.

With Kubernetes Event-driven Autoscaling (KEDA) we aim to make application autoscaling simple, regardless of the data source you are using, by abstracting away the metrics adapters and allowing KEDA customers to focus only on the scaling needs of their platform.

Customers describe how their workloads (deployments or jobs) should scale based on their criteria and deploy that definition as a first-class resource on Kubernetes; Kubernetes Event-driven Autoscaling (KEDA) handles the rest.
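
For illustration, a minimal ScaledObject might look like the sketch below. The deployment name, queue name, and trigger choice are hypothetical, and the manifest is abridged (field names follow the KEDA v1.x CRD; scaler authentication is configured separately):

[source,yaml]
----
apiVersion: keda.k8s.io/v1alpha1
kind: ScaledObject
metadata:
  name: order-processor-scaler        # hypothetical name
spec:
  scaleTargetRef:
    deploymentName: order-processor   # the Deployment that KEDA should scale
  minReplicaCount: 0                  # allow scale-to-zero when there is no work
  maxReplicaCount: 10                 # upper bound for scale-out
  triggers:
  - type: azure-queue                 # one of the built-in scalers
    metadata:
      queueName: orders               # hypothetical queue to monitor
      queueLength: "5"                # target messages per replica
----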

Instead of reinventing the wheel, Kubernetes Event-driven Autoscaling (KEDA) extends Kubernetes: when a workload has to scale from 0 to n instances, KEDA automatically creates a Horizontal Pod Autoscaler (HPA) for it and removes it again once there is no work left.

Kubernetes Event-driven Autoscaling (KEDA) provides more than 15 scalers that allow customers to automatically scale workloads based on external systems. Our portfolio includes AWS, GCP, Microsoft Azure & Huawei Cloud, as well as other technologies such as Kafka, Prometheus, NATS and more, and new scalers can be added very easily.

By leveraging scale-to-zero, Kubernetes Event-driven Autoscaling (KEDA) allows customers to build resource-friendly applications by making unused resources available to other applications in the Kubernetes cluster that really need them.

Kubernetes Event-driven Autoscaling (KEDA) provides production-grade security by supporting pod identities, like Azure AD Pod Identity, to avoid secret management and allow authentication to be re-used across multiple scalers. This allows existing deployments to keep running with the same minimal permissions, while KEDA scalers use higher-privileged authentication to obtain the required metrics. With this approach, developers can focus on their workload while ops manages the authentication & configuration of the autoscaling, although it can also be managed end-to-end by a full-stack developer.
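
As a sketch of that separation of concerns, authentication can be declared once as a TriggerAuthentication resource and referenced from a trigger; the provider choice and names below are illustrative assumptions, not part of this proposal:

[source,yaml]
----
apiVersion: keda.k8s.io/v1alpha1
kind: TriggerAuthentication
metadata:
  name: queue-pod-identity       # hypothetical name, owned by ops
spec:
  podIdentity:
    provider: azure              # use Azure AD Pod Identity instead of storing a secret
----

A ScaledObject trigger would then point at this resource via an `authenticationRef` with the same name, so the workload itself never needs the higher-privileged credentials.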

KEDA was presented at KubeCon 2019 (https://www.youtube.com/watch?v=ZK2SS_GXF-g).

*Statement on alignment with CNCF mission:*

Kubernetes Event-driven Autoscaling (KEDA)'s mission is to make application autoscaling simple, allowing Kubernetes users to focus on their workloads rather than the scaling infrastructure. As part of that mission, we want to support as many customers as possible by being vendor-neutral and open to scaling on any system.

The Kubernetes Event-driven Autoscaling (KEDA) project does not want to reinvent the wheel but builds on standards instead and is complementary to Kubernetes. In addition, we support CNCF projects like Prometheus and NATS by providing scalers for them, and we package our product as a Helm chart (for Helm 2.x & 3.x) which is available on Helm Hub. Lastly, we are vendor-neutral, supporting all major clouds as well as open-source technologies like Redis.

While the scaling features of Kubernetes Event-driven Autoscaling (KEDA) are important, we strongly believe in making the product itself operable as well. By using the Operator SDK we want to allow operators to efficiently manage the infrastructure necessary to run Kubernetes Event-driven Autoscaling (KEDA), and we plan to provide a better operational experience as a whole through a CLI, a dashboard, and more.

Security is one of our main focuses and we will keep investing in it; this is why pod identity was a priority for 1.0, and we will continue to support more identity providers over time.

*Sponsors from TOC:* Liz Rice, Michelle Noorali, and Xiang Li

*Unique identifier:* keda

*Preferred maturity level:* Sandbox

*License:* MIT

*Source control:* GitHub (https://github.com/kedacore)

*Initial Committers (leads):*

* Jeff Hollan (Microsoft)
* Anirudh Garg (Microsoft)
* Aarthi Saravanakumar (Microsoft)
* Yaron Schneider (Microsoft)
* Ahmed ElSayed (Microsoft)
* Zbynek Roubalik (Red Hat)
* Ben Browning (Red Hat)
* Tom Kerkhove (Codit)

*Additional Maintainers:*

* Shekhar Patnaik (Microsoft)
* Zach Dunton (Smartfrog)

*Infrastructure requests (CI / CNCF Cluster):* None

*Communication Channels:*

* IM/Slack: #keda on https://kubernetes.slack.com
* Issue tracker: https://github.com/kedacore/keda
* Mailing List: None
* Website: https://keda.sh
* Community Meeting Notes: https://hackmd.io/cEi_FerdQvyTvB1i-5U0kg

*Social media accounts:*

* Twitter: @kedaorg

*Existing sponsorship:* Microsoft and Red Hat

*Community size:*

* 1.8k stars
* 50 contributors
* 149 forks

The Docker images are available on Docker Hub and have already been downloaded more than 100,000 times, and the project is gaining momentum in the operator space.
