Introduce autoscalers for inbox listener and stream service #18
Force-pushed from a4b295b to b63e90e
I think we can go further with the environment overrides - there is no need for our values file to define the Azure-specific stuff like storageAccountName and serviceUrl. These are just details of how the user wants to auth their pods to a cloud provider, rather than part of our chart. We can remove those from values.yaml and instead instruct users to supply env vars as necessary. Similarly, I don't think "gcp-cloud-credential" needs to appear in our charts; we can instead allow the user to mount volumes via templates, using their values file. I have not changed this in the current patch, though.
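As a sketch of what that could look like (the `extraEnv`, `extraVolumes`, and `extraVolumeMounts` keys here are hypothetical, not part of the current chart):

```yaml
# values.yaml (user-supplied) -- key names are illustrative, not in the chart today
extraEnv:
  - name: AZURE_STORAGE_ACCOUNT_NAME   # example only; any provider-specific vars
    value: mystorageaccount
extraVolumes:
  - name: gcp-cloud-credential
    secret:
      secretName: gcp-cloud-credential
extraVolumeMounts:
  - name: gcp-cloud-credential
    mountPath: /var/secrets/google
    readOnly: true
```

The deployment template would then splice these through verbatim, e.g. `{{- toYaml .Values.extraEnv | nindent 12 }}` under the container's `env:` block, so the chart stays cloud-agnostic.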
@@ -1,9 +1,9 @@
 apiVersion: batch/v1
 kind: CronJob
 metadata:
-  name: garbage-collector
+  name: {{ .Values.garbageCollector.name }}
Why do we need to template this?
@@ -1,9 +1,9 @@
 apiVersion: batch/v1
Why are all these moved out of the folders? Is there some problem with the folders?
garbageCollector:
  name: garbage-collector
  schedule: "*/10 * * * *"  # every 10 minutes
Should these three items (schedule, failedJobs, successfulJobs) be under a deployment section?
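For illustration, the nesting being suggested might look like this (key names and values beyond those mentioned above are assumptions):

```yaml
garbageCollector:
  name: garbage-collector
  deployment:
    schedule: "*/10 * * * *"  # every 10 minutes
    failedJobs: 1             # illustrative values
    successfulJobs: 3
```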
Not sure why we did away with the folders - those seemed nice? Do other helm charts in the wild use folders?
If you want to make names configurable, I think you'll need to update the stream-service-svc.yaml file to reference a name as well.
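Concretely, the Service's metadata and selector would need the same templated value, along these lines (sketch, assuming the value is exposed as `streamService.name`):

```yaml
# stream-service-svc.yaml -- sketch only
apiVersion: v1
kind: Service
metadata:
  name: {{ .Values.streamService.name }}
spec:
  selector:
    app: {{ .Values.streamService.name }}
  # existing ports unchanged
```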
What's the purpose of the env override? Is this a common pattern? I think we should remove this and expose configuration options explicitly.
Does this fail gracefully if KEDA is not installed? Do we need to allow the user to select a namespace other than keda?
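One way to degrade gracefully would be to gate the KEDA resources on the CRD's API group being present, e.g. (sketch; the resource names are assumptions):

```yaml
{{- if .Capabilities.APIVersions.Has "keda.sh/v1alpha1" }}
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: inbox-listener
spec:
  scaleTargetRef:
    name: inbox-listener
{{- end }}
```

One caveat: `.Capabilities.APIVersions` is only populated against a live cluster, so an offline `helm template` run would also skip the object.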
Adds autoscalers for the inbox-listener and stream service. The inbox listener autoscaler relies on KEDA. The stream service autoscaler is a standard HPA.

KEDA has some unfortunate friction with Helm. It cannot be added as a dependency to the chart and installed in a reasonable way, because it relies on CRDs, which Helm will not be responsible for updating or installing in order. Based on conversation in kedacore/charts#226, there are two alternative approaches open to us:

* We can instruct users to install KEDA on their cluster prior to installing our deployment. The command for this is:

  helm install keda --version 2.9.1 --namespace keda kedacore/keda --create-namespace

* We can vendor KEDA's CRDs and put them into our chart, under a crd directory. This should enable a single-command install, but comes with the downsides that:
  1. Helm will not update or manage the KEDA CRDs for us.
  2. If users already have KEDA installed (certainly possible) we will encounter a conflict.

Mainly due to the second item, this PR takes the first approach.
This allows alternative mechanisms for GCP auth to be used, for instance the minikube gcp-auth plugin.
Restructures the templates directory so as not to suggest that every resource type must live under a dedicated folder. In most cases we have just one instance of each resource type, and creating a separate directory for e.g. a scaled object seems like overkill.
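For reference, a flattened layout puts each resource directly under `templates/`, e.g. (file names other than stream-service-svc.yaml are illustrative):

```text
templates/
├── garbage-collector-cronjob.yaml
├── inbox-listener-scaledobject.yaml
├── stream-service-deployment.yaml
├── stream-service-hpa.yaml
└── stream-service-svc.yaml
```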
Force-pushed from b63e90e to 26903e9
I don't like the additional creds changes
We're going in a different direction for now: #28