Allow resource allocation #2216
Conversation
6189db0 to 8fe094b
@anwbtom can you run
Thanks for the tip @kate-osborn, just ran the command and it only made a change in the README of the chart.
Codecov Report
All modified and coverable lines are covered by tests ✅

Additional details and impacted files

@@            Coverage Diff            @@
##              main    #2216   +/-   ##
=========================================
  Coverage    87.62%   87.62%
=========================================
  Files           96       96
  Lines         6715     6715
  Branches        50       50
=========================================
  Hits          5884     5884
  Misses         774      774
  Partials        57       57

☔ View full report in Codecov by Sentry.
Proposed changes
Problem: When using Kubernetes clusters that autoscale based on resource requests (for example Karpenter implementations or Fargate-style setups), pods that request 0 CPU and memory can get evicted when the node they land on is strapped for one of those resources. This change gives users the ability to set resource requests and limits in the way they see fit.
Solution: Allow users to set resource values (https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-requests-and-limits-of-pod-and-container).
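As a rough illustration, a Helm values override for this could look like the sketch below. This assumes the chart exposes a standard Kubernetes `resources` block under the controller's values; the top-level key, field path, and numbers here are hypothetical examples, not the chart's actual defaults.

```yaml
# Hypothetical values override -- key path and numbers are examples only.
nginxGateway:
  resources:
    requests:
      cpu: 100m       # non-zero request so request-based autoscalers can place the pod
      memory: 128Mi
    limits:
      cpu: 500m
      memory: 512Mi
```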
Testing: Tested by overriding the defaults and checking in Kubernetes that the resource settings were applied.
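One way to perform that check is to inspect the rendered pod template with kubectl, as sketched below; the deployment name and namespace are placeholders, not necessarily what the chart creates.

```shell
# Print the resources block of the first container in the deployment's pod template.
kubectl get deployment <gateway-deployment> -n <namespace> \
  -o jsonpath='{.spec.template.spec.containers[0].resources}'
```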
Checklist
Before creating a PR, run through this checklist and mark each as complete.
Release notes
If this PR introduces a change that affects users and needs to be mentioned in the release notes,
please add a brief note that summarizes the change.