
Allow resource allocation #2216

Merged: 6 commits merged on Jul 15, 2024

Conversation

@anwbtom (Contributor) commented Jul 10, 2024

Proposed changes

Problem: In Kubernetes clusters that autoscale based on resource requests (such as Karpenter implementations or Fargate-style setups), pods that request 0 CPU and memory can get evicted when the node they land on is strapped for one of those resources. This change gives users the capability to set resource requests and limits as they see fit.

Solution: Allow users to set resource values (https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-requests-and-limits-of-pod-and-container)

Testing: Tested by overriding the defaults and checking in k8s that the resource settings were applied.
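For illustration, a values override using the new settings might look roughly like the sketch below. The exact key paths (`nginxGateway.resources` here) are an assumption based on common Helm chart conventions; the chart's generated README documents the actual values added by this PR.

```yaml
# values.yaml override -- key names are illustrative, not confirmed by this PR;
# check the chart README for the exact paths.
nginxGateway:
  resources:
    requests:
      cpu: 100m      # non-zero request lets request-based autoscalers size nodes correctly
      memory: 128Mi
    limits:
      memory: 256Mi  # limits remain optional; omit cpu limit to avoid throttling
```

With non-zero requests set, request-driven autoscalers such as Karpenter can provision a node with enough headroom instead of packing the pod onto an already-strapped node.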

Checklist

Before creating a PR, run through this checklist and mark each as complete.

  • I have read the CONTRIBUTING doc
  • I have added tests that prove my fix is effective or that my feature works
  • I have checked that all unit tests pass after adding my changes
  • I have updated necessary documentation
  • I have rebased my branch onto main
  • I will ensure my PR is targeting the main branch and pulling from my branch from my own fork

Release notes

If this PR introduces a change that affects users and needs to be mentioned in the release notes,
please add a brief note that summarizes the change.

 Add capability to set resource requests and limits on nginx-gateway deployment

@anwbtom anwbtom requested a review from a team as a code owner July 10, 2024 07:43
@github-actions bot added the documentation, enhancement, and helm-chart labels on Jul 10, 2024
@anwbtom force-pushed the feature/allow-resource-allocation branch from 6189db0 to 8fe094b on July 11, 2024
@kate-osborn (Contributor) commented:

@anwbtom can you run make generate-helm-docs and commit the changes? That should fix the pipeline

@anwbtom (Contributor, Author) commented Jul 12, 2024

Thanks for the tip @kate-osborn, just ran the command and it only made a change in the readme of the chart.
Hopefully that fixes stuff & things :)

codecov bot commented Jul 12, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 87.62%. Comparing base (25f2b89) to head (81d33d9).

Additional details and impacted files
@@           Coverage Diff           @@
##             main    #2216   +/-   ##
=======================================
  Coverage   87.62%   87.62%           
=======================================
  Files          96       96           
  Lines        6715     6715           
  Branches       50       50           
=======================================
  Hits         5884     5884           
  Misses        774      774           
  Partials       57       57           


@sjberman removed the documentation and enhancement labels on Jul 15, 2024
@sjberman sjberman enabled auto-merge (squash) July 15, 2024 14:06
@sjberman sjberman merged commit d7597c3 into nginxinc:main Jul 15, 2024
44 checks passed
@lucacome lucacome changed the title Feature/allow resource allocation Allow resource allocation Jul 15, 2024
Labels: helm-chart, release-notes
3 participants