This repository has been archived by the owner on Dec 15, 2021. It is now read-only.

Should there be a default CPU/Memory limits for the function deployments? #611

Open
murali-reddy opened this issue Feb 28, 2018 · 2 comments

Comments

@murali-reddy
Contributor

Is this a BUG REPORT or FEATURE REQUEST?:

Feature request.

What happened:

Opening an issue as a follow-up to the discussion in #606. By default, the deployment used for a function has no upper limit on CPU and memory. It would be ideal to put an upper cap on these resources.

This should prevent a rogue function from hogging resources (e.g. memory), which could eventually make the node unavailable.
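For illustration, a hedged sketch of what default limits on the function deployment's container spec might look like. The values and container name below are placeholders, not proposed defaults:

```yaml
# Hypothetical sketch only -- the numbers are illustrative, not proposed defaults.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-function
spec:
  template:
    spec:
      containers:
      - name: function          # placeholder name
        image: my-function:latest
        resources:
          requests:
            cpu: 100m            # scheduler reserves this much
            memory: 128Mi
          limits:
            cpu: 500m            # container is throttled above this
            memory: 256Mi        # container is OOM-killed above this
```

Making these defaults configurable (e.g. via the kubeless config) while still allowing per-function overrides would mirror how AWS Lambda handles its memory cap.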

What you expected to happen:

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

  • Kubernetes version (use kubectl version):
  • Kubeless version (use kubeless version):
  • Cloud provider or physical cluster:
@murali-reddy
Contributor Author

https://docs.aws.amazon.com/lambda/latest/dg/limits.html

@sayanh
Contributor

sayanh commented Jun 20, 2018

@murali-reddy Additionally, IMO, default and configurable CPU/memory limits for init containers are also required.
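A hedged sketch of the same idea applied to init containers. Kubernetes accepts the same `resources` block there; the container name and values are placeholders:

```yaml
# Illustrative only -- init containers take the same resources block as regular containers.
spec:
  template:
    spec:
      initContainers:
      - name: install-deps       # placeholder name
        image: install-image:latest
        resources:
          requests:
            cpu: 100m
            memory: 64Mi
          limits:
            cpu: 200m
            memory: 128Mi
```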
