This repository has been archived by the owner on Dec 15, 2021. It is now read-only.

Add CPU limit argument to function commands #606

Merged

Conversation


@SomeoneWeird SomeoneWeird commented Feb 23, 2018

Issue Ref: #605

Description:

Adds --cpu argument to function deploy and function update to enable setting CPU limits, required for HPA.

TODOs:

  • Ready to review
  • Automated Tests
  • Docs

@SomeoneWeird SomeoneWeird force-pushed the add-cpu-limit-parameter branch from 04af1fd to 016ad2b Compare February 23, 2018 00:18
@@ -209,6 +214,7 @@ func init() {
deployCmd.Flags().StringP("trigger-topic", "", "", "Deploy a pubsub function to Kubeless")
deployCmd.Flags().StringP("schedule", "", "", "Specify schedule in cron format for scheduled function")
deployCmd.Flags().StringP("memory", "", "", "Request amount of memory, which is measured in bytes, for the function. It is expressed as a plain integer or a fixed-point integer with one of these suffixes: E, P, T, G, M, K, Ei, Pi, Ti, Gi, Mi, Ki")
deployCmd.Flags().StringP("cpu", "", "", "Request amount of cpu for the function.")
Contributor

you might want to add a sample or a few more words explaining the accepted format for the CPU value. https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#meaning-of-cpu


To autoscale based on CPU usage, it is *required* that your function has been deployed with CPU request limits.

Do do this, use the `--cpu` parameter when deploying your function. Please see the [Meaning of CPU](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#meaning-of-cpu) for the format of the value that should be passed.
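
The accepted values for `--cpu` follow the Kubernetes CPU-quantity convention linked above: either a plain/decimal number of cores ("0.5", "2") or an integer number of millicores with an "m" suffix ("100m"). As a rough illustration of what those strings mean, here is a minimal, hypothetical converter — the real CLI does not contain this helper; it delegates parsing to the Kubernetes `resource.Quantity` machinery:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseCPUMillis converts a Kubernetes-style CPU quantity such as
// "100m" (100 millicores) or "0.5" (half a core) into millicores.
// Illustrative only: the real code uses k8s.io/apimachinery's
// resource.ParseQuantity, which handles many more forms.
func parseCPUMillis(s string) (int64, error) {
	if strings.HasSuffix(s, "m") {
		n, err := strconv.ParseInt(strings.TrimSuffix(s, "m"), 10, 64)
		if err != nil {
			return 0, fmt.Errorf("invalid CPU quantity %q: %v", s, err)
		}
		return n, nil
	}
	f, err := strconv.ParseFloat(s, 64)
	if err != nil {
		return 0, fmt.Errorf("invalid CPU quantity %q: %v", s, err)
	}
	return int64(f * 1000), nil
}

func main() {
	for _, q := range []string{"100m", "0.5", "2"} {
		m, _ := parseCPUMillis(q)
		fmt.Printf("%s -> %dm\n", q, m)
	}
}
```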
Contributor

typo: To do this

@ngtuna
Contributor

ngtuna commented Feb 23, 2018

I added a few comments. The overall code LGTM. Would you like to keep this thread open by adding commits to validate the kubeless autoscale create command, or should I do it in a separate PR? Thanks for working on this :-)

- if mem != "" {
-     funcMem, err := parseMemory(mem)
+ if mem != "" || cpu != "" {
+     funcMem, err := parseResource(mem)
Contributor

@andresmgot andresmgot Feb 23, 2018


I am not sure that I follow: are both mem and cpu required? With this code, if one of the two values is not set the function will fail (trying to parse an empty string). Is that expected?

Contributor Author

parseResource was changed to return an empty resources object if an empty string is passed as input; happy to change it if you want, but this was the easiest way to get things working.

Contributor

ah, true, I missed that. It's okay then.
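
The pattern the thread settles on — parseResource treating an empty string as "no limit requested" rather than a parse error — can be sketched roughly as follows. This is a simplified, hypothetical version using a plain map; the real code builds a Kubernetes `v1.ResourceList` of `resource.Quantity` values:

```go
package main

import "fmt"

// parseResource mirrors the behavior discussed above: an empty input
// yields an empty (but valid) resource map instead of an error, so
// callers can pass through unset --memory/--cpu flags safely.
func parseResource(value, name string) (map[string]string, error) {
	res := map[string]string{}
	if value == "" {
		return res, nil // flag not set: request no limit
	}
	// In the real CLI the value would be validated here via
	// resource.ParseQuantity before being stored.
	res[name] = value
	return res, nil
}

func main() {
	mem, _ := parseResource("", "memory") // unset flag, no error
	cpu, _ := parseResource("100m", "cpu")
	fmt.Println(len(mem), cpu["cpu"])
}
```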

@ngtuna
Contributor

ngtuna commented Feb 24, 2018

As @sebgoa suggested, we might want to add default CPU and memory limits if those flags aren't set.

@SomeoneWeird
Contributor Author

@ngtuna @sebgoa Hmm, why do you think we should do that? If you don't specify limits you're basically giving functions unlimited resources; changing that could have adverse effects on people's services, IMO.

@sebgoa
Contributor

sebgoa commented Feb 26, 2018

@SomeoneWeird I thought that specifying a default for CPU and/or memory would avoid the situation you faced. If we don't set default limits, a user might try the same thing to test auto-scaling, conclude that it doesn't work, and end up on the same debugging spree you did.

@sebgoa
Contributor

sebgoa commented Feb 26, 2018

@murali-reddy please comment

@SomeoneWeird
Contributor Author

@sebgoa I recommend we also change autoscale create to check whether a CPU limit is already in place when --metric cpu is specified; if there isn't one, autoscale create can fail and tell the user why. I think we should do that in a separate PR.
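
The proposed pre-flight check for autoscale create could look roughly like this. Everything here is illustrative — the type and field names are stand-ins, not the real function CRD:

```go
package main

import (
	"errors"
	"fmt"
)

// FunctionSpec is a stripped-down, hypothetical stand-in for the real
// function object; only the field relevant to the check is modeled.
type FunctionSpec struct {
	CPULimit string // empty means no CPU limit was set at deploy time
}

// validateCPUMetric fails fast when the user asks for a CPU-based HPA
// but the target function was deployed without a CPU limit, as
// suggested in the comment above.
func validateCPUMetric(fn FunctionSpec) error {
	if fn.CPULimit == "" {
		return errors.New("function has no CPU limit; redeploy with --cpu before using --metric cpu")
	}
	return nil
}

func main() {
	fmt.Println(validateCPUMetric(FunctionSpec{}))
	fmt.Println(validateCPUMetric(FunctionSpec{CPULimit: "100m"}))
}
```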

@sebgoa
Contributor

sebgoa commented Feb 26, 2018

Gotcha, yes that sounds good. No need for default limits then, because autoscale create will fail.

Folks might still be confused if they create an HPA by hand, though, but at least we can document it properly.

thanks, let's merge once @murali-reddy gets his eyes on this

@murali-reddy
Contributor

Changes in this PR LGTM. On a separate note, I do think having default values makes sense. What if someone writes a rogue function hogging memory and CPU? Maybe a discussion point for a separate issue/PR.

@SomeoneWeird
Contributor Author

I agree, we should tackle that in a separate issue 👍

@murali-reddy murali-reddy merged commit c981fef into vmware-archive:master Feb 28, 2018
@SomeoneWeird SomeoneWeird deleted the add-cpu-limit-parameter branch February 28, 2018 12:33
murali-reddy pushed a commit that referenced this pull request Mar 12, 2018
* Add CPU limit argument to function commands

* Fix doc typo

* Add better description to --cpu argument
murali-reddy added a commit that referenced this pull request Mar 22, 2018
* CRD API objects for functions, http, kafka, cronjob triggers and
corresponding auto-generated informers, listers, clientset and deep copy
generated functions

* CRD controllers for the function, kafka triggers, cronjob triggers, http
triggers CRD objects.

* new server side binaries kubeless-controller-manager and kafka-controller

* updated CLI to incorporate implicit creation of triggers with
"kubeless function deploy" and move ingress object creation logic
to http trigger controller

* updates to build scripts

* updates to manifests

Co-authored-by: Andres <andres.mgotor@gmail.com>
Co-authored-by: Tuna <ng.tuna@gmail.com>

* Updates to examples used in integration tests
Co-authored-by: Andres <andres.mgotor@gmail.com>
Co-authored-by: Tuna <ng.tuna@gmail.com>

* kafka event consumer with new Go kafka lib dependencies
Co-authored-by: Tuna <ng.tuna@gmail.com>

* update vendor libs
Co-authored-by: Tuna <ng.tuna@gmail.com>

* update docker files
Co-authored-by: Tuna <ng.tuna@gmail.com>
Co-authored-by: Andres <andres.mgotor@gmail.com>

* updates to utils and langruntime

* update .travis.yaml

* delete old controller docker file

* better kafka event consumer logging

* fix travis ci issue related to kafka broker

* keep the old GKE version 1.7.12-gke.1

* Use bitnami-bot docker account

* Fix http request method for the Kafka consumer

* Push Kafka image

* Fix content-type in consumer HTTP message

* Fix GKE version

* Removed duplicated files

* Update runtime docs

* Do proper check for topic update

* skip IT test with trigger update through

* Add CPU limit argument to function commands (#606)

* Add CPU limit argument to function commands

* Fix doc typo

* Add better description to --cpu argument

* fix glide.lock and vendor libs, that got messed up with master merge

* addressing review comments

* update .gitignore with new controller images

* Review

* Review

* use 3-way merge Patch only to update the CRD objects

* nodejs, remove runtime internal data from sandbox, avoid code concat (#633)

* nodejs, remove runtime internal data from sandbox, avoid code concat

* update according to comments

* keep compatibility with module.exports

* adjust kubeless.jsonnet

* Move images to the kubeless repository

* Revert "use 3-way merge Patch only to update the CRD objects"

This reverts commit 358f8cb.

* Avoid reflect.DeepEqual for deployment comparison

* Fix empty body for NodeJS functions

* fix GKE CI failures due to upstream issue kubernetes/kubernetes#60538

* Fix error catching in Python runtime. Remove old files

* Send message as jsons if necessary in kafka-consumer

* Move serviceSpec to function object. Fix service modifications