support init mode for metric model #278
Conversation
…async until the data is accumulating enough
/LGTM
Some nits.
	initMode = predictionconfig.ModelInitMode(initModeStr)
}

historyLength, exists := config["cpu-model-history-length"]
How about moving the constants to a common file, like EVPACaller-%s-%s, cpu-model-history-length, 24h, etc.?
Yes, we can do that later.
@@ -166,9 +184,22 @@ func getMemConfig(props map[string]string) *predictionconfig.Config {
	marginFraction = "0.15"
}

initModeStr, exists := props["mem-model-init-mode"]
This memory part is almost the same as the cpu one; how about extracting a shared function?
}

historyLength, exists := props["mem-model-history-length"]
if !exists {
Why are the history lengths of cpu and memory different? cpu is 24h but memory is 48h.
These are just empirical values borrowed from VPA. Memory is an incompressible resource, so using longer history data is safer and more robust. CPU is a compressible resource, and it generally follows a daily cycle driven by user traffic.
Got it.
type ModelInitMode string

const (
	// means recover or initialize the algorithm model directly from the history datasource; this process may block because data fetching & model generation are time consuming
Can we make clear which mode is the default?
This param is specified by the caller or user; if it is not specified, the default is history mode, which keeps the original logic.
var initError error
switch c.initMode {
case config.ModelInitModeLazyTraining:
Lazy training mode does not initialize from history?
Yes, it means accumulating data from the real-time datasource; once the accumulated data is long enough, the model can predict. I have not thought of a better name.
…async until the data is accumulating enough
What type of PR is this?
Feature
What this PR does / why we need it:
Support initializing the percentile model in different modes. By default it always uses history mode, keeping the original logic.
For recommendation, it is a one-off task using the original algorithm, so it is not impacted.
For tsp, it has its own params and caller, so it is not impacted.
For evpa, use lazy-training mode: defer prediction until enough data has accumulated to fill the window. This makes the prediction more robust and safe.
Which issue(s) this PR fixes:
Fixes #212
Special notes for your reviewer: