Add tuning by stochastic search #37
Would it be worth writing up the design for tuning strategies? However, shouldn't the best strategy (as per tuning) be queryable via a "fitted model" interface which has these as parameters? E.g., in the same (interface) way as you would query the coefficients and CIs of a linear model.
Good point. The "best strategy" is indeed implemented via a "fitted model" interface, as shown in the README.md. If a user wants details about a fit-result (e.g., the coefficients of a linear model), then they would seek these in the fitted model.
As to writing up the design for the genetic algorithm: good idea. Do we have any volunteers?
Okay, I'm planning to implement this soon. Below is the doc-string for the implementation I am proposing. (Yes, it could be more user-friendly.) Feedback on the proposal very welcome. I plan to start this early next week (16/17 March). Context:
Proposed doc-string for random search tuning strategy
Instantiate a random search tuning strategy for searching over the specified hyperparameter ranges. Supported ranges:
Example range 1:
Example range 2:
Note: All the AlgorithmModels for evaluation are generated by sampling each range. See also
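The scheme sketched in the doc-string (generate each candidate model by sampling every hyperparameter range independently) can be illustrated generically. The snippet below is a hypothetical Python sketch of that idea only, not the proposed Julia implementation; the names `NumericRange`, `NominalRange`, `sampler`, and `random_search` are all assumptions made for this illustration.

```python
import random

class NumericRange:
    """Hypothetical bounded numeric range with a pluggable sampler."""
    def __init__(self, lower, upper, sampler=None):
        self.lower, self.upper = lower, upper
        # Default sampler: uniform over [lower, upper].
        self.sampler = sampler or (lambda: random.uniform(lower, upper))

class NominalRange:
    """Hypothetical finite set of nominal values, sampled uniformly."""
    def __init__(self, values):
        self.values = list(values)
    def sample(self):
        return random.choice(self.values)

def random_search(ranges, n):
    """Draw n candidate hyperparameter dicts, sampling each range independently."""
    candidates = []
    for _ in range(n):
        point = {}
        for name, r in ranges.items():
            point[name] = r.sampler() if isinstance(r, NumericRange) else r.sample()
        candidates.append(point)
    return candidates
```

In a tuning loop, each returned dict would be used to clone and mutate a model before evaluation, with the best-performing candidate reported via the fitted-model interface discussed above.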
This sounds fantastic. I'm a bit confused by your example: shouldn't the third one be a bounded distribution? Or do you automatically truncate over the range? I also wonder a few things:
Apologies if these questions are poorly formulated, and great work as always!
@tlienart Thanks for that!
Yes,
So, instead of passing
Also sounds like a nice idea, but a non-trivial API complication. What would the interface for passing context to a sampler look like?
I'm not sure I understand the proposal. Are you suggesting that some hyperparameters be sampled less often (i.e., kept fixed while others change)? Can you explain a situation where this might be beneficial? (Assuming here that we are not leaving the realm of ordinary random sampling, which does not consider the history of previous evaluations.) What do you mean by "pass a number of samples"? Or do you mean samplers? Could you give me a little more detail?
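On the truncation question raised earlier in the thread: one simple way to restrict an unbounded sampler to a bounded range is rejection sampling, i.e., redrawing until the sample lands inside the range. This is an illustrative Python sketch of that idea only, not MLJ code; the `truncated` helper and its signature are assumptions.

```python
import random

def truncated(sampler, lower, upper, max_tries=10_000):
    """Redraw from `sampler` until the draw lands in [lower, upper]
    (rejection sampling). Inefficient if little probability mass
    falls inside the range, hence the max_tries guard."""
    for _ in range(max_tries):
        x = sampler()
        if lower <= x <= upper:
            return x
    raise RuntimeError("range carries too little probability mass")

# Example: a Gaussian sampler restricted to [0, 1].
draw = lambda: truncated(lambda: random.gauss(0.5, 2.0), 0.0, 1.0)
```

A library implementation would more likely rescale or use a purpose-built truncated distribution rather than reject, but the user-facing behavior (samples always respect the range bounds) is the same.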
We have a `Grid` tuning strategy but should add a stochastic tuning strategy `Stochastic <: TuningStrategy` with a corresponding `fit` method for `TunedModel{Stochastic, <:Model}`. The implementer should acquaint themselves with the nested parameter API (see [src/parameters.jl] and [test/parameters.jl]). To this end, I suggest first giving the `iterator(::NumericRange, resolution)` and `iterator(::NominalRange)` methods stochastic versions, perhaps by adding a keyword argument `stochastic=true`.
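The suggested stochastic variants of the range iterators could look roughly like the following. This is a Python sketch of the idea only (the actual methods would be Julia's `iterator(::NumericRange, resolution)` and friends); the function name, arguments, and `stochastic` keyword here are assumptions mirroring the proposal.

```python
import random

def iterator(lower, upper, resolution, stochastic=False):
    """Deterministic version: an evenly spaced grid of `resolution`
    points over [lower, upper], as used by grid search.
    Stochastic version: `resolution` independent uniform draws
    from the same interval, as used by random search."""
    if stochastic:
        return [random.uniform(lower, upper) for _ in range(resolution)]
    step = (upper - lower) / (resolution - 1)
    return [lower + i * step for i in range(resolution)]
```

Keeping both behaviors behind one keyword would let `TunedModel` reuse the existing range machinery, with only the iteration scheme differing between the `Grid` and `Stochastic` strategies.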