Add cBO (1D Problem) #76
Conversation
A few comments:
- I noticed that you use `train_x` in standard BO and `train_x_norm` in cost BO. Is there a specific reason for that? It leads to different results at the first iteration for the two algorithms; for a proper comparison, one should probably use the same inputs.
- Multiple comments and docstrings refer to the multi-fidelity model, e.g.
      def cost_acqfun(models, train_data, Data, cost_params, params, ieval, isnorm=False):
          """
          cost based acquisition function- EI based
          Args:
              models: Multi-fidelity and low-fidelity krigging model
But I don't think we are using a multi-fidelity model here.
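For context on the function quoted above, here is a minimal, self-contained sketch of a cost-weighted, EI-based acquisition function; the names (`expected_improvement`, `cost_weighted_ei`, `alpha`) and the exact form of the cost penalty are illustrative assumptions, not the notebook's actual `cost_acqfun`.

```python
# Illustrative sketch only: EI per unit cost, not the notebook's cost_acqfun.
import jax.numpy as jnp
from jax.scipy.stats import norm


def expected_improvement(mu, sigma, y_best, xi=0.01):
    """Standard EI (maximization) from a GP posterior mean and std."""
    imp = mu - y_best - xi
    z = imp / sigma
    return imp * norm.cdf(z) + sigma * norm.pdf(z)


def cost_weighted_ei(mu, sigma, y_best, cost, alpha=1.0):
    """EI divided by cost**alpha: penalizes points that are expensive to evaluate."""
    return expected_improvement(mu, sigma, y_best) / jnp.power(cost, alpha)


# Usage: mu and sigma come from a single GP posterior evaluated on the same
# (normalized) candidate grid used for standard BO, and cost holds the predicted
# evaluation cost at those points, e.g.:
# next_idx = jnp.argmax(cost_weighted_ei(mu, sigma, y_best, cost))
```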
Modified with the requested changes
- Modified the analyses
- Fixed the docstring
Thanks! Question about the structured BO part: you use a piecewise mean function with a "breaking point" t. However, the prior over t is a uniform distribution between 10 and 15, which is way outside of the train_x_norm range. Seems like this needs to be fixed. By the way, is the piecewise function really needed here?
The piecewise function is just a demonstration to showcase how it works even with some incorrect prior information (and it still worked!). On the prior distribution: the normalized values are converted back to real values, so the uniform distribution is defined over the real (non-normalized) parameter space [0, 15]. Please see the jitted piecewise mean function below.
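A minimal sketch of a jitted piecewise mean function of this kind, with the prior on the breaking point t uniform over the real (non-normalized) range [0, 15]. It assumes the GPax pattern of passing a `mean_fn` plus a `mean_fn_prior` (returning a dict of NumPyro samples) to the GP; the parameter names and the two linear branches are illustrative, not the notebook's exact code.

```python
# Illustrative sketch, not the notebook's exact code.
import jax.numpy as jnp
from jax import jit
import numpyro
import numpyro.distributions as dist


@jit
def piecewise_mean(x, params):
    # Two linear branches joined at the "breaking point" t (real, non-normalized space).
    xf = x.squeeze()
    return jnp.where(
        xf < params["t"],
        params["a1"] * xf,
        params["a2"] * xf + params["b"],
    )


def piecewise_mean_prior():
    # Priors over the mean-function parameters; t is uniform over the real range [0, 15].
    return {
        "t": numpyro.sample("t", dist.Uniform(0.0, 15.0)),
        "a1": numpyro.sample("a1", dist.Normal(0.0, 1.0)),
        "a2": numpyro.sample("a2", dist.Normal(0.0, 1.0)),
        "b": numpyro.sample("b", dist.Normal(0.0, 1.0)),
    }


# These would then be passed to the structured GP, e.g. (hypothetically):
# gp = gpax.ExactGP(1, kernel="Matern", mean_fn=piecewise_mean, mean_fn_prior=piecewise_mean_prior)
```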
Looks good. I will move it to examples/contrib, where I hope to gather examples of GPax applications for various "real-world" problems from different domains.
Codecov Report: All modified and coverable lines are covered by tests ✅

@@            Coverage Diff             @@
##             main      #76      +/-   ##
==========================================
+ Coverage   95.59%   95.62%   +0.02%
==========================================
  Files          51       51
  Lines        3930     3930
==========================================
+ Hits         3757     3758       +1
+ Misses        173      172       -1

☔ View full report in Codecov by Sentry.
Yes, definitely. I am advising Shakti (Sergei's postdoc) to apply cBO to a nanoindentation problem; we can use that notebook as a real-world example.
Added a notebook in the examples folder for the 1D version. The 2D version is larger than 25 MB, so it cannot be uploaded. Please review. Thank you.