Vasilis/autoinference #307
Conversation
… but rather create auxiliary numpy arrays that store the numerator and denominator of every node. This enables consistent feature_importance calculation and potentially more accurate shap_values calculation.
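A minimal sketch of the idea, assuming a DML-style residual-on-residual moment; the array names and example values are illustrative, not the actual internals:

```python
import numpy as np

# Auxiliary per-node sufficient statistics (illustrative values): in a
# DML-style forest the numerator could be sum(T_res * Y_res) and the
# denominator sum(T_res ** 2) over the samples falling in each node.
node_numerator = np.array([4.2, 1.1, 3.1])
node_denominator = np.array([2.0, 0.9, 1.1])

# Guard against empty nodes, then form each node's estimate as the ratio.
safe_den = np.where(node_denominator == 0, 1.0, node_denominator)
node_value = node_numerator / safe_den
```

Keeping both statistics, rather than only their ratio, is what lets quantities like feature importances and SHAP values be recomputed consistently downstream.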
…ure_importances_. Added tests in test_drlearner and test_dml that the feature_importances_ API works.
Co-authored-by: Keith Battocchi <kebatt@microsoft.com>
…onML into vasilis/feature_importances
…ree level was causing trouble: with honesty and sample splitting, feature_importance can often be negative (an increase in variance). Now averaging the un-normalized feature importance instead. There is still a small caveat in how the current version uses impurity; added that as a TODO.
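A sketch of the averaging fix, with a hypothetical helper name; the per-tree raw impurity decreases are assumed to be given:

```python
import numpy as np

def forest_feature_importances(per_tree_importances):
    """Average the raw (un-normalized) impurity decreases across trees.

    With honesty and sample splitting, a single tree's decrease can be
    negative, so normalizing each tree before averaging distorts the
    ensemble estimate; normalize only once at the end.
    """
    avg = np.mean(per_tree_importances, axis=0)  # raw, may contain negatives
    total = avg.sum()
    return avg / total if total > 0 else avg
```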
…orest, which now makes feature_importances_ exactly correct, with no need to re-implement the method. Impurities are now computed on the estimation sample, replacing the pre-calculated node impurities.
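A sketch of the weighted-impurity-decrease computation this relies on, where the impurity array can be the one recomputed on the estimation sample rather than the training-time `tree_.impurity` (generic sklearn tree internals, not the PR's code):

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def impurity_decrease_importances(tree_, impurity):
    """Total weighted impurity decrease per feature for one fitted tree.

    `tree_` is a fitted sklearn Tree object; `impurity` is any per-node
    impurity array, e.g. one recomputed on the estimation sample.
    """
    importances = np.zeros(tree_.n_features)
    for node in range(tree_.node_count):
        left, right = tree_.children_left[node], tree_.children_right[node]
        if left == -1:  # leaf node, no split to credit
            continue
        n = tree_.weighted_n_node_samples[node]
        n_l = tree_.weighted_n_node_samples[left]
        n_r = tree_.weighted_n_node_samples[right]
        decrease = n * impurity[node] - n_l * impurity[left] - n_r * impurity[right]
        importances[tree_.feature[node]] += decrease
    return importances

# Sanity check: passing the training impurities reproduces sklearn's own
# normalized feature_importances_.
np.random.seed(0)
X, y = np.random.normal(size=(200, 4)), np.random.normal(size=200)
tree = DecisionTreeRegressor(max_depth=3).fit(X, y)
raw = impurity_decrease_importances(tree.tree_, tree.tree_.impurity)
assert np.allclose(raw / raw.sum(), tree.feature_importances_)
```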
…rallel_add_trees_ of ensemble.py. This leads to a 6-fold speed-up, since we were previously doing many slicing operations on sparse matrices, which are very slow!
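A generic illustration of the kind of change described (not the actual _parallel_add_trees_ code): repeatedly fancy-indexing a scipy CSR matrix inside a loop is far slower than slicing a dense array, so converting once up front pays off:

```python
import numpy as np
from scipy import sparse

rng = np.random.RandomState(0)
X = sparse.random(50000, 20, density=0.01, format='csr', random_state=rng)
row_groups = [rng.choice(X.shape[0], 500, replace=False) for _ in range(200)]

# Slow pattern: one sparse fancy-index per group inside the loop.
sums_slow = [np.asarray(X[rows].sum(axis=0)).ravel() for rows in row_groups]

# Faster pattern: materialize (or otherwise precompute) once, slice ndarrays.
X_dense = np.asarray(X.todense())
sums_fast = [X_dense[rows].sum(axis=0) for rows in row_groups]

assert np.allclose(sums_slow, sums_fast)
```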
…ear model final inference for DMLCateEstimator, to allow for RLM usage
Love the idea of enabling inference by default when it's efficient. I don't see which changes enable arbitrary linear models in this set of commits - what am I missing?
…ept in summary(). Fixed the failing test_cate_interpreter by explicitly passing inference=None.
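A hedged sketch of what this looks like from the user's side; the estimator name and signatures are as of this era of the library and should be treated as illustrative:

```python
import numpy as np
from sklearn.linear_model import LassoCV
from econml.dml import LinearDMLCateEstimator

np.random.seed(0)
X = np.random.normal(size=(500, 3))
W = np.random.normal(size=(500, 2))
T = X[:, 0] + np.random.normal(size=500)
y = 2.0 * T + np.random.normal(size=500)

est = LinearDMLCateEstimator(model_y=LassoCV(), model_t=LassoCV())
est.fit(y, T, X=X, W=W)  # statsmodels-style inference on by default
print(est.summary())     # coefficient table with standard errors

# Opting out explicitly, as in the failing test_cate_interpreter case:
est.fit(y, T, X=X, W=W, inference=None)
```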
Mostly fine, except for a weird whitespace issue.
LGTM
Co-authored-by: Keith Battocchi <kebatt@microsoft.com>
Co-authored-by: Keith Battocchi <kebatt@microsoft.com>
Co-authored-by: Keith Battocchi <kebatt@microsoft.com>
…nto vasilis/autoinference
Looks good, thanks!
This enables the following use case:
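The snippet that followed is not shown; based on the surrounding discussion, the enabled pattern is presumably along these lines (a hedged sketch, not the author's exact example):

```python
import numpy as np
from sklearn.linear_model import LassoCV
from econml.dml import LinearDMLCateEstimator

np.random.seed(0)
X = np.random.normal(size=(500, 3))
T = X[:, 0] + np.random.normal(size=500)
y = 2.0 * T * (1 + X[:, 0]) + np.random.normal(size=500)

# No inference argument needed: because the final model is linear,
# confidence intervals are available by default after fit.
est = LinearDMLCateEstimator(model_y=LassoCV(), model_t=LassoCV())
est.fit(y, T, X=X)
lb, ub = est.effect_interval(X, alpha=0.05)
```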