[Python] Refactors scikit-learn API to allow a list of evaluation metrics #3254
Conversation
@StrikerRUS can you help review this PR?
@gramirezespinoza Thank you very much for your PR which proposes great API simplification! Please consider addressing some initial review comments below.
@StrikerRUS thanks for your comments! I've incorporated all your recommendations into the code. Also, please see my comment above about the unit test I had to modify. I also fixed a couple of linting errors. Nevertheless, I don't know why this action is failing:
Any idea?
@gramirezespinoza you can try rebasing onto the master branch; also cc @jameslamb.
oh sorry! I didn't see this comment since I wasn't a reviewer. @gramirezespinoza there's no way that failure is related to your code. If you can, please merge in the changes from
(force-pushed from d8acfb4 to 4fe33f6)
Rebased onto the latest master, thanks @guolinke and @jameslamb
The CI build failed. I had a look and the logs show this error:
Any idea why this is failing?
@gramirezespinoza
Sorry for the inconvenience! This test often fails due to network issues. I re-ran it.
@StrikerRUS no worries, thanks for re-running it!
@gramirezespinoza Thank you very much for addressing my previous comments! Looks good overall. Please consider addressing some new issues from one more review round below.
…rameter eval_metric of the class (and subclasses of) LGBMModel. Also adds unit tests for this functionality
…metrics to eval_metric parameter
…the "train" and "cv" functions can also receive a list of callables
Apply suggestions from code review Co-authored-by: Nikita Titov <nekit94-08@mail.ru>
…ions of scikit-learn Apply suggestions from code review Co-authored-by: Nikita Titov <nekit94-08@mail.ru>
Co-authored-by: Nikita Titov <nekit94-08@mail.ru>
(force-pushed from 24f9b83 to 73ec26e)
@StrikerRUS thanks again for the comments. I've implemented the requested changes and rebased onto the latest master.
@gramirezespinoza Many thanks for this PR! I believe it will simplify user experience significantly.
This pull request has been automatically locked since there has not been any recent activity since it was closed. To start a new related discussion, open a new issue at https://github.com/microsoft/LightGBM/issues including a reference to this.
This PR refactors the code in LGBM's scikit-learn API to allow the user to pass a list of evaluation metrics to the parameter eval_metric in the fit method of LGBMModel and its subclasses. The snippet below shows an example of the possible uses of this functionality:
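The original snippet did not survive extraction. As a rough sketch of the usage this PR enables (the model and variable names in the comments are hypothetical; only the custom-metric helper below is executable on its own), eval_metric could mix built-in metric names and custom callables:

```python
# Sketch of the custom eval metric convention used by LightGBM's
# scikit-learn API: a callable that returns (name, value, is_higher_better).
def custom_mae(y_true, y_pred):
    """Hypothetical custom metric: mean absolute error."""
    mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)
    # False: lower is better for this metric
    return "custom_mae", mae, False

# With this PR, a mixed list of built-in names (strings) and callables
# can be passed to fit() — illustrative only, model/X/y are hypothetical:
#   model.fit(X_train, y_train,
#             eval_set=[(X_val, y_val)],
#             eval_metric=["l2", custom_mae])

print(custom_mae([1.0, 2.0], [1.5, 2.5]))  # ('custom_mae', 0.5, False)
```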
where eval_metric receives a list that can combine custom and built-in evaluation functions. The output looks like this:

The user can monitor as many metrics as needed during training.
@StrikerRUS rightly mentioned in PR 3165 and in issue 2182 that it is possible to monitor multiple metrics already. This PR leverages that functionality to make it a bit more intuitive.
This PR is backwards compatible.