Incorrect best_ntree_limit (#6615)
Comments
The R binding is consistent with the old Python behavior, see
I agree that we should deprecate `best_ntree_limit`. @trivialfis Do you plan to make another release? Should it be 1.3.2.post0 (hot-fix) or 1.3.3 (another patch release)?
I will make a 1.3.3 release on the Python side. Thinking about how to add a test that can check for the incorrect behavior.
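One possible shape for such a check, as a rough sketch only (the dataset, parameters, and tolerance below are placeholders, not the actual test added later): train a multiclass model with early stopping, then verify that predicting with `ntree_limit=best_ntree_limit` reproduces a model trained for exactly `best_iteration + 1` rounds. That invariant should hold regardless of how the attribute is encoded, and it breaks when `num_class` is counted twice.

```python
import numpy as np
import xgboost as xgb

rng = np.random.RandomState(0)
num_class = 4
X, y = rng.randn(400, 10), rng.randint(0, num_class, size=400)
dtrain = xgb.DMatrix(X[:300], label=y[:300])
dvalid = xgb.DMatrix(X[300:], label=y[300:])
params = {"objective": "multi:softprob", "num_class": num_class, "seed": 0}

booster = xgb.train(
    params, dtrain, num_boost_round=64,
    evals=[(dvalid, "valid")], early_stopping_rounds=2, verbose_eval=False,
)

# Whatever best_ntree_limit encodes, feeding it back into predict() should
# reproduce the model as it stood at the best iteration.  A model trained for
# exactly best_iteration + 1 rounds on the same data with the same seed gives
# that reference, since later rounds do not modify earlier trees.
reference = xgb.train(params, dtrain, num_boost_round=booster.best_iteration + 1)
np.testing.assert_allclose(
    booster.predict(dtrain, ntree_limit=booster.best_ntree_limit),
    reference.predict(dtrain),
    rtol=1e-6,
)
```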
PR is in #6616. Will backport.
The fix #6579 that went into 1.3.2 was actually wrong. Here's a brief summary:
The old (pre-fix) `best_ntree_limit` ignored the `num_class` parameter, which is incorrect. Previously we worked around this in the C++ layer to avoid possible breaking changes in other language bindings, but the Python interpretation stayed incorrect. The PR fixed that in Python to take `num_class` into account, but didn't remove the old workaround, so the tree calculation in the predictor is now incorrect; see `PredictBatch` in `CPUPredictor`.

Proposal: Revert the fix for now, and deprecate the parameter in the next release.
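To illustrate the double counting (the numbers are hypothetical, `num_parallel_tree` is assumed to be 1, and the C++ workaround is assumed to effectively scale the incoming limit by the number of output groups; this is only a sketch of the effect, not the exact code in either layer):

```python
num_class = 3
best_iteration = 9  # say early stopping picked the 10th round

# Old Python behaviour: the attribute counted boosting rounds only.
old_limit = best_iteration + 1                # 10

# After #6579: Python also multiplies by num_class.
new_limit = (best_iteration + 1) * num_class  # 30

# The C++ workaround (PredictBatch in CPUPredictor) still scales the incoming
# limit by the number of output groups, so the effective tree count becomes:
trees_old = old_limit * num_class             # 30 -> matches the best iteration
trees_new = new_limit * num_class             # 90 -> num_class counted twice
```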
cc @pseudotensor @hcho3