
Incorrect best_ntree_limit. #6615

Closed
trivialfis opened this issue Jan 18, 2021 · 4 comments · Fixed by #6616

Comments

@trivialfis
Member

The fix #6579 got in 1.3.2 was actually wrong. Here's a brief summary:

The old (pre-fix) best_ntree_limit ignored the num_class parameter, which is incorrect. Previously we worked around this in the C++ layer to avoid possible breaking changes in other language bindings, but the Python interpretation remained incorrect. The PR fixed the Python side to take num_class into account, but didn't remove the old C++ workaround, so the tree count used by the predictor is now wrong; see PredictBatch in CPUPredictor.
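To illustrate the inconsistency, here is a minimal sketch of the two interpretations in plain Python (the variable names and example values are illustrative, not taken from the xgboost codebase):

```python
# A multi-class boosted model grows one group of trees per class per round.
num_class = 3          # number of classes (the num_class parameter)
num_parallel_tree = 1  # trees grown per class per round (e.g. random forest mode)
best_iteration = 4     # 0-based index of the best round from early stopping

# Old (pre-fix) Python interpretation: ignores num_class.
old_best_ntree_limit = (best_iteration + 1) * num_parallel_tree

# Corrected interpretation: counts all trees up to the best round,
# including the per-class tree groups.
new_best_ntree_limit = (best_iteration + 1) * num_parallel_tree * num_class

print(old_best_ntree_limit)  # 5
print(new_best_ntree_limit)  # 15
```

Because the C++ predictor still applied its own num_class workaround, feeding it the corrected (larger) value made it over-scale the tree count.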

Proposal: revert the fix for now, and deprecate the parameter in the next release.

cc @pseudotensor @hcho3

@trivialfis
Member Author

The R binding is consistent with the old Python behavior; see callbacks.R.

@hcho3
Collaborator

hcho3 commented Jan 18, 2021

I agree that we should deprecate the ntree_limit argument and encourage the use of model slicing instead. For now, let's revert the change to minimize the risk of breaking the release.

@trivialfis Do you plan to make another release? Should it be 1.3.2.post0 (hot-fix) or 1.3.3 (another patch release)?

@trivialfis
Member Author

I will make a 1.3.3 release on the Python side. I'm thinking about how to add a test that can check for the incorrect behavior.
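One shape such a test could take, sketched here with a toy additive model rather than a real booster (the function and values are illustrative): predicting with a tree limit of k must match the prediction of a model that only contains the first k trees.

```python
# Toy additive ensemble: each tree contributes a fixed amount to the score.
tree_outputs = [1.0, -2.0, 0.5, 3.0]

def predict(trees, ntree_limit=0):
    """Sum tree contributions; ntree_limit=0 means use all trees."""
    limit = ntree_limit if ntree_limit else len(trees)
    return sum(trees[:limit])

# Invariant the regression test would check: limiting the prediction
# is equivalent to truncating the model itself.
limited = predict(tree_outputs, ntree_limit=2)
truncated = predict(tree_outputs[:2])
print(limited == truncated)  # True
```

Against a real multi-class booster, the same invariant (with the num_class multiplier applied) would have caught the mismatch between the Python-side value and the C++ predictor.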

@trivialfis
Member Author

PR in #6616. Will backport.
