
Set reg_lambda=0 for scikit-learn-like random forest classes. #4558

Merged: 2 commits into dmlc:master from fea-ext-rf-lambda, Jun 21, 2019

Conversation

@canonizer (Contributor)

No description provided.

@RAMitchell (Member)

These are the experiments that led to this change, soon to be published in a blog post. An L2 penalty only increases bias in random forest models, whereas boosting models can compensate for this bias by running more boosting rounds.
[Figures: bias-variance decomposition vs. lambda (L2 penalty), one panel for gradient boosting and one for random forest]
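
A toy squared-error example (my gloss, not from the thread) of why the lambda shrinkage persists as bias in a forest but is recovered by boosting. All values here are illustrative:

```python
# For loss 0.5*(y - f)^2: gradient G = f - y, hessian H = 1,
# and the leaf weight is -G / (H + lambda).
y, H, lam = 1.0, 1.0, 0.5

# Random forest: every tree starts from the same base prediction f = 0,
# so each predicts the same shrunken weight, and averaging the trees
# keeps the shrinkage as bias.
rf_pred = (y - 0.0) / (H + lam)   # 0.667 instead of 1.0

# Boosting: each round fits the residual of the previous rounds, so the
# shrinkage decays geometrically and the bias is recovered.
f = 0.0
for _ in range(20):
    G = f - y                     # gradient at the current prediction
    f += -G / (H + lam)           # add the shrunken correction
print(rf_pred, f)                 # 0.667 vs ~1.0
```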

I think we should set lambda to 1e-5 instead of 0 to prevent numerical instability in the leaf weight calculation -G/(H + lambda) as the hessian tends to 0.
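
A minimal sketch (mine, not from the PR) of the instability being guarded against; the function name and the numbers are illustrative:

```python
def leaf_weight(G, H, reg_lambda):
    """XGBoost-style leaf weight for one leaf: w = -G / (H + lambda)."""
    return -G / (H + reg_lambda)

G, H = 0.1, 1e-12                           # hessian tending to 0
print(leaf_weight(G, H, reg_lambda=0.0))    # ~ -1e11: weight blows up
print(leaf_weight(G, H, reg_lambda=1e-5))   # ~ -1e4: magnitude capped near |G|/lambda
```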

@canonizer (Contributor, Author)

Done.
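
For reference, a minimal usage sketch of the scikit-learn-like random forest classes this PR adjusts; the synthetic dataset is illustrative, and the comment reflects the 1e-5 default agreed above:

```python
import xgboost as xgb
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=1000, random_state=0)

# Random-forest-style estimator from xgboost's sklearn wrapper.
# After this PR its reg_lambda default is a small positive value
# (1e-5) rather than the boosting default of 1.
clf = xgb.XGBRFClassifier(n_estimators=100)
clf.fit(X, y)
print(clf.score(X, y))
```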

RAMitchell merged commit 9fa29ad into dmlc:master on Jun 21, 2019
mtjrider deleted the fea-ext-rf-lambda branch on Aug 19, 2019
lock bot locked the conversation as resolved and limited it to collaborators on Nov 17, 2019