Early stopping works incorrectly with the MAP metric. In my test case it stopped training while the validation set's MAP was still going up, which is the opposite of what it should do.

code:

params = {
    'boosting_type': 'gbdt',
    'objective': 'lambdarank',
    'metric': ['map'],
    'eval_at': [3],
    'num_leaves': 128 - 1,
    'learning_rate': 0.025,
    'feature_fraction': 0.6,
    'bagging_fraction': 0.8,
    'bagging_freq': 1,
    'verbose': 0
}
lgb_train = lgb.Dataset(X_train, y_train)
lgb_eval = lgb.Dataset(X_test, y_test)
lgb_train.set_group(train_group)
lgb_eval.set_group(test_group)
gbm = lgb.train(params,
                lgb_train,
                num_boost_round=1000,
                valid_sets=[lgb_train, lgb_eval],  # also eval training data
                feature_name=features_to_use,
                early_stopping_rounds=40)

output:

[1] training's map@3: 0.13987 valid_1's map@3: 0.139808
Train until valid scores didn't improve in 40 rounds.
[2] training's map@3: 0.143765 valid_1's map@3: 0.14367
[3] training's map@3: 0.144295 valid_1's map@3: 0.144137
[4] training's map@3: 0.144681 valid_1's map@3: 0.144817
[5] training's map@3: 0.144765 valid_1's map@3: 0.144774
[6] training's map@3: 0.144763 valid_1's map@3: 0.144848
[7] training's map@3: 0.145091 valid_1's map@3: 0.145071
[8] training's map@3: 0.145347 valid_1's map@3: 0.145071
[9] training's map@3: 0.145396 valid_1's map@3: 0.145443
[10] training's map@3: 0.145482 valid_1's map@3: 0.145347
[11] training's map@3: 0.145546 valid_1's map@3: 0.145379
[12] training's map@3: 0.145445 valid_1's map@3: 0.145358
[13] training's map@3: 0.145471 valid_1's map@3: 0.145421
[14] training's map@3: 0.14545 valid_1's map@3: 0.145379
[15] training's map@3: 0.145576 valid_1's map@3: 0.1454
[16] training's map@3: 0.145578 valid_1's map@3: 0.145485
[17] training's map@3: 0.145572 valid_1's map@3: 0.145528
[18] training's map@3: 0.1457 valid_1's map@3: 0.145538
[19] training's map@3: 0.14572 valid_1's map@3: 0.145602
[20] training's map@3: 0.145685 valid_1's map@3: 0.145528
[21] training's map@3: 0.14578 valid_1's map@3: 0.145687
[22] training's map@3: 0.145842 valid_1's map@3: 0.145559
[23] training's map@3: 0.145876 valid_1's map@3: 0.145475
[24] training's map@3: 0.145861 valid_1's map@3: 0.145485
[25] training's map@3: 0.145945 valid_1's map@3: 0.145634
[26] training's map@3: 0.146001 valid_1's map@3: 0.14575
[27] training's map@3: 0.145967 valid_1's map@3: 0.145708
[28] training's map@3: 0.145954 valid_1's map@3: 0.145687
[29] training's map@3: 0.145949 valid_1's map@3: 0.145697
[30] training's map@3: 0.145977 valid_1's map@3: 0.145825
[31] training's map@3: 0.146001 valid_1's map@3: 0.145697
[32] training's map@3: 0.14602 valid_1's map@3: 0.145666
[33] training's map@3: 0.146119 valid_1's map@3: 0.145676
[34] training's map@3: 0.146112 valid_1's map@3: 0.14575
[35] training's map@3: 0.146083 valid_1's map@3: 0.145719
[36] training's map@3: 0.146143 valid_1's map@3: 0.145761
[37] training's map@3: 0.146134 valid_1's map@3: 0.14574
[38] training's map@3: 0.146183 valid_1's map@3: 0.145676
[39] training's map@3: 0.146147 valid_1's map@3: 0.145623
[40] training's map@3: 0.146186 valid_1's map@3: 0.145644
[41] training's map@3: 0.146207 valid_1's map@3: 0.145761
Early stopping, best iteration is:
[1] training's map@3: 0.13987 valid_1's map@3: 0.139808

This looks like a bug: both the training and validation MAP were still improving when training was stopped, yet the reported best iteration is the very first one.
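For anyone hitting this on an affected version, a possible workaround (a sketch, not an official recipe) is to disable the built-in metric and drive early stopping with a custom eval function, because the early-stopping logic takes the comparison direction from the is_higher_better flag that the function returns. The MAP@3 below is a simplified binary-relevance version and assumes Dataset.get_group() returns per-query group sizes:

import numpy as np

def map_at_3(preds, dataset):
    # Simplified MAP@3 with binary relevance, computed per query group.
    # The third element of the returned tuple (is_higher_better=True) tells
    # the early-stopping callback to maximize this metric.
    labels = np.asarray(dataset.get_label())
    sizes = np.asarray(dataset.get_group(), dtype=int)  # assumed: per-query sizes
    ap_sum, start = 0.0, 0
    for size in sizes:
        y = labels[start:start + size]
        p = preds[start:start + size]
        start += size
        top = np.argsort(-p)[:3]                             # indices of the top-3 scores
        rel = (y[top] > 0).astype(float)                     # binary relevance of the top-3
        prec = np.cumsum(rel) / np.arange(1, len(rel) + 1)   # precision@1..3
        denom = min(int((y > 0).sum()), 3)
        if denom > 0:
            ap_sum += float((prec * rel).sum()) / denom
    return 'map@3', ap_sum / len(sizes), True

params_custom = dict(params, metric='None')  # disable the built-in metric with the wrong direction
gbm = lgb.train(params_custom,
                lgb_train,
                num_boost_round=1000,
                valid_sets=[lgb_train, lgb_eval],
                feval=map_at_3,
                early_stopping_rounds=40)

On versions that include the fix, the built-in 'metric': ['map'] behaves as expected and no custom function is needed.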
@wxchan It seems we are missing map in https://github.com/Microsoft/LightGBM/blob/master/python-package/lightgbm/basic.py#L1807. I just added it. Can you help check the other metrics?
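For context, the Python package decides the comparison direction for each built-in metric from its name. A simplified sketch of that kind of check (not the exact LightGBM source) shows why a missing entry makes early stopping minimize MAP, so that no later round beats round 1 and training stops with iteration 1 reported as best:

# Simplified sketch of the direction check, not the exact LightGBM source.
# If 'map@' were missing from the higher-is-better prefixes, a lower MAP would
# count as an improvement, so the best iteration would stay at round 1.
HIGHER_IS_BETTER_PREFIXES = ('auc', 'ndcg@', 'map@')

def is_higher_better(metric_name):
    return metric_name.startswith(HIGHER_IS_BETTER_PREFIXES)

def improved(metric_name, current, best_so_far):
    if is_higher_better(metric_name):
        return current > best_so_far
    return current < best_so_far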
fix #778 (commits b9ebfb0, 6956a52)