
Rework learning to rank. #8822

Merged
merged 9 commits into from
Jun 9, 2023

Conversation

@trivialfis (Member) commented Feb 17, 2023

Early PR; I'd like to get some comments from the community, as I'm also new to LTR.

  • Rework the lambdamart implementation for all three objectives.
  • Default to NDCG.
  • Try to implement the position debiasing described in Unbiased Learning to Rank (see #6143).
  • Make GPU computation deterministic for both metric and objective.
  • Add support for truncation.
  • Check the input label.
  • Fix ranking metrics with weights.
  • Add documentation for all objectives.
  • Support sklearn cross-validation.
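For readers new to the metric, the truncated NDCG that the PR defaults to can be sketched in a few lines of plain Python. This is an illustrative sketch using the common exponential gain 2^rel − 1 from the LambdaMART literature, not the PR's actual code:

```python
import math

def dcg_at_k(relevances, k):
    """Discounted cumulative gain: exponential gain, log2 position discount."""
    return sum((2 ** rel - 1) / math.log2(i + 2)
               for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(relevances, k):
    """NDCG truncated at k: DCG normalized by the ideal (sorted) DCG."""
    ideal = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal if ideal > 0 else 0.0

# Documents listed in predicted order; labels are graded relevance (0-4).
print(ndcg_at_k([3, 2, 3, 0, 1, 2], k=4))
```

Truncation means only the top-k positions contribute, which is what exposing the `@n` suffix for the objective amounts to.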

Debiasing

I'm new to learning to rank in general. Using simulated clicks seems to be common practice in unbiased LTR development. I tried to reproduce the click simulation described in the paper and shown in https://github.com/ULTR-Community/ULTRA.git, which uses svmrank; the version in the demo simply replaces svmrank with xgboost. I haven't looked closely into the ULTRA repository, so my simulation code might be incorrect.
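For context, click simulations of this kind usually follow a position-based model: a click is generated only when the user both examines a position and judges the document relevant, with examination probability decaying down the ranking. A toy sketch with hypothetical parameters, not the demo's code:

```python
import random

def simulate_clicks(relevance, eta=1.0, seed=0):
    """Position-based click model (toy version).

    relevance: graded labels (0..4) of documents in presented order.
    eta: severity of position bias; examination decays as 1/(rank+1)**eta.
    """
    rng = random.Random(seed)
    max_rel = 4
    clicks = []
    for pos, rel in enumerate(relevance):
        p_examine = (1.0 / (pos + 1)) ** eta          # position bias
        p_relevant = (2 ** rel - 1) / (2 ** max_rel - 1)  # perceived relevance
        clicks.append(int(rng.random() < p_examine * p_relevant))
    return clicks

# Highly relevant documents near the top attract most of the clicks.
print(simulate_clicks([4, 3, 0, 2, 1, 0]))
```

Training directly on such clicks conflates relevance with position, which is the bias the debiasing method tries to remove.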

I have tested the debiasing method on MSLR-WEB10K with simulated clicks; it leads to significant overfitting.

As suggested by @thvasilo, we can add a position field for handling missing documents. Besides docs that go missing for practical reasons, such as corrupted user logs, docs can be missing simply because they are held out as the test set. I partially implemented this support and then removed it. The position must stay in sync with the NDCG calculation: if we use the extra position info to estimate the bias, we should also use it to compute the delta NDCG. That makes the code more complicated, and I'm not sure how useful it is in practice. I don't have access to any real user click data for testing; it would be great to compare the simulation results against a real dataset.
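For reference, the debiasing scheme in the Unbiased LambdaMART paper down-weights each click-derived pair by the estimated position propensities: the gradient contribution is divided by the product of the click bias at the clicked position and the non-click bias at the unclicked position. A minimal illustrative sketch; the function name and signature are mine, not the implementation's:

```python
def debias_pair_weight(delta, pos_i, pos_j, t_plus, t_minus):
    """Inverse-propensity weight for a pair where i (clicked) ranks above j.

    delta:      the raw |delta NDCG| of swapping the pair.
    t_plus[p]:  estimated bias of observing a click at position p.
    t_minus[p]: estimated bias of observing a non-click at position p.
    """
    return delta / (t_plus[pos_i] * t_minus[pos_j])

# With uniform (no) bias the weight reduces to the raw delta.
t = [1.0] * 10
print(debias_pair_weight(0.3, 0, 5, t, t))  # 0.3
```

In the paper the `t_plus`/`t_minus` estimates and the ranker are updated alternately, which is also where the position field would have to feed in.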

Lastly, I don't have access to the Yahoo dataset; my request for it was denied. It was the only benchmarking dataset used in the paper. It would be great if someone could compare this implementation against the results reported in the paper and the authors' original implementation.

Closes

Close #6143
Close #6955
Close #5561
Close #6709
Close #6707
Close #6352

Related:

Results

mq2007.csv
mq2008.csv
mslr10k.csv
mslr30k.csv

@trivialfis trivialfis changed the title [WIP] Rework learning to rank with NDCG. [WIP] Rework learning to rank. Feb 25, 2023
@chen1st commented Mar 2, 2023

Currently ndcg@n can only be used as an evaluation metric. Why can't it be an objective?

@trivialfis (Member, Author)

@chen1st That's part of this PR: supporting a truncation level for the objective.
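Concretely, truncation enters the objective through the pairwise |ΔNDCG| weight: when two documents swap places, only discount changes inside the top-k window count, so swaps entirely below the truncation level contribute nothing. A hedged pure-Python sketch of the swap delta, not the PR's code:

```python
import math

def delta_ndcg(rel_i, rel_j, rank_i, rank_j, ideal_dcg, k):
    """|Change in NDCG@k| when the documents at rank_i and rank_j swap.

    Ranks are 0-based; positions at or beyond k get zero discount.
    """
    def discount(rank):
        return 1.0 / math.log2(rank + 2) if rank < k else 0.0
    gain_i = 2 ** rel_i - 1
    gain_j = 2 ** rel_j - 1
    d = (gain_i - gain_j) * (discount(rank_i) - discount(rank_j))
    return abs(d) / ideal_dcg

# A swap entirely below the truncation level changes nothing.
print(delta_ndcg(3, 1, rank_i=8, rank_j=9, ideal_dcg=10.0, k=5))  # 0.0
```

This weight is what LambdaMART multiplies into each pair's gradient, so the truncation level directly shapes the objective, not just the reported metric.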

@trivialfis trivialfis force-pushed the ltr-ndcg branch 2 times, most recently from 0883cb4 to e3c3159 Compare April 21, 2023 09:02
@trivialfis trivialfis marked this pull request as ready for review April 21, 2023 09:02
@trivialfis trivialfis changed the title [WIP] Rework learning to rank. Rework learning to rank. Apr 21, 2023
@hcho3 (Collaborator) left a comment

I reviewed the documentation part of this pull request. Nice discussion of learning to rank, including the method for choosing different hyperparameters.
The demo code is fantastic, as the built-in click data simulator (PDM) greatly enhances understanding.

I have some stylistic recommendations.

Review comments on:

doc/contrib/coding_guide.rst
python-package/xgboost/testing/data.py
doc/tutorials/learning_to_rank.rst
* Simplify the implementation for both CPU and GPU.

Fix JSON IO.

Check labels.

Put idx into cache.

Optimize.

File tag.

Weights.

Trivial tests.

Compatibility.

Lint.

Fix swap.

Device weight.

tidy.

Easier to read R failure.

enum.

Fix global configuration.

Tidy.

msvc omp.

dask.

Remove ndcg specific parameter.  Drop label type for smaller PR.

Fix rebase.

Fixes.

Don't mess with includes.

Fixes.

Format.

Use omp util.

Restore some old code.

Revert.

Port changes from the work on quantile loss.

python binding.

param.

Cleanup.

conditional parallel.

types.

Move doc.

fix.

need metric rewrite.

rename ctx.

extract.

Work on metric.

Metric

Init estimation.

extract tests, compute ties.

cleanup.

notes.

extract optional weights.

init.

cleanup.

old metric format.

note.

ndcg cache.

nested.

debug.

fix.

log2.

Begin CUDA work.

temp.

Extract sort and latest cuda.

truncation.

dcg.

dispatch.

try different gain type.

start looking into ub.

note.

consider writing a doc.

check exp gain.

Reimplement lambdamart ndcg.

Start looking into unbiased.

lambda.

Extract the ndcg cache.

header.

cleanup namespace.

small check.

namespace.

init with param.

gain.

extract.

groups.

Cleanup.

disable.

debug.

remove.

Revert "remove."

This reverts commit ea025f9.

sigmoid.

cleanup.

metric name.

check scores.

note.

check map.

extract utilities.

avoid inline.

fix.

header.

extract more.

note.

note.

note.

start working on map.

fix.

continue map.

map.

matrix.

Remove map.

note.

format.

move check.

cleanup.

use cached discount, use double.

cleanup.

Add position to the Python interface.

pass it into lambda.

Full ratio.

rank.

comment.

some work on GPU.

compile.

move cache initialization.

descending.

Fix arg sort.

basic ndcg score.

metric weight.

config.

extract.

pass position again.

Define a metric decorator.

position.

decorate metric..

return.

note.

irrelevant docs.

fix weights.

header.

Share the bias.

Use position

check info.

use cache for param.

note.

prepare to work on deterministic gpu.

rounding.

Extract op.

cleanup.

Use it.

check label.

ditch launchn.

rounding.

Move rounding into cache.

fix check label.

GPU fixes.

Irrelevant doc.

try to avoid inf.

mad.

Work on metric cache.

Cleanup sort.

use cache.

cache others.

revert.

add test for metric.

fixes.

msg.

note.

remove reduce by key.

comments.

check position.

stream.

min.

small cleanup.

use atomic for now.

fill.

no inline.

norm.

remove op.

start gpu.

cleanup.

use gpu for update.

segmented reduce.

revert.

comments.

comments.

fix.

comments.

fix bounds.

comments.

cache.

pointer.

fixes.

no spark.

revert.

Cleanup.

cleanup.

work on gain type.

fix.

notes.

make metric name.

remove.

revert.

revert.

comment.

revert.

Move back into rank metric.

Set name in objective.

fix.

Don't configure.

note.

merge tests.

accept empty group.

fixes.

float.

revert and fix.

not mutable.

prototype for cache.

extract.

convert to DMatrix.

cache.

Extract the cache.

Port changes.

fix & cleanup.

cleanup.

cleanup.

Rename.

restore.

remove.

header.

revert.

rename.

rename.

doc.

cleanup.

doc.

cleanup.

tests.

tests.

split up.

jvm parameters.

doc.

Fix.

Use cache in cox.

Revert "Use cache in cox."

This reverts commit e1cec37.

Remove pairwise.

iwyu.

rename.

Move.

Merge.

ranking utils.

Fixes.

rename.

Comments.

todos.

Small cleanup.

doc.

Start working on demo.

move some code here.

rename.

Update doc.

Update doc.

Work on demo.

work on demo.

demo.

Demo.

Specify the max rel degree.

remove position.

Fix.

Work on demo.

demo.

Using only one fold.

cache.

demo.

schema.

comments.

Lint.

fix test.

automake.

macos.

schema.

test.

schema.

lint.

fix tests.

Implement MAP and pair sampling.

revert sorting.

Work on ranknet.

remove.

Don't upgrade cost if larger than.

Extract GPU make pairs.

error message.

Remove.

Cleanup some gpu tests.

Move.

Move NDCG test.

fix weights.

Move rest of the tests.

Remove.

Work on tests.

fixes.

Cleanup.

header.

cleanup.

Update document.

update document.

fix build.

cpplint.

rename.

Fixes and cleanup.

Cleanup tests.

lint.

fix tests.

debug macos non-openmp checks.

macos.

fix ndcg test.

Ensure number of threads is smaller than the number of inputs.

fix.

Debug macos.

fixes.

Add weight normalization.

Note on reproducible result.

Don't normalize if it's binary.

old ctk.

Use old objective.

Update doc.

Convert pyspark tests.

black.

Fix rebase.

Fix rebase.

Start looking into CV.

Hacky score function.

extract parsing.

Cleanup and tests.

Lint & note.

test check.

Update document.

Update tests & doc.

Support custom metric as well.

c++-17.

cleanup old metrics.

rename.

Fixes.

Fix cxx test.

test cudf.

start converting tests.

pylint.

fix data load.

Cleanup the tests.

Parameter tests.

isort.

Fix test.

Specify src path for isort.

17 goodies.

Fix rebase.

Start working on ranking cache tests.

Extract CPU impl.

test debiasing.

use index.

ranking cache.

comment.

some work on debiasing.

save the estimated bias.

normalize by default.

GPU norm.

fix gpu unbiased.

cleanup.

cleanup.

Remove workaround.

Default to topk.

Restore.

Cleanup.

Revert change in algorithm.

norm.

Move data generation process in testing for reuse.

Move sort samples as well.

cleanup.

Generate data.

lint.

pylint.

Fix.

Fix spark test.

avoid sampling with unbiased.

Cleanup demo.

Handle single group simulation.

Numeric issues.

More numeric issues.

sigma.

naming.

Simple test.

tests.

brief description.

Revert "brief description."

This reverts commit 0b3817a.

rebase.

symbol.

Rebase.

disable normalization.

Revert "disable normalization."

This reverts commit ef3133d2b4a76714f3514808c6e2ae5937e6a8c2.

unused variable.

Apply suggestions from code review

Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>

Use dataclass.

Fix return type.

doc.

Minor fixes.

Add test for custom gain.

cleanup.

wording.

start working on precision.

comments.

initial work on precision.

Cleanup GPU ranking metric.

rigorous.

work on test.

adjust test.

Tests.

Work on binary classification support.

cpu.

mention it in document.

callback.

tests.
This reverts commit 6030aa9.

@RAMitchell (Member) left a comment

Very nice documentation.

@trivialfis trivialfis merged commit 1fcc26a into dmlc:master Jun 9, 2023
@trivialfis trivialfis deleted the ltr-ndcg branch June 9, 2023 15:31
@trivialfis (Member, Author)

whew ...

4 participants