This repository contains implementations of evaluation metrics for recommendation systems.
We compare the performance of similarity, candidate generation, rating, and ranking metrics on five datasets:
MovieLens 100K, MovieLens 1M, MovieLens 10M, the Amazon Electronics dataset, and the Amazon Movies and TV dataset.
A summary of the experiments, with instructions on how to replicate them, can be found below.
Much of the code in this repository is adapted from https://github.com/recommenders-team/recommenders
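As a quick orientation, the MovieLens data can be loaded with the dataset helpers shipped in the recommenders package that this repository builds on. This is a minimal sketch, assuming that package is installed; the column names passed to `header` are illustrative, not mandated by our scripts:

```python
# Minimal sketch: download MovieLens 100K and load the ratings into a
# pandas DataFrame via the recommenders dataset utilities.
from recommenders.datasets import movielens

ratings = movielens.load_pandas_df(
    size="100k",
    header=["userID", "itemID", "rating", "timestamp"],  # illustrative names
)
print(ratings.shape)
print(ratings.head())
```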
If you use this work, please cite our paper:
@misc{jadon2023comprehensive,
      title={A Comprehensive Survey of Evaluation Techniques for Recommendation Systems},
      author={Aryan Jadon and Avinash Patil},
      year={2023},
      eprint={2312.16015},
      archivePrefix={arXiv},
      primaryClass={cs.IR}
}
Paper Link - https://arxiv.org/abs/2312.16015
- recommenders: Folder containing the recommendation algorithm implementations.
- similarity_metrics: Folder containing scripts for running the similarity metric experiments.
- candidate_generation_metrics: Folder containing scripts for running the candidate generation metric experiments.
- rating_metrics: Folder containing scripts for running the rating metric experiments.
- ranking_metrics: Folder containing scripts for running the ranking metric experiments.
Install the dependencies with pip using requirements.txt -
pip install -r requirements.txt
or create a conda environment -
conda env create -f environment.yml
Run the Similarity Metrics Experiments using -
chmod +x run_similarity_metrics_experiments.sh
./run_similarity_metrics_experiments.sh
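For intuition about what these scripts measure, here is a minimal NumPy sketch of two common similarity metrics, cosine and Jaccard. It is illustrative only (made-up vectors and item sets), not the repository's implementation:

```python
# Illustrative similarity metrics: cosine over rating vectors and
# Jaccard over sets of interacted items. Not the repo's script.
import numpy as np

def cosine_similarity(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine of the angle between two rating vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def jaccard_similarity(a: set, b: set) -> float:
    """Size of the intersection divided by size of the union."""
    return len(a & b) / len(a | b)

item_a = np.array([5.0, 3.0, 0.0, 1.0])
item_b = np.array([4.0, 0.0, 0.0, 1.0])
print(cosine_similarity(item_a, item_b))          # ~0.86
print(jaccard_similarity({1, 2, 3}, {2, 3, 4}))   # 0.5
```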
Run the Candidate Generation Metrics Experiments using -
chmod +x run_candidate_generation_metrics_experiments.sh
./run_candidate_generation_metrics_experiments.sh
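Candidate generation is typically scored by how many held-out relevant items survive into the generated candidate set. The sketch below shows a plain recall@k with made-up item IDs; it is illustrative only, not the repository's implementation:

```python
# Illustrative recall@k for a candidate generation stage: the fraction of a
# user's held-out relevant items that appear in the top-k candidates.

def recall_at_k(candidates: list, relevant: set, k: int) -> float:
    """Recall@k = |top-k candidates intersected with relevant| / |relevant|."""
    top_k = set(candidates[:k])
    return len(top_k & relevant) / len(relevant) if relevant else 0.0

candidates = [10, 42, 7, 3, 99, 5]   # ranked candidate item IDs (made up)
relevant = {42, 5, 8}                # held-out ground-truth items (made up)
print(recall_at_k(candidates, relevant, k=5))  # 1/3 ~ 0.33
```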
Run the Rating Metrics Experiments using -
chmod +x run_rating_metrics_experiments.sh
./run_rating_metrics_experiments.sh
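Conceptually, the rating metrics compare predicted ratings against held-out ground truth. Below is a minimal NumPy sketch of RMSE and MAE with illustrative values, not results from the experiments:

```python
# Illustrative rating metrics: RMSE and MAE between predicted and actual ratings.
import numpy as np

actual = np.array([4.0, 3.0, 5.0, 2.0])      # made-up ground-truth ratings
predicted = np.array([3.5, 3.0, 4.0, 2.5])   # made-up model predictions

rmse = np.sqrt(np.mean((actual - predicted) ** 2))  # root mean squared error
mae = np.mean(np.abs(actual - predicted))           # mean absolute error
print(f"RMSE={rmse:.3f}, MAE={mae:.3f}")            # RMSE=0.612, MAE=0.500
```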
Run the Ranking Metrics Experiments using -
chmod +x run_ranking_metrics_experiments.sh
./run_ranking_metrics_experiments.sh
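Ranking metrics weight hits by their position in the recommended list. Here is a minimal NDCG@k sketch in plain NumPy with illustrative graded relevances; it is a sketch of the standard formula, not the repository's implementation:

```python
# Illustrative NDCG@k: discounted cumulative gain of the predicted ranking,
# normalized by the gain of the ideal (relevance-sorted) ranking.
import numpy as np

def ndcg_at_k(relevances: list, k: int) -> float:
    """relevances: graded relevance of items in predicted rank order."""
    rel = np.asarray(relevances, dtype=float)[:k]
    discounts = np.log2(np.arange(2, rel.size + 2))  # log2(rank + 1)
    dcg = np.sum(rel / discounts)
    ideal = np.sort(np.asarray(relevances, dtype=float))[::-1][:k]
    idcg = np.sum(ideal / np.log2(np.arange(2, ideal.size + 2)))
    return float(dcg / idcg) if idcg > 0 else 0.0

print(ndcg_at_k([3, 2, 3, 0, 1], k=5))  # ~0.97 for this made-up ordering
```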