# MovieLens-100K


## Experiment Settings

Dataset: MovieLens-100K

Metrics: Precision@5, Recall@5, HR@5, nDCG@5, MRR@5

Task definition: We format the dataset into tasks, where each user is treated as one task. We split the tasks into training, validation, and test sets at an 8:1:1 ratio. For each task, we randomly select 10 interactions as the query set and use the remaining interactions as the support set.
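The task split above can be sketched in plain Python. This is a minimal illustration, not the toolkit's actual implementation; `split_tasks` and `support_query_split` are hypothetical helper names.

```python
import random

def split_tasks(user_ids, ratios=(0.8, 0.1, 0.1), seed=42):
    """Split users (= tasks) into train/valid/test sets at the given ratio."""
    rng = random.Random(seed)
    users = list(user_ids)
    rng.shuffle(users)
    n = len(users)
    n_train = int(n * ratios[0])
    n_valid = int(n * ratios[1])
    return (users[:n_train],
            users[n_train:n_train + n_valid],
            users[n_train + n_valid:])

def support_query_split(interactions, query_num=10, seed=42):
    """Randomly pick `query_num` interactions as the query set;
    the rest of the task's interactions form the support set."""
    rng = random.Random(seed)
    inters = list(interactions)
    rng.shuffle(inters)
    return inters[query_num:], inters[:query_num]  # (support, query)
```

Because the split is per task, every user contributes both a support and a query set, matching the `query_num: 10` setting in the config below.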

Data type and filtering: We use MovieLens-100K for the rating and click experiments, respectively. For the rating setting, we use the original rating scores. For the click setting, following common practice, we treat rating scores of 4 or above as positive labels and the rest as negative. Moreover, as is common in the literature, we keep only users whose interaction count falls in the interval [13, 100].
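The filtering and click-label binarization can be expressed as a short sketch, assuming interactions are `(user, item, rating)` triples; `filter_and_binarize` is a hypothetical name, not part of the toolkit.

```python
from collections import Counter

def filter_and_binarize(inters, min_inter=13, max_inter=100, click_threshold=4):
    """Keep users whose interaction count lies in [min_inter, max_inter];
    for the click setting, map ratings >= click_threshold to label 1, else 0."""
    counts = Counter(user for user, _, _ in inters)
    kept = [(u, i, r) for u, i, r in inters
            if min_inter <= counts[u] <= max_inter]
    return [(u, i, 1 if r >= click_threshold else 0) for u, i, r in kept]
```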

The common configurations are listed as follows.

```yaml
# Dataset config
USER_ID_FIELD: user_id
ITEM_ID_FIELD: item_id

load_col:
    inter: [user_id, item_id, rating]
    item: [item_id, movie_title, release_year, class]
    user: [user_id, age, gender, occupation, zip_code]
user_inter_num_interval: [13,100]

# Training and evaluation config
epochs: 10
train_batch_size: 32
valid_metric: mrr@5

# Evaluate config
eval_args:
    group_by: task
    order: RO
    split: {'RS': [0.8,0.1,0.1]}
    mode: labeled

# Meta learning config
meta_args:
    support_num: none
    query_num: 10

# Metrics
metrics: ['precision', 'recall', 'hit', 'ndcg', 'mrr']
metric_decimal_place: 4
topk: 5
```
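For reference, the five top-k metrics named in the config can be computed per task as in the sketch below. This is a minimal standalone illustration with a hypothetical `topk_metrics` helper; actual toolkits may differ in edge-case handling and averaging across tasks.

```python
import math

def topk_metrics(ranked_items, positives, k=5):
    """Compute Precision@k, Recall@k, HR@k, nDCG@k, and MRR@k
    for a single task, given a ranked item list and the positive set."""
    topk = ranked_items[:k]
    hits = [1 if item in positives else 0 for item in topk]
    num_hits = sum(hits)
    precision = num_hits / k
    recall = num_hits / len(positives) if positives else 0.0
    hr = 1.0 if num_hits > 0 else 0.0
    # nDCG: discounted gain of hits over the ideal ranking of positives
    dcg = sum(h / math.log2(i + 2) for i, h in enumerate(hits))
    ideal = sum(1 / math.log2(i + 2) for i in range(min(len(positives), k)))
    ndcg = dcg / ideal if ideal > 0 else 0.0
    # MRR: reciprocal rank of the first hit
    mrr = 0.0
    for i, h in enumerate(hits):
        if h:
            mrr = 1.0 / (i + 1)
            break
    return {"precision": round(precision, 4), "recall": round(recall, 4),
            "hit": round(hr, 4), "ndcg": round(ndcg, 4), "mrr": round(mrr, 4)}
```

Rounding to four decimal places mirrors the `metric_decimal_place: 4` setting.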

## Hyper Parameter Tuning

| Model | Best Hyper Parameter | Tuning Range |
|-------|----------------------|--------------|
| FOMeLU | embedding_size: [16]; train_batch_size: [256]; lr: [0.01]; mlp_hidden_size: [[64,64]] | embedding_size: [8,16,32,64,128,256]; train_batch_size: [8,16,32,64,128,256]; lr: [0.0001,0.001,0.01,0.05,0.1,0.2,0.5,1.0]; mlp_hidden_size: [[8,8],[16,16],[32,32],[64,64],[128,128],[256,256]] |
| MAMO | embedding: [8]; train_batch_size: [8]; lambda (lr): [0.01]; beta: [0.05] | embedding: [8,16,32,64,128,256]; train_batch_size: [8,16,32,64,128,256]; lambda (lr): [0.0001,0.001,0.01,0.05,0.1,0.2,0.5,1.0]; beta: [0.05,0.1,0.2,0.5,0.8,1.0] |
| TaNP | embedding: [256]; train_batch_size: [128]; lr: [0.01]; lambda: [0.05] | embedding: [8,16,32,64,128,256]; train_batch_size: [8,16,32,64,128,256]; lr: [0.0001,0.001,0.01,0.05,0.1,0.2,0.5,1.0]; lambda: [0.05,0.1,0.2,0.5,0.8,1.0] |
| LWA | embedding_size: [8]; train_batch_size: [8]; lr: [0.01]; embeddingHiddenDim: [256] | embedding_size: [8,16,32,64,128,256]; train_batch_size: [8,16,32,64,128,256]; lr: [0.0001,0.001,0.01,0.05,0.1,0.2,0.5,1.0]; embeddingHiddenDim: [8,16,32,64,128,256] |
| NLBA | embedding_size: [16]; train_batch_size: [8]; lr: [0.01]; recHiddenDim: [32] | embedding_size: [8,16,32,64,128,256]; train_batch_size: [8,16,32,64,128,256]; lr: [0.0001,0.001,0.01,0.05,0.1,0.2,0.5,1.0]; recHiddenDim: [8,16,32,64,128,256] |
| MetaEmb | embedding_size: [128]; train_batch_size: [8]; lr: [0.01]; alpha: [0.2] | embedding_size: [8,16,32,64,128,256]; train_batch_size: [8,16,32,64,128,256]; lr: [0.0001,0.001,0.01,0.05,0.1,0.2,0.5,1.0]; alpha: [0.05,0.1,0.2,0.5,0.8,1.0] |
| MWUF | embedding_size: [256]; train_batch_size: [8]; warmLossLr: [0.1]; indexEmbDim: [64] | embedding_size: [8,16,32,64,128,256]; train_batch_size: [8,16,32,64,128,256]; warmLossLr: [0.0001,0.001,0.01,0.05,0.1,0.2,0.5,1.0]; indexEmbDim: [8,16,32,64,128,256] |
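The tuning ranges above describe a grid search over each model's hyperparameters. A minimal sketch of enumerating such a grid, using the FOMeLU ranges as the example (the `search_space` dict and `grid` helper are illustrative, not the toolkit's tuning API):

```python
from itertools import product

# Search space mirroring the FOMeLU tuning ranges in the table above.
search_space = {
    "embedding_size": [8, 16, 32, 64, 128, 256],
    "train_batch_size": [8, 16, 32, 64, 128, 256],
    "lr": [0.0001, 0.001, 0.01, 0.05, 0.1, 0.2, 0.5, 1.0],
    "mlp_hidden_size": [[8, 8], [16, 16], [32, 32], [64, 64], [128, 128], [256, 256]],
}

def grid(space):
    """Yield one config dict per point in the Cartesian product of the ranges."""
    keys = list(space)
    for values in product(*(space[k] for k in keys)):
        yield dict(zip(keys, values))

configs = list(grid(search_space))
# 6 * 6 * 8 * 6 = 1728 candidate configurations for FOMeLU
```

Each candidate config is then trained and scored on the validation tasks by `valid_metric` (mrr@5 here), and the best-scoring combination is reported in the "Best Hyper Parameter" column.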