This repo includes Python/TensorFlow implementations of several algorithms proposed for modeling grocery shopping behavior.
The following algorithms have been implemented:
- triple2vec, a product representation learning model
- adaLoyal, an incremental module for next-basket recommendation
both proposed in
Mengting Wan, Di Wang, Jie Liu, Paul Bennett, Julian McAuley, "Representing and Recommending Shopping Baskets with Complementarity, Compatibility, and Loyalty", in CIKM'18 [bibtex]
Algorithms proposed in
Mengting Wan, Di Wang, Matt Goldman, Matt Taddy, Justin Rao, Jie Liu, Dimitrios Lymberopoulos, Julian McAuley, "Modeling Consumer Preferences and Price Sensitivities from Large-Scale Grocery Shopping Transaction Logs", in WWW'17 [bibtex]
will be added in the future.
If you would like to extend or compare with our algorithms, or use our source code, please consider citing the above two papers.
If you have any questions, feel free to contact Mengting Wan (m5wan@ucsd.edu).
Requirements:

- Python 3.6+ (older versions have not been tested)
- TensorFlow 1.6.0+ (older versions have not been tested)

Getting started:

- Please first download the complete dataset from here and extract the files under `./data/`. This is a relatively large dataset which includes more than 3 million orders, so we can start with a small subset of users to test the algorithms quickly.
- Preprocess the dataset:

  ```
  python ./src/parser.py --data_name instacart --thr_item 10 --thr_user 0 --subset_user 0.1
  ```

  This will randomly sample the transactions associated with 10% of users and filter out products with fewer than 10 transactions. Please consider adjusting these thresholds if you plan to run the algorithms on the complete dataset.
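For intuition, the sketch below shows roughly what these two thresholds do in pandas. It is only an illustration: the flat transaction frame, its column names (`UID`, `PID`), and the input file name are assumptions of mine, and the actual preprocessing logic lives in `./src/parser.py`.

```python
import pandas as pd

# Illustration only: the input file and column names are hypothetical.
transactions = pd.read_csv("./data/raw_transactions.csv")  # one row per (user, product) purchase

subset_user, thr_item = 0.1, 10

# --subset_user 0.1: keep only the transactions of a random 10% of users.
kept_users = transactions["UID"].drop_duplicates().sample(frac=subset_user, random_state=0)
transactions = transactions[transactions["UID"].isin(kept_users)]

# --thr_item 10: drop products that appear in fewer than 10 transactions.
pid_counts = transactions["PID"].value_counts()
popular_pids = pid_counts[pid_counts >= thr_item].index
transactions = transactions[transactions["PID"].isin(popular_pids)]
```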
The processed files will be saved as:

- `./data/instacart.data.csv`: a csv file which can be read by `pandas` and must include the following columns: `UID` (an integer representing the user ID), `PID` (a list of integers representing the product IDs in the current transaction), and `flag` (train, validation, or test). Each row represents one transaction/basket record.
- `./data/instacart.meta.csv`: a csv file which can be read by `pandas`, including the meta-data of products.

Note: In order to run adaLoyal, the transactions in `./data/instacart.data.csv` need to be sorted in chronological order.

Note: In order to run triple2vec, product IDs need to be sorted by popularity (i.e., PID=0 represents the most popular product). This speeds up negative sampling in the noise contrastive estimation (NCE) loss used by the representation learning algorithms; see the sketch below.
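To see why the popularity ordering matters: TensorFlow's log-uniform (Zipfian) candidate sampler assumes that smaller IDs are more frequent, so negatives can be drawn cheaply from a distribution that already approximates product popularity. The snippet below only demonstrates that sampler in isolation; the positive PIDs and vocabulary size are made up, and this is not code from this repo.

```python
import tensorflow as tf

# Demo of TF's log-uniform (Zipfian) candidate sampler, which assumes that
# smaller IDs are more frequent; hence PID=0 should be the most popular product.
# The positive PIDs and the vocabulary size below are made up for illustration.
true_pids = tf.constant([[3], [41], [7]], dtype=tf.int64)  # one positive product per training example

sampled_pids, true_expected, sampled_expected = tf.nn.log_uniform_candidate_sampler(
    true_classes=true_pids,
    num_true=1,
    num_sampled=5,       # matches --n_neg 5 in the commands below
    unique=True,
    range_max=10000)     # hypothetical number of products

with tf.Session() as sess:
    print(sess.run(sampled_pids))  # negative PIDs, biased toward small (popular) IDs
```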
- Run triple2vec:

  ```
  python ./src/main.py --data_name instacart --mode embedding --method_name triple2vec --dim 32 --lr 1.0 --batch_size 1000 --n_neg 5
  ```

  This will first generate training samples and cache them under `./output/sample/` (optional). Product and user embeddings will then be dumped under `./output/param/`.
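For reference, triple2vec (as described in the CIKM'18 paper above) learns two sets of item embeddings and one set of user embeddings, and scores a triple (two items bought in the same basket by the same user) by the sum of their pairwise inner products. The NumPy sketch below only illustrates that scoring idea; the array names and sizes are mine and it is not the repo's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, dim = 100, 500, 32   # toy sizes; --dim 32 matches the command above

# Two item embedding sets and one user embedding set, as in the paper.
item_emb_a = rng.normal(size=(n_items, dim))
item_emb_b = rng.normal(size=(n_items, dim))
user_emb = rng.normal(size=(n_users, dim))

def cohesion_score(i, j, u):
    """Score of the triple (item i, item j, user u): sum of pairwise inner products."""
    return (item_emb_a[i] @ item_emb_b[j]
            + item_emb_a[i] @ user_emb[u]
            + item_emb_b[j] @ user_emb[u])

print(cohesion_score(3, 41, 7))  # higher means the triple is more "cohesive"
```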
- Run personalized recommendation using the item/user embeddings generated from triple2vec:

  ```
  python ./src/main.py --data_name instacart --mode recommendation --method_name triple2vec --dim 32 --lr 1.0 --batch_size 1000 --n_neg 5
  ```

- Run personalized recommendation using the item/user embeddings generated from triple2vec and apply adaLoyal:

  ```
  python ./src/main.py --data_name instacart --mode recommendation --method_name triple2vec --dim 32 --lr 1.0 --batch_size 1000 --n_neg 5 --l0 0.8
  ```

  where the initial loyalty is set as `l0=0.8`.
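Conceptually, adaLoyal blends a user's repeat-purchase (loyalty) signal with the embedding-based score, starting from the initial loyalty `l0` and adapting it as baskets arrive in chronological order. The sketch below is only a schematic of that blending idea under my own simplifying assumption of a convex combination; it is not the exact update rule from the paper or this repo.

```python
def blended_score(loyalty, repeat_purchase_score, embedding_score):
    """Schematic only: interpolate between a repeat-purchase signal and an
    embedding-based score with a loyalty weight in [0, 1]. The convex
    combination here is my simplification, not the exact adaLoyal rule."""
    return loyalty * repeat_purchase_score + (1.0 - loyalty) * embedding_score

loyalty = 0.8  # initialized from --l0 0.8, then adapted per user/product over time
print(blended_score(loyalty, repeat_purchase_score=1.0, embedding_score=0.2))
```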
All results will be saved under `./output/result/`.
We can also test some simple baselines on this dataset:

- Rank products based on their overall popularity in the training set:

  ```
  python ./src/main.py --data_name instacart --mode recommendation --method_name popRec
  ```

- Rank products based on user-wise item purchase frequency:

  ```
  python ./src/main.py --data_name instacart --mode recommendation --method_name popRec
  ```

Ad-hoc needs can be added in the module `./src/recommendation/recommender.py`; see the hypothetical example below.
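As an illustration of the kind of ad-hoc baseline that could be added there, the snippet below ranks products by per-user purchase frequency in plain Python. The function name and the `(uid, [pid, ...])` input format are hypothetical and do not correspond to any interface in `recommender.py`.

```python
from collections import Counter, defaultdict

def user_frequency_rank(train_baskets, topk=10):
    """Rank products for each user by how often that user purchased them.

    `train_baskets` is assumed to be an iterable of (uid, [pid, ...]) pairs;
    this interface is hypothetical and only illustrates an ad-hoc baseline.
    """
    counts = defaultdict(Counter)
    for uid, pids in train_baskets:
        counts[uid].update(pids)
    return {uid: [pid for pid, _ in counter.most_common(topk)]
            for uid, counter in counts.items()}

# Toy usage:
baskets = [(0, [3, 5, 3]), (0, [5]), (1, [2, 2, 7])]
print(user_frequency_rank(baskets, topk=2))  # {0: [3, 5], 1: [2, 7]}
```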
TODO:

- Implement within-basket recommendation
- Complete the documentation
- Implement the price sensitivity models