
Powerful Benchmarker

Installation

Clone this repo, then:

pip install -r requirements.txt
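
For example (replace the placeholder with this repo's clone URL):

git clone <this-repo-url>
cd powerful-benchmarker
pip install -r requirements.txt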

Set paths in constants.yaml

  • experiment_folder: experiments will be saved at <experiment_folder>/<experiment_name>
  • dataset_folder: datasets will be downloaded here. For example, <dataset_folder>/mnistm and <dataset_folder>/office31
  • conda_env and slurm_folder are for running jobs on Slurm. (I haven't uploaded the slurm-related code yet.)
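
For reference, a minimal constants.yaml might look like the following (the paths and env name are placeholders, not values shipped with the repo):

experiment_folder: /path/to/experiments
dataset_folder: /path/to/datasets
conda_env: my_env                    # only needed for slurm jobs
slurm_folder: /path/to/slurm_output  # only needed for slurm jobs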

Running hyperparameter search

Example 1: DANN on MNIST->MNISTM task

python main.py --experiment_name dann_experiment --dataset mnist \
--src_domains mnist --target_domains mnistm --adapter DANNConfig \
--download_datasets --start_with_pretrained

Example 2: MCC on OfficeHome Art->Real task

python main.py --experiment_name mcc_experiment --dataset officehome \
--src_domains art --target_domains real --adapter MCCConfig \
--download_datasets --start_with_pretrained

Example 3: Specify validator, batch size, etc.

python main.py --experiment_name bnm_experiment --dataset office31 \
--src_domains dslr --target_domains amazon --adapter BNMConfig \
--batch_size 32 --max_epochs 500 --patience 15 \
--validation_interval 5 --num_workers 4 --num_trials 100 --n_startup_trials 100 \
--validator entropy_diversity --optimizer_name Adam \
--download_datasets --start_with_pretrained

Note on algorithm/validator names

Some names in the code don't match the names in the paper. Renaming them in the code would be ideal, but I'm holding off in case I need to rerun experiments and combine new dataframes with existing saved ones.

Here are the main differences between code and paper:

Code                           Paper
--validator entropy_diversity  Information Maximization (IM) validator
--adapter TEConfig             MinEnt algorithm
--adapter TEDConfig            IM algorithm
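
If you need to translate between the two naming schemes, for example when labeling plots generated from saved dataframes, a small helper like the following could work. This is a hypothetical sketch, not part of this library's API:

# Hypothetical mapping -- not part of this library's API.
CODE_TO_PAPER = {
    "entropy_diversity": "Information Maximization (IM)",  # validator
    "TEConfig": "MinEnt",                                  # algorithm
    "TEDConfig": "IM",                                     # algorithm
}

def to_paper_name(code_name):
    # Fall back to the code name when there is no paper equivalent.
    return CODE_TO_PAPER.get(code_name, code_name)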

Notebooks

The notebooks folder currently contains:

Citing the paper

If you'd like to cite the paper, paste this into your LaTeX .bib file:

@misc{musgrave2021unsupervised,
      title={Unsupervised Domain Adaptation: A Reality Check}, 
      author={Kevin Musgrave and Serge Belongie and Ser-Nam Lim},
      year={2021},
      eprint={2111.15672},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

Looking for the metric-learning version of this library? Check out the metric-learning branch.
