This repository contains the code for reproducing the results in "Adapting Models to Signal Degradation using Distillation", BMVC 2017 (originally titled "Cross Quality Distillation" on arXiv). If you use this code, please cite:
    @inproceedings{su2017adapting,
        Author    = {Jong-Chyi Su and Subhransu Maji},
        Title     = {Adapting Models to Signal Degradation using Distillation},
        Booktitle = {British Machine Vision Conference (BMVC)},
        Year      = {2017}
    }
The code has been tested on Ubuntu 14.04 with MATLAB R2014b and the MatConvNet package.
Link to the project page.
The code borrows heavily from B-CNN (https://bitbucket.org/tsungyu/bcnn).
- Follow the instructions on the VLFeat and MatConvNet project pages to install them first. Our code is built on MatConvNet version `1.0-beta18`.
- Change the path in `setup.m` (a minimal sketch of the path setup is shown after this list).
- Download the datasets:
    - Birds: CUB-200-2011 dataset
    - Cars: Stanford cars dataset
- Run `save_images.m` to create the degraded images. You need to download and install the Structured Edge Detector to generate the edge images, and tpsWarp to generate the distorted images. An illustration of one degradation is sketched after this list.
- Download the pre-trained VGG models and put them under `data/models/` (`vgg-m` and `vgg-vd` are used in the paper); a download sketch follows this list.
- Run `run_CQD.m` to train all the baseline models and the distillation model.
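The exact contents of `setup.m` depend on where you installed the libraries. The snippet below is only a minimal sketch of the path setup, assuming VLFeat and MatConvNet are unpacked in sibling directories named `vlfeat` and `matconvnet` (both directory names are assumptions, not the repository's actual layout):

```matlab
% Minimal sketch of the path setup (directory names are assumptions).
% Adjust these two paths to wherever you installed the libraries.
run('../vlfeat/toolbox/vl_setup');       % set up VLFeat
run('../matconvnet/matlab/vl_setupnn');  % set up MatConvNet 1.0-beta18
```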
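For reference, a low-resolution degraded image (one of the degradations studied in the paper) can be produced by downsampling an image and upsampling it back to its original size. The snippet below only illustrates this idea and is not the actual `save_images.m`; the file names and the scale factor are assumptions:

```matlab
% Illustration only: create a low-resolution version of an image by
% downsampling and then upsampling back to the original size.
% 'bird.jpg', 'bird_lowres.jpg', and the factor 0.25 are assumptions.
im     = imread('bird.jpg');
lowres = imresize(imresize(im, 0.25), [size(im, 1) size(im, 2)]);
imwrite(lowres, 'bird_lowres.jpg');
```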
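The pre-trained models can be fetched from the MatConvNet model zoo. The sketch below downloads `imagenet-vgg-m.mat` and `imagenet-vgg-verydeep-16.mat` into `data/models/`; the model-zoo URLs and the assumption that `vgg-vd` corresponds to the 16-layer very-deep model are ours, so verify them before use:

```matlab
% Sketch: download the pre-trained VGG models into data/models/.
% The model-zoo URLs below are assumptions; verify them before use.
if ~exist('data/models', 'dir'), mkdir('data/models'); end
base = 'http://www.vlfeat.org/matconvnet/models/';
urlwrite([base 'imagenet-vgg-m.mat'], 'data/models/imagenet-vgg-m.mat');
urlwrite([base 'imagenet-vgg-verydeep-16.mat'], 'data/models/imagenet-vgg-verydeep-16.mat');
```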
For the results of all methods, please see Table 1 in the paper.
Thanks to Tsung-Yu Lin for sharing the B-CNN codebase, and to the MatConvNet team.
Please contact jcsu@cs.umass.edu if you have any questions.