Source code for R*CNN, created by Georgia Gkioxari at UC Berkeley.
R*CNN was initially described in an arXiv tech report.
R*CNN is released under the BSD License.
If you use R*CNN, please consider citing:
```
@inproceedings{rstarcnn2015,
  Author = {G. Gkioxari and R. Girshick and J. Malik},
  Title = {Contextual Action Recognition with R\*CNN},
  Booktitle = {ICCV},
  Year = {2015}
}
```
- Requirements for Caffe and pycaffe (see: Caffe installation instructions). Note: Caffe must be built with support for Python layers! A minimal Python layer sketch is shown after this list.

  ```make
  # In your Makefile.config, make sure to have this line uncommented
  WITH_PYTHON_LAYER := 1
  ```
- Python packages you might not have: cython, python-opencv, easydict. A quick import check is also sketched below.
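To see why `WITH_PYTHON_LAYER` matters: Caffe will only instantiate layers written in Python if it was built with that flag. The following is a minimal, illustrative sketch of the pycaffe layer interface; the layer name `PassThroughLayer` and its behavior are placeholders, not part of R*CNN.

```python
import caffe

class PassThroughLayer(caffe.Layer):
    """Illustrative pass-through layer; loadable only when Caffe is
    built with WITH_PYTHON_LAYER := 1."""

    def setup(self, bottom, top):
        # One-time checks when the net is constructed.
        if len(bottom) != 1:
            raise Exception('PassThroughLayer expects exactly one bottom blob')

    def reshape(self, bottom, top):
        # Output blob mirrors the input blob's shape.
        top[0].reshape(*bottom[0].data.shape)

    def forward(self, bottom, top):
        # Copy the input through unchanged.
        top[0].data[...] = bottom[0].data

    def backward(self, top, propagate_down, bottom):
        # Route gradients straight back to the input.
        if propagate_down[0]:
            bottom[0].diff[...] = top[0].diff
```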
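And a quick way to confirm the Python packages above are installed; note that python-opencv is imported as `cv2` and easydict exposes `EasyDict`:

```python
# Sanity check for the Python dependencies listed above.
try:
    import Cython                   # installed via the cython package
    import cv2                      # installed via python-opencv
    from easydict import EasyDict   # installed via easydict
    print('All Python dependencies found.')
except ImportError as err:
    print('Missing dependency: %s' % err)
```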
- Clone the RstarCNN repository:

  ```Shell
  # Make sure to clone with --recursive
  git clone --recursive https://github.com/gkioxari/RstarCNN.git
  ```
- Build the Cython modules:

  ```Shell
  cd $ROOT/lib
  make
  ```
- Build Caffe and pycaffe (a quick import check follows this list):

  ```Shell
  cd $ROOT/caffe-fast-rcnn
  # Now follow the Caffe installation instructions here:
  #   http://caffe.berkeleyvision.org/installation.html
  # If you're experienced with Caffe and have all of the requirements installed
  # and your Makefile.config in place, then simply do:
  make -j8 && make pycaffe
  ```
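Once the build finishes, you can confirm pycaffe is importable. This is a hedged convenience snippet, assuming `ROOT` is your RstarCNN checkout; the repository's own tools handle this path setup themselves.

```python
import sys
import os.path as osp

# Assumption: ROOT points at your RstarCNN checkout.
ROOT = osp.abspath('.')
sys.path.insert(0, osp.join(ROOT, 'caffe-fast-rcnn', 'python'))

import caffe
print('pycaffe loaded from %s' % caffe.__file__)
```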
Train a R*CNN classifier. For example, train a VGG16 network on VOC 2012 trainval:

```Shell
./tools/train_net.py --gpu 0 --solver models/VGG16_RstarCNN/solver.prototxt \
  --weights reference_models/VGG16.v2.caffemodel
```
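For orientation, the command above boils down to standard pycaffe solver calls. This is a hedged sketch, not the actual tools/train_net.py; the 40000 iterations are inferred from the snapshot name used in the test command below.

```python
import caffe

# Hedged sketch of what a Caffe training entry point does.
caffe.set_mode_gpu()
caffe.set_device(0)

solver = caffe.SGDSolver('models/VGG16_RstarCNN/solver.prototxt')
# Start from the ImageNet-pretrained VGG16 weights.
solver.net.copy_from('reference_models/VGG16.v2.caffemodel')

# 40000 iterations, matching vgg16_fast_rstarcnn_joint_iter_40000.
solver.step(40000)
```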
Test a R*CNN classifier:

```Shell
./tools/test_net.py --gpu 0 --def models/VGG16_RstarCNN/test.prototxt \
  --net output/default/voc_2012_trainval/vgg16_fast_rstarcnn_joint_iter_40000.caffemodel
```
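Likewise, loading the trained model in pycaffe for your own experiments follows the standard pattern; a minimal sketch using the files from the command above:

```python
import caffe

caffe.set_mode_gpu()
caffe.set_device(0)

# Load the test-time definition together with the trained weights.
net = caffe.Net('models/VGG16_RstarCNN/test.prototxt',
                'output/default/voc_2012_trainval/'
                'vgg16_fast_rstarcnn_joint_iter_40000.caffemodel',
                caffe.TEST)
print('Blobs: %s' % ', '.join(net.blobs.keys()))
```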
- PASCAL VOC 2012 Action Dataset: place the VOCdevkit2012 inside the $ROOT/data directory. Download the selective search regions for the images from here and place them inside the $ROOT/data/cache directory (see the loading sketch after this list).
- Berkeley Attributes of People Dataset: download the data from here and place it inside the $ROOT/data directory.
- Stanford 40 Dataset: download the data from here and place it inside the $ROOT/data directory. R*CNN achieves 90.85% on the test set (trained models provided under Trained models below).
- Reference models: download the VGG16 reference model trained on ImageNet from here (500M).
- Trained models: download the models as described in the paper from here (3.6G).
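The on-disk format of the cached selective search regions is not specified here. In the Fast R-CNN codebase that R*CNN builds on, cached region files under data/cache are Python pickles; the sketch below assumes that format, and the filename is a placeholder:

```python
import cPickle  # Python 2, matching the Caffe-era tooling

# Assumption: cache files are pickles of per-image proposal boxes,
# as in Fast R-CNN. The filename below is hypothetical.
with open('data/cache/voc_2012_trainval_roidb.pkl', 'rb') as f:
    roidb = cPickle.load(f)

print('Loaded cached regions for %d images' % len(roidb))
```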