DenseNet feature pyramid computation #308

Closed — wants to merge 92 commits

Commits
fc3503f
import of ffld patchwork code; initial skeleton of pyramid stitching …
forresti Jan 15, 2014
176b0cc
add DenseNet prototxt files (from the future, without padding layers)
moskewcz Mar 14, 2014
1821069
stitching 2000x2000 planes from within pycaffe.cpp
forresti Jan 17, 2014
f00eb3f
designing jpeg pyramid -> caffe copying stuff
forresti Jan 17, 2014
ecbd74e
setting up input and output blobs for featpyramid
forresti Jan 17, 2014
f723879
getting stitched planes and visualizing them
forresti Jan 18, 2014
c005e6b
creating multiple scales / planes and returning them to python
forresti Jan 18, 2014
96bfea8
add disabled code for rand padding scales inside planes
forresti Jan 19, 2014
4f25c7c
remove old featpyramid test; keeping the multiscale test
forresti Jan 21, 2014
a99c574
add STITCHPYRAMID_{SRC,HDRS} to Makefile (but no usage yet)
moskewcz Mar 13, 2014
5944af5
convinced the make system to also build PyramidStitcher when we do 'm…
forresti Jan 21, 2014
f76ca85
use single imagenet mean pixel for data centering in feature pyramid
moskewcz Mar 18, 2014
78fb085
streamlining upsampling/downsampling code. going to return scale fact…
forresti Jan 22, 2014
c05ae56
new downsampling param passing looks good.
forresti Jan 22, 2014
d506d66
remove Eigen debris
forresti Jan 22, 2014
b1c06e0
testing boost::python functonality of returning dictionaries from C++…
forresti Jan 22, 2014
7c1ea80
now returning a DICT instead of a LIST from extract_features(). the n…
forresti Jan 22, 2014
0af3167
avoid having 'scales' go out of scope...
forresti Jan 22, 2014
45a327f
returning scales that make sense
forresti Jan 22, 2014
e904f25
cleanup. not officially supporting 'return unstitched features' to py…
forresti Jan 23, 2014
178c693
-- initial work on matlab exports for feature pyramid extraction.
moskewcz Jan 24, 2014
3decf2f
add disabled code for writing pyramid to image files
forresti Jan 23, 2014
a2f3361
-- use p_vect_float instead of mxArray for buffering feature pyramid …
moskewcz Jan 24, 2014
30b8049
moved stitch_pyramid from caffe/python/caffe/imagenet/stitch_pyramid …
forresti Jan 27, 2014
0cbbe39
fixed demo
forresti Jan 27, 2014
12e2fcd
rename extract_featpyramid -> convnet_featpyramid in matlab API
forresti Jan 27, 2014
0e17940
partially implimented padding: background->mean; edges linear interp …
forresti Jan 27, 2014
ad8a025
add interpolation-to-mean to corner padding for image pyramid patchwo…
moskewcz Jan 28, 2014
b0af64c
add timers
moskewcz Mar 18, 2014
77b207b
pycaffe/stitchpyramid related makefile tweaks; add missing (PyArrayOb…
moskewcz Jan 29, 2014
efbd4cb
change default plane size in featpyramid matlab demo to 1100x1100.
moskewcz Feb 5, 2014
d7027dd
indentation and comments
forresti Jan 29, 2014
6dee408
add visualization to demo
forresti Jan 29, 2014
7b785fc
setting up code to rip a bbox out of a pyra
forresti Jan 30, 2014
252737a
running hog pyra for regression test
forresti Jan 30, 2014
84fa075
testing scale calc
forresti Jan 30, 2014
59f7a39
bbox slicing from pyra looks reasonable now. (in hog space... will te…
forresti Jan 30, 2014
f5073c5
working on only using pyramids and no on-demand features in DPM train…
forresti Jan 31, 2014
fba4623
first pass at precomputing features instead of warping
forresti Jan 31, 2014
ed97d45
calling precompute_gt_bbox_features() from the voc5 training code
forresti Jan 31, 2014
e0aafd0
looks like there's systematic shift of bboxes slightly left from wher…
forresti Feb 5, 2014
ed74ae5
centering approx bbox on each exemplar in hog space. (previously, was…
forresti Feb 5, 2014
223e01a
centering hog features, this time with ceil() instead of round(). als…
forresti Feb 5, 2014
1b2bfb3
python API now returns imwidth, imheight of input image
forresti Feb 6, 2014
e8fa849
setting up shared DenseNet_Params class for both the matlab and pytho…
forresti Feb 6, 2014
8ca7682
factoring shared matlab+python code up to featpyra_common.cpp
forresti Feb 6, 2014
a2083d2
convnet_featpyramid matlab function takes parameters as second arg
moskewcz Feb 6, 2014
cfb7491
changes to convnet_featpyramid matlab interface
moskewcz Feb 6, 2014
aaa149a
remove test_io function from matcaffe
moskewcz Feb 6, 2014
15aa69a
convnet_featpyramid matlab iface: add imwidth and imheight to output
moskewcz Feb 6, 2014
7643481
convnet_featpyramid matlab iface: add placeholder feat_padx and feat_…
moskewcz Feb 7, 2014
3824282
convnet_featpyramid matlab/python iface changes
moskewcz Feb 7, 2014
655c534
adding the ability to ignore big scales if they don't fit in planes
forresti Feb 7, 2014
ef6fa03
solving problem of 'some pyra scales are too big for planes.' pruning…
forresti Feb 7, 2014
071051e
pruning scales that are smaller than template size
forresti Feb 7, 2014
6b10cf6
debugging voc5 + densenet
forresti Feb 7, 2014
d1ca61e
add a matcaffe debugging printout (only printed prior to assert failu…
moskewcz Feb 7, 2014
0393fe1
assert that individual images fit in the planes
forresti Feb 7, 2014
019cfbf
getting 33% ap on inria with densenet + 1component.
forresti Feb 12, 2014
79021c4
add layer_strides() vector/accessor to net.{cpp,h}
moskewcz Feb 12, 2014
c4961d9
weaken too-strong assertion in PyramidStitcher.cpp on {x,y}Max versus…
moskewcz Feb 12, 2014
d1220a7
rename convnet_subsampling_ratio -> sbin
moskewcz Feb 12, 2014
515ff5e
use get_sbin(net_) to get sbin in pycaffe and matcaffe.
moskewcz Feb 12, 2014
a2a178e
matcaffe: init google logging with "matcaffe" as name
moskewcz Feb 13, 2014
99e3a83
makefile: remove unneeded -UNDEBUG, add commented -DENABLE_ALLOC_TRAC…
moskewcz Feb 13, 2014
2c2ec64
enabling user-configurability of 'minimum desired scale' in the form …
forresti Feb 14, 2014
eacacdb
swap matcaffe demo plane size 2000 -> 1100 (again)
moskewcz Feb 14, 2014
688419f
matcaffe: almost-minimal commit to fix/add feat_min{Width,Height} sup…
moskewcz Feb 14, 2014
0d1cfb2
matcaffe demo: set interval=10 (the default) to agree with comment
moskewcz Feb 14, 2014
cbaea77
remove unneeded and unused defaults from stitch_pyramid() decl
moskewcz Feb 14, 2014
294054f
forgot to initialize img_minHeight_. fixed.
forresti Feb 14, 2014
30d999a
better scheme for convnet_featpyramid to unpack user's params
forresti Feb 14, 2014
42dde72
testing out the 'user-selectable min image scale' stuff in python and…
forresti Feb 14, 2014
67ffe15
improve google logging init for matcaffe but note that it is still wr…
moskewcz Feb 14, 2014
0c83487
simplified demo
forresti Feb 15, 2014
745c0f7
handling 'pull bbox out of pyra' for boxes that are near the edge.
forresti Feb 15, 2014
869ef4d
fix paths in demo
forresti Feb 18, 2014
7c9966f
tweaking demos; looking at receptive fields
forresti Feb 21, 2014
41b3e9b
remove 'keyboard' call
forresti Feb 21, 2014
1b634f8
automating pascal evaluation, and setting up feature caching
forresti Feb 23, 2014
ea449b1
fleshing out cached feature functionality
forresti Feb 23, 2014
4db11fa
various cpu mode tweaks
forresti Mar 4, 2014
cd364b2
sped stitch_pyramid up a bit (maybe 2x?) by avoiding calls to uninlin…
forresti Mar 3, 2014
819aaf4
tweaks to model and training script
forresti Mar 4, 2014
8e8c3a0
less restrictive check on filename
forresti Mar 13, 2014
25a399d
pyramid padding calculation
forresti Mar 13, 2014
d208df7
Makefile: add mkoctfile_mat rule to build .mex file with octave (for …
moskewcz Mar 14, 2014
d342fca
fix Makefile to use $(SHARED_LDFLAGS) instead of a hard-coded '-share…
moskewcz Mar 14, 2014
3c85b8f
add DENSENET_MERGE_TODO and add DenseNet header to README.md
moskewcz Mar 26, 2014
bd3dfa8
update README.md with DenseNet arXiv paper link; fix/polish attributi…
moskewcz Apr 8, 2014
8717483
Update README.md
forresti Apr 8, 2014
cbc7d8c
remove relics of hardcoded sbin
forresti Apr 11, 2014
15 changes: 15 additions & 0 deletions DENSENET_MERGE_TODO
@@ -0,0 +1,15 @@

list of the issues blocking the merge of the DenseNet feature branch:

- critical
-- replacement of GPL'd code (including removal from history)
-- update build process / Makefile to match current practice
-- tests
-- trivial: remove DenseNet README.md header
-- trivial: remove this todo file

- unclear necessity, semantics, and/or priority
-- general cleanup of commit sequence (probably mostly squashing)
-- input interface changes (i.e. jpeg filename as input -> ?)
-- output interface changes (?, but probably something: image support size, multiple layer output, alignment, etc.)
-- if still reading image files after any iface changes and removal of GPL code, use XX instead of YY
43 changes: 36 additions & 7 deletions Makefile
@@ -44,6 +44,10 @@ NONGEN_CXX_SRCS := $(shell find \
-name "*.cpp" -or -name "*.hpp" -or -name "*.cu" -or -name "*.cuh")
LINT_REPORT := $(BUILD_DIR)/cpp_lint.log
FAILED_LINT_REPORT := $(BUILD_DIR)/cpp_lint.error_log
# STITCHPYRAMID is for stitching multiresolution feature pyramids. (exclude test files)
STITCHPYRAMID_SRC := $(shell find src/stitch_pyramid ! -name "test_*.cpp" -name "*.cpp")
STITCHPYRAMID_HDRS := $(shell find src/stitch_pyramid -name "*.h")
STITCHPYRAMID_SO := src/stitch_pyramid/libPyramidStitcher.so
# PY$(PROJECT)_SRC is the python wrapper for $(PROJECT)
PY$(PROJECT)_SRC := python/$(PROJECT)/py$(PROJECT).cpp
PY$(PROJECT)_SO := python/$(PROJECT)/py$(PROJECT).so
@@ -96,13 +100,18 @@ LIBRARIES := cudart cublas curand \
PYTHON_LIBRARIES := boost_python python2.7
WARNINGS := -Wall

COMMON_FLAGS := -DNDEBUG -O2 $(foreach includedir,$(INCLUDE_DIRS),-I$(includedir))
COMMON_FLAGS := -O2 $(foreach includedir,$(INCLUDE_DIRS),-I$(includedir))
#COMMON_FLAGS := -O3 -DENABLE_ALLOC_TRACE $(foreach includedir,$(INCLUDE_DIRS),-I$(includedir))
#COMMON_FLAGS := -DNDEBUG -O2 $(foreach includedir,$(INCLUDE_DIRS),-I$(includedir))
CXXFLAGS += -pthread -fPIC $(COMMON_FLAGS)
NVCCFLAGS := -ccbin=$(CXX) -Xcompiler -fPIC $(COMMON_FLAGS)
LDFLAGS += $(foreach librarydir,$(LIBRARY_DIRS),-L$(librarydir)) \
$(foreach library,$(LIBRARIES),-l$(library))
PYTHON_LDFLAGS := $(LDFLAGS) $(foreach library,$(PYTHON_LIBRARIES),-l$(library))

#SHARED_LDFLAGS := -shared -Wl,--no-undefined # for gcc, this is probably a more sane default: any undefined syms will give a link error
SHARED_LDFLAGS := -shared


##############################
# Define build targets
@@ -139,22 +148,42 @@ tools: init $(TOOL_BINS)

examples: init $(EXAMPLE_BINS)


stitch: $(STITCHPYRAMID_SO)

.PHONY : stitch

$(STITCHPYRAMID_SO): $(STITCHPYRAMID_HDRS) $(STITCHPYRAMID_SRC)
$(CXX) $(SHARED_LDFLAGS) -o $(STITCHPYRAMID_SO) $(STITCHPYRAMID_SRC) $(CXXFLAGS) -ljpeg

py$(PROJECT): py

py: init $(STATIC_NAME) $(PY$(PROJECT)_SRC) $(PROTO_GEN_PY)
$(CXX) -shared -o $(PY$(PROJECT)_SO) $(PY$(PROJECT)_SRC) \
py: init $(STATIC_NAME) $(PY$(PROJECT)_SRC) $(PROTO_GEN_PY) $(STITCHPYRAMID_SO)
$(CXX) $(SHARED_LDFLAGS) -o $(PY$(PROJECT)_SO) $(PY$(PROJECT)_SRC) -L./src/stitch_pyramid -lPyramidStitcher -I./src/stitch_pyramid \
$(STATIC_NAME) $(CXXFLAGS) $(PYTHON_LDFLAGS)
@echo

mat$(PROJECT): mat

mat: init $(STATIC_NAME) $(MAT$(PROJECT)_SRC)
$(MATLAB_DIR)/bin/mex $(MAT$(PROJECT)_SRC) $(STATIC_NAME) \
CXXFLAGS="\$$CXXFLAGS $(CXXFLAGS) $(WARNINGS)" \
CXXLIBS="\$$CXXLIBS $(LDFLAGS)" \
mat: init $(STATIC_NAME) $(MAT$(PROJECT)_SRC) $(STITCHPYRAMID_SO)
$(MATLAB_DIR)/bin/mex -g $(MAT$(PROJECT)_SRC) $(STATIC_NAME) \
CXXFLAGS="\$$CXXFLAGS $(CXXFLAGS) $(WARNINGS)" -I./python/caffe \
CXXLIBS="\$$CXXLIBS $(LDFLAGS)" -L./src/stitch_pyramid -lPyramidStitcher \
-o $(MAT$(PROJECT)_SO)
@echo

mkoctfile_mat$(PROJECT): mkoctfile_mat

# CXXFLAGS="-Wall -fpic -O2" mkoctfile --mex matlab/caffe/matcaffe.cpp libcaffe.a -pthread -I/usr/local/include -I/usr/include/python2.7 -I/usr/local/lib/python2.7/dist-packages/numpy/core/include -I./src -I./include -I/usr/local/cuda/include -I/opt/intel/mkl/include -Wall -L/usr/lib -L/usr/local/lib -L/usr/local/cuda/lib64 -L/usr/local/cuda/lib -L/opt/intel/mkl/lib -L/opt/intel/mkl/lib/intel64 -lcudart -lcublas -lcurand -lprotobuf -lopencv_core -lopencv_highgui -lglog -lmkl_rt -lmkl_intel_thread -lleveldb -lsnappy -lpthread -lboost_system -lopencv_imgproc -L/home/moskewcz/git_work/caffe/python/caffe/stitch_pyramid -lPyramidStitcher -I./python/caffe -o matlab/caffe/caffe


mkoctfile_mat: init $(STATIC_NAME) $(MAT$(PROJECT)_SRC) $(STITCHPYRAMID_SO)
CXXFLAGS="$$CXXFLAGS $(CXXFLAGS)" \
DL_LDFLAGS="$$DL_LDFLAGS $(SHARED_LDFLAGS)" \
LFLAGS="$$LFLAGS $(LDFLAGS) -L./src/stitch_pyramid -lPyramidStitcher" \
mkoctfile --mex -g -v $(MAT$(PROJECT)_SRC) $(STATIC_NAME) -o $(MAT$(PROJECT)_SO)
@echo

$(NAME): init $(PROTO_OBJS) $(OBJS)
$(CXX) -shared -o $(NAME) $(OBJS) $(CXXFLAGS) $(LDFLAGS) $(WARNINGS)
@echo
72 changes: 72 additions & 0 deletions README.md
@@ -1,3 +1,75 @@
# -- WARNING: THIS IS A FORK/FEATURE BRANCH OF [CAFFE](http://github.com/BVLC/caffe) (PR PENDING). --
## DenseNet

[DenseNet: Implementing Efficient ConvNet Descriptor Pyramids](http://arxiv.org/abs/1404.1869)<br>
Forrest Iandola, Matt Moskewicz, Sergey Karayev, Ross Girshick, Trevor Darrell, and Kurt Keutzer.<br>
arXiv technical report, April 2014.

<b>Licensing</b><br>
Except where noted in individual files, all new code files / changes in this branch are:
Copyright (c) 2013 Matthew Moskewicz and Forrest Iandola
and are BSD 2-Clause licensed with the same as the original source (see [LICENSE](LICENSE)).

The two example images are taken from the PASCAL vision benchmark set.

<b>DenseNet APIs in Matlab and Python</b><br>
The DenseNet API is fairly similar to the popular `featpyramid.m` HOG extraction API from the [voc-release5 Deformable Parts Model code](https://github.com/rbgirshick/voc-dpm/blob/master/features/featpyramid.m). Our primary API function is called `convnet_featpyramid()`.

<b>Running DenseNet in Matlab</b><br>
`caffe/matlab/caffe/featpyramid_matcaffe_demo.m` is a good example to start with. Or, we can walk through it together here:

```matlab
%Caffe setup:
model_def_file = 'CAFFE_ROOT/python/caffe/imagenet/imagenet_rcnn_batch_1_input_1100x1100_output_conv5.prototxt'
% NOTE: you'll have to get the pre-trained ILSVRC network
model_file = 'path/to/alexnet_train_iter_470000';
caffe('init', model_def_file, model_file);
caffe('set_mode_gpu') %CPU mode works too
caffe('set_phase_test')

%using DenseNet:
image = 'myImage.jpg' %must be JPEG
pyra = convnet_featpyramid(image)
```

<b>Running DenseNet in Matlab (advanced users)</b><br>
```matlab
% (you need to do Caffe setup first, as shown in above example)
image = 'myImage.jpg'

%optional parameters (code will still run with incomplete or nonexistent pyra_params):
pyra_params.interval = 5; %number of scales per octave in the pyramid
pyra_params.img_padding = 16 %padding around image (in pixels)
pyra_params.feat_minWidth = 6; %select smallest scale in pyramid (in output feature dimensions)
pyra_params.feat_minHeight = 6; %in output feature dimensions
pyra = convnet_featpyramid(image, pyra_params)

%taking a look at the output pyramid:
scales: [40x1 double] %resolution of each pyramid scale
feat: {40x1 cell} %descriptors (one cell array per scale)
imwidth: 353 %input image size in pixels
imheight: 500
feat_padx: 1 %border padding around descriptors (img_padding/sbin)
feat_pady: 1
sbin: 16 %approx. downsampling factor from pixels to descriptors
padx: 1 %extra copy of feat_pad{x,y}. silly...should remove?
pady: 1
num_levels: 40 %num scales in pyramid
valid_levels: [40x1 logical]
```
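To make the interplay of `interval`, `sbin`, and the `feat_min{Width,Height}` pruning concrete, here is a rough Python sketch of how the list of pyramid scale factors could be derived. This is an illustration only, not the actual stitch_pyramid logic — the real code also handles image padding and plane packing:

```python
def pyramid_scales(imwidth, imheight, interval=10, sbin=16,
                   feat_minWidth=1, feat_minHeight=1):
    """Scale factor for pyramid level i is 2^(-i / interval);
    stop once the output descriptor grid would be too small."""
    scales = []
    i = 0
    while True:
        sc = 2.0 ** (-i / float(interval))
        # approximate descriptor-grid size at this scale
        feat_w = int(round(sc * imwidth / sbin))
        feat_h = int(round(sc * imheight / sbin))
        if feat_w < feat_minWidth or feat_h < feat_minHeight:
            break
        scales.append(sc)
        i += 1
    return scales

# 353x500 input, as in the example pyra output above
scales = pyramid_scales(353, 500, feat_minWidth=6, feat_minHeight=6)
```

Each octave halves the resolution, so level `i` within an octave sits at `2^(-i/interval)`; pruning stops the pyramid once a level's descriptor grid falls below the requested minimum.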

<b>Running DenseNet in Python</b><br>
The Python API is similar to the Matlab API described above.
`caffe/python/caffe/featpyramid_demo.py` is a good starting point for using DenseNet in Python.


Other notes:
- As with many other operations in Caffe, you'll need to download a pretrained Alexnet CNN prior to running our DenseNet demo.
- For most of our default examples, we use the 'Alexnet' network and output descriptors from the conv5 layer. You can adjust these decisions by editing the 'prototxt' files used at setup time.


## Original Caffe README.md follows

[Caffe: Convolutional Architecture for Fast Feature Extraction](http://caffe.berkeleyvision.org)

Created by [Yangqing Jia](http://daggerfs.com), UC Berkeley EECS department.
79 changes: 79 additions & 0 deletions include/caffe/featpyra_common.hpp
@@ -0,0 +1,79 @@
#include "caffe/imagenet_mean.hpp"
#include "boost/shared_ptr.hpp"
#include "caffe/caffe.hpp"
#include <vector>

namespace caffe {

using namespace std;
using boost::shared_ptr;

//switch RGB to BGR indexing (for Caffe convention)
inline int get_BGR(int channel_RGB) { assert( channel_RGB < 3 ); return 2 - channel_RGB; }

struct densenet_params_t {
uint32_t interval; // # scales per octave
uint32_t img_padding; // in image pixels (at scale=1.0). could later allow per-dim img_padx/img_pady as alternative.
uint32_t feat_minWidth; //smallest desired output scale, in terms of descriptor dimensions
uint32_t feat_minHeight;
densenet_params_t( void ) { // default values
interval = 10;
img_padding = 16;
feat_minWidth = 1;
feat_minHeight = 1;
}
};

typedef vector< float > vect_float;
typedef shared_ptr< vect_float > p_vect_float;
typedef vector< p_vect_float > vect_p_vect_float;

typedef vector< uint32_t > vect_uint32_t;
typedef shared_ptr< vect_uint32_t > p_vect_uint32_t;

// a sketch of a possible shared output type for python/matlab interfaces. missing dims for feats.
struct densenet_output_t {
p_vect_float imwidth; // not including padding
p_vect_float imheight;
p_vect_uint32_t feat_padx;
p_vect_uint32_t feat_pady;
p_vect_float scales;
vect_p_vect_float feats;
uint32_t nb_planes;
};

static void raw_do_forward( shared_ptr<Net<float> > net_, vect_p_vect_float const & bottom ) {
vector<Blob<float>*>& input_blobs = net_->input_blobs();
CHECK_EQ(bottom.size(), input_blobs.size());
for (unsigned int i = 0; i < input_blobs.size(); ++i) {
assert( bottom[i]->size() == uint32_t(input_blobs[i]->count()) );
const float* const data_ptr = &bottom[i]->front();
switch (Caffe::mode()) {
case Caffe::CPU:
memcpy(input_blobs[i]->mutable_cpu_data(), data_ptr,
sizeof(float) * input_blobs[i]->count());
break;
case Caffe::GPU:
cudaMemcpy(input_blobs[i]->mutable_gpu_data(), data_ptr,
sizeof(float) * input_blobs[i]->count(), cudaMemcpyHostToDevice);
break;
default:
LOG(FATAL) << "Unknown Caffe mode.";
} // switch (Caffe::mode())
}
//const vector<Blob<float>*>& output_blobs = net_->ForwardPrefilled();
net_->ForwardPrefilled();
}

// get sbin as the product of layer strides. note that a stride of 0 in the
// caffe strides vector indicates a layer has no stride; such layers
// are ignored in this calculation
static uint32_t get_sbin( shared_ptr<Net<float> > net_ ) {
vect_uint32_t const & strides = net_->layer_strides();
uint32_t ret = 1;
for( vect_uint32_t::const_iterator i = strides.begin(); i != strides.end(); ++i ) { if( *i ) { ret *= (*i); } }
return ret;
}

template< typename T > inline std::string str(T const & i) { std::stringstream s; s << i; return s.str(); } // convert T i to string
}
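As a sanity check of the stride-product rule in `get_sbin()` above: for an Alexnet-style network truncated at conv5, the strided layers are conv1 (stride 4), pool1 (stride 2), and pool2 (stride 2), giving sbin = 16 — consistent with the `sbin: 16` in the README example. A Python equivalent (the stride values below are illustrative assumptions, not read from a prototxt):

```python
def get_sbin(strides):
    # a stride of 0 means "layer has no stride" and is skipped,
    # mirroring get_sbin() in featpyra_common.hpp
    sbin = 1
    for s in strides:
        if s:
            sbin *= s
    return sbin

# e.g. conv1=4, pool1=2, pool2=2, remaining layers stride-less (0)
sbin = get_sbin([4, 2, 0, 2, 0, 0, 0])
```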
18 changes: 18 additions & 0 deletions include/caffe/imagenet_mean.hpp
@@ -0,0 +1,18 @@
#ifndef IMAGENET_MEAN_H
#define IMAGENET_MEAN_H

//mean of all imagenet classification images
// calculation:
// 1. mean of all imagenet images, per pixel location
// 2. take the mean image, and get the mean pixel for R,G,B
// (did it this way because we already had the 'mean of all images, per pixel location')


#define IMAGENET_MEAN_R 122.67f
#define IMAGENET_MEAN_G 116.66f
#define IMAGENET_MEAN_B 104.00f

static float const IMAGENET_MEAN_RGB[3] = {IMAGENET_MEAN_R, IMAGENET_MEAN_G, IMAGENET_MEAN_B};
static float const IMAGENET_MEAN_BGR[3] = {IMAGENET_MEAN_B, IMAGENET_MEAN_G, IMAGENET_MEAN_R};

#endif
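These mean constants combine with the RGB-to-BGR index flip from `get_BGR()` in featpyra_common.hpp; a small Python sketch of the intended mapping:

```python
IMAGENET_MEAN_RGB = (122.67, 116.66, 104.00)

def get_BGR(channel_RGB):
    # switch RGB to BGR indexing (Caffe convention)
    assert channel_RGB < 3
    return 2 - channel_RGB

# the BGR mean is the RGB mean with channels reversed
IMAGENET_MEAN_BGR = tuple(IMAGENET_MEAN_RGB[get_BGR(c)] for c in range(3))
```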
3 changes: 3 additions & 0 deletions include/caffe/net.hpp
@@ -62,6 +62,8 @@ class Net {
inline const string& name() { return name_; }
// returns the layer names
inline const vector<string>& layer_names() { return layer_names_; }
// returns the layer strides
inline const vector<uint32_t>& layer_strides() { return layer_strides_; }
// returns the blob names
inline const vector<string>& blob_names() { return blob_names_; }
// returns the blobs
@@ -91,6 +93,7 @@ class Net {
// Individual layers in the net
vector<shared_ptr<Layer<Dtype> > > layers_;
vector<string> layer_names_;
vector<uint32_t> layer_strides_;
vector<bool> layer_need_backward_;
// blobs stores the blobs that store intermediate results between the
// layers.
43 changes: 43 additions & 0 deletions matlab/caffe/convnet_featpyramid.m
@@ -0,0 +1,43 @@

% wrapper for the DenseNet 'convnet_featpyramid' caffe call.
% provides some extra output fields that you wouldn't get from a raw caffe('convnet_featpyramid', ...) call

% YOU MUST CALL caffe('init', ...) BEFORE RUNNING THIS.
function pyra = convnet_featpyramid(imgFname, pyra_params)

%set defaults for params not passed by user
if( ~exist('pyra_params') || ~isfield(pyra_params, 'interval') )
pyra_params.interval = 5;
end
if( ~exist('pyra_params') || ~isfield(pyra_params, 'img_padding') )
pyra_params.img_padding = 16;
end
if( ~exist('pyra_params') || ~isfield(pyra_params, 'feat_minWidth') )
pyra_params.feat_minWidth = 1;
end
if( ~exist('pyra_params') || ~isfield(pyra_params, 'feat_minHeight') )
pyra_params.feat_minHeight = 1;
end

% compute the pyramid:
pyra = caffe('convnet_featpyramid', imgFname, pyra_params);

% add DPM-style fields:
pyra.padx = pyra.feat_padx; % for DPM conventions
pyra.pady = pyra.feat_pady;
pyra.num_levels = length(pyra.scales);
pyra.valid_levels = true(pyra.num_levels, 1);

pyra.imsize = [pyra.imheight pyra.imwidth];
pyra.feat = permute_feat(pyra.feat); % [d h w] -> [h w d]
pyra.scales = double(pyra.scales); %get_detection_trees prefers double
end

% input: pyra.feat{:}, with dims [d h w]
% output: pyra.feat{:}, with dims [h w d]
function feat = permute_feat(feat)
for featIdx = 1:length(feat)
feat{featIdx} = permute( feat{featIdx}, [2 3 1] );
end
end
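The `permute_feat` helper above reorders each scale's descriptor array from [d h w] to [h w d], matching DPM conventions. An equivalent NumPy sketch of the same axis permutation:

```python
import numpy as np

def permute_feat(feat):
    # [d, h, w] -> [h, w, d] for each pyramid level
    return [np.transpose(f, (1, 2, 0)) for f in feat]

# one fake conv5-style level: 256 channels over a 10x15 grid
feat = [np.zeros((256, 10, 15), dtype=np.float32)]
out = permute_feat(feat)
```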
