Blobs are N-D arrays (for N not necessarily equals 4) #1486

Closed · wants to merge 65 commits

Commits (showing changes from all 65 commits)
2f869e7
clarify draw_net.py usage: net prototxt, not caffemodel
shelhamer Jan 25, 2015
61c63f6
[docs] ask install + hardware questions on caffe-users
shelhamer Jan 25, 2015
4cc8195
[docs] send API link to class list
shelhamer Jan 29, 2015
1f7c3de
[docs] add check mode hint to CPU-only mode error
shelhamer Jan 29, 2015
8b96472
Brief explanation of SLICE layer's attributes
boechat107 Jan 30, 2015
75d0e16
lint 1f7c3de
shelhamer Jan 30, 2015
e3c895b
Merge pull request #1817 from boechat107/patch-1
shelhamer Jan 30, 2015
1e0d49a
Correct 'epochs' to 'iterations'
Feb 16, 2015
3e9b050
Merge pull request #1879 from bamos/patch-1
sguada Feb 16, 2015
f998127
Merge pull request #1849 from BVLC/next
shelhamer Feb 20, 2015
af01b9c
Updated the path for get_ilsvrc_aux.sh to match what is found in the …
Feb 20, 2015
5ee85b7
Merge pull request #1914 from eerwitt/master
shelhamer Feb 20, 2015
eabbccd
[build] fix dynamic linking of tools
shelhamer Feb 20, 2015
682d9da
Merge pull request #1921 from shelhamer/fix-tool-linking
shelhamer Feb 20, 2015
5a26333
check caffe tool runs in runtest
shelhamer Feb 21, 2015
a1e951d
ignore pycharm files
Feb 22, 2015
fca05c3
set proper CMAKE_INSTALL_RPATH for _caffe.so and tools
Feb 22, 2015
645aa03
fixed bug in install-tree: _caffe.so installed by install(TARGET ...)…
Feb 22, 2015
5e06d16
minor cmake sumamry log fix
Feb 22, 2015
569ae01
cpp_lint.py fails silently with Python3 (which is the default on some…
jsupancic Feb 22, 2015
cb1f4d6
Merge pull request #1939 from Nerei/bugfix/install_rpath_for_pycaffe
shelhamer Feb 22, 2015
845f9ea
APPLE was misspelled. in Line 27
spmallick Feb 24, 2015
486360d
Merge pull request #1948 from spmallick/patch-1
longjon Feb 24, 2015
c091197
Merge pull request #1941 from jsupancic/cpp_lint_python2
longjon Feb 24, 2015
360dbfd
Merge pull request #1926 from shelhamer/test-caffe-tool
longjon Feb 24, 2015
54037d3
Making python3 work with cmake and the new python wrapper
philkr Feb 17, 2015
b915f9d
Merge pull request #1923 from philkr/python3_master
longjon Feb 24, 2015
4a3887a
fixed matcaffe printout to specify num of args (now including train/t…
forresti Feb 25, 2015
d2beb8a
Replaced illegal tab in Makefile with spaces.
gustavla Feb 25, 2015
1377e1b
Makefile fix for OS X 10.10
sergeyk Feb 25, 2015
3a1195a
Merge pull request #1961 from sergeyk/master
shelhamer Feb 25, 2015
b9aa166
Merge pull request #1960 from gustavla/makefile_fix
longjon Feb 25, 2015
c05d91d
Blobs are ND arrays (for N not necessarily equals 4).
jeffdonahue Nov 26, 2014
2a1b7dd
Add BlobShape message; use for Net input shapes
jeffdonahue Jan 1, 2015
e525c5b
add offset, {data,diff}_at nd blob accessors
jeffdonahue Feb 4, 2015
e944b4a
TestBlob: test that legacy BlobProtos are correctly handled by ShapeE…
jeffdonahue Nov 30, 2014
5a4903d
InnerProductLayer weights are 2D; biases are 1D
jeffdonahue Nov 26, 2014
138862a
Fix sparse GaussianFiller for new IPLayer weight axes
jeffdonahue Feb 16, 2015
99529dc
InnerProductLayer can multiply along any axis
jeffdonahue Nov 29, 2014
19070c3
ConvLayer biases are 1D
jeffdonahue Nov 30, 2014
b6eed34
LossLayer output is 0D (scalar)
jeffdonahue Nov 26, 2014
0cdad98
AccuracyLayer output is 0D (scalar)
jeffdonahue Nov 30, 2014
fe2e760
AccuracyLayer generalized to N instance axes
jeffdonahue Jan 31, 2015
5efa84d
Test{Net,Solver} fixes for AccuracyLayer generalization
jeffdonahue Feb 13, 2015
919ba57
EltwiseLayer need not assume old 4D dim names
jeffdonahue Nov 26, 2014
2933340
FlattenLayer: generalized Blob axes
jeffdonahue Nov 26, 2014
9256ff9
common_layers.hpp: remove unused "Blob col_bob_"
jeffdonahue Nov 26, 2014
af12ddb
TestConcatLayer: fix style errors
jeffdonahue Nov 26, 2014
d56d8cb
TestConcatLayer: add forward/gradient tests for concatenation along num
jeffdonahue Nov 26, 2014
913adf0
ConcatLayer: generalized Blob axes
jeffdonahue Nov 26, 2014
c94fbc5
SliceLayer: generalized Blob axes
jeffdonahue Nov 26, 2014
aed4ab3
SoftmaxLayer: generalized Blob axes
jeffdonahue Feb 15, 2015
1dc5caf
CuDNNSoftmaxLayer: generalized Blob axes
jeffdonahue Feb 10, 2015
1f66260
SoftmaxLossLayer generalized like SoftmaxLayer
jeffdonahue Jan 31, 2015
a1ea7f1
SplitLayer: change Reshape(n,h,c,w) to ReshapeLike(...)
jeffdonahue Nov 26, 2014
4759e41
HDF5DataLayer shapes output according to HDF5 shape
jeffdonahue Nov 26, 2014
fd6f89e
DataLayer outputs 1D labels
jeffdonahue Nov 26, 2014
32fc958
MemoryDataLayer outputs 1D labels
jeffdonahue Nov 26, 2014
69c9a66
ImageDataLayer outputs 1D labels
jeffdonahue Nov 26, 2014
566602c
WindowDataLayer outputs 1D labels
jeffdonahue Nov 26, 2014
88f299a
EuclideanLossLayer: generalized Blob axes
jeffdonahue Nov 30, 2014
6e38795
DummyDataLayer outputs blobs of arbitrary shape
jeffdonahue Jan 1, 2015
2273406
Add CHECK_EQ(4, ...)s to "vision layers" to enforce that the
jeffdonahue Jan 16, 2015
a359a43
PyBlobs support generalized axes
jeffdonahue Jan 2, 2015
a8023e2
Add option not to reshape to Blob::FromProto; use when loading Blobs
jeffdonahue Jan 31, 2015
3 changes: 3 additions & 0 deletions .gitignore
@@ -44,6 +44,9 @@
 # QtCreator files
 *.user
 
+# PyCharm files
+.idea
+
 # OSX dir files
 .DS_Store
3 changes: 2 additions & 1 deletion CMakeLists.txt
@@ -17,14 +17,15 @@
 caffe_option(CPU_ONLY "Build Caffe wihtout CUDA support" OFF) # TODO: rename to
 caffe_option(USE_CUDNN "Build Caffe with cuDNN libary support" ON IF NOT CPU_ONLY)
 caffe_option(BUILD_SHARED_LIBS "Build shared libraries" ON)
 caffe_option(BUILD_python "Build Python wrapper" ON)
+set(python_version "2" CACHE STRING "Specify which python version to use")
 caffe_option(BUILD_matlab "Build Matlab wrapper" OFF IF UNIX OR APPLE)
 caffe_option(BUILD_docs "Build documentation" ON IF UNIX OR APPLE)
 
 # ---[ Dependencies
 include(cmake/Dependencies.cmake)
 
 # ---[ Flags
-if(UNIX OR APLE)
+if(UNIX OR APPLE)
   set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -fPIC -Wall")
 endif()
13 changes: 10 additions & 3 deletions Makefile
@@ -261,7 +261,8 @@ ifneq (,$(findstring clang++,$(CXX)))
 else ifneq (,$(findstring g++,$(CXX)))
 	STATIC_LINK_COMMAND := -Wl,--whole-archive $(STATIC_NAME) -Wl,--no-whole-archive
 else
-$(error Cannot static link with the $(CXX) compiler.)
+# The following line must not be indented with a tab, since we are not inside a target
+$(error Cannot static link with the $(CXX) compiler)
 endif
 
 # Debugging
@@ -319,7 +320,7 @@ else
 	# 10.10 has accelerate while 10.9 has veclib
 	XCODE_CLT_VER := $(shell pkgutil --pkg-info=com.apple.pkg.CLTools_Executables | grep -o 'version: 6')
 	ifneq (,$(findstring version: 6,$(XCODE_CLT_VER)))
-		BLAS_INCLUDE ?= /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.10.sdk/System/Library/Frameworks/Accelerate.framework/Versions/Current/Frameworks/vecLib.framework/Headers/
+		BLAS_INCLUDE ?= /System/Library/Frameworks/Accelerate.framework/Versions/Current/Frameworks/vecLib.framework/Headers/
 		LDFLAGS += -framework Accelerate
 	else
 		BLAS_INCLUDE ?= /System/Library/Frameworks/vecLib.framework/Versions/Current/Headers/
@@ -450,6 +451,7 @@ $(MAT$(PROJECT)_SO): $(MAT$(PROJECT)_SRC) $(STATIC_NAME)
 		CXXLIBS="\$$CXXLIBS $(STATIC_LINK_COMMAND) $(LDFLAGS)" -output $@
 
 runtest: $(TEST_ALL_BIN)
+	$(TOOL_BUILD_DIR)/caffe
 	$(TEST_ALL_BIN) $(TEST_GPUID) --gtest_shuffle $(TEST_FILTER)
 
 pytest: py
@@ -537,7 +539,12 @@ $(TOOL_BUILD_DIR)/%: $(TOOL_BUILD_DIR)/%.bin | $(TOOL_BUILD_DIR)
 	@ $(RM) $@
 	@ ln -s $(abspath $<) $@
 
-$(TOOL_BINS) $(EXAMPLE_BINS): %.bin : %.o | $(DYNAMIC_NAME)
+$(TOOL_BINS): %.bin : %.o | $(DYNAMIC_NAME)
 	@ echo CXX/LD -o $@
 	$(Q)$(CXX) $< -o $@ $(LINKFLAGS) -l$(PROJECT) $(LDFLAGS) \
 		-Wl,-rpath,$(ORIGIN)/../lib
+
+$(EXAMPLE_BINS): %.bin : %.o | $(DYNAMIC_NAME)
+	@ echo CXX/LD -o $@
+	$(Q)$(CXX) $< -o $@ $(LINKFLAGS) -l$(PROJECT) $(LDFLAGS) \
+		-Wl,-rpath,$(ORIGIN)/../../lib
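Two reading notes on the hunks above. First, runtest now launches the caffe tool itself before the gtest suite, per the "check caffe tool runs in runtest" commit. Second, the single link rule for tools and examples is split in two because the binaries sit at different depths relative to the built library: assuming the standard ELF $ORIGIN rpath semantics, $(ORIGIN) resolves to the binary's own directory at run time, so a tool under build/tools/ reaches lib/ via ../lib, while an example, nested one directory deeper, needs ../../lib.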
39 changes: 33 additions & 6 deletions cmake/Dependencies.cmake
@@ -92,12 +92,39 @@ endif()
 
 # ---[ Python
 if(BUILD_python)
-  # disable Python 3 search
-  find_package(PythonInterp 2.7)
-  find_package(PythonLibs 2.7)
-  find_package(NumPy 1.7.1)
-  find_package(Boost 1.46 COMPONENTS python)
-
+  if(NOT "${python_version}" VERSION_LESS "3.0.0")
+    # use python3
+    find_package(PythonInterp 3.0)
+    find_package(PythonLibs 3.0)
+    find_package(NumPy 1.7.1)
+    # Find the matching boost python implementation
+    set(version ${PYTHONLIBS_VERSION_STRING})
+
+    STRING( REPLACE "." "" boost_py_version ${version} )
+    find_package(Boost 1.46 COMPONENTS "python-py${boost_py_version}")
+    set(Boost_PYTHON_FOUND ${Boost_PYTHON-PY${boost_py_version}_FOUND})
+
+    while(NOT "${version}" STREQUAL "" AND NOT Boost_PYTHON_FOUND)
+      STRING( REGEX REPLACE "([0-9.]+).[0-9]+" "\\1" version ${version} )
+      STRING( REGEX MATCHALL "([0-9.]+).[0-9]+" has_more_version ${version} )
+      if("${has_more_version}" STREQUAL "")
+        break()
+      endif()
+
+      STRING( REPLACE "." "" boost_py_version ${version} )
+      find_package(Boost 1.46 COMPONENTS "python-py${boost_py_version}")
+      set(Boost_PYTHON_FOUND ${Boost_PYTHON-PY${boost_py_version}_FOUND})
+    endwhile()
+    if(NOT Boost_PYTHON_FOUND)
+      find_package(Boost 1.46 COMPONENTS python)
+    endif()
+  else()
+    # disable Python 3 search
+    find_package(PythonInterp 2.7)
+    find_package(PythonLibs 2.7)
+    find_package(NumPy 1.7.1)
+    find_package(Boost 1.46 COMPONENTS python)
+  endif()
   if(PYTHONLIBS_FOUND AND NUMPY_FOUND AND Boost_PYTHON_FOUND)
     set(HAVE_PYTHON TRUE)
   endif()
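Together with the python_version cache variable added in CMakeLists.txt above, this selects the target Python at configure time. For Python 3 the script searches for a version-suffixed Boost.Python component, repeatedly shortening the detected version string (for example python-py343, then python-py34) until a match is found, and falls back to the plain python component otherwise. An illustrative configure invocation, assuming an out-of-source build directory:

    cmake -Dpython_version=3 ..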
5 changes: 5 additions & 0 deletions cmake/Misc.cmake
@@ -32,6 +32,11 @@ endif()
 set(CMAKE_INSTALL_RPATH_USE_LINK_PATH TRUE CACHE BOOLEAN "Use link paths for shared library rpath")
 set(CMAKE_MACOSX_RPATH TRUE)
 
+list(FIND CMAKE_PLATFORM_IMPLICIT_LINK_DIRECTORIES ${CMAKE_INSTALL_PREFIX}/lib __is_systtem_dir)
+if(${__is_systtem_dir} STREQUAL -1)
+  set(CMAKE_INSTALL_RPATH ${CMAKE_INSTALL_PREFIX}/lib)
+endif()
+
 # ---[ Funny target
 if(UNIX OR APPLE)
   add_custom_target(symlink_to_build COMMAND "ln" "-sf" "${PROJECT_BINARY_DIR}" "${PROJECT_SOURCE_DIR}/build"
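In effect, the added block sets the install rpath to ${CMAKE_INSTALL_PREFIX}/lib only when that directory is not already one of the platform's implicit link directories (list(FIND ...) yields -1 on a miss), so tools and pycaffe's _caffe.so installed to a non-system prefix can locate libcaffe.so without help from LD_LIBRARY_PATH.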
6 changes: 4 additions & 2 deletions cmake/Summary.cmake
@@ -107,17 +107,19 @@ function(caffe_print_configuration_summary)
   caffe_status("  C++ compiler : ${CMAKE_CXX_COMPILER}")
   caffe_status("  Release CXX flags : ${__flags_rel}")
   caffe_status("  Debug CXX flags : ${__flags_deb}")
-  caffe_status("  BUILD_SHARED_LIBS : ${BUILD_SHARED_LIBS}")
   caffe_status("  Build type : ${CMAKE_BUILD_TYPE}")
+  caffe_status("")
+  caffe_status("  BUILD_SHARED_LIBS : ${BUILD_SHARED_LIBS}")
   caffe_status("  BUILD_python : ${BUILD_python}")
   caffe_status("  BUILD_matlab : ${BUILD_matlab}")
   caffe_status("  BUILD_docs : ${BUILD_docs}")
   caffe_status("  CPU_ONLY : ${CPU_ONLY}")
   caffe_status("")
   caffe_status("Dependencies:")
   caffe_status("  BLAS : " APPLE THEN "Yes (vecLib)" ELSE "Yes (${BLAS})")
+  caffe_status("  Boost : Yes (ver. ${Boost_MAJOR_VERSION}.${Boost_MINOR_VERSION})")
   caffe_status("  glog : Yes")
-  caffe_status("  gflags : Yes")
+  caffe_status("  gflags : Yes")
   caffe_status("  protobuf : " PROTOBUF_FOUND THEN "Yes (ver. ${PROTOBUF_VERSION})" ELSE "No" )
   caffe_status("  lmdb : " LMDB_FOUND THEN "Yes (ver. ${LMDB_VERSION})" ELSE "No")
   caffe_status("  Snappy : " SNAPPY_FOUND THEN "Yes (ver. ${Snappy_VERSION})" ELSE "No" )
4 changes: 2 additions & 2 deletions docs/installation.md
@@ -30,7 +30,7 @@ Caffe has several dependencies.
 
 Pycaffe and Matcaffe interfaces have their own natural needs.
 
-* For Python Caffe: `Python 2.7`, `numpy (>= 1.7)`, boost-provided `boost.python`
+* For Python Caffe: `Python 2.7` or `Python 3.3+`, `numpy (>= 1.7)`, boost-provided `boost.python`
 * For MATLAB Caffe: MATLAB with the `mex` compiler.
 
 **cuDNN Caffe**: for fastest operation Caffe is accelerated by drop-in integration of [NVIDIA cuDNN](https://developer.nvidia.com/cudnn). To speed up your Caffe models, install cuDNN then uncomment the `USE_CUDNN := 1` flag in `Makefile.config` when installing Caffe. Acceleration is automatic. For now cuDNN v1 is integrated but see [PR #1731](https://github.com/BVLC/caffe/pull/1731) for v2.
@@ -69,7 +69,7 @@ but we suggest first installing the [Anaconda](https://store.continuum.io/cshop/
 
 To import the `caffe` Python module after completing the installation, add the module directory to your `$PYTHONPATH` by `export PYTHONPATH=/path/to/caffe/python:$PYTHONPATH` or the like. You should not import the module in the `caffe/python/caffe` directory!
 
-*Caffe's Python interface works with Python 2.7. Python 3 or earlier Pythons are your own adventure.*
+*Caffe's Python interface works with Python 2.7. Python 3.3+ should work out of the box without protobuf support. For protobuf support please install protobuf 3.0 alpha (https://developers.google.com/protocol-buffers/). Earlier Pythons are your own adventure.*
 
 #### MATLAB
2 changes: 1 addition & 1 deletion examples/imagenet/readme.md
@@ -26,7 +26,7 @@ We assume that you already have downloaded the ImageNet training data and validation data
 
 You will first need to prepare some auxiliary data for training. This data can be downloaded by:
 
-    ./data/get_ilsvrc_aux.sh
+    ./data/ilsvrc12/get_ilsvrc_aux.sh
 
 The training and validation input are described in `train.txt` and `val.txt` as text listing all the files and their labels. Note that we use a different indexing for labels than the ILSVRC devkit: we sort the synset names in their ASCII order, and then label them from 0 to 999. See `synset_words.txt` for the synset/name mapping.
154 changes: 130 additions & 24 deletions include/caffe/blob.hpp
@@ -1,6 +1,10 @@
 #ifndef CAFFE_BLOB_HPP_
 #define CAFFE_BLOB_HPP_
 
+#include <algorithm>
+#include <string>
+#include <vector>
+
 #include "caffe/common.hpp"
 #include "caffe/proto/caffe.pb.h"
 #include "caffe/syncedmem.hpp"
@@ -19,10 +23,16 @@
 template <typename Dtype>
 class Blob {
  public:
   Blob()
-      : data_(), diff_(), num_(0), channels_(0), height_(0), width_(0),
-        count_(0), capacity_(0) {}
+      : data_(), diff_(), count_(0), capacity_(0) {}
+
+  /// @brief Deprecated; use <code>Blob(const vector<int>& shape)</code>.
   explicit Blob(const int num, const int channels, const int height,
-    const int width);
+      const int width);
+  explicit Blob(const vector<int>& shape);
+
+  /// @brief Deprecated; use <code>Reshape(const vector<int>& shape)</code>.
+  void Reshape(const int num, const int channels, const int height,
+      const int width);
   /**
    * @brief Change the dimensions of the blob, allocating new memory if
    *        necessary.
@@ -37,25 +47,114 @@ class Blob {
    * an error; either Net::Forward or Net::Reshape need to be called to
    * propagate the new input shape to higher layers.
    */
-  void Reshape(const int num, const int channels, const int height,
-      const int width);
+  void Reshape(const vector<int>& shape);
+  void Reshape(const BlobShape& shape);
   void ReshapeLike(const Blob& other);
-  inline int num() const { return num_; }
-  inline int channels() const { return channels_; }
-  inline int height() const { return height_; }
-  inline int width() const { return width_; }
+  inline string shape_string() const {
+    ostringstream stream;
+    for (int i = 0; i < shape_.size(); ++i) {
+      stream << shape_[i] << " ";
+    }
+    stream << "(" << count_ << ")";
+    return stream.str();
+  }
+  inline const vector<int>& shape() const { return shape_; }
+  /**
+   * @brief Returns the dimension of the index-th axis (or the negative index-th
+   *        axis from the end, if index is negative).
+   *
+   * @param index the axis index.
+   *        If 0 <= index < num_axes(), return the dim of the index-th axis.
+   *        If -num_axes <= index <= -1, return the dim of the
+   *        (num_axes() - index)-th axis; e.g., the last axis if index == -1,
+   *        the second to last if index == -2, etc.
+   *        Dies on out of range index.
+   */
+  inline int shape(int index) const {
+    CHECK_GE(index, -num_axes());
+    CHECK_LT(index, num_axes());
+    if (index < 0) {
+      index += num_axes();
+    }
+    return shape_[index];
+  }
+  inline int num_axes() const { return shape_.size(); }
   inline int count() const { return count_; }
+  /**
+   * @brief Compute the count for a range of dimensions.
+   *
+   * @param start_axis The first axis to include in the count.
+   *
+   * @param end_axis The first axis to exclude from the count.
+   */
+  inline int count(int start_axis, int end_axis) const {
+    CHECK_LE(start_axis, end_axis);
+    CHECK_GE(start_axis, 0);
+    CHECK_GE(end_axis, 0);
+    CHECK_LE(start_axis, num_axes());
+    CHECK_LE(end_axis, num_axes());
+    int count = 1;
+    for (int i = start_axis; i < end_axis; ++i) {
+      count *= shape(i);
+    }
+    return count;
+  }
+  /**
+   * @brief Compute the count from a specified starting dimension.
+   *
+   * @param start_axis The first axis to include in the count.
+   */
+  inline int count(int start_axis) const {
+    return count(start_axis, num_axes());
+  }
+
+  /// @brief Deprecated legacy shape accessor num: use shape(0) instead.
+  inline int num() const { return LegacyShape(0); }
+  /// @brief Deprecated legacy shape accessor channels: use shape(1) instead.
+  inline int channels() const { return LegacyShape(1); }
+  /// @brief Deprecated legacy shape accessor height: use shape(2) instead.
+  inline int height() const { return LegacyShape(2); }
+  /// @brief Deprecated legacy shape accessor width: use shape(3) instead.
+  inline int width() const { return LegacyShape(3); }
+  inline int LegacyShape(int index) const {
+    CHECK_LE(num_axes(), 4)
+        << "Cannot use legacy accessors on Blobs with > 4 axes.";
+    CHECK_LT(index, 4);
+    CHECK_GE(index, -4);
+    if (index >= num_axes() || index < -num_axes()) {
+      // Axis is out of range, but still in [0, 3] (or [-4, -1] for reverse
+      // indexing) -- this special case simulates the one-padding used to fill
+      // extraneous axes of legacy blobs.
+      return 1;
+    }
+    return shape(index);
+  }
 
   inline int offset(const int n, const int c = 0, const int h = 0,
       const int w = 0) const {
     CHECK_GE(n, 0);
-    CHECK_LE(n, num_);
-    CHECK_GE(channels_, 0);
-    CHECK_LE(c, channels_);
-    CHECK_GE(height_, 0);
-    CHECK_LE(h, height_);
-    CHECK_GE(width_, 0);
-    CHECK_LE(w, width_);
-    return ((n * channels_ + c) * height_ + h) * width_ + w;
+    CHECK_LE(n, num());
+    CHECK_GE(channels(), 0);
+    CHECK_LE(c, channels());
+    CHECK_GE(height(), 0);
+    CHECK_LE(h, height());
+    CHECK_GE(width(), 0);
+    CHECK_LE(w, width());
+    return ((n * channels() + c) * height() + h) * width() + w;
   }
+
+  inline int offset(const vector<int>& indices) const {
+    CHECK_LE(indices.size(), num_axes());
+    int offset = 0;
+    for (int i = 0; i < num_axes(); ++i) {
+      offset *= shape(i);
+      if (indices.size() > i) {
+        CHECK_GE(indices[i], 0);
+        CHECK_LT(indices[i], shape(i));
+        offset += indices[i];
+      }
+    }
+    return offset;
+  }
   /**
    * @brief Copy from a source Blob.
@@ -71,12 +170,20 @@
 
   inline Dtype data_at(const int n, const int c, const int h,
       const int w) const {
-    return *(cpu_data() + offset(n, c, h, w));
+    return cpu_data()[offset(n, c, h, w)];
   }
 
   inline Dtype diff_at(const int n, const int c, const int h,
      const int w) const {
-    return *(cpu_diff() + offset(n, c, h, w));
+    return cpu_diff()[offset(n, c, h, w)];
   }
 
+  inline Dtype data_at(const vector<int>& index) const {
+    return cpu_data()[offset(index)];
+  }
+
+  inline Dtype diff_at(const vector<int>& index) const {
+    return cpu_diff()[offset(index)];
+  }
+
   inline const shared_ptr<SyncedMemory>& data() const {
@@ -99,7 +206,7 @@
   Dtype* mutable_cpu_diff();
   Dtype* mutable_gpu_diff();
   void Update();
-  void FromProto(const BlobProto& proto);
+  void FromProto(const BlobProto& proto, bool reshape = true);
   void ToProto(BlobProto* proto, bool write_diff = false) const;
 
   /// @brief Compute the sum of absolute values (L1 norm) of the data.
@@ -135,13 +242,12 @@
    */
   void ShareDiff(const Blob& other);
 
+  bool ShapeEquals(const BlobProto& other);
+
  protected:
   shared_ptr<SyncedMemory> data_;
   shared_ptr<SyncedMemory> diff_;
-  int num_;
-  int channels_;
-  int height_;
-  int width_;
+  vector<int> shape_;
   int count_;
   int capacity_;
 
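To make the generalized interface concrete, here is a small usage sketch written against the header above. It is not part of the PR: a minimal example assuming a working Caffe build to compile and link against, with all shapes and values chosen purely for illustration.

    #include <vector>
    #include "caffe/blob.hpp"

    int main() {
      // A 2D blob, as InnerProductLayer weights now are: outputs x inputs.
      std::vector<int> shape(2);
      shape[0] = 10;  // outputs
      shape[1] = 64;  // inputs
      caffe::Blob<float> weights(shape);

      // Generalized accessors introduced by this PR.
      weights.num_axes();  // 2
      weights.shape(0);    // 10
      weights.shape(-1);   // 64; negative indices count from the end
      weights.count();     // 640, the product of all axis dimensions
      weights.count(1);    // 64, the product from axis 1 onward

      // N-D indexing: offset() computes the usual row-major flat index,
      // the same formula the legacy 4D offset(n, c, h, w) specializes.
      std::vector<int> index(2);
      index[0] = 3;
      index[1] = 7;
      weights.offset(index);   // 3 * 64 + 7 == 199
      weights.data_at(index);  // reads that element through cpu_data()

      // Legacy accessors still work while num_axes() <= 4; missing
      // trailing axes read as 1, so height() == width() == 1 here.
      weights.num();       // 10
      weights.channels();  // 64
      return 0;
    }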