Commit a7da302
Merge pull request BVLC#7 from BVLC/master
from master fork
yjxiong committed Oct 10, 2014
2 parents 54b40b3 + e6deb5d commit a7da302
Showing 4 changed files with 18 additions and 17 deletions.
docs/installation.md (12 changes: 7 additions & 5 deletions)
@@ -18,7 +18,7 @@ Caffe depends on several software packages.
 * [CUDA](https://developer.nvidia.com/cuda-zone) library version 6.5 (recommended), 6.0, 5.5, or 5.0 and the latest driver version for CUDA 6 or 319.* for CUDA 5 (and NOT 331.*)
 * [BLAS](http://en.wikipedia.org/wiki/Basic_Linear_Algebra_Subprograms) (provided via ATLAS, MKL, or OpenBLAS).
 * [OpenCV](http://opencv.org/).
-* [Boost](http://www.boost.org/) (>= 1.55, although only 1.55 is tested)
+* [Boost](http://www.boost.org/) (>= 1.55, although only 1.55 and 1.56 are tested)
 * `glog`, `gflags`, `protobuf`, `leveldb`, `snappy`, `hdf5`, `lmdb`
 * For the Python wrapper
     * `Python 2.7`, `numpy (>= 1.7)`, boost-provided `boost.python`
@@ -141,11 +141,13 @@ Do `brew edit opencv` and change the lines that look like the two lines below to
 **NOTE**: We find that everything compiles successfully if `$LD_LIBRARY_PATH` is not set at all, and `$DYLD_FALLBACK_LIBRARY_PATH` is set to provide CUDA, Python, and other relevant libraries (e.g. `/usr/local/cuda/lib:$HOME/anaconda/lib:/usr/local/lib:/usr/lib`).
 In other `ENV` settings, things may not work as expected.
 
+**NOTE**: There is currently a conflict between boost 1.56 and CUDA in some configurations. Check the [conflict description](https://github.com/BVLC/caffe/issues/1193#issuecomment-57491906) and try downgrading to 1.55.
+
 #### 10.8-specific Instructions
 
 Simply run the following:
 
-    brew install --build-from-source --with-python boost
+    brew install --build-from-source boost boost-python
     brew install --with-python protobuf
     for x in snappy leveldb gflags glog szip lmdb homebrew/science/opencv; do brew install $x; done
 
@@ -186,16 +188,16 @@ After this, run
 
     for x in snappy leveldb gflags glog szip lmdb homebrew/science/opencv; do brew uninstall $x; brew install --build-from-source --fresh -vd $x; done
     brew uninstall protobuf; brew install --build-from-source --with-python --fresh -vd protobuf
-    brew install --build-from-source --with-python --fresh -vd boost
+    brew install --build-from-source --fresh -vd boost boost-python
 
 **Note** that `brew install --build-from-source --fresh -vd boost` is fine if you do not need the Caffe Python wrapper.
 
 **Note** that the HDF5 dependency is provided by Anaconda Python in this case.
 If you're not using Anaconda, include `hdf5` in the list above.
 
-**Note** that in order to build the caffe python wrappers you must install boost using the --with-python option:
+**Note** that in order to build the Caffe Python wrappers you must install `boost` and `boost-python`:
 
-    brew install --build-from-source --with-python --fresh -vd boost
+    brew install --build-from-source --fresh -vd boost boost-python
 
 **Note** that Homebrew maintains itself as a separate git repository and making the above `brew edit FORMULA` changes will change files in your local copy of homebrew's master branch. By default, this will prevent you from updating Homebrew using `brew update`, as you will get an error message like the following:
 
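For reference, the `$DYLD_FALLBACK_LIBRARY_PATH` note above corresponds to shell setup along these lines (a sketch only; the CUDA and Anaconda paths are the ones the note itself uses as an example and may differ on your machine):

    # Leave $LD_LIBRARY_PATH unset, and list CUDA, Anaconda Python, Homebrew,
    # and system library directories on the fallback path instead.
    unset LD_LIBRARY_PATH
    export DYLD_FALLBACK_LIBRARY_PATH=/usr/local/cuda/lib:$HOME/anaconda/lib:/usr/local/lib:/usr/lib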
examples/finetune_flickr_style/readme.md (4 changes: 2 additions & 2 deletions)
@@ -34,7 +34,7 @@ All steps are to be done from the caffe root directory.
 The dataset is distributed as a list of URLs with corresponding labels.
 Using a script, we will download a small subset of the data and split it into train and val sets.
 
-    caffe % ./models/finetune_flickr_style/assemble_data.py -h
+    caffe % ./examples/finetune_flickr_style/assemble_data.py -h
     usage: assemble_data.py [-h] [-s SEED] [-i IMAGES] [-w WORKERS]
 
     Download a subset of Flickr Style to a directory
@@ -48,7 +48,7 @@ Using a script, we will download a small subset of the data and split it into train and val sets.
                             num workers used to download images. -x uses (all - x)
                             cores.
 
-    caffe % python models/finetune_flickr_style/assemble_data.py --workers=-1 --images=2000 --seed 831486
+    caffe % python examples/finetune_flickr_style/assemble_data.py --workers=-1 --images=2000 --seed 831486
     Downloading 2000 images with 7 workers...
     Writing train/val for 1939 successfully downloaded images.
 
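A note on `--workers=-1` above: a negative value keeps that many cores free, so the `Downloading 2000 images with 7 workers...` line suggests the example was run on an 8-core machine (all 8 cores minus 1).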
examples/mnist/readme.md (17 changes: 8 additions & 9 deletions)
@@ -1,23 +1,22 @@
 ---
-title: MNIST Tutorial
-description: Train and test "LeNet" on MNIST data.
+title: LeNet MNIST Tutorial
+description: Train and test "LeNet" on the MNIST handwritten digit data.
 category: example
 include_in_docs: true
 priority: 1
 ---
 
-# Training MNIST with Caffe
+# Training LeNet on MNIST with Caffe
 
 We will assume that you have Caffe successfully compiled. If not, please refer to the [Installation page](/installation.html). In this tutorial, we will assume that your Caffe installation is located at `CAFFE_ROOT`.
 
 ## Prepare Datasets
 
 You will first need to download and convert the data format from the MNIST website. To do this, simply run the following commands:
 
-    cd $CAFFE_ROOT/data/mnist
-    ./get_mnist.sh
-    cd $CAFFE_ROOT/examples/mnist
-    ./create_mnist.sh
+    cd $CAFFE_ROOT
+    ./data/mnist/get_mnist.sh
+    ./examples/mnist/create_mnist.sh
 
 If it complains that `wget` or `gunzip` are not installed, you will need to install them. After running the script there should be two datasets: `mnist_train_lmdb` and `mnist_test_lmdb`.
 

@@ -228,8 +227,8 @@ Check out the comments explaining each line in the prototxt `$CAFFE_ROOT/example
 
 Training the model is simple after you have written the network definition protobuf and solver protobuf files. Simply run `train_lenet.sh`, or the following command directly:
 
-    cd $CAFFE_ROOT/examples/mnist
-    ./train_lenet.sh
+    cd $CAFFE_ROOT
+    ./examples/mnist/train_lenet.sh
 
 `train_lenet.sh` is a simple script, but here is a quick explanation: the main tool for training is `caffe` with action `train` and the solver protobuf text file as its argument.
 
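As the quick explanation above says, `train_lenet.sh` is just a thin wrapper around the `caffe` binary. A minimal sketch of what such a script contains (check the actual script in your checkout for the exact flags):

    #!/usr/bin/env sh
    # Train LeNet: the solver prototxt names the net definition and the
    # mnist_train_lmdb/mnist_test_lmdb datasets created earlier.
    ./build/tools/caffe train --solver=examples/mnist/lenet_solver.prototxt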
src/caffe/solver.cpp (2 changes: 1 addition & 1 deletion)
@@ -23,7 +23,7 @@ template <typename Dtype>
 Solver<Dtype>::Solver(const string& param_file)
     : net_() {
   SolverParameter param;
-  ReadProtoFromTextFile(param_file, &param);
+  ReadProtoFromTextFileOrDie(param_file, &param);
   Init(param);
 }
 
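This change makes a malformed solver prototxt fail loudly: `ReadProtoFromTextFile` returns a success flag that this constructor was discarding, while the `OrDie` variant checks it and aborts with an error. A simplified sketch of the two helpers (modeled on `caffe/util/io.hpp`, not the verbatim source):

    // Returns false if the file is missing or does not parse; callers
    // can (and here did) silently ignore the failure.
    bool ReadProtoFromTextFile(const char* filename,
                               google::protobuf::Message* proto);

    // Checked variant: a glog CHECK failure terminates with a clear message
    // instead of letting the solver run with default-initialized parameters.
    inline void ReadProtoFromTextFileOrDie(const char* filename,
                                           google::protobuf::Message* proto) {
      CHECK(ReadProtoFromTextFile(filename, proto))
          << "Failed to parse SolverParameter file: " << filename;
    }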
