feat: Docker images optimization
quadeare committed May 6, 2020
1 parent 65fbb36 commit fba637a
Showing 20 changed files with 744 additions and 646 deletions.
3 changes: 3 additions & 0 deletions .dockerignore
@@ -0,0 +1,3 @@
cpu.Dockerfile
gpu.Dockerfile
cpu-armv7.Dockerfile
6 changes: 5 additions & 1 deletion .gitignore
@@ -1,4 +1,8 @@
build/
build/*
!build/README.md
!build/build.sh
!build/get_models.sh
!build/get_libs.sh
model/
models/
clients/
2 changes: 1 addition & 1 deletion .travis/build.sh
@@ -6,7 +6,7 @@ LOCAL_DIR=$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )
ROOT_DIR=$(dirname "$LOCAL_DIR")
cd "$ROOT_DIR"

mkdir build
mkdir -p build
cd build

# Configure
12 changes: 6 additions & 6 deletions CMakeLists.txt
@@ -170,8 +170,10 @@ if (USE_CAFFE)
if(USE_FAISS)
if (NOT USE_CPU_ONLY AND CUDA_FOUND)
string(REPLACE "/include" "" CUDA_PREFIX ${CUDA_INCLUDE_DIRS})
string(REPLACE "arch" "=arch" FAISS_NVCC_FLAGS ${NVCC_FLAGS_EXTRA})
set(CONFIGURE_OPTS --with-cuda=${CUDA_PREFIX} --with-cuda-arch="${FAISS_NVCC_FLAGS}")
if (NOT CUDA_ARCH)
string(REPLACE ";" " " CUDA_ARCH "${CUDA_NVCC_FLAGS}")
endif()
set(CONFIGURE_OPTS --with-cuda=${CUDA_PREFIX} --with-cuda-arch=${CUDA_ARCH})
add_definitions(-DUSE_GPU_FAISS)
else()
set(CONFIGURE_OPTS "--without-cuda")
@@ -356,7 +358,6 @@ if (USE_CAFFE2)
GIT_SUBMODULES ${PYTORCH_SUBMODULES}
UPDATE_DISCONNECTED 1
GIT_TAG ${PYTORCH_SUPPORTED_COMMIT}
GIT_CONFIG advice.detachedHead=false
PATCH_COMMAND test -f ${PYTORCH_COMPLETE} && echo Skipping || echo cp modules/detectron/*_op.* caffe2/operators | bash && cp ${CAFFE2_OPS} caffe2/operators && git am ${PYTORCH_PATCHES}
CONFIGURE_COMMAND test -f ${PYTORCH_COMPLETE} && echo Skipping || cmake ../pytorch ${PYTORCH_FLAGS}
BUILD_COMMAND test -f ${PYTORCH_COMPLETE} && echo Skipping || make -j${N}
@@ -382,7 +383,6 @@ if (USE_CAFFE2)
GIT_REPOSITORY https://github.com/facebookresearch/Detectron
UPDATE_DISCONNECTED 1
GIT_TAG ${DETECTRON_SUPPORTED_COMMIT}
GIT_CONFIG advice.detachedHead=false
PATCH_COMMAND test -f ${DETECTRON_COMPLETE} && echo Skipping || git am ${DETECTRON_PATCHES}
CONFIGURE_COMMAND ""
BUILD_COMMAND ""
@@ -435,7 +435,7 @@ if (USE_TF)
tensorflow_cc
PREFIX tensorflow_cc
INSTALL_DIR ${CMAKE_BINARY_DIR}
DOWNLOAD_COMMAND git clone https://github.com/FloopCZ/tensorflow_cc.git
DOWNLOAD_COMMAND git clone --branch v1.15.0 https://github.com/FloopCZ/tensorflow_cc.git
CONFIGURE_COMMAND cd tensorflow_cc && mkdir -p build && cd build && cmake -DTENSORFLOW_STATIC=OFF -DTENSORFLOW_SHARED=ON .. && make && ln -s ${CMAKE_BINARY_DIR}/tensorflow_cc/src/tensorflow_cc/tensorflow_cc/build/tensorflow/tensorflow/contrib/makefile/gen/protobuf ${CMAKE_BINARY_DIR}/protobuf
BUILD_COMMAND ""
INSTALL_COMMAND ""
@@ -447,7 +447,7 @@ if (USE_TF)
tensorflow_cc
PREFIX tensorflow_cc
INSTALL_DIR ${CMAKE_BINARY_DIR}
DOWNLOAD_COMMAND git clone https://github.com/FloopCZ/tensorflow_cc.git
DOWNLOAD_COMMAND git clone --branch v1.15.0 https://github.com/FloopCZ/tensorflow_cc.git
CONFIGURE_COMMAND cd tensorflow_cc && mkdir -p build && cd build && cmake -DTENSORFLOW_STATIC=OFF -DTENSORFLOW_SHARED=ON -DALLOW_CUDA=OFF .. && make && ln -s ${CMAKE_BINARY_DIR}/tensorflow_cc/src/tensorflow_cc/tensorflow_cc/build/tensorflow/tensorflow/contrib/makefile/gen/protobuf ${CMAKE_BINARY_DIR}/protobuf
BUILD_COMMAND ""
INSTALL_COMMAND ""
100 changes: 91 additions & 9 deletions docker/README.md → build/README.md
@@ -1,4 +1,4 @@
## DeepDetect Docker images
# DeepDetect Docker images

This repository contains the Dockerfiles for building the CPU and GPU images for deepdetect.

@@ -10,7 +10,7 @@ The docker images contain:

This allows running the container and setting up an image classification model based on deep (residual) nets in two short command line calls.

### Getting and running official images
## Getting and running official images

```
docker pull jolibrain/deepdetect_cpu
@@ -20,7 +20,7 @@ or
docker pull jolibrain/deepdetect_gpu
```

#### Running the CPU image
### Running the CPU image

```
docker run -d -p 8080:8080 jolibrain/deepdetect_cpu
@@ -48,7 +48,7 @@ curl -X POST "http://localhost:8080/predict" -d "{\"service\":\"imageserv\",\"pa
{"status":{"code":200,"msg":"OK"},"head":{"method":"/predict","time":852.0,"service":"imageserv"},"body":{"predictions":{"uri":"http://i.ytimg.com/vi/0vxOhd4qlnA/maxresdefault.jpg","classes":[{"prob":0.2255125343799591,"cat":"n03868863 oxygen mask"},{"prob":0.20917612314224244,"cat":"n03127747 crash helmet"},{"last":true,"prob":0.07399296760559082,"cat":"n03379051 football helmet"}]}}}
```

#### Running the GPU image
### Running the GPU image

This requires [nvidia-docker](https://github.com/NVIDIA/nvidia-docker) in order for the local GPUs to be made accessible by the container.

@@ -75,7 +75,7 @@ curl -X POST "http://localhost:8080/predict" -d "{\"service\":\"imageserv\",\"pa

Try the `POST` call twice: first time loads the net so it takes slightly below a second, then second call should yield a `time` around 100ms as reported in the output JSON.

#### Access to server logs
### Access to server logs

To look at server logs, use
```
@@ -112,10 +112,92 @@ docker run -d -p 8080:8080 -v /path/to/volume:/mnt jolibrain/deepdetect_cpu
```
where `path/to/volume` is the path to your local volume that you'd like to attach to `/opt/deepdetect/`. This is useful for sharing / saving models, etc...

#### Building an image
## Build DeepDetect Docker images

Example goes with the CPU image:
Dockerfiles are present in the project root folder.

Dockerfiles are prefixed with the target architecture:
* cpu-armv7.Dockerfile
* cpu.Dockerfile
* gpu.Dockerfile

### Build script

The build script is available at `build/build.sh`.

Docker build-arg: `DEEPDETECT_BUILD`

Description: the `DEEPDETECT_BUILD` build argument selects the cmake arguments used by the `build.sh` script (see the sketch after the list of values below).

Expected values:

* CPU
  * caffe-tf
  * default
* GPU
  * tf
  * tf-cpu
  * caffe-cpu-tf
  * caffe-tf
  * caffe2
  * p100
  * volta
  * volta-faiss
  * faiss
  * default
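
As a rough illustration (not the actual contents of `build/build.sh`), a profile name could be mapped to cmake options along these lines; the cmake flags shown exist in the project's CMakeLists.txt, but the per-profile combinations here are assumptions:

```bash
#!/bin/bash
# Illustrative sketch only: the real build/build.sh defines its own
# per-profile flag sets; the combinations below are assumptions.
set -e

case "${DEEPDETECT_BUILD:-default}" in
  caffe-tf)
    cmake_args="-DUSE_CAFFE=ON -DUSE_TF=ON"
    ;;
  tf-cpu)
    cmake_args="-DUSE_TF=ON -DUSE_CPU_ONLY=ON"
    ;;
  *)
    cmake_args=""   # "default" profile: plain cmake configure
    ;;
esac

mkdir -p build && cd build
cmake .. ${cmake_args}
make -j"$(nproc)"
```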

#### Launch build with environment variables

```bash
DEEPDETECT_ARCH=cpu,gpu DEEPDETECT_BUILD=default,caffe-tf,armv7,[...] ./build.sh
```

#### Launch build with build script parameters

```bash
Params usage: ./build.sh [options...]

-a, --deepdetect-arch Choose Deepdetect architecture : cpu,gpu
-b, --deepdetect-build Choose Deepdetect build profile : CPU (default,caffe-tf,armv7) / GPU (default,caffe-cpu-tf,caffe-tf,caffe2,p100,volta)
```
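
For instance, a single-value invocation might look like the following (assuming one architecture and one profile per call; comma-separated lists as in the environment-variable form above may also be accepted):

```bash
# Build the GPU image with the volta profile
./build.sh --deepdetect-arch gpu --deepdetect-build volta

# Short-option form: build the CPU image with the caffe-tf profile
./build.sh -a cpu -b caffe-tf
```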

### Building an image

#### Docker build arguments

* DEEPDETECT_BUILD: selects the cmake arguments; see the build script documentation above.
* DEEPDETECT_DEFAULT_MODELS: [**true**/false] Enable or disable the default models in the DeepDetect docker image. The default models are about 160MB in size.

#### Build examples

Example with CPU image:
```
cd cpu
docker build -t jolibrain/deepdetect_cpu --no-cache .
# Build with default cmake
docker build -t jolibrain/deepdetect_cpu --no-cache -f cpu.Dockerfile .
# Build with default cmake and without default models
docker build --build-arg DEEPDETECT_DEFAULT_MODELS=false -t jolibrain/deepdetect_cpu --no-cache -f cpu.Dockerfile .
# Build with custom cmake
docker build --build-arg DEEPDETECT_BUILD=caffe-tf -t jolibrain/deepdetect_cpu --no-cache -f cpu.Dockerfile .
```

Example with CPU (armv7) image:
```
# Build with default cmake
docker build -t jolibrain/deepdetect_cpu:armv7 --no-cache -f cpu-armv7.Dockerfile .
```

Example with GPU image:
```
# Build with default cmake
docker build -t jolibrain/deepdetect_gpu --no-cache -f gpu.Dockerfile .
# Build with default cmake and without default models
docker build --build-arg DEEPDETECT_DEFAULT_MODELS=false -t jolibrain/deepdetect_gpu --no-cache -f gpu.Dockerfile .
# Build with custom cmake
docker build --build-arg DEEPDETECT_BUILD=caffe-tf -t jolibrain/deepdetect_gpu --no-cache -f gpu.Dockerfile .
```