
Commit

Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into add_cudnn_pool3d
chengduoZH committed Nov 16, 2017
2 parents 7c2fd61 + 08bc08d commit 7e91da4
Showing 98 changed files with 2,783 additions and 1,290 deletions.
6 changes: 1 addition & 5 deletions benchmark/paddle/image/run_mkldnn.sh
@@ -1,9 +1,7 @@
set -e

function train() {
-unset OMP_NUM_THREADS MKL_NUM_THREADS
-export OMP_DYNAMIC="FALSE"
-export KMP_AFFINITY="granularity=fine,compact,0,0"
+unset OMP_NUM_THREADS MKL_NUM_THREADS OMP_DYNAMIC KMP_AFFINITY
topology=$1
layer_num=$2
bs=$3
@@ -14,8 +12,6 @@ function train() {
elif [ $4 == "False" ]; then
thread=`nproc`
# each trainer_count use only 1 core to avoid conflict
-export OMP_NUM_THREADS=1
-export MKL_NUM_THREADS=1
log="logs/${topology}-${layer_num}-${thread}mklml-${bs}.log"
else
echo "Wrong input $3, use True or False."
2 changes: 1 addition & 1 deletion cmake/external/openblas.cmake
@@ -98,7 +98,7 @@ IF(NOT ${CBLAS_FOUND})
ENDIF()
INSTALL(CODE "execute_process(
COMMAND ${CMAKE_COMMAND} -E copy_directory ${CBLAS_INSTALL_DIR}/lib
-    destination ${CMAKE_INSTALL_PREFIX}/${TMP_INSTALL_DIR}
+    ${CMAKE_INSTALL_PREFIX}/${TMP_INSTALL_DIR}
)"
)
INSTALL(CODE "MESSAGE(STATUS \"Installing: \"
8 changes: 4 additions & 4 deletions doc/design/ops/images/2_level_rnn.dot
@@ -1,14 +1,14 @@
digraph G {

rnn [label="1-th level RNN" shape=box]
rnn [label="1st level RNN" shape=box]

subgraph cluster0 {
label = "time step 0"

sent0 [label="sentence"]
sent1 [label="sentence"]

rnn1 [label="2-th level RNN" shape=box]
rnn1 [label="2nd level RNN" shape=box]

sent0 -> rnn1
sent1 -> rnn1
@@ -20,7 +20,7 @@ digraph G {
sent2 [label="sentence"]
sent3 [label="sentence"]

rnn2 [label="2-th level RNN" shape=box]
rnn2 [label="2nd level RNN" shape=box]

sent2 -> rnn2
sent3 -> rnn2
@@ -32,7 +32,7 @@ digraph G {
sent4 [label="sentence"]
sent5 [label="sentence"]

rnn3 [label="2-th level RNN" shape=box]
rnn3 [label="2nd level RNN" shape=box]

sent4 -> rnn3
sent5 -> rnn3
66 changes: 33 additions & 33 deletions doc/design/ops/rnn.md
@@ -1,62 +1,62 @@
# RNNOp design

-This document is about an RNN operator which requires that instances in a mini-batch have the same length. We will have a more flexible RNN operator.
+This document describes the RNN (Recurrent Neural Network) operator and how it is implemented in PaddlePaddle. The RNN op requires that all instances in a mini-batch have the same length. We will have a more flexible dynamic RNN operator in the future.

## RNN Algorithm Implementation

<p aligh="center">
<p align="center">
<img src="./images/rnn.jpg"/>
</p>

The above diagram shows an RNN unrolled into a full network.

-There are several important concepts:
+There are several important concepts here:

-- *step-net*: the sub-graph to run at each step,
-- *memory*, $h_t$, the state of the current step,
-- *ex-memory*, $h_{t-1}$, the state of the previous step,
-- *initial memory value*, the ex-memory of the first step.
+- *step-net*: the sub-graph that runs at each step.
+- *memory*, $h_t$, the state of the current step.
+- *ex-memory*, $h_{t-1}$, the state of the previous step.
+- *initial memory value*, the memory of the first (initial) step.

### Step-scope

-There could be local variables defined in step-nets. PaddlePaddle runtime realizes these variables in *step-scopes* -- scopes created for each step.
+There could be local variables defined in each step-net. PaddlePaddle runtime realizes these variables in *step-scopes* which are created for each step.

<p aligh="center">
<p align="center">
<img src="./images/rnn.png"/><br/>
-Figure 2 the RNN's data flow
+Figure 2 illustrates the RNN's data flow
</p>

-Please be aware that all steps run the same step-net. Each step
+Please be aware that every step runs the same step-net. Each step does the following:

-1. creates the step-scope,
-2. realizes local variables, including step-outputs, in the step-scope, and
-3. runs the step-net, which could use these variables.
+1. Creates the step-scope.
+2. Initializes the local variables including step-outputs, in the step-scope.
+3. Runs the step-net, which uses the above mentioned variables.

-The RNN operator will compose its output from step outputs in step scopes.
+The RNN operator will compose its output from step outputs in each of the step scopes.
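The three numbered steps above can be sketched in plain Python. This is only an illustrative sketch of the execution model, not PaddlePaddle's implementation; `step_net`, the dict-based scope, and every name here are hypothetical:

```python
def run_rnn(step_net, step_inputs, initial_memory):
    """Sketch of the step-scope execution model (hypothetical API)."""
    step_outputs = []
    memory = initial_memory
    for x_t in step_inputs:              # one iteration per time step
        scope = {}                       # 1. create the step-scope
        scope["x"] = x_t                 # 2. realize local variables in the scope
        scope["ex_memory"] = memory      #    ex-memory holds the previous memory
        out_t, memory = step_net(scope)  # 3. run the step-net on these variables
        step_outputs.append(out_t)       # step-outputs live in the step-scope
    # the RNN operator composes its output from the per-step outputs
    return step_outputs
```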

### Memory and Ex-memory

-Let's give more details about memory and ex-memory via a simply example:
+Let's give more details about memory and ex-memory using a simple example:

$$
h_t = U h_{t-1} + W x_t
$$,
-where $h_t$ and $h_{t-1}$ are the memory and ex-memory of step $t$'s respectively.
+where $h_t$ and $h_{t-1}$ are the memory and ex-memory (previous memory) of step $t$ respectively.
In the implementation, we can make an ex-memory variable either "refers to" the memory variable of the previous step,
or copy the value of the previous memory value to the current ex-memory variable.
In the implementation, we can make an ex-memory variable either "refer to" the memory variable of the previous step,
or copy the memory value of the previous step to the current ex-memory variable.
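As a concrete illustration, the recurrence can be evaluated with NumPy. This sketch assumes small example shapes and uses the "copy the previous memory" variant for the ex-memory:

```python
import numpy as np

hidden_size, input_size, num_steps = 4, 3, 5
U = np.random.randn(hidden_size, hidden_size)  # recurrent weight
W = np.random.randn(hidden_size, input_size)   # input weight
x = np.random.randn(num_steps, input_size)     # one sequence of step-inputs

h = np.zeros(hidden_size)                      # initial memory value
for t in range(num_steps):
    ex_h = h.copy()                            # ex-memory: previous step's memory
    h = U @ ex_h + W @ x[t]                    # h_t = U h_{t-1} + W x_t
```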
### Usage in Python
For more information on Block, please refer to the [design doc](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/design/block.md).
-We can define an RNN's step-net using Block:
+We can define an RNN's step-net using a Block:
```python
import paddle as pd
-X = some_op() # x is some operator's output, and is a LoDTensor
+X = some_op() # x is some operator's output and is a LoDTensor
a = some_op()
# declare parameters
@@ -68,7 +68,7 @@ with rnn.stepnet():
x = rnn.add_input(X)
# declare a memory (rnn's step)
h = rnn.add_memory(init=a)
-# h.pre_state() means previous memory of rnn
+# h.pre_state(), the previous memory of rnn
new_state = pd.add_two( pd.matmul(W, x) + pd.matmul(U, h.pre_state()))
# update current memory
h.update(new_state)
@@ -80,19 +80,19 @@ out = rnn()
Python API functions in above example:
-- `rnn.add_input` indicates the parameter is a variable that will be segmented into step-inputs.
-- `rnn.add_memory` creates a variable used as the memory.
-- `rnn.add_outputs` mark the variables that will be concatenated across steps into the RNN output.
+- `rnn.add_input`: indicates that the parameter is a variable that will be segmented into step-inputs.
+- `rnn.add_memory`: creates a variable used as the memory.
+- `rnn.add_outputs`: marks the variables that will be concatenated across steps into the RNN output.
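Putting the three calls together with the fragments shown above, a minimal end-to-end definition would look roughly like the following sketch. The hidden diff lines are not reproduced here; `some_op` and the parameter shapes are illustrative assumptions:

```python
import paddle as pd

X = some_op()                    # step-input source, a LoDTensor
a = some_op()                    # initial memory value

# declare parameters (illustrative shapes)
W = pd.Variable(shape=[20, 30])
U = pd.Variable(shape=[30, 30])

rnn = pd.create_rnn_op(output_num=1)
with rnn.stepnet():
    x = rnn.add_input(X)         # X is segmented into step-inputs
    h = rnn.add_memory(init=a)   # a per-step memory, initialized with a
    new_state = pd.add_two(pd.matmul(W, x) + pd.matmul(U, h.pre_state()))
    h.update(new_state)          # update current memory
    rnn.add_outputs(h)           # h across steps forms the RNN output

out = rnn()
```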
### Nested RNN and LoDTensor
An RNN whose step-net includes other RNN operators is known as an *nested RNN*.
-For example, we could have a 2-level RNN, where the top level corresponds to paragraphs, and the lower level corresponds to sentences.
+For example, we could have a 2-level RNN, where the top level corresponds to paragraphs, and the lower level corresponds to sentences. Each step of the higher level RNN also receives an input from the corresponding step of the lower level, and additionally the output from the previous time step at the same level.
-The following figure illustrates the feeding of text into the lower level, one sentence each step, and the feeding of step outputs to the top level. The final top level output is about the whole text.
+The following figure illustrates feeding in text into the lower level, one sentence at a step, and the feeding in step outputs to the top level. The final top level output is about the whole text.
<p aligh="center">
<p align="center">
<img src="./images/2_level_rnn.png"/>
</p>
@@ -110,7 +110,7 @@ a = some_op()
# chapter_data is a set of 128-dim word vectors
# the first level of LoD is sentence
-# the second level of LoD is chapter
+# the second level of LoD is a chapter
chapter_data = pd.Variable(shape=[None, 128], type=pd.lod_tensor, level=2)
def lower_level_rnn(paragraph):
@@ -138,14 +138,14 @@ with top_level_rnn.stepnet():
pd.matmul(W0, paragraph_data) + pd.matmul(U0, h.pre_state()))
top_level_rnn.add_outputs(h)
-# just output the last step
+# output the last step
chapter_out = top_level_rnn(output_all_steps=False)
```
-in above example, the construction of the `top_level_rnn` calls `lower_level_rnn`. The input is a LoD Tensor. The top level RNN segments input text data into paragraphs, and the lower level RNN segments each paragraph into sentences.
+In the above example, the construction of the `top_level_rnn` calls `lower_level_rnn`. The input is an LoD Tensor. The top level RNN segments input text data into paragraphs, and the lower level RNN segments each paragraph into sentences.
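To make the two-level segmentation concrete, here is a small sketch using plain Python lists. The offset-style representation below is only an illustration of the segmentation idea, not the actual LoDTensor layout:

```python
# Nine word vectors forming one piece of text (each vector is 128-dim;
# zeros are used here as placeholders).
words = [[0.0] * 128 for _ in range(9)]

# Hypothetical two-level segmentation as begin/end offsets into `words`:
paragraph_offsets = [0, 5, 9]       # top level: 2 paragraphs
sentence_offsets = [0, 2, 5, 7, 9]  # lower level: 4 sentences

# The top level RNN steps over paragraphs ...
paragraphs = [words[b:e] for b, e in zip(paragraph_offsets, paragraph_offsets[1:])]
# ... and the lower level RNN steps over the sentences within a paragraph.
sentences = [words[b:e] for b, e in zip(sentence_offsets, sentence_offsets[1:])]
```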
-By default, the `RNNOp` will concatenate the outputs from all the time steps,
-if the `output_all_steps` set to False, it will only output the final time step.
+By default, the `RNNOp` will concatenate the outputs from all the time steps.
+If the `output_all_steps` is set to False, it will only output the final time step.
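The two output modes can be pictured with a small NumPy sketch (the per-step outputs and their shapes are hypothetical):

```python
import numpy as np

# Hypothetical per-step outputs: 5 time steps, each of shape [batch, hidden].
step_outs = [np.zeros((2, 4)) for _ in range(5)]

all_steps = np.concatenate(step_outs)  # default: concatenated across all steps
final_only = step_outs[-1]             # output_all_steps=False: last step only
```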
<p align="center">
Expand Down
Loading
