
Add sequence slice operator #5546

Merged
merged 23 commits into PaddlePaddle:develop from sub_sequence_op on Nov 20, 2017

Conversation

wanghaox
Contributor

Resolves #5545
Add forward and backward kernel code
Add unit test

- Case:
LoD(x) = {{0, 3, 6, 10}}; Dims(x0) = (10, 3, 2)
offset = (0, 1, 1); size = (2, 1, 2)
LoD(Out) = {{0, 2, 3, 5}}; Dims(Out) = (5,3,2)
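
For reference only (not part of the original PR description): a minimal NumPy sketch of the slicing semantics in this case; the helper name reference_sequence_slice is illustrative.

import numpy as np

def reference_sequence_slice(x, lod, offsets, lengths):
    # Slice each sequence of a level-1 LoD tensor by its (offset, length) pair.
    outs, out_lod = [], [0]
    for i in range(len(lod) - 1):
        start = lod[i] + offsets[i]            # absolute start row of the sub-sequence
        outs.append(x[start:start + lengths[i]])
        out_lod.append(out_lod[-1] + lengths[i])
    return np.concatenate(outs, axis=0), [out_lod]

x = np.random.random((10, 3, 2)).astype('float32')
out, out_lod = reference_sequence_slice(x, [0, 3, 6, 10], [0, 1, 1], [2, 1, 2])
print(out.shape)  # (5, 3, 2)
print(out_lod)    # [[0, 2, 3, 5]]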
Contributor

It is better to write the case concretely, like context_project.


auto dim_0 = 0;
for (size_t i = 0; i < sizes.size(); ++i) {
dim_0 += sizes[i];
Contributor

Please ensure sizes[i] is greater than zero.

Contributor Author

done, added the PADDLE_ENFORCE_LT check

"Output(Out) of SubSequenceOp should not be null.");
auto input_dims = ctx->GetInputDim("X");

auto offsets = ctx->Attrs().Get<std::vector<int>>("offset");
Contributor

Please ensure the offset values are valid.

Contributor Author

Changed from ctx->Attrs() to an input tensor.

framework::OpAttrChecker* op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("X", "(LoDTensor), "
"the variable-length input of SubSequenceOp");
Contributor

Don't forget to put a full stop.

Contributor Author

done

"size",
"A list<int> to describes size for sub sequence item.");
AddOutput("Out",
"(Tensor), Variable-length output of "
Contributor

"Out" should be LoDTensor. And AddOutput("Out",... should be wrote behind AddInput("X", "(LoDTensor), ....

Contributor Author

done

The operator crop a subsequence from given sequence with given start offset and subsequence size.
It only supports sequence (LoD Tensor with level number is 1).
- Case:
LoD(x) = {{0, 3, 6, 10}}; Dims(x0) = (10, 3, 2)
Contributor

Dims(x0) -> Dims(x)

Contributor Author

The example case has been rewritten

PADDLE_ENFORCE_EQ(n, offset_len,
"The length of input and offset should be the same")
PADDLE_ENFORCE_EQ(n, size_len,
"The length of input and size should be the same")
Contributor

Why should n, offset_len and size_len be equal?

Contributor Author

All three work on the same sequence.

auto size = sizes[i];
PADDLE_ENFORCE_LT(lod[0][i] + offset + size, lod[0][i + 1],
"The target tensor's length overflow")
}
Contributor

It is better to put lines 63~68 into InferShape. Because InferShape runs at compile time, if the config (size and offset) is wrong, the user can catch the mistake in time.

Contributor Author

The offset and length have been changed to input tensors, so they must be checked at runtime.

auto size = sizes[i];
PADDLE_ENFORCE_LT(lod[0][i] + offset + size, lod[0][i + 1],
"The target tensor's length overflow")
}
Contributor

I think lines 111~123 are unnecessary, because you have already done these checks in SubSequenceOpKernel.

Contributor Author

done

auto x_grad_stride = framework::stride(x_grad->dims());

auto offset = offsets[i];
auto size = sizes[i];
Contributor
@chengduoZH Nov 10, 2017

The size and offset variables are unnecessary.

See the License for the specific language governing permissions and
limitations under the License. */

#include "paddle/operators/sub_sequence_op.h"
Contributor
@qingqing01 Nov 10, 2017

We have:

sequence_conv_op
sequence_pool_op
sequence_softmax_op
...

so, maybe rename to sequence_slice_op ?

Contributor

sequence_slice_op sounds good.

Contributor Author

done, rename to sequence_slice_op

auto input_dims = ctx->GetInputDim("X");

auto offsets = ctx->Attrs().Get<std::vector<int>>("offset");
auto sizes = ctx->Attrs().Get<std::vector<int>>("size");
Contributor
@qingqing01 Nov 10, 2017

In the raw Paddle layer, the offset and size are inputs, not attributes, since they may change for each mini-batch.

Contributor

Agree, I think it's better to use 'length' instead of 'size'.

Contributor Author

done

limitations under the License. */

#pragma once
#include "paddle/framework/eigen.h"
Contributor

Remove this line, since the Eigen functions are not used.

Contributor Author

done

See the License for the specific language governing permissions and
limitations under the License. */

#define EIGEN_USE_GPU
Contributor

Remove this line, since there is no Eigen function in the .h file.

Contributor Author

done

x_grad->set_lod(lod);
x_grad->mutable_data<T>(ctx.GetPlace());
auto temp = framework::EigenVector<T>::Flatten(*x_grad);
temp.device(ctx.GetEigenDevice<Place>()) = temp.constant(static_cast<T>(0));
Contributor

Contributor Author

done

out_lod[0].append(out_lod_offset)

outs = np.concatenate(outs, axis=0)
self.outputs = {'Out': outs}
Contributor

Also check the output LoD.

self.outputs = {'Out': (outs, out_lod)}

Contributor Author
@wanghaox left a comment

The code has been updated, please review

c1, c2]
[d1, d2;
e1, e2]]
LoD(X) = {{0, 3, 5}}; Dims(X) = (4, 1, 2)
Contributor

Dims(X) = (4, 1, 2) -> Dims(X) = (5, 2)

Contributor Author

done

"(Tensor), "
"A vector<int> to describes length for sub sequence item.");
AddOutput("Out",
"(LoDTensor), output of sequence slice Op.");
Contributor

the output of SequenceSliceOp.

Contributor Author

ok

"the input of SequenceSliceOp.");
AddInput("Offset",
"(Tensor), "
"A vector<int> to describes offset for sub sequence item.");
Contributor

For reference only: a vector<int> to describe the offset of every input sequence for sub sequence item.

Contributor Author

ok

"(LoDTensor), output of sequence slice Op.");
AddComment(R"DOC(
Sequence slice operator
The operator crop a subsequence from given sequence with given start offset and subsequence length.
Contributor

There should be a blank line below "Sequence slice operator".

Contributor Author

ok

offset_cpu.CopyFrom(*offset, platform::CPUPlace(), ctx.device_context());
offset_data = offset_cpu.data<int64_t>();

framework::Tensor length_cpu;
Contributor

Should these two variables be placed outside of the if_block?

Contributor Author

What is the difference?

Out = [[a1, a2;
b1, b2]
[e1, e2]]
LoD(Out) = {{0, 2, 3}}
Contributor

This case description is not consistent with your code.
Your code implies that the input size and output size are equal.

Contributor Author

fixed the output dims error

auto out_lod = SequenceSliceLoD(*in, offset_data, length_data);
out->set_lod(out_lod);
math::SetConstant<Place, T> set_zero;
set_zero(ctx.device_context(), out, static_cast<T>(0));
Contributor

If your documentation of this operator's behavior is right, you do not need to zero out the memory.

Contributor Author

removed

x = np.random.random((100, 3, 2)).astype('float32')
lod = [[0, 20, 40, 60, 80, 100]]
offset = np.array([1, 2, 3, 4, 5]).flatten().astype("int64")
length = np.array([10, 8, 6, 4, 2]).flatten().astype("int64")
Contributor

You can separate the definitions of the data, like this:
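
(The reviewer's concrete example is not preserved in this thread. The sketch below is only a guess at what such a separation could look like; the class name, method names, and input keys are illustrative, not taken from the PR.)

import numpy as np

class TestSequenceSliceOpCase:
    def init_test_case(self):
        # keep the raw data definitions in one place so variant cases can override them
        self.x_dim = (100, 3, 2)
        self.x_lod = [[0, 20, 40, 60, 80, 100]]
        self.offset = [1, 2, 3, 4, 5]
        self.length = [10, 8, 6, 4, 2]

    def set_data(self):
        self.init_test_case()
        x = np.random.random(self.x_dim).astype('float32')
        offset = np.array(self.offset).astype('int64')
        length = np.array(self.length).astype('int64')
        self.inputs = {'X': (x, self.x_lod), 'Offset': offset, 'Length': length}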

Contributor Author

ok

@chengduoZH
Contributor

Please read this document first.

Contributor Author
@wanghaox left a comment

update sequence_slice_op code

"the input of SequenceSliceOp.");
AddInput("Offset",
"(Tensor), "
"A vector<int> to describes offset for sub sequence item.");
Contributor Author

ok

"(Tensor), "
"A vector<int> to describes length for sub sequence item.");
AddOutput("Out",
"(LoDTensor), output of sequence slice Op.");
Contributor Author

ok

"(LoDTensor), output of sequence slice Op.");
AddComment(R"DOC(
Sequence slice operator
The operator crop a subsequence from given sequence with given start offset and subsequence length.
Contributor Author

ok

c1, c2]
[d1, d2;
e1, e2]]
LoD(X) = {{0, 3, 5}}; Dims(X) = (4, 1, 2)
Contributor Author

done

Out = [[a1, a2;
b1, b2]
[e1, e2]]
LoD(Out) = {{0, 2, 3}}
Contributor Author

fixed the output dims error

auto out_lod = SequenceSliceLoD(*in, offset_data, length_data);
out->set_lod(out_lod);
math::SetConstant<Place, T> set_zero;
set_zero(ctx.device_context(), out, static_cast<T>(0));
Contributor Author

removed

x = np.random.random((100, 3, 2)).astype('float32')
lod = [[0, 20, 40, 60, 80, 100]]
offset = np.array([1, 2, 3, 4, 5]).flatten().astype("int64")
length = np.array([10, 8, 6, 4, 2]).flatten().astype("int64")
Contributor Author

ok

@chengduoZH changed the title from "Add sub_sequence operator" to "Add sequence_sub operator" on Nov 15, 2017
"Output(Out) of SequenceSliceOp should not be null.");
auto input_dims = ctx->GetInputDim("X");

ctx->SetOutputDim("Out", input_dims);
Contributor

Why do you set input_dims to Out? In some cases, out_dim is not equal to input_dims.

Contributor Author

The InferShape function is called at compile time, when the offset and length are still unknown, so I set the dims to the maximum and change them to the real values at runtime.

}

auto lod = in->lod();
auto out_lod = SequenceSliceLoD(*in, offset_data, length_data);
Contributor

You can get out_lod from Out.

Contributor Author

done

auto lod = in->lod();
auto out_lod = SequenceSliceLoD(*in, offset_data, length_data);

x_grad->set_lod(lod);
Contributor

x_grad_lod can be assigned in here.

Contributor Author

done

using LoD = framework::LoD;

template <typename T>
LoD SequenceSliceLoD(const T& in, const int64_t* offset_data,
Contributor

Please use inline.

Contributor Author

done

out_lod_offset = out_lod_offset + len(sub_x)
outs.append(sub_x)
out_lod[0].append(out_lod_offset)
outs = np.concatenate(outs, axis=0)
Contributor

I think it is better to put that code (lines 19~25) into a function.

Contributor Author

That code is short and only used once.

Contributor Author
@wanghaox left a comment

update some code

"Output(Out) of SequenceSliceOp should not be null.");
auto input_dims = ctx->GetInputDim("X");

ctx->SetOutputDim("Out", input_dims);
Contributor Author

The InferShape function is called at compile time, when the offset and length are still unknown, so I set the dims to the maximum and change them to the real values at runtime.

using LoD = framework::LoD;

template <typename T>
LoD SequenceSliceLoD(const T& in, const int64_t* offset_data,
Contributor Author

done

}

auto lod = in->lod();
auto out_lod = SequenceSliceLoD(*in, offset_data, length_data);
Contributor Author

done

auto lod = in->lod();
auto out_lod = SequenceSliceLoD(*in, offset_data, length_data);

x_grad->set_lod(lod);
Contributor Author

done

out_lod_offset = out_lod_offset + len(sub_x)
outs.append(sub_x)
out_lod[0].append(out_lod_offset)
outs = np.concatenate(outs, axis=0)
Contributor Author

That code is short and only used once.

PADDLE_ENFORCE_EQ(offset->dims().size(), 1UL,
"Only support one level sequence now.");
PADDLE_ENFORCE_EQ(length->dims().size(), 1UL,
"Only support one level sequence now.");
Contributor
@qingqing01 Nov 15, 2017

It seems offset->dims().size() == 2 and length->dims().size() == 2, like the label in the cost operators: the rank is 2, dims[0] is the batch size, and dims[1] = 1.
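
For illustration only (the shapes below are an assumption based on this comment, not code from the PR): a rank-2 Offset/Length layout would look like this.

import numpy as np

batch_size = 3  # one (offset, length) pair per sequence in the mini-batch
# rank-2 tensors of shape (batch_size, 1), like the label tensor in the cost operators
offset = np.array([[0], [1], [1]]).astype('int64')
length = np.array([[2], [1], [2]]).astype('int64')
print(offset.ndim, offset.shape)  # 2 (3, 1)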

Contributor Author

done

@qingqing01 changed the title from "Add sequence_sub operator" to "Add sequence slice operator" on Nov 15, 2017
PADDLE_ENFORCE_EQ(length_dim.size(), 2UL,
"Only support one level sequence now.");

ctx->SetOutputDim("Out", input_dims);
Contributor

Please add some notes (why do you set input_dims to Out?).

Contributor Author

If it is not set, it triggers an error:
EnforceNotMet: enforce numel() > 0 failed
When calling this method, the Tensor's numel must be larger than zero. Please check Tensor::Resize has been called first. at [/home/work/wanghao/code/Paddle/paddle/framework/tensor_impl.h:118]

so I initialize the output dims by setting input_dims on Out.

Contributor Author

done

PADDLE_ENFORCE_EQ(offset_dim.size(), 2UL,
"Only support one level sequence now.");
PADDLE_ENFORCE_EQ(length_dim.size(), 2UL,
"Only support one level sequence now.");
Contributor

The comment is inconsistent with the expression.

Contributor Author

dim 0 is for batch size, dim 1 is for the sequence offset

Contributor Author

done, added a comment

"a vector<int> to describe the length of every input sequence for "
"sub sequence item.");
AddOutput("Out",
"(LoDTensor), The output of SequenceSliceOp.");
Contributor

The->the

Contributor Author

done

AddComment(R"DOC(
Sequence slice operator

The operator crop a subsequence from given sequence with given start offset and subsequence length.
Contributor
@chengduoZH Nov 16, 2017

This is for your reference only.
The operator crops a subsequence from given sequence with given offset and subsequence length.

Contributor Author

done

b1, b2]
[e1, e2]]
LoD(Out) = {{0, 2, 3}}; Dims(Out) = (3, 2)
NOTE: The length of the input, offset and length should be the same. The offset start from 0.
Contributor
@chengduoZH Nov 16, 2017

This is for your reference only.
The first dimension size of the input and the sizes of Offset and Length should be equal.

Contributor Author

done

PADDLE_ENFORCE_LT(
lod[0][i] + offset_data[i] + length_data[i],
lod[0][i + 1],
"The target tensor's length overflow")}
Contributor

Please install pre-commit; the format of the code at line 86 is incorrect.

Contributor Author

done


x_grad->mutable_data<T>(ctx.GetPlace());
math::SetConstant<Place, T> set_zero;
set_zero(ctx.device_context(), x_grad, static_cast<T>(0));
Contributor
@chengduoZH Nov 16, 2017

    x_grad->mutable_data<T>(ctx.GetPlace());
    x_grad->set_lod(in->lod());
   ......

Contributor Author

done

Contributor Author
@wanghaox left a comment

update code, fix some typos

PADDLE_ENFORCE_EQ(offset_dim.size(), 2UL,
"Only support one level sequence now.");
PADDLE_ENFORCE_EQ(length_dim.size(), 2UL,
"Only support one level sequence now.");
Contributor Author

dim 0 is for batch size, dim 1 is for the sequence offset

"a vector<int> to describe the length of every input sequence for "
"sub sequence item.");
AddOutput("Out",
"(LoDTensor), The output of SequenceSliceOp.");
Contributor Author

done

AddComment(R"DOC(
Sequence slice operator

The operator crop a subsequence from given sequence with given start offset and subsequence length.
Contributor Author

done

PADDLE_ENFORCE_EQ(length_dim.size(), 2UL,
"Only support one level sequence now.");

ctx->SetOutputDim("Out", input_dims);
Contributor Author

If it is not set, it triggers an error:
EnforceNotMet: enforce numel() > 0 failed
When calling this method, the Tensor's numel must be larger than zero. Please check Tensor::Resize has been called first. at [/home/work/wanghao/code/Paddle/paddle/framework/tensor_impl.h:118]

so I initialize the output dims by setting input_dims on Out.

b1, b2]
[e1, e2]]
LoD(Out) = {{0, 2, 3}}; Dims(Out) = (3, 2)
NOTE: The length of the input, offset and length should be the same. The offset start from 0.
Contributor Author

done

PADDLE_ENFORCE_LT(
lod[0][i] + offset_data[i] + length_data[i],
lod[0][i + 1],
"The target tensor's length overflow")}
Contributor Author

done


x_grad->mutable_data<T>(ctx.GetPlace());
math::SetConstant<Place, T> set_zero;
set_zero(ctx.device_context(), x_grad, static_cast<T>(0));
Contributor Author

done

Contributor Author
@wanghaox left a comment

fix comments

PADDLE_ENFORCE_EQ(offset_dim.size(), 2UL,
"Only support one level sequence now.");
PADDLE_ENFORCE_EQ(length_dim.size(), 2UL,
"Only support one level sequence now.");
Contributor Author

done, added a comment

PADDLE_ENFORCE_EQ(length_dim.size(), 2UL,
"Only support one level sequence now.");

ctx->SetOutputDim("Out", input_dims);
Contributor Author

done

chengduoZH previously approved these changes Nov 17, 2017
@wanghaox wanghaox merged commit f350be3 into PaddlePaddle:develop Nov 20, 2017
@wanghaox wanghaox deleted the sub_sequence_op branch November 24, 2017 13:23

Successfully merging this pull request may close these issues.

Porting Sub Sequence Operator
4 participants