"add dynamic lstm scripts" #3
Conversation
fc = fluid.layers.fc(input=inputs, size=hid_dim)
lstm, cell = fluid.layers.dynamic_lstm(
    input=fc, size=hid_dim, is_reverse=(i % 2) == 0)
inputs = [fc, lstm]
The configuration here is not consistent with the config in https://github.com/dzhwinter/benchmark/pull/2/files. There is no reversed LSTM in that PR, just a simple stacked LSTM.
fixed
# data = fluid.layers.data(
#     name="words", shape=[1], append_batch_size=False, dtype="int64")
# label = fluid.layers.data(
#     name="label", shape=[1], append_batch_size=False, dtype="int64")
Remove the unused code.
fixed.
    '--use_cprof', action='store_true', help='If set, use cProfile.')
parser.add_argument(
    '--use_nvprof',
    action='store_false',
store_false -> store_true
fixed.
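For reference, a minimal self-contained sketch of the behavior the reviewer is pointing at: with action='store_true' the flag defaults to False and flips to True only when passed on the command line, which is what the help text implies.

import argparse

# Minimal sketch: store_true makes --use_nvprof an opt-in flag.
parser = argparse.ArgumentParser()
parser.add_argument(
    '--use_nvprof',
    action='store_true',  # defaults to False; True only when the flag is given
    help='If set, use nvprof to profile.')

assert parser.parse_args([]).use_nvprof is False
assert parser.parse_args(['--use_nvprof']).use_nvprof is True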
hid_dim = args.hid_dim
stacked_num = args.stacked_num

assert stacked_num % 2 == 1, "Must stacked_num %2 == 1."
remove this check.
for i in range(stacked_num):
    fc = fluid.layers.fc(input=inputs, size=hid_dim)
    lstm, cell = fluid.layers.dynamic_lstm(input=fc, size=hid_dim)
    inputs = [fc, lstm]
inputs = lstm
Lines 66-67 can actually be merged with lines 71-74; see the merged sketch below.
fc1 = fluid.layers.fc(input=emb, size=hid_dim)
lstm1, cell1 = fluid.layers.dynamic_lstm(input=fc1, size=hid_dim)

inputs = [fc1, lstm1]
inputs = lstm1
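Taken together, the two suggestions fold the first fc/lstm pair into the loop and pass only the LSTM output forward. A sketch of the merged form, assuming the script's existing fluid import and that emb is the embedding output feeding the stack:

# Sketch of the merged stack (assumes emb and the fluid import from the script).
inputs = emb
for i in range(stacked_num):
    fc = fluid.layers.fc(input=inputs, size=hid_dim)
    lstm, cell = fluid.layers.dynamic_lstm(input=fc, size=hid_dim)
    inputs = lstm  # feed only the LSTM output to the next layer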
inputs = [fc, lstm]

fc_last = fluid.layers.sequence_pool(input=inputs[0], pool_type='max')
lstm_last = fluid.layers.sequence_pool(input=inputs[1], pool_type='max')
Only the final LSTM needs pooling; line 79 then has just a single pooled input.
The intent was to keep this consistent with https://github.com/dzhwinter/benchmark/blob/master/paddle/understand_sentiment_lstm.py#L74.
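For comparison, a sketch of the reviewer's single-pooling suggestion, assuming inputs holds the final LSTM output after the merged loop above; class_dim is a hypothetical label-count parameter, not a name from this diff:

# Sketch: pool only the final LSTM output, giving the classifier one input.
lstm_last = fluid.layers.sequence_pool(input=inputs, pool_type='max')
prediction = fluid.layers.fc(
    input=lstm_last, size=class_dim, act='softmax')  # class_dim is hypothetical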
train_reader = paddle.batch(
    paddle.reader.shuffle(
        paddle.dataset.imdb.train(word_dict),
        buf_size=args.batch_size * 10),
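The hunk above is cut off mid-call; a sketch of the complete reader setup and a typical consumption loop, assuming paddle is imported as in the script:

# Sketch: shuffled, batched IMDB reader (buf_size scales with the batch size).
train_reader = paddle.batch(
    paddle.reader.shuffle(
        paddle.dataset.imdb.train(word_dict),
        buf_size=args.batch_size * 10),
    batch_size=args.batch_size)

for batch_id, data in enumerate(train_reader()):
    pass  # each data item is a list of (word_ids, label) samples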
All fixed based on the comments.
fix PaddlePaddle/Paddle#6165