
[cherry-pick] fix gpu kernel for numel op #27129

Closed · wants to merge 28 commits

Conversation

@wangchaochaohu (Contributor) commented Sep 7, 2020

PR types

Bug fixes

PR changes

APIs

Describe

Cherry-pick of #27085: fix the GPU kernel for the numel op.
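For reference, a minimal reproduction sketch of the API this fix targets, assuming the Paddle 2.0-beta Python API (the PR itself does not include a reproduction case, so the shapes and setup below are hypothetical):

```python
import paddle

# Hypothetical sketch (not from the PR): paddle.numel returns a Tensor
# holding the number of elements of its input. The cherry-picked fix
# targets the op's GPU (CUDA) kernel, so running on a CUDA place
# exercises the corrected path. Requires a CUDA build of Paddle.
paddle.disable_static(paddle.CUDAPlace(0))

x = paddle.ones([2, 3, 4])
n = paddle.numel(x)
print(n)  # expected: a Tensor containing 24
```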

zhiqiu and others added 28 commits September 2, 2020 12:15
* add check for bernoulli and register bool for unsqueeze

* follow comments
* add embedding 2.0

* add embedding support input int32
* fix sample codes, test=develop
fix pool trt plugin bug
test=release/2.0-beta
* replace fluid.optimizer.set_dict with optimizer.set_state_dict

* replace fluid.optimizer.set_dict with optimizer.set_state_dict

* add coverage rate

* Increase coverage rate, fix code style

* Increase coverage rate, fix code style

* add fit to generate optimizer.state_dict() to save .pdopt to increase coverage rate

* delete http.log
* fix norm bug, test=develop

* fix norm bug, test=develop

* fix norm bug, test=develop

* fix norm bug, test=develop

* fix norm bug, test=develop
(#26940)

* set_default_dtype only takes effect on Python floats or complex

* fix doc
* fix dropout bug in backward when input is 1d tensor, test=develop

* add test case and refine error message, test=develop

* refine error message, test=develop
Fix some code samples in Transformer APIs.
test=develop
* refine paddle.stack

* support TensorArray

* add test

* fix coverage problem

* fix coverage problem

* fix sample code
* update optimizer (#26711)

* update doc

* update doc

* fix optimizer sample code

* add default value for adamw weight_decay

* fix adamw

* change LearningRateDecay to _LRScheduler

* fix adamw;notest

* fix load;notest

* remove file

* bug fix

* fix code style

* bug fix

* add ut

* adamw support weight_decay=0

* fix ut

* fix set_lr doc

* fix doc

* change parameters place

* fix sample code
* fix heter-ps multi thread
update doc of paddle.to_tensor
* Fix conv1d when data format is NLC
test=develop

* Fix unittest of conv1d
test=develop
cherry-pick from the develop PR #26792: fix argmin and argmax
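Several of the commits above replace the 1.x-style fluid optimizer.set_dict with the 2.0 optimizer.set_state_dict. A minimal before/after sketch of that migration, assuming the Paddle 2.0-beta API (the Linear/Adam setup is a hypothetical placeholder, not taken from the PR):

```python
import paddle

# Hypothetical setup; the layer and optimizer are placeholders.
paddle.disable_static()
linear = paddle.nn.Linear(10, 10)
adam = paddle.optimizer.Adam(learning_rate=0.001,
                             parameters=linear.parameters())

state = adam.state_dict()  # e.g. loaded back from a saved .pdopt file

# Old 1.x-style call being replaced in these commits:
#   optimizer.set_dict(state)
# New 2.0 API:
adam.set_state_dict(state)
```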
@paddle-bot-old (bot) commented Sep 7, 2020

Thanks for your contribution!
Please wait for the CI results first. See the Paddle CI Manual for details.
