
Move bn to pten #39347

Merged: 52 commits, Mar 3, 2022
Commits (52 total; the file changes shown below are from 46 commits)
e0aa0ab  add bn cpu version; test=develop  (phlrain, Jan 29, 2022)
bb583a1  move batch norm to pten  (phlrain, Feb 3, 2022)
3f6c67f  move batch norm to pten; test=develop  (phlrain, Feb 3, 2022)
a3c18eb  fix bug; test=develop  (phlrain, Feb 4, 2022)
9bf8ec3  fix func::tranpose depend bug; test=develop  (phlrain, Feb 4, 2022)
df7b7e9  fix compile bugs; test=develop  (phlrain, Feb 4, 2022)
8504e93  fix use_op batch_norm bug; test=develop  (phlrain, Feb 4, 2022)
318ca26  fix cudnn bn add relu test; test=develop  (phlrain, Feb 4, 2022)
4e28eea  fix pten context build and double grad bug; test= develop  (phlrain, Feb 7, 2022)
5a799c8  remve useless code; test=develop  (phlrain, Feb 7, 2022)
e19b6c4  add batch norm gpu fp16 support; test=develop  (phlrain, Feb 7, 2022)
8145e59  fix test bn op bug; test=develop  (phlrain, Feb 7, 2022)
79b5360  remove output dtype set; test=develop  (phlrain, Feb 8, 2022)
362d573  fix bug; test=develop  (phlrain, Feb 8, 2022)
be1461c  fix bug; test=develop  (phlrain, Feb 8, 2022)
eff3192  fix applay pass to program bug; test=develop  (phlrain, Feb 9, 2022)
8547557  revert to develop; test=develop  (phlrain, Feb 9, 2022)
1c9533f  fix rocm bug; test=develop  (phlrain, Feb 11, 2022)
3abefe0  revert operator to develop; test=develop  (phlrain, Feb 11, 2022)
6baf0ad  fix pre_commit; test=develop  (phlrain, Feb 11, 2022)
72661a4  fix statci check error; test=develop  (phlrain, Feb 11, 2022)
6e68787  resolve conflict; test=develop  (phlrain, Feb 11, 2022)
6559230  Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into…  (phlrain, Feb 11, 2022)
03234e5  ana batch norm bug;  (phlrain, Feb 16, 2022)
f6b096b  Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into…  (phlrain, Feb 16, 2022)
fd43edd  revert batch norm op  (phlrain, Feb 16, 2022)
fcff717  resolve conlict  (phlrain, Feb 16, 2022)
a633b3c  Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into…  (phlrain, Feb 16, 2022)
99aba23  fix nan inf and speed bug; test=develop  (phlrain, Feb 16, 2022)
5967fda  Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into…  (phlrain, Feb 16, 2022)
f2229fb  fix bug; test=develop  (phlrain, Feb 17, 2022)
8a93f3f  Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into…  (phlrain, Feb 17, 2022)
b7e618a  Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into…  (phlrain, Feb 17, 2022)
f622410  fix error; test=develop  (phlrain, Feb 18, 2022)
3639b3f  test expand op; test=develop  (phlrain, Feb 18, 2022)
8b84513  Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into…  (phlrain, Feb 18, 2022)
fe849fa  Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into…  (phlrain, Feb 18, 2022)
ebd7ff4  fix bug; test=develop  (phlrain, Feb 21, 2022)
b73f789  resolve confilct  (phlrain, Feb 24, 2022)
b0bb451  Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into…  (phlrain, Feb 24, 2022)
0b3b64e  resolve confilct; test=develop  (phlrain, Feb 24, 2022)
5dc41f6  Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into…  (phlrain, Feb 24, 2022)
f75973b  Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into…  (phlrain, Feb 27, 2022)
d5ee88d  polish code; test=develop  (phlrain, Feb 27, 2022)
a35c7f9  polish code; test=develop  (phlrain, Feb 28, 2022)
e15131d  Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into…  (phlrain, Feb 28, 2022)
704bc7a  change mutable data to ctx alloc; test=develop  (phlrain, Feb 28, 2022)
c82e7ec  Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into…  (phlrain, Feb 28, 2022)
7ace554  make format same with ci; test=develop  (phlrain, Mar 1, 2022)
fce2207  Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into…  (phlrain, Mar 1, 2022)
af0205e  fix format error with ci; test=develop  (phlrain, Mar 1, 2022)
37321c7  Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into…  (phlrain, Mar 1, 2022)
Diff view
@@ -12,20 +12,21 @@
// See the License for the specific language governing permissions and
// limitations under the License.

#include <random>
#include <string>
#include <unordered_set>

#include <gtest/gtest.h>
#include <boost/logic/tribool.hpp>
#include <random>
#include <unordered_set>

#include "gtest/gtest.h"
#include "paddle/fluid/framework/ir/graph_traits.h"
#include "paddle/fluid/framework/ir/mkldnn/conv_elementwise_add_mkldnn_fuse_pass.h"
#include "paddle/fluid/framework/ir/pass_tester_helper.h"
#include "paddle/fluid/framework/naive_executor.h"
#include "paddle/fluid/framework/op_registry.h"
#include "paddle/fluid/platform/place.h"

USE_OP(batch_norm);
USE_OP_ITSELF(batch_norm);
USE_OP_DEVICE_KERNEL(batch_norm, MKLDNN);
USE_OP(conv2d_transpose);
USE_OP_DEVICE_KERNEL(conv2d_transpose, MKLDNN);
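
The switch from USE_OP to USE_OP_ITSELF above follows from moving the batch_norm kernels out of fluid: USE_OP_ITSELF pulls in only the operator definition, while the compute kernels now live in the pten (phi) library. Below is a minimal sketch of the declarations a test might need after this change; the PD_DECLARE_KERNEL line is an assumption about how the phi CPU kernel would be pulled in, not a line from this PR.

  USE_OP_ITSELF(batch_norm);                       // operator definition only, no fluid CPU/GPU kernel
  USE_OP_DEVICE_KERNEL(batch_norm, MKLDNN);        // the MKLDNN kernel is still registered in fluid
  PD_DECLARE_KERNEL(batch_norm, CPU, ALL_LAYOUT);  // assumed: declares the phi CPU kernel, if the test executes it
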
2 changes: 0 additions & 2 deletions paddle/fluid/framework/operator.cc
@@ -2212,8 +2212,6 @@ void OperatorWithKernel::BuildPhiKernelContext(
vector_int_attr.end());
pt_kernel_context->EmplaceBackAttr(vector_int64_attr);
}
// TODO(YuanRisheng) Need support vector<int64_t> attr

} else if (attr_defs[i].type_index ==
std::type_index(typeid(std::vector<int32_t>))) {
const auto& vector_int_attr = BOOST_GET_CONST(std::vector<int>, attr);
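
For context on the hunk above: BuildPhiKernelContext copies a fluid std::vector<int> attribute into a std::vector<int64_t> before placing it in the pten kernel context, which is apparently why the TODO about vector<int64_t> support is dropped. A minimal standalone sketch of that widening step follows, with names mirroring the snippet and the kernel-context call left as a comment because the surrounding object is not shown here.

  #include <cstdint>
  #include <vector>

  std::vector<int> vector_int_attr = {64, 128};  // fluid-side attribute value (example)
  std::vector<int64_t> vector_int64_attr(vector_int_attr.begin(),
                                         vector_int_attr.end());  // widened element-wise copy
  // pt_kernel_context->EmplaceBackAttr(vector_int64_attr);       // as in the hunk above
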
12 changes: 0 additions & 12 deletions paddle/fluid/operators/batch_norm_op.cc
@@ -1289,15 +1289,3 @@ REGISTER_OPERATOR(batch_norm_grad, ops::BatchNormGradOp,
ops::BatchNormDoubleGradMaker<paddle::imperative::OpBase>);
REGISTER_OPERATOR(batch_norm_grad_grad, ops::BatchNormDoubleGradOp,
ops::BatchNormDoubleGradOpInplaceInferer);

REGISTER_OP_CPU_KERNEL(
batch_norm, ops::BatchNormKernel<paddle::platform::CPUDeviceContext, float>,
ops::BatchNormKernel<paddle::platform::CPUDeviceContext, double>);
REGISTER_OP_CPU_KERNEL(
batch_norm_grad,
ops::BatchNormGradKernel<paddle::platform::CPUDeviceContext, float>,
ops::BatchNormGradKernel<paddle::platform::CPUDeviceContext, double>);
REGISTER_OP_CPU_KERNEL(
batch_norm_grad_grad,
ops::BatchNormDoubleGradKernel<paddle::platform::CPUDeviceContext, float>,
ops::BatchNormDoubleGradKernel<paddle::platform::CPUDeviceContext, double>);
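
With the fluid REGISTER_OP_CPU_KERNEL entries above removed, the CPU batch_norm kernels are expected to be registered on the pten (phi) side instead. The following is a hedged sketch of what such a registration typically looks like; the file path, macro, and kernel names are assumptions based on phi kernel-registry conventions rather than lines from this diff.

  // paddle/phi/kernels/cpu/batch_norm_kernel.cc (assumed location)
  PD_REGISTER_KERNEL(batch_norm,            // kernel name seen by the framework
                     CPU,                   // backend
                     ALL_LAYOUT,            // data layout
                     phi::BatchNormKernel,  // kernel function template
                     float,
                     double) {}
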