
add log2 operator #28319

Merged
merged 28 commits into from
Nov 12, 2020

Conversation

@Joejiong (Contributor) commented Oct 30, 2020

PR types

New features

PR changes

OPs

Describe

Add new op log2 and its API.
Doc preview:

[screenshot: rendered documentation preview]

Unit test:

# excerpt from python/paddle/fluid/tests/unittests/test_activation_op.py
import numpy as np
import paddle.fluid as fluid
from op_test import OpTest

class TestLog2(TestActivation):
    def setUp(self):
        self.op_type = "log2"
        self.init_dtype()

        x = np.random.uniform(0.1, 1, [11, 17]).astype(self.dtype)
        out = np.log2(x)

        self.inputs = {'X': OpTest.np_dtype_to_fluid_dtype(x)}
        self.outputs = {'Out': out}

    def test_check_grad(self):
        if self.dtype == np.float16:
            return
        self.check_grad(['X'], 'Out')

    def test_error(self):
        in1 = fluid.layers.data(
            name="in1", shape=[11, 17], append_batch_size=False, dtype="int32")
        in2 = fluid.layers.data(
            name="in2", shape=[11, 17], append_batch_size=False, dtype="int64")

        self.assertRaises(TypeError, fluid.layers.log2, in1)
        self.assertRaises(TypeError, fluid.layers.log2, in2)

    def test_api(self):
        with fluid.program_guard(fluid.Program(), fluid.Program()):
            input_x = np.random.uniform(0.1, 1, [11, 17]).astype("float64")
            data_x = fluid.layers.data(
                name="data_x",
                shape=[11, 17],
                append_batch_size=False,
                dtype="float64")

            out1 = fluid.layers.log2(data_x)
            exe = fluid.Executor(place=fluid.CPUPlace())
            exe.run(fluid.default_startup_program())
            res1 = exe.run(fluid.default_main_program(),
                           feed={"data_x": input_x},
                           fetch_list=[out1])
        expected_res = np.log2(input_x)
        self.assertTrue(np.allclose(res1, expected_res))

        # dygraph
        with fluid.dygraph.guard():
            np_x = np.random.uniform(0.1, 1, [11, 17]).astype("float64")
            data_x = fluid.dygraph.to_variable(np_x)
            z = fluid.layers.log2(data_x)
            np_z = z.numpy()
            z_expected = np.array(np.log2(np_x))
        self.assertTrue(np.allclose(np_z, z_expected))
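As a side note, the gradient that `test_check_grad` verifies follows from d/dx log2(x) = 1/(x ln 2). A standalone NumPy sketch (an illustration, not Paddle code) of the same check:

```python
import numpy as np

np.random.seed(0)
x = np.random.uniform(0.1, 1, [11, 17])

# analytic gradient of out = log2(x): dout/dx = 1 / (x * ln 2)
grad = 1.0 / (x * np.log(2.0))

# central finite-difference approximation of the same gradient
eps = 1e-6
fd = (np.log2(x + eps) - np.log2(x - eps)) / (2 * eps)

assert np.allclose(grad, fd, atol=1e-4)
```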


@paddle-bot-old commented:

Thanks for your contribution! Please wait for the result of CI first. See the Paddle CI Manual for details.

@zhhsplendid (Member) left a comment:

I see the coverage CI failed. As mentioned last week, you can use logging to verify whether that part of the code is actually executed; if it is, report the error to the CI/CE group.

@@ -1895,10 +1974,12 @@ def test_errors(self):
        # The input type must be Variable.
        self.assertRaises(TypeError, F.softplus, 1)
        # The input dtype must be float16, float32, float64.
-       x_int32 = paddle.fluid.data(name='x_int32', shape=[12, 10], dtype='int32')
+       x_int32 = paddle.fluid.data(
A reviewer (Member) commented:

Since 2.0, paddle.data is the recommended usage. Could you check whether this part can be changed to paddle.data?

@Joejiong (Author) replied:

This one is Log1p. I'll open a separate PR later to review how many of these shared cases in the activation tests need migrating, and migrate them all at once.

python/paddle/fluid/tests/unittests/test_activation_op.py (outdated thread, resolved)
struct Log2Functor : public BaseActivationFunctor<T> {
  template <typename Device, typename X, typename Out>
  void operator()(Device d, X x, Out out) const {
    out.device(d) = x.log() / static_cast<T>(log(2));
A reviewer (Member) commented:

Although mathematically equivalent, computing the base-2 log directly should be simpler and faster for the machine than computing log(x)/log(2). If you have time, try writing your own version to see whether there is a faster implementation here.

A reviewer (Member) commented:

Please investigate and implement this when you have time; if not, the current version is just barely acceptable.

@Joejiong (Author) replied:

Will implement it later, thanks.

A reviewer (Contributor) commented:

Besides performance, this also raises numerical-accuracy concerns. I suggest investigating further to see whether it can be optimized.

@Joejiong (Author) replied:

Switched to the tensor's native implementation, thx. Done.
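The concern above (both speed and rounding of `x.log() / log(2)`) can be illustrated outside Paddle with NumPy: a native `log2` is exact on powers of two, while the two-step `log(x)/log(2)` performs an extra division and rounding step and may be off by an ulp. A minimal sketch:

```python
import numpy as np

# exact powers of two: a native log2 returns exact integers here
x = 2.0 ** np.arange(1, 11)          # float64: 2, 4, ..., 1024

direct = np.log2(x)                  # native base-2 log
via_ln = np.log(x) / np.log(2.0)     # the log(x)/log(2) formulation

assert np.array_equal(direct, np.arange(1.0, 11.0))  # bit-exact
# via_ln is close but not guaranteed bit-exact
print(np.max(np.abs(direct - via_ln)))
```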

@Joejiong (Author) left a comment:

I'll submit a unified fix for the paddle 2.0 activation migration later. This PR is mainly about log2; the rest will be migrated together afterwards, thanks.


"""
:alias_main: paddle.log2
:alias: paddle.log2,paddle.tensor.log2,paddle.tensor.math.log2
:old_api: paddle.fluid.layers.log2
A reviewer (Contributor) commented:

There's no need to write the alias_main/alias/old_api lines anymore; the documentation adds them automatically. Please remove them.

@Joejiong (Author) replied:

done, thx

Out = \\log_2x

Args:
x (Variable): Input LoDTensor or Tensor. Must be one of the following types: float32, float64.
A reviewer (Contributor) commented:

Variable -> Tensor

A reviewer (Contributor) commented:

Exposing LoDTensor is not recommended; please remove it.

@Joejiong (Author) replied:

done, thx



Returns:
Variable: The log to the base 2 of the input LoDTensor or Tensor computed element-wise.
A reviewer (Contributor) commented:

Variable -> Tensor; remove LoDTensor.

@Joejiong (Author) replied:

done, thx

exe = paddle.static.Executor(fluid.CPUPlace())

# Execute
x_i = np.array([[1], [2]]).astype(np.float32)
A reviewer (Contributor) commented:

Using numpy APIs is not recommended; better to replace them with the corresponding paddle APIs.

@Joejiong (Author) replied:

done, thx


# Execute
x_i = np.array([[1], [2]]).astype(np.float32)
res_val, = exe.run(fluid.default_main_program(), feed={'x':x_i}, fetch_list=[res])
A reviewer (Contributor) commented:

It's best to write the example code in dygraph mode.

@Joejiong (Author) replied:

done, thx

.. code-block:: python

import paddle
import paddle.fluid as fluid
A reviewer (Contributor) commented:

Remove `import paddle.fluid as fluid`.

@Joejiong (Author) replied:

done, thx


# example 1: x is a float
x_i = paddle.to_tensor([[1.0], [2.0]])
res = paddle.log2(x_i)
A reviewer (Contributor) commented:

Add result comments (expected output) to the example.

@Joejiong (Author) replied:

done, thx
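For the requested result comments, the expected outputs of the docstring example can be derived with NumPy (log2(1) = 0, log2(2) = 1), since both inputs are exact powers of two:

```python
import numpy as np

x_i = np.array([[1.0], [2.0]])
res = np.log2(x_i)
print(res)  # values: [[0.], [1.]]
```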

@swtkiwi (Contributor) previously approved these changes Nov 4, 2020:

LGTM

@zhhsplendid (Member) previously approved these changes Nov 4, 2020:

lgtm


x_i = paddle.full(shape=[1], fill_value=2, dtype='float32')
paddle.to_tensor(x_i)
res = paddle.log2(x_i)
print(res.numpy()) # [1.]
A reviewer (Contributor) commented:

No need for `.numpy()` here; `print(res)` is enough.

@Joejiong (Author) replied:

thx, done

if in_dygraph_mode():
return core.ops.log2(x)

check_variable_and_dtype(x, 'x', ['float32', 'float64'], "log2")
A reviewer (Contributor) commented:

float16?

@Joejiong (Author) replied:

thx, done
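Regarding the float16 question above: half precision carries only about 10 mantissa bits, so a float16 log2 differs from a float64 reference at roughly the 1e-3 level for inputs in (0.1, 1). An illustrative NumPy check (not Paddle code):

```python
import numpy as np

np.random.seed(0)
x = np.random.uniform(0.1, 1, 64).astype(np.float16)

out_fp16 = np.log2(x)                # computed/stored in half precision
ref = np.log2(x.astype(np.float64))  # float64 reference on the same values

# float16 carries ~3 decimal digits, so use a loose tolerance
assert np.allclose(out_fp16.astype(np.float64), ref, atol=1e-2)
```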

@@ -8731,6 +8732,62 @@ def log(x, name=None):
return out


def log2(x, name=None):
A reviewer (Contributor) commented:

For newly added APIs, the source only needs to live under the new paddle directory; there's no need to put it under fluid. In principle, no new APIs are added under fluid anymore.

@Joejiong (Author) replied:

thx; done

@Joejiong Joejiong dismissed stale reviews from zhhsplendid and swtkiwi via f0151d0 November 5, 2020 08:09
@XiaoguangHu01 (Contributor) previously approved these changes Nov 9, 2020:

LGTM

@zhhsplendid (Member) previously approved these changes Nov 9, 2020:

LGTM, but please change the PR title: this PR adds log2 rather than fixing it.

@Joejiong Joejiong changed the title Fix log2 add log2 operator Nov 9, 2020
@Joejiong Joejiong requested a review from swtkiwi November 9, 2020 08:47
@swtkiwi (Contributor) previously approved these changes Nov 9, 2020:

LGTM

@TCChenlong (Contributor) left a comment:

LGTM

@XiaoguangHu01 (Contributor) left a comment:

LGTM

@zhhsplendid zhhsplendid merged commit 08d2413 into PaddlePaddle:develop Nov 12, 2020
5 participants