test=develop
ysh329 committed Sep 29, 2020
1 parent 3254e65 commit c0d399d
Showing 1 changed file with 67 additions and 69 deletions.
136 changes: 67 additions & 69 deletions python/paddle/fluid/layers/nn.py
@@ -4801,15 +4801,15 @@ def split(input, num_or_sections, dim=-1, name=None):

Args:
input (Tensor): A N-D Tensor. The data type is bool, float16, float32, float64, int32 or int64.
num_or_sections (int|list|tuple): If ``num_or_sections`` is int, then the ``num_or_sections``
indicates the number of equal sized sub-Tensors that the ``input``
will be divided into. If ``num_or_sections`` is a list or tuple, its length
indicates the number of sub-Tensors, and its elements indicate the sizes of the
sub-Tensors along the specified ``dim``, in order. The length of the list must not
be larger than the ``input`` 's size along the specified ``dim``.
dim (int|Tensor, optional): The dimension along which to split, it can be a scalar with type ``int`` or
a ``Tensor`` with shape [1] and data type ``int32`` or ``int64``. If :math:`dim < 0`,
the dimension to split along is :math:`rank(input) + dim`. Default is -1.
name (str, optional): The default value is None. Normally there is no need for user to set this property.
For more information, please refer to :ref:`api_guide_Name` .

Returns:
@@ -4838,7 +4838,7 @@ def split(input, num_or_sections, dim=-1, name=None):
# out0.shape [3, 2, 5]
# out1.shape [3, 3, 5]
# out2.shape [3, 4, 5]

# dim is negative, the real dim is (rank(input) + dim), which here is 1.
out0, out1, out2 = fluid.layers.split(input, num_or_sections=3, dim=-2)
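The ``num_or_sections`` rule described above can be sketched in plain Python (an illustration only, not the Paddle implementation; ``split_sizes`` is a hypothetical helper):

```python
# Sketch of the num_or_sections semantics of split: an int means equal
# parts; a list/tuple gives explicit sizes that must sum to the dim size.
def split_sizes(dim_size, num_or_sections):
    """Return the size of each sub-Tensor along the split dimension."""
    if isinstance(num_or_sections, int):
        assert dim_size % num_or_sections == 0
        return [dim_size // num_or_sections] * num_or_sections
    sections = list(num_or_sections)
    assert sum(sections) == dim_size
    return sections

print(split_sizes(9, 3))          # [3, 3, 3]
print(split_sizes(9, [2, 3, 4]))  # [2, 3, 4]
```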
@@ -6409,10 +6409,10 @@ def lod_reset(x, y=None, target_lod=None):
out.dims = [6, 1]

Args:
x (Variable): Input variable which could be a Tensor or LoDTensor.
The data type should be int32, int64, float32 or float64.
y (Variable, optional): If provided, output's LoD would be derived from :attr:`y`.
If y's lod level>0, the data type can be any type.
If y's lod level=0, the data type should be int32.
target_lod (list|tuple, optional): One level LoD which should be considered
as target LoD when :attr:`y` not provided.
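What a one-level ``target_lod`` means can be sketched in plain Python: the flat rows of ``x`` are regrouped into sequences of the given lengths (an illustration only, not the Paddle implementation; ``regroup`` is a hypothetical helper):

```python
# Sketch: regroup 6 flat rows with target_lod=[4, 2], matching the
# lod_reset example above (x.dims=[6, 1], out.lod=[[4, 2]]).
def regroup(rows, target_lod):
    assert sum(target_lod) == len(rows)
    out, start = [], 0
    for length in target_lod:
        out.append(rows[start:start + length])
        start += length
    return out

print(regroup([[1], [2], [3], [4], [5], [6]], [4, 2]))
# [[[1], [2], [3], [4]], [[5], [6]]]
```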
@@ -6473,9 +6473,9 @@ def lod_append(x, level):
x.dims = [6, 1]

Args:
x (Variable): Input variable which could be a tensor or LoDTensor.
The data type should be int32, int64, float32 or float64.
level (list|tuple|Variable, optional): The LoD level to be appended into LoD of x.
If level is variable and its lod level>0, the data type can be any type.
If level is variable and its lod level=0, the data type should be int32.
Returns:
@@ -7131,19 +7131,19 @@ def image_resize(input,
future and only use :attr:`out_shape` instead.

Supporting resample methods:
'LINEAR' : Linear interpolation

'BILINEAR' : Bilinear interpolation

'TRILINEAR' : Trilinear interpolation

'NEAREST' : Nearest neighbor interpolation

'BICUBIC' : Bicubic interpolation

Linear interpolation is the method of using a line connecting two known quantities
to determine the value of an unknown quantity between the two known quantities.
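This can be sketched directly in plain Python (an illustration only; ``lerp`` is a hypothetical helper):

```python
# One-dimensional linear interpolation between two known points
# (x0, y0) and (x1, y1), evaluated at a point x between them.
def lerp(x0, y0, x1, y1, x):
    t = (x - x0) / (x1 - x0)
    return y0 * (1.0 - t) + y1 * t

print(lerp(0.0, 0.0, 2.0, 4.0, 1.0))  # 2.0
```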

Nearest neighbor interpolation performs nearest neighbor interpolation
in both the 3rd dimension (height direction) and the 4th dimension (width
direction) of the input tensor.
@@ -7158,7 +7158,7 @@ def image_resize(input,
interpolating functions of three variables (e.g. D-direction,
H-direction and W-direction in this op) on a rectilinear 3D grid.
The linear interpolation is performed on three directions.

Bicubic interpolation is an extension of cubic interpolation for interpolating
data points on a two-dimensional regular grid. The interpolated surface is
smoother than corresponding surfaces obtained by bilinear interpolation or
@@ -7257,7 +7257,7 @@ def image_resize(input,
output: (N,C,D_out,H_out,W_out) where:

D_out = D_{in} * scale_{factor}

Trilinear interpolation:
if:
align_corners = False , align_mode = 0
@@ -7272,29 +7272,29 @@ def image_resize(input,
D_out = D_{in} * scale_{factor}
H_out = H_{in} * scale_{factor}
W_out = W_{in} * scale_{factor}
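The output-size formulas above can be sketched in plain Python (an illustration only, not the Paddle implementation; ``scaled_shape`` is a hypothetical helper):

```python
# Sketch: when out_shape is not given, each spatial size is the input
# size multiplied by scale_factor (here for a 5-D input's D, H, W dims).
def scaled_shape(in_shape, scale_factor):
    return [int(s * scale_factor) for s in in_shape]

print(scaled_shape([4, 6, 8], 2.0))  # [8, 12, 16]
```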


For details of linear interpolation, please refer to Wikipedia:
https://en.wikipedia.org/wiki/Linear_interpolation.

For details of nearest neighbor interpolation, please refer to Wikipedia:
https://en.wikipedia.org/wiki/Nearest-neighbor_interpolation.

For details of bilinear interpolation, please refer to Wikipedia:
https://en.wikipedia.org/wiki/Bilinear_interpolation.

For details of trilinear interpolation, please refer to Wikipedia:
https://en.wikipedia.org/wiki/Trilinear_interpolation.

For details of bicubic interpolation, please refer to Wikipedia:
https://en.wikipedia.org/wiki/Bicubic_interpolation

Parameters:
input (Variable): 3-D, 4-D or 5-D Tensor, its data type is float32, float64, or uint8,
its data format is specified by :attr:`data_format`.
out_shape (list|tuple|Variable|None): Output shape of image resize
layer, the shape is (out_w, ) when input is a 3-D Tensor, the shape is (out_h, out_w)
when input is a 4-D Tensor and is (out_d, out_h, out_w) when input is a 5-D Tensor.
Default: None. If a list, each element can be an integer or a Tensor Variable of shape: [1].
If a Tensor Variable, its dimensions size should be a 1.
scale(float|Variable|None): The multiplier for the input height or width. At
@@ -7322,8 +7322,8 @@ def image_resize(input,
input and output tensors are aligned, preserving the values at the
corner pixels.
Default: True
align_mode(int) : An optional input for linear/bilinear/trilinear interpolation. Refer to the formula in
the example code above; it can be \'0\' for src_idx = scale*(dst_indx+0.5)-0.5 , or
\'1\' for src_idx = scale*dst_index.
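The two align_mode source-index formulas can be sketched in plain Python (an illustration only, not the Paddle implementation; ``src_index`` is a hypothetical helper):

```python
# Sketch: map a destination index back to a source coordinate under the
# two align_mode conventions quoted in the docstring.
def src_index(dst_index, scale, align_mode):
    if align_mode == 0:
        return scale * (dst_index + 0.5) - 0.5
    return scale * dst_index  # align_mode == 1

print(src_index(3, 2.0, 0))  # 6.5
print(src_index(3, 2.0, 1))  # 6.0
```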
data_format (str, optional): Specify the data format of the input, and the data format of the output
will be consistent with that of the input. An optional string from:`NCW`, `NWC`, `"NCHW"`, `"NHWC"`, `"NCDHW"`,
@@ -7592,34 +7592,34 @@ def resize_linear(input,
output shape which specified by actual_shape, out_shape and scale
in priority order.

**Warning:** the parameter :attr:`actual_shape` will be deprecated in
the future and only use :attr:`out_shape` instead.

Align_corners and align_mode are optional parameters; the calculation
method of interpolation can be selected by them.

Example:

.. code-block:: text

For scale:

if align_corners = True && out_size > 1 :

scale_factor = (in_size-1.0)/(out_size-1.0)

else:

scale_factor = float(in_size/out_size)

Linear interpolation:

if:
align_corners = False , align_mode = 0

input : (N,C,W_in)
output: (N,C,W_out) where:

W_out = (W_{in}+0.5) * scale_{factor} - 0.5

else:
Expand All @@ -7632,41 +7632,41 @@ def resize_linear(input,
input(Variable): 3-D Tensor(NCW), its data type is float32, float64, or uint8,
its data format is specified by :attr:`data_format`.
out_shape(list|tuple|Variable|None): Output shape of resize linear
layer, the shape is (out_w,). Default: None. If a list, each
element can be an integer or a Tensor Variable with shape: [1]. If a
Tensor Variable, its dimension size should be 1.
scale(float|Variable|None): The multiplier for the input height or width. At
least one of :attr:`out_shape` or :attr:`scale` must be set.
And :attr:`out_shape` has a higher priority than :attr:`scale`.
Default: None.
actual_shape(Variable): An optional input to specify output shape
dynamically. If provided, image resize
according to this given shape rather than
:attr:`out_shape` and :attr:`scale` specifying
shape. That is to say actual_shape has the
highest priority. It is recommended to use
:attr:`out_shape` if you want to specify output
shape dynamically, because :attr:`actual_shape`
will be deprecated. When using actual_shape to
specify output shape, one of :attr:`out_shape`
and :attr:`scale` should also be set, otherwise
errors would occur in the graph-constructing stage.
Default: None
align_corners(bool): ${align_corners_comment}
align_mode(bool): ${align_mode_comment}
data_format (str, optional): Specify the data format of the input, and the data format of the output
will be consistent with that of the input. An optional string from: `"NCW"`, `"NWC"`.
The default is `"NCW"`. When it is `"NCW"`, the data is stored in the order of:
`[batch_size, input_channels, input_width]`.
name(str, optional): The default value is None. Normally there is no need for user to set this property.
For more information, please refer to :ref:`api_guide_Name`

Returns:
Variable: 3-D tensor(NCW or NWC).

Examples:
.. code-block:: python

#declarative mode
import paddle.fluid as fluid
import numpy as np
@@ -7677,14 +7677,14 @@ def resize_linear(input,
place = fluid.CPUPlace()
exe = fluid.Executor(place)
exe.run(fluid.default_startup_program())

input_data = np.random.rand(1,3,100).astype("float32")

output_data = exe.run(fluid.default_main_program(),
feed={"input":input_data},
fetch_list=[output],
return_numpy=True)

print(output_data[0].shape)

# (1, 3, 50)
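The scale_factor rule quoted in the text block above can be sketched in plain Python (an illustration only, not the Paddle implementation; ``scale_factor`` is a hypothetical helper):

```python
# Sketch: how resize_linear derives scale_factor from in/out sizes,
# depending on align_corners (see the formulas in the docstring above).
def scale_factor(in_size, out_size, align_corners):
    if align_corners and out_size > 1:
        return (in_size - 1.0) / (out_size - 1.0)
    return float(in_size) / out_size

print(scale_factor(100, 50, False))  # 2.0
```

With align_corners=True the factor is slightly larger (99/49 here), because the corner samples of input and output are pinned to each other.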
@@ -8283,7 +8283,7 @@ def gather(input, index, overwrite=True):

Returns:
output (Tensor): The output is a tensor with the same rank as input.
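The gather operation along axis 0 can be sketched in plain Python (an illustration only, not the Paddle implementation; ``gather_rows`` is a hypothetical helper):

```python
# Sketch of gather: out[i] = input[index[i]], selecting rows of the
# input in the order given by index.
def gather_rows(rows, index):
    return [rows[i] for i in index]

print(gather_rows([[1, 2], [3, 4], [5, 6]], [2, 0]))  # [[5, 6], [1, 2]]
```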

Examples:

.. code-block:: python
@@ -9765,7 +9765,7 @@ def prelu(x, mode, param_attr=None, name=None):
if mode not in ['all', 'channel', 'element']:
raise ValueError('mode should be one of all, channel, element.')
alpha_shape = [1]
# NOTE(): The input of this API should be ``N,C,...`` format,
# which means x.shape[0] is batch_size and x.shape[1] is channel.
if mode == 'channel':
assert len(
@@ -10065,10 +10065,10 @@ def stack(x, axis=0, name=None):
Tensor :math:`[d_0, d_1, d_{axis-1}, len(x), d_{axis}, ..., d_{n-1}]`.
Supported data types: float32, float64, int32, int64.
axis (int, optional): The axis along which all inputs are stacked. ``axis`` range is ``[-(R+1), R+1)``,
where ``R`` is the number of dimensions of the first input tensor ``x[0]``.
If ``axis < 0``, ``axis = axis+R+1``. The default value of axis is 0.
name (str, optional): Please refer to :ref:`api_guide_Name`, Default None.


Returns:
Variable: The stacked Tensor, has same data type with input Tensors. Output dim is :math:`rank(x[0])+1`.
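The output-shape rule for ``stack`` can be sketched in plain Python (an illustration only, not the Paddle implementation; ``stacked_shape`` is a hypothetical helper):

```python
# Sketch: stacking N tensors of shape item_shape inserts a new axis of
# length N at position `axis`; a negative axis is normalized as
# axis = axis + R + 1.
def stacked_shape(item_shape, num_tensors, axis):
    r = len(item_shape)
    if axis < 0:
        axis = axis + r + 1
    return item_shape[:axis] + [num_tensors] + item_shape[axis:]

print(stacked_shape([3, 4], 2, 0))   # [2, 3, 4]
print(stacked_shape([3, 4], 2, -1))  # [3, 4, 2]
```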
@@ -10375,7 +10375,7 @@ def expand_as(x, target_tensor, name=None):
:alias_main: paddle.expand_as
:alias: paddle.expand_as,paddle.tensor.expand_as,paddle.tensor.manipulation.expand_as
:old_api: paddle.fluid.layers.expand_as

expand_as operator tiles the input to match the given target tensor. You should set the expansion
for each dimension by providing the tensor 'target_tensor'. The rank of X
should be in [1, 6]. Please note that size of 'target_tensor' must be the same
@@ -10611,20 +10611,20 @@ def gaussian_random(shape,
# result_3 is:
# [[-0.12310527, 0.8187662, 1.923219 ]
# [ 0.70721835, 0.5210541, -0.03214082]]

.. code-block:: python

# declarative mode
import numpy as np
from paddle import fluid

x = fluid.layers.gaussian_random((2, 3), std=2., seed=10)

place = fluid.CPUPlace()
exe = fluid.Executor(place)
start = fluid.default_startup_program()
main = fluid.default_main_program()

exe.run(start)
x_np, = exe.run(main, feed={}, fetch_list=[x])

@@ -10638,11 +10638,11 @@ def gaussian_random(shape,
import numpy as np
from paddle import fluid
import paddle.fluid.dygraph as dg

place = fluid.CPUPlace()
with dg.guard(place) as g:
x = fluid.layers.gaussian_random((2, 4), mean=2., dtype="float32", seed=10)
x_np = x.numpy()
x_np
# array([[2.3060477 , 2.676496 , 3.9911983 , 0.9990833 ],
# [2.8675377 , 2.2279181 , 0.79029655, 2.8447366 ]], dtype=float32)
@@ -11328,7 +11328,7 @@ def size(input):

Raises:
TypeError: ``input`` must be a Tensor and the data type of ``input`` must be one of bool, float16, float32, float64, int32, int64.

Examples:
.. code-block:: python

@@ -12238,7 +12238,7 @@ def logical_or(x, y, out=None, name=None):

.. note::
``paddle.logical_or`` supports broadcasting. If you want know more about broadcasting, please refer to :ref:`user_guide_broadcasting`.
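The elementwise-OR with a simple scalar broadcast can be sketched in plain Python (an illustration only of the broadcasting idea, not the Paddle implementation; ``logical_or_list`` is a hypothetical helper):

```python
# Sketch: elementwise logical OR; a bare bool is broadcast against the
# whole list, mirroring the simplest broadcasting case.
def logical_or_list(x, y):
    if not isinstance(y, list):
        y = [y] * len(x)  # broadcast a scalar operand
    return [a or b for a, b in zip(x, y)]

print(logical_or_list([True, False], [False, False]))  # [True, False]
print(logical_or_list([True, False], False))           # [True, False]
```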

Args:
x (Tensor): the input tensor, it's data type should be bool.
y (Tensor): the input tensor, it's data type should be bool.
@@ -12413,12 +12413,10 @@ def clip_by_norm(x, max_norm, name=None):
Examples:
.. code-block:: python

import paddle
import numpy as np

paddle.disable_static()
input = paddle.to_tensor(data=np.array([[0.1, 0.2], [0.3, 0.4]]), dtype="float32")
reward = paddle.nn.clip_by_norm(x=input, max_norm=1.0)
import paddle.fluid as fluid
input = fluid.data(
name='data', shape=[None, 1], dtype='float32')
reward = fluid.layers.clip_by_norm(x=input, max_norm=1.0)
"""

helper = LayerHelper("clip_by_norm", **locals())
@@ -15179,7 +15177,7 @@ def unbind(input, axis=0):
Removes a tensor dimension, then splits the input tensor into multiple sub-Tensors.
Args:
input (Variable): The input variable which is an N-D Tensor, data type being float32, float64, int32 or int64.

axis (int32|int64, optional): A scalar with type ``int32|int64`` shape [1]. The dimension along which to unbind. If :math:`axis < 0`, the
dimension to unbind along is :math:`rank(input) + axis`. Default is 0.
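The unbind operation on a 2-D nested list can be sketched in plain Python (an illustration only, not the Paddle implementation; ``unbind_nested`` is a hypothetical helper):

```python
# Sketch: unbind removes the given axis, returning one sub-tensor per
# slice along it (axis 0 yields rows, axis 1 yields columns).
def unbind_nested(x, axis=0):
    if axis == 0:
        return [row for row in x]
    return [list(col) for col in zip(*x)]  # axis == 1

print(unbind_nested([[1, 2, 3], [4, 5, 6]], axis=0))  # [[1, 2, 3], [4, 5, 6]]
print(unbind_nested([[1, 2, 3], [4, 5, 6]], axis=1))  # [[1, 4], [2, 5], [3, 6]]
```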
Returns:
