modify doc for paddle.nn.Layer #27624

Merged · 10 commits · Oct 8, 2020
56 changes: 27 additions & 29 deletions python/paddle/fluid/dygraph/layers.py
@@ -70,7 +70,7 @@ class Layer(core.Layer):
can be "my_layer_0.w_n", where "w" is the parameter
base name and "n" is an unique suffix auto-generated.
If None, prefix name will be snake cased class name. Default: None.
dtype(str or core.VarDesc.VarType, optional): data type of this parameter.
dtype(str, optional): data type of this parameter.
If set str, it can be "bool", "float16", "float32", "float64",
"int8", "int16", "int32", "int64", "uint8" or "uint16".
Default: "float32"
Contributor:
Line 73: there is no need to expose core.VarDesc.VarType to users.

Contributor Author:
done, thx!

@@ -198,7 +198,7 @@ def apply(self, fn):
def init_weights(layer):
if type(layer) == nn.Linear:
print('before init weight:', layer.weight.numpy())
new_weight = paddle.fill_constant(layer.weight.shape, layer.weight.dtype, value=0.9)
new_weight = paddle.fill(layer.weight.shape, layer.weight.dtype, value=0.9)
Contributor:
Paddle only has paddle.full; there is no paddle.fill.

Contributor Author:
Sorry, fixed.

layer.weight.set_value(new_weight)
print('after init weight:', layer.weight.numpy())
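
For reference, a runnable version of this example with the corrected API (a minimal sketch; the two-layer Sequential wiring is an assumption, and paddle >= 2.0 is assumed for paddle.full):

import paddle
import paddle.nn as nn

def init_weights(layer):
    if type(layer) == nn.Linear:
        print('before init weight:', layer.weight.numpy())
        # paddle.full(shape, fill_value, dtype) is the API that exists,
        # per the review thread above
        new_weight = paddle.full(layer.weight.shape, 0.9, dtype=layer.weight.dtype)
        layer.weight.set_value(new_weight)
        print('after init weight:', layer.weight.numpy())

net = nn.Sequential(nn.Linear(2, 2), nn.Linear(2, 2))
net.apply(init_weights)  # runs init_weights on net and on every sublayer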

@@ -350,17 +350,17 @@ def create_parameter(self,

Parameters:
shape(list): Shape of the parameter.
attr(ParamAttr, optional): Parameter attribute of weight. Please refer to :ref:`api_fluid_ParamAttr`. Default: None.
dtype(str or core.VarDesc.VarType or str, optional): Data type of this parameter.
attr(ParamAttr, optional): Parameter attribute of weight. Please refer to :ref:`api_paddle_ParamAttr`. Default: None.
dtype(str, optional): Data type of this parameter.
If set str, it can be "bool", "float16", "float32", "float64",
"int8", "int16", "int32", "int64", "uint8" or "uint16". Default: "float32".
is_bias(bool, optional): if this is a bias parameter. Default: False.
default_initializer(Initializer, optional): the default initializer for this parameter.
If set None, default initializer will be set to :ref:`api_fluid_initializer_XavierInitializer` and :ref:`api_fluid_initializer_ConstantInitializer`
If set None, default initializer will be set to :ref:`_api_paddle_fluid_initializer_Xavier` and :ref:`_api_paddle_fluid_initializer_Constant`
Contributor:
This still isn't fixed correctly.

Contributor Author:
After discussion, we'll use paddle.nn.initializer.Constant and paddle.nn.initializer.Xavier for now.

for non-bias and bias parameter, respectively. Default: None.

Returns:
:ref:`api_guide_Variable_en` : created parameter.
:Tensor, created parameter.

Contributor:
  • Line 353: :ref:`api_fluid_ParamAttr` should now be :ref:`api_paddle_ParamAttr`.
  • Line 354: core.VarDesc.VarType should not be exposed to users, and "str" is written twice.
  • Line 359: update :ref:`api_fluid_initializer_XavierInitializer` and :ref:`api_fluid_initializer_ConstantInitializer` to the corresponding APIs under paddle.nn.initializer.

Contributor Author:
done, thx!

Examples:
.. code-block:: python
@@ -389,24 +389,19 @@ def forward(self, input):
default_initializer)
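
To make the agreed-upon API concrete, a minimal sketch of create_parameter inside a custom Layer (the layer name, shapes, and the explicit initializer are assumptions for illustration):

import paddle

class MyLayer(paddle.nn.Layer):
    def __init__(self):
        super(MyLayer, self).__init__()
        # shape and dtype follow the signature documented above; passing an
        # explicit initializer is optional and shown only for illustration
        self.w = self.create_parameter(
            shape=[2, 3],
            dtype="float32",
            default_initializer=paddle.nn.initializer.Constant(value=1.0))

    def forward(self, x):
        return paddle.matmul(x, self.w)

layer = MyLayer()
x = paddle.ones([4, 2], dtype="float32")
print(layer(x).shape)  # [4, 3]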

# TODO: Add more parameter list when we need them
def create_variable(self,
name=None,
persistable=None,
dtype=None,
type=core.VarDesc.VarType.LOD_TENSOR):
def create_variable(self, name=None, persistable=None, dtype=None):
"""Create Variable for this layer.

Parameters:
name(str, optional): name of the variable. Please refer to :ref:`api_guide_Name` . Default: None
persistable(bool, optional): if set this variable persistable. Default: False
dtype(str or core.VarDesc.VarType, optional): data type of this parameter.
dtype(str, optional): data type of this parameter.
If set str, it can be "bool", "float16", "float32", "float64",
"int8", "int16", "int32", "int64", "uint8" or "uint16".
If set None, it will be "float32". Default: None
type(core.VarDesc.VarType, optional): type of the variable. No need to set this parameter. Default: ``core.VarDesc.VarType.LOD_TENSOR``

Returns:
:ref:`api_guide_Variable_en` : created Variable.
Tensor, created Variable.

Examples:
.. code-block:: python
@@ -436,7 +431,10 @@ def forward(self, input):
[self._full_name, "_generated_var"]))

return self._helper.main_program.current_block().create_var(
name=var_name, persistable=persistable, dtype=dtype, type=type)
name=var_name,
persistable=persistable,
dtype=dtype,
type=core.VarDesc.VarType.LOD_TENSOR)
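
A minimal sketch of the documented usage, filling in the collapsed example lines on the pattern of the surrounding docstring (the variable name and dtype are assumptions):

import paddle

class MyLinear(paddle.nn.Layer):
    def __init__(self, in_features, out_features):
        super(MyLinear, self).__init__()
        self.linear = paddle.nn.Linear(in_features, out_features)
        # a non-parameter Variable owned by this layer; optimizers ignore it
        self.back_var = self.create_variable(name="linear_tmp_0",
                                             dtype="float32")

    def forward(self, input):
        out = self.linear(input)
        paddle.assign(out, self.back_var)  # stash the activation for later use
        return out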

def parameters(self, include_sublayers=True):
"""Returns a list of all Parameters from current layer and its sub-layers.
@@ -445,7 +443,7 @@ def parameters(self, include_sublayers=True):
include_sublayers(bool, optional): Whether include the parameters of sublayers. If True, also include the parameters from sublayers. Default: True

Returns:
list of :ref:`api_guide_Variable_en` : a list of Parameters.
list of Tensor : a list of Parameters.

Examples:
.. code-block:: python
@@ -634,11 +632,11 @@ def named_sublayers(self,
layers_set=layers_set):
yield p, l

def register_buffer(self, name, variable, persistable=True):
def register_buffer(self, name, tensor, persistable=True):
"""
Registers a variable as buffer into the layer.
Registers a tensor as buffer into the layer.

`buffer` is a non-parameteric variable and will not be updated by optimizer,
`buffer` is a non-trainable tensor and will not be updated by optimizer,
but is necessary for evaluation and inference. For example, the mean and variance in BatchNorm layers.
The registered buffer is persistable by default, and will be saved into
`state_dict` alongside parameters. If set persistable=False, it registers
@@ -649,7 +647,7 @@ def register_buffer(self, name, variable, persistable=True):
Parameters:
name (string): name of the buffer. The buffer can be accessed
from this layer using the given name
variable (Variable): the variable to be registered as buffer.
tensor (Tensor): the tensor to be registered as buffer.
persistable (bool): whether the buffer is part of this layer's
state_dict.
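
A minimal usage sketch (the buffer name and value here are assumptions):

import numpy as np
import paddle

linear = paddle.nn.Linear(10, 3)
mean = paddle.to_tensor(np.zeros([3], dtype="float32"))
# saved into state_dict (persistable=True) but never updated by optimizers
linear.register_buffer("running_mean", mean, persistable=True)
print(linear.buffers())  # the registered tensor appears here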

@@ -688,12 +686,12 @@ def register_buffer(self, name, variable, persistable=True):
raise KeyError("The name of buffer can not be empty.")
elif hasattr(self, name) and name not in self._buffers:
raise KeyError("attribute '{}' already exists.".format(name))
elif variable is not None and not type(variable) == core.VarBase:
elif tensor is not None and not type(tensor) == core.VarBase:
raise TypeError(
"The registered buffer should be a core.VarBase, but received {}.".
format(type(variable).__name__))
format(type(tensor).__name__))
else:
self._buffers[name] = variable
self._buffers[name] = tensor
if persistable:
self._non_persistable_buffer_names_set.discard(name)
else:
@@ -707,7 +705,7 @@ def buffers(self, include_sublayers=True):
include_sublayers(bool, optional): Whether include the buffers of sublayers. If True, also include the buffers from sublayers. Default: True

Returns:
list of :ref:`api_guide_Variable_en` : a list of buffers.
list of Tensor : a list of buffers.

Examples:
.. code-block:: python
@@ -732,15 +730,15 @@

def named_buffers(self, prefix='', include_sublayers=True):
"""
Returns an iterator over all buffers in the Layer, yielding tuple of name and Variable.
Returns an iterator over all buffers in the Layer, yielding tuple of name and Tensor.

Parameters:
prefix(str, optional): Prefix to prepend to all buffer names. Default: ''.
include_sublayers(bool, optional): Whether include the buffers of sublayers.
If True, also include the named buffers from sublayers. Default: True.

Yields:
(string, Variable): Tuple of name and Variable
(string, Tensor): Tuple of name and tensor

Examples:
.. code-block:: python
@@ -750,12 +748,12 @@

fc1 = paddle.nn.Linear(10, 3)
buffer1 = paddle.to_tensor(np.array([0]).astype("float32"))
# register a variable as buffer by specific `persistable`
# register a tensor as buffer by specific `persistable`
fc1.register_buffer("buf_name_1", buffer1, persistable=True)

fc2 = paddle.nn.Linear(3, 10)
buffer2 = paddle.to_tensor(np.array([1]).astype("float32"))
# register a buffer by assigning an attribute with Variable.
# register a buffer by assigning an attribute with Tensor.
# The `persistable` can only be False by this way.
fc2.buf_name_2 = buffer2
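
# a continuation sketch (these lines are collapsed in the diff view above):
# named_buffers yields (name, Tensor) tuples for all registered buffers
for name, buffer in fc2.named_buffers():
    print(name, buffer)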

@@ -800,7 +798,7 @@ def clear_gradients(self):
parameters=linear.parameters())
out = linear(a)
out.backward()
adam.minimize(out)
adam.step()
linear.clear_gradients()

"""