[Docathon][Add Overview Doc No.19-21] add doc of docathon 19-21 #6504

Merged 3 commits on Jul 9, 2024
8 changes: 8 additions & 0 deletions docs/api/paddle/nn/Overview_cn.rst
@@ -201,6 +201,7 @@ Transformer related

" :ref:`paddle.nn.MultiHeadAttention <cn_api_paddle_nn_MultiHeadAttention>` ", "Multi-head attention mechanism"
" :ref:`paddle.nn.functional.scaled_dot_product_attention <cn_api_paddle_nn_functional_scaled_dot_product_attention>` ", "Dot-product attention mechanism, with scaling of the attention weights added on top"
" :ref:`paddle.nn.functional.sparse_attention <cn_api_paddle_nn_functional_sparse_attention>` ", "Sparse variant of the attention API; sparsifies the attention matrix in the Transformer module to reduce memory consumption and computation"
" :ref:`paddle.nn.Transformer <cn_api_paddle_nn_Transformer>` ", "Transformer model"
" :ref:`paddle.nn.TransformerDecoder <cn_api_paddle_nn_TransformerDecoder>` ", "Transformer decoder"
" :ref:`paddle.nn.TransformerDecoderLayer <cn_api_paddle_nn_TransformerDecoderLayer>` ", "Transformer decoder layer"
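
As a quick orientation to the attention entries above, here is a minimal self-attention sketch using `paddle.nn.MultiHeadAttention`; the `[batch, seq_len, embed_dim]` shapes and default arguments are assumptions from common Paddle 2.x usage, not part of this diff.

```python
import paddle

# Self-attention: query, key and value are the same [batch, seq, embed] tensor.
mha = paddle.nn.MultiHeadAttention(embed_dim=64, num_heads=4)
x = paddle.randn([2, 16, 64])
out = mha(x, x, x)
print(out.shape)  # [2, 16, 64]
```
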
@@ -259,13 +260,15 @@ Loss layers
" :ref:`paddle.nn.CrossEntropyLoss <cn_api_paddle_nn_CrossEntropyLoss>` ", "Cross-entropy loss layer"
" :ref:`paddle.nn.CTCLoss <cn_api_paddle_nn_CTCLoss>` ", "CTC loss layer"
" :ref:`paddle.nn.HSigmoidLoss <cn_api_paddle_nn_HSigmoidLoss>` ", "Hierarchical sigmoid loss layer"
" :ref:`paddle.nn.HingeEmbeddingLoss <cn_api_paddle_nn_HingeEmbeddingLoss>` ", "Hinge embedding loss layer"
" :ref:`paddle.nn.KLDivLoss <cn_api_paddle_nn_KLDivLoss>` ", "Kullback-Leibler divergence loss layer"
" :ref:`paddle.nn.L1Loss <cn_api_paddle_nn_L1Loss>` ", "L1 loss layer"
" :ref:`paddle.nn.MarginRankingLoss <cn_api_paddle_nn_MarginRankingLoss>` ", "Margin ranking loss layer"
" :ref:`paddle.nn.MSELoss <cn_api_paddle_nn_MSELoss>` ", "Mean squared error loss layer"
" :ref:`paddle.nn.NLLLoss <cn_api_paddle_nn_NLLLoss>` ", "Negative log likelihood loss layer"
" :ref:`paddle.nn.GaussianNLLLoss <cn_api_paddle_nn_GaussianNLLLoss>` ", "Gaussian negative log likelihood loss layer"
" :ref:`paddle.nn.PoissonNLLLoss <cn_api_paddle_nn_PoissonNLLLoss>` ", "Poisson negative log likelihood loss layer"
" :ref:`paddle.nn.RNNTLoss <cn_api_paddle_nn_RNNTLoss>` ", "RNN-Transducer (RNNT) loss layer"
" :ref:`paddle.nn.SmoothL1Loss <cn_api_paddle_nn_SmoothL1Loss>` ", "Smooth L1 loss layer"
" :ref:`paddle.nn.SoftMarginLoss <cn_api_paddle_nn_SoftMarginLoss>` ", "Soft margin loss layer"
" :ref:`paddle.nn.TripletMarginLoss <cn_api_paddle_nn_TripletMarginLoss>` ", "Triplet margin loss layer"
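
A minimal sketch of the newly listed `paddle.nn.HingeEmbeddingLoss`, assuming the usual definition (labels in {1, -1}; for label 1 the loss is the input itself, for label -1 it is max(0, margin - input)) and the default margin of 1.0:

```python
import paddle

loss_fn = paddle.nn.HingeEmbeddingLoss(margin=1.0, reduction='mean')
input = paddle.to_tensor([0.3, -0.6, 1.5], dtype='float32')
label = paddle.to_tensor([1.0, -1.0, -1.0], dtype='float32')

# Per-element losses: 0.3, max(0, 1 - (-0.6)) = 1.6, max(0, 1 - 1.5) = 0
print(loss_fn(input, label))  # mean ≈ 0.633
```
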
@@ -284,6 +287,7 @@ Vision layers


" :ref:`paddle.nn.ChannelShuffle <cn_api_paddle_nn_ChannelShuffle>` ", "Splits a Tensor of shape [N, C, H, W] or [N, H, W, C] into g groups along the channel dimension to get a Tensor of shape [N, g, C/g, H, W] or [N, H, W, g, C/g], transposes it to [N, C/g, g, H, W] or [N, H, W, C/g, g], and finally rearranges it back to the original shape"
" :ref:`paddle.nn.functional.channel_shuffle <cn_api_paddle_nn_functional_channel_shuffle>` ", "Splits a Tensor of shape [N, C, H, W] or [N, H, W, C] into g groups along the channel dimension to get a Tensor of shape [N, g, C/g, H, W] or [N, H, W, g, C/g], transposes it to [N, C/g, g, H, W] or [N, H, W, C/g, g], and finally rearranges it back to the original shape"
" :ref:`paddle.nn.PixelShuffle <cn_api_paddle_nn_PixelShuffle>` ", "Rearranges a Tensor of shape [N, C, H, W] or [N, H, W, C] into a Tensor of shape [N, C/r**2, H*r, W*r] or [N, H*r, W*r, C/r**2]"
" :ref:`paddle.nn.PixelUnshuffle <cn_api_paddle_nn_PixelUnshuffle>` ", "Inverse of PixelShuffle; rearranges a Tensor of shape [N, C, H, W] or [N, H, W, C] into a Tensor of shape [N, C*r*r, H/r, W/r] or [N, H/r, W/r, C*r*r]"
" :ref:`paddle.nn.Upsample <cn_api_paddle_nn_Upsample>` ", "Resizes the images in a batch"
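
The channel-shuffle description is easier to see on a concrete tensor. A small sketch, assuming groups must divide the channel count; the interleaved output order is what the reshape-transpose-reshape description above implies:

```python
import paddle
import paddle.nn.functional as F

# Six channels 0..5, shuffled in 3 groups of 2: the reshape/transpose
# interleaves channels across groups.
x = paddle.arange(6, dtype='float32').reshape([1, 6, 1, 1])
y = F.channel_shuffle(x, groups=3)
print(y.flatten().tolist())  # expected: [0, 2, 4, 1, 3, 5]
```
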
@@ -420,6 +424,7 @@ Padding related functions
" :ref:`paddle.nn.functional.tanhshrink <cn_api_paddle_nn_functional_tanhshrink>` ", "tanhshrink activation function"
" :ref:`paddle.nn.functional.thresholded_relu <cn_api_paddle_nn_functional_thresholded_relu>` ", "thresholded_relu activation function"
" :ref:`paddle.nn.functional.thresholded_relu_ <cn_api_paddle_nn_functional_thresholded_relu_>` ", "Inplace version of the :ref:`cn_api_paddle_nn_functional_thresholded_relu` API; applies the inplace strategy to the input x"
" :ref:`paddle.nn.functional.tanh_ <cn_api_paddle_nn_functional_tanh_>` ", "Inplace version of the tanh API; applies the inplace strategy to the input x"
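
A minimal sketch of the inplace variant added above, assuming `paddle.nn.functional.tanh_` behaves like the other `_`-suffixed inplace APIs (the input tensor itself is overwritten):

```python
import paddle
import paddle.nn.functional as F

x = paddle.to_tensor([-1.0, 0.0, 1.0])
F.tanh_(x)         # inplace: x is overwritten with tanh(x)
print(x.tolist())  # approx. [-0.7616, 0.0, 0.7616]
```
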

.. _normalization_functional:

@@ -506,6 +511,8 @@ Embedding related functions
" :ref:`paddle.nn.functional.triplet_margin_loss <cn_api_paddle_nn_functional_triplet_margin_loss>` ", "Computes the triplet margin loss"
" :ref:`paddle.nn.functional.triplet_margin_with_distance_loss <cn_api_paddle_nn_functional_triplet_margin_with_distance_loss>` ", "Computes the triplet margin loss with a user-defined distance function"
" :ref:`paddle.nn.functional.multi_label_soft_margin_loss <cn_api_paddle_nn_functional_multi_label_soft_margin_loss>` ", "Computes the hinge loss for multi-class classification"
" :ref:`paddle.nn.functional.hinge_embedding_loss <cn_api_paddle_nn_functional_hinge_embedding_loss>` ", "Computes the `hinge embedding loss` between the input and a label containing 1 and -1"
" :ref:`paddle.nn.functional.rnnt_loss <cn_api_paddle_nn_functional_rnnt_loss>` ", "Computes the RNNT loss, also known as softmax with RNNT"
" :ref:`paddle.nn.functional.multi_margin_loss <cn_api_paddle_nn_functional_multi_margin_loss>` ", "Computes the multi margin loss"
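
The functional losses mirror the layer APIs above. A sketch of `paddle.nn.functional.hinge_embedding_loss` under the same assumptions as the layer example (default margin 1.0, labels in {1, -1}):

```python
import paddle
import paddle.nn.functional as F

input = paddle.to_tensor([0.3, -0.6, 1.5], dtype='float32')
label = paddle.to_tensor([1.0, -1.0, -1.0], dtype='float32')

# Same math as paddle.nn.HingeEmbeddingLoss, without a layer object.
loss = F.hinge_embedding_loss(input, label, margin=1.0, reduction='mean')
print(loss)  # mean ≈ 0.633
```
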


@@ -549,6 +556,7 @@ Embedding related functions

" :ref:`paddle.nn.initializer.Assign <cn_api_paddle_nn_initializer_Assign>` ", "Initializes a parameter from a NumPy array, Python list, or Tensor"
" :ref:`paddle.nn.initializer.Bilinear <cn_api_paddle_nn_Bilinear>` ", "Parameter initialization function used in transposed convolution"
" :ref:`paddle.LazyGuard <cn_api_paddle_LazyGuard>` ", "Defers memory allocation: parameters of a model instantiated under this guard do not immediately request memory"
" :ref:`paddle.nn.initializer.Constant <cn_api_paddle_nn_initializer_Constant>` ", "Weight initializer that fills the variable with the given value"
" :ref:`paddle.nn.initializer.KaimingNormal <cn_api_paddle_nn_initializer_KaimingNormal>` ", "Kaiming normal distribution weight initialization"
" :ref:`paddle.nn.initializer.KaimingUniform <cn_api_paddle_nn_initializer_KaimingUniform>` ", "Kaiming uniform distribution weight initialization"
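
A sketch of how these initializers attach to a layer via `paddle.ParamAttr`, plus the `paddle.LazyGuard` entry added above; the exact LazyGuard semantics (when parameters are materialized) are an assumption based on the table description:

```python
import paddle
from paddle import nn

# Initializers are attached to a layer's parameters through ParamAttr.
fc = nn.Linear(
    4, 8,
    weight_attr=paddle.ParamAttr(initializer=nn.initializer.KaimingNormal()),
    bias_attr=paddle.ParamAttr(initializer=nn.initializer.Constant(0.0)),
)

# Under LazyGuard, layers defer allocating parameter memory at
# construction time (assumed usage based on the description above).
with paddle.LazyGuard():
    lazy_fc = nn.Linear(4, 8)
```
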