[Docathon][Add Overview Doc No.19-21] add doc of docathon 19-21 (#6504)
* add doc of docathon 19-21

* fix some issues
gsq7474741 authored Jul 9, 2024
1 parent 154eadb commit 1b4715e
Showing 1 changed file with 6 additions and 0 deletions.

docs/api/paddle/nn/Overview_cn.rst
@@ -205,6 +205,7 @@ Transformer related

" :ref:`paddle.nn.MultiHeadAttention <cn_api_paddle_nn_MultiHeadAttention>` ", "多头注意力机制"
" :ref:`paddle.nn.functional.scaled_dot_product_attention <cn_api_paddle_nn_functional_scaled_dot_product_attention>` ", "点乘注意力机制,并在此基础上加入了对注意力权重的缩放"
" :ref:`paddle.nn.functional.sparse_attention <cn_api_paddle_nn_functional_sparse_attention>` ", "稀疏版本的 Attention API,对 Transformer 模块中的 Attention 矩阵进行了稀疏化,从而减少内存消耗和计算量"
" :ref:`paddle.nn.Transformer <cn_api_paddle_nn_Transformer>` ", "Transformer 模型"
" :ref:`paddle.nn.TransformerDecoder <cn_api_paddle_nn_TransformerDecoder>` ", "Transformer 解码器"
" :ref:`paddle.nn.TransformerDecoderLayer <cn_api_paddle_nn_TransformerDecoderLayer>` ", "Transformer 解码器层"
@@ -264,13 +265,15 @@ Loss layers
" :ref:`paddle.nn.CrossEntropyLoss <cn_api_paddle_nn_CrossEntropyLoss>` ", "交叉熵损失层"
" :ref:`paddle.nn.CTCLoss <cn_api_paddle_nn_CTCLoss>` ", "CTCLoss 层"
" :ref:`paddle.nn.HSigmoidLoss <cn_api_paddle_nn_HSigmoidLoss>` ", "层次 sigmoid 损失层"
" :ref:`paddle.nn.HingeEmbeddingLoss <cn_api_paddle_nn_HingeEmbeddingLoss>` ", "HingeEmbeddingLoss 损失层"
" :ref:`paddle.nn.KLDivLoss <cn_api_paddle_nn_KLDivLoss>` ", "Kullback-Leibler 散度损失层"
" :ref:`paddle.nn.L1Loss <cn_api_paddle_nn_L1Loss>` ", "L1 损失层"
" :ref:`paddle.nn.MarginRankingLoss <cn_api_paddle_nn_MarginRankingLoss>` ", "MarginRankingLoss 层"
" :ref:`paddle.nn.MSELoss <cn_api_paddle_nn_MSELoss>` ", "均方差误差损失层"
" :ref:`paddle.nn.NLLLoss <cn_api_paddle_nn_NLLLoss>` ", "NLLLoss 层"
" :ref:`paddle.nn.GaussianNLLLoss <cn_api_paddle_nn_GaussianNLLLoss>` ", "GaussianNLLLoss 层"
" :ref:`paddle.nn.PoissonNLLLoss <cn_api_paddle_nn_PoissonNLLLoss>`", "PoissonNLLLoss 层"
" :ref:`paddle.nn.RNNTLoss <cn_api_paddle_nn_RNNTLoss>`", "RNNTLoss 层"
" :ref:`paddle.nn.SmoothL1Loss <cn_api_paddle_nn_SmoothL1Loss>` ", "平滑 L1 损失层"
" :ref:`paddle.nn.SoftMarginLoss <cn_api_paddle_nn_SoftMarginLoss>` ", "SoftMarginLoss 层"
" :ref:`paddle.nn.TripletMarginLoss <cn_api_paddle_nn_TripletMarginLoss>` ", "TripletMarginLoss 层"
@@ -516,6 +519,8 @@ Embedding-related functions
" :ref:`paddle.nn.functional.triplet_margin_loss <cn_api_paddle_nn_functional_triplet_margin_loss>` ", "用于计算 TripletMarginLoss"
" :ref:`paddle.nn.functional.triplet_margin_with_distance_loss <cn_api_paddle_nn_functional_triplet_margin_with_distance_loss>` ", "用户自定义距离函数用于计算 triplet margin loss 损失"
" :ref:`paddle.nn.functional.multi_label_soft_margin_loss <cn_api_paddle_nn_functional_multi_label_soft_margin_loss>` ", "用于计算多分类的 hinge loss 损失函数"
" :ref:`paddle.nn.functional.hinge_embedding_loss <cn_api_paddle_nn_functional_hinge_embedding_loss>` ", "计算输入 input 和标签 label(包含 1 和 -1) 间的 `hinge embedding loss` 损失"
" :ref:`paddle.nn.functional.rnnt_loss <cn_api_paddle_nn_functional_rnnt_loss>` ", "计算 RNNT loss,也可以叫做 softmax with RNNT"
" :ref:`paddle.nn.functional.multi_margin_loss <cn_api_paddle_nn_functional_multi_margin_loss>` ", "用于计算 multi margin loss 损失函数"
" :ref:`paddle.nn.functional.adaptive_log_softmax_with_loss <cn_api_paddle_nn_functional_adaptive_log_softmax_with_loss>` ", "自适应 logsoftmax 损失函数"

@@ -548,6 +553,7 @@ Embedding-related functions
" :ref:`paddle.nn.functional.temporal_shift <cn_api_paddle_nn_functional_temporal_shift>` ", "用于对输入 X 做时序通道 T 上的位移操作,为 TSM 中使用的操作"
" :ref:`paddle.nn.functional.upsample <cn_api_paddle_nn_functional_upsample>` ", "用于调整一个 batch 中图片的大小"
" :ref:`paddle.nn.functional.class_center_sample <cn_api_paddle_nn_functional_class_center_sample>` ", "用于 PartialFC 类别中心采样"
" :ref:`paddle.nn.functional.channel_shuffle <cn_api_paddle_nn_functional_channel_shuffle>` ", "将一个形为[N, C, H, W]或是[N, H, W, C]的 Tensor 按通道分成 g 组,得到形为[N, g, C/g, H, W]或[N, H, W, g, C/g]的 Tensor,然后转置为[N, C/g, g, H, W]或[N, H, W, C/g, g]的形状,最后重新排列为原来的形状"

.. _about_initializer:

