Plug-and-play Transformer modules: click an entry in the list to find each module's network structure and official complete code.
The tables below index the core code of each plug-and-play module for quick retrieval.
Surveys:
Name | Paper | Time |
---|---|---|
Transformers in Vision: A Survey (v1, v2) | https://arxiv.org/abs/2101.01169 | 2021-01-05 |
Attention mechanisms and deep learning for machine vision: A survey of the state of the art | https://arxiv.org/abs/2106.07550 | 2021-06-05 |
Modules:
Name | Paper Link | Main idea | Tutorial |
---|---|---|---|
1. Squeeze-and-Excitation | SE-2017 | Channel attention: squeeze (global average pooling) then excitation (gating MLP); see the sketch below the table | https://github.com/leader402/Plug-and-play/blob/main/cv/tutorial/SE.py |
2. Polarized Self-Attention | PSA-2021 | | https://github.com/leader402/Plug-and-play/blob/main/cv/tutorial/PSA.py |
3. Dual Attention Network | DaNet-2018 | Channel attention and spatial attention | https://github.com/leader402/Plug-and-play/blob/main/cv/tutorial/DaNet.py |
4. Self-attention | Attention Is All You Need | Query, Key, Value; see the sketch below the table | https://github.com/leader402/Plug-and-play/blob/main/cv/tutorial/self-attention.py |
5. Masked self-attention | | | |
6. Multi-head attention | | | |
7. Attention-based deep learning architectures | | | |
8. Single-channel model | | | |
9. Multi-channel model | | | |
10. Skip-layer model | | | |
11. Bottom-up/top-down model | | | |
12. CBAM: Convolutional Block Attention Module | CBAM-2018 | Channel attention followed by spatial attention; see the sketch below the table | https://github.com/leader402/Plug-and-play/blob/main/cv/tutorial/CBAM.py |
13. Non-local Neural Networks | | | |
14. Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation | Axial-Attention | Uses positional offsets | https://github.com/leader402/Plug-and-play/blob/main/cv/tutorial/axial-attention.py |
15. Contextual Transformer Networks for Visual Recognition | JD | Replaces the 3×3 convolution in the ResNet Bottleneck | |
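
For row 1, here is a minimal PyTorch sketch of the Squeeze-and-Excitation idea (the repo's SE.py tutorial may differ): global average pooling "squeezes" each channel to one scalar, and a small bottleneck MLP with a sigmoid "excites" (reweights) the channels. The reduction ratio of 16 is the paper's default.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation: reweight channels by globally pooled statistics."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels, bias=False),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        # Squeeze: global average pooling -> (B, C)
        w = x.mean(dim=(2, 3))
        # Excitation: bottleneck MLP + sigmoid gate -> (B, C, 1, 1)
        w = self.fc(w).view(b, c, 1, 1)
        return x * w

x = torch.randn(2, 64, 32, 32)
print(SEBlock(64)(x).shape)  # torch.Size([2, 64, 32, 32])
```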
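
For row 4, a single-head scaled dot-product self-attention sketch in the spirit of Attention Is All You Need (the repo's self-attention.py may differ); the fused QKV projection and the output projection are choices made here for brevity, not part of the repo's code.

```python
import math
import torch
import torch.nn as nn

class SelfAttention(nn.Module):
    """Single-head scaled dot-product self-attention over a token sequence."""
    def __init__(self, dim: int):
        super().__init__()
        self.to_qkv = nn.Linear(dim, dim * 3, bias=False)  # fused Q, K, V projection
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):  # x: (B, N, D)
        q, k, v = self.to_qkv(x).chunk(3, dim=-1)
        # Attention weights: softmax(Q K^T / sqrt(D)) -> (B, N, N)
        attn = torch.softmax(q @ k.transpose(-2, -1) / math.sqrt(q.size(-1)), dim=-1)
        return self.proj(attn @ v)

x = torch.randn(2, 49, 64)  # e.g. a 7x7 feature map flattened into 49 tokens
print(SelfAttention(64)(x).shape)  # torch.Size([2, 49, 64])
```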
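
For row 12, a sketch of CBAM under the same caveat (the repo's CBAM.py may differ): channel attention from a shared MLP over average- and max-pooled channel descriptors, followed by spatial attention from a 7×7 convolution over stacked channel-wise average and max maps.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels, bias=False),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        # Shared MLP over average- and max-pooled channel descriptors
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        return torch.sigmoid(avg + mx).view(b, c, 1, 1)

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x):
        # Stack channel-wise average and max maps, then convolve to one attention map
        s = torch.cat([x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1)
        return torch.sigmoid(self.conv(s))

class CBAM(nn.Module):
    """CBAM: channel attention followed by spatial attention."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.ca = ChannelAttention(channels, reduction)
        self.sa = SpatialAttention()

    def forward(self, x):
        x = x * self.ca(x)
        return x * self.sa(x)

x = torch.randn(2, 64, 32, 32)
print(CBAM(64)(x).shape)  # torch.Size([2, 64, 32, 32])
```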