# Twins: Revisiting the Design of Spatial Attention in Vision Transformers

Very recently, a variety of vision transformer architectures for dense prediction tasks have been proposed, and they show that the design of spatial attention is critical to their success in these tasks. In this work, we revisit the design of the spatial attention and demonstrate that a carefully devised yet simple spatial attention mechanism performs favourably against the state-of-the-art schemes. As a result, we propose two vision transformer architectures, namely Twins-PCPVT and Twins-SVT. Our proposed architectures are highly efficient and easy to implement, only involving matrix multiplications that are highly optimized in modern deep learning frameworks. More importantly, the proposed architectures achieve excellent performance on a wide range of visual tasks, including image-level classification as well as dense detection and segmentation. The simplicity and strong performance suggest that our proposed architectures may serve as stronger backbones for many vision tasks.

Figure 1. Twins-SVT-S architecture (the right side shows the inside of two consecutive Transformer Encoders).
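To make the "only matrix multiplications" claim concrete, below is a minimal, hedged sketch of the two attention patterns that Twins-SVT alternates between: locally-grouped self-attention within fixed windows, and global sub-sampled attention in which every token attends to a strided summary of the feature map. All class names, shapes, and hyper-parameters here are our own illustrative choices (the paper's modules also carry projections, norms, and residuals that are omitted), not the authors' released code.

```python
# Illustrative sketch only: simplified versions of the two attention
# patterns in Twins-SVT, written with stock PyTorch building blocks.
import torch
import torch.nn as nn


class LocalAttention(nn.Module):
    """Self-attention restricted to non-overlapping window x window patches."""

    def __init__(self, dim, num_heads=4, window=7):
        super().__init__()
        self.window = window
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x):                        # x: (B, H, W, C)
        B, H, W, C = x.shape
        k = self.window                          # assumes H and W divisible by k
        # Partition the map into (H//k * W//k) windows of k*k tokens each.
        x = x.view(B, H // k, k, W // k, k, C).permute(0, 1, 3, 2, 4, 5)
        x = x.reshape(-1, k * k, C)              # (B * num_windows, k*k, C)
        x, _ = self.attn(x, x, x)                # attention inside each window
        x = x.reshape(B, H // k, W // k, k, k, C).permute(0, 1, 3, 2, 4, 5)
        return x.reshape(B, H, W, C)


class GlobalSubsampledAttention(nn.Module):
    """Every token attends to a strided (sub-sampled) set of keys/values."""

    def __init__(self, dim, num_heads=4, sr_ratio=7):
        super().__init__()
        # A strided conv yields one representative key/value per region.
        self.sr = nn.Conv2d(dim, dim, kernel_size=sr_ratio, stride=sr_ratio)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x):                        # x: (B, H, W, C)
        B, H, W, C = x.shape
        q = x.reshape(B, H * W, C)               # every token is a query
        kv = self.sr(x.permute(0, 3, 1, 2))      # (B, C, H/sr, W/sr)
        kv = kv.flatten(2).transpose(1, 2)       # (B, H*W/sr**2, C)
        out, _ = self.attn(q, kv, kv)            # global but cheap attention
        return out.reshape(B, H, W, C)


x = torch.randn(2, 56, 56, 64)                   # a stage-1-sized feature map
print(LocalAttention(64)(x).shape)               # torch.Size([2, 56, 56, 64])
print(GlobalSubsampledAttention(64)(x).shape)    # torch.Size([2, 56, 56, 64])
```

The point of alternating the two is cost: window attention scales linearly with the number of tokens but is local, while the sub-sampled branch restores a global receptive field at a small fraction of full attention's quadratic cost.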

## Model Zoo

### Image Classification

We provide baseline Twins models pretrained on ImageNet 2012.

| Name | Alias in paper | Acc@1 (%) | FLOPs (G) | #Params (M) | URL |
| --- | --- | --- | --- | --- | --- |
| PVT+CPVT-Small | Twins-PCPVT-S | 81.2 | 3.7 | 24.1 | pcpvt_small.pth |
| PVT+CPVT-Base | Twins-PCPVT-B | 82.7 | 6.4 | 43.8 | pcpvt_base.pth |
| ALT-GVT-Small | Twins-SVT-S | 81.3 | 2.8 | 24.0 | alt_gvt_small.pth |
| ALT-GVT-Base | Twins-SVT-B | 83.1 | 8.3 | 56.0 | alt_gvt_base.pth |
| ALT-GVT-Large | Twins-SVT-L | 83.3 | 14.8 | 99.2 | alt_gvt_large.pth |

Note: Our code will be released soon.
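Until the code lands, the checkpoints above can at least be downloaded and inspected with standard PyTorch serialization. In the sketch below, `alt_gvt_small` is a hypothetical constructor name standing in for whatever the eventual release exposes; only the `torch.load` step reflects known behaviour.

```python
# Hedged sketch: inspecting a released checkpoint before the model code exists.
import torch

ckpt = torch.load("alt_gvt_small.pth", map_location="cpu")
state_dict = ckpt.get("model", ckpt)   # some releases nest weights under a key

# Once the model definitions are published, loading should look like:
# model = alt_gvt_small()              # hypothetical constructor
# model.load_state_dict(state_dict)

# For now, list a few of the parameter tensors stored in the file.
for name, tensor in list(state_dict.items())[:5]:
    print(name, tuple(tensor.shape))
```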

## Citation

@article{chu2021Twins,
	title={Twins: Revisiting the Design of Spatial Attention in Vision Transformers},
	author={Xiangxiang Chu and Zhi Tian and Yuqing Wang and Bo Zhang and Haibing Ren and Xiaolin Wei and Huaxia Xia and Chunhua Shen},
	journal={arXiv preprint arXiv:2104.13840},
	url={https://arxiv.org/pdf/2104.13840.pdf},
	year={2021}
}
