(NeurIPS 2023) PyTorch implementation of Primal-Attention. The paper is available on OpenReview.
Primal-Attention: Self-attention through Asymmetric Kernel SVD in Primal Representation
by Yingyi Chen, Qinghua Tao, Francesco Tonin, Johan A.K. Suykens
[arXiv] [PDF] [Video] [Poster] [Project Page]
Figure 1. An illustration of Primal-Attention and canonical self-attention.
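To give a rough idea of the mechanism illustrated above, here is a minimal, unofficial sketch of a Primal-Attention-style layer in plain PyTorch: the output concatenates two projection scores computed from query and key feature maps, so no N x N attention matrix is formed. The class name `PrimalAttention`, the choice of feature map (L2-normalised linear projections), and the hyperparameters `feat_dim` and `rank` are illustrative assumptions; the official code in the task folders also adds a KSVD regularisation term that is omitted here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PrimalAttention(nn.Module):
    """Illustrative sketch only, not the official implementation."""
    def __init__(self, dim: int, feat_dim: int = 64, rank: int = 32):
        super().__init__()
        # Feature maps phi_q, phi_k acting on the input sequence.
        self.to_q = nn.Linear(dim, feat_dim)
        self.to_k = nn.Linear(dim, feat_dim)
        # Primal projection weights W_e, W_r (low-rank, s = rank).
        self.W_e = nn.Parameter(torch.randn(feat_dim, rank) / feat_dim ** 0.5)
        self.W_r = nn.Parameter(torch.randn(feat_dim, rank) / feat_dim ** 0.5)
        self.out = nn.Linear(2 * rank, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, dim)
        phi_q = F.normalize(self.to_q(x), dim=-1)   # (B, N, feat_dim)
        phi_k = F.normalize(self.to_k(x), dim=-1)   # (B, N, feat_dim)
        # Primal scores e(x) = phi_q W_e and r(x) = phi_k W_r: linear in N,
        # the N x N attention matrix is never materialised.
        e = phi_q @ self.W_e                        # (B, N, rank)
        r = phi_k @ self.W_r                        # (B, N, rank)
        return self.out(torch.cat([e, r], dim=-1))  # (B, N, dim)

x = torch.randn(2, 128, 256)
print(PrimalAttention(dim=256)(x).shape)  # torch.Size([2, 128, 256])
```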
If our project is helpful for your research, please consider citing:
@article{chen2023primal,
title={Primal-Attention: Self-attention through Asymmetric Kernel SVD in Primal Representation},
author={Chen, Yingyi and Tao, Qinghua and Tonin, Francesco and Suykens, Johan A.K.},
journal={Advances in Neural Information Processing Systems},
year={2023}
}
Please refer to the individual task folders for detailed experiment instructions. Note that each task uses its own environment; the reinforcement learning environment is the most involved to set up.
Please feel free to contact chenyingyi076@gmail.com for any discussion.
- 1. Time Series Classification (UEA)
- 2. Long Sequence Modeling (LRA)
- 3. Reinforcement Learning (D4RL)
- 4. Visual Recognition (ImageNet-100, ImageNet-1K)
- 5. Language Modelling (WikiText-103)
Figure 2. Spectrum analysis of the canonical self-attention matrix and Primal-Attention matrix on ImageNet-1K.
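A small sketch of the kind of spectrum analysis shown in Figure 2: take an attention matrix and inspect the decay of its singular values. The random softmax matrix and the sequence length below are purely illustrative stand-ins; the figure itself uses attention matrices extracted from models trained on ImageNet-1K.

```python
import torch

seq_len = 197  # assumed ViT-style token count, for illustration only
scores = torch.randn(seq_len, seq_len)
attn = torch.softmax(scores, dim=-1)          # canonical (asymmetric) attention matrix

singular_values = torch.linalg.svdvals(attn)  # sorted in descending order
# Fraction of the spectral energy captured by the top-k singular values.
energy = torch.cumsum(singular_values ** 2, dim=0) / (singular_values ** 2).sum()
for k in (1, 10, 50):
    print(f"top-{k} singular values capture {energy[k - 1].item():.3f} of the energy")
```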
This repository builds on the official code of the following projects:
- Time Series: Flowformer, mvts_transformer, Autoformer
- LRA: LRA, Nystromformer
- RL: Decision Transformer, Flowformer
- CV: DeiT
- NLP: fairseq