
PrimalAttention

(NeurIPS 2023) PyTorch implementation of Primal-Attention. The paper is available on OpenReview.

Primal-Attention: Self-attention through Asymmetric Kernel SVD in Primal Representation

by Yingyi Chen, Qinghua Tao, Francesco Tonin, Johan A.K. Suykens

[arXiv] [PDF] [Video] [Poster] [Project Page]



Figure 1. An illustration of Primal-Attention and canonical self-attention.
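For intuition, below is a minimal, hypothetical PyTorch sketch of a primal-style attention block: queries and keys are passed through feature maps and projected onto two learnable weight matrices (playing the role of the left/right singular directions of the asymmetric kernel), so no N×N attention matrix is ever materialized. The module name, the choice of feature map, and the way the two projection scores are combined are illustrative assumptions, not the official implementation in this repository.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PrimalAttentionSketch(nn.Module):
    """Illustrative sketch of a primal-style attention block (not the official code).

    Instead of forming the N x N attention matrix (the "dual" view),
    queries and keys are mapped through feature maps and projected onto
    two learnable weight matrices, so the cost is linear in sequence length.
    """

    def __init__(self, dim: int, num_components: int):
        super().__init__()
        self.to_q = nn.Linear(dim, dim)
        self.to_k = nn.Linear(dim, dim)
        # W_e / W_r: projection weights for the two primal score functions (assumed names).
        self.w_e = nn.Parameter(torch.randn(dim, num_components) / dim**0.5)
        self.w_r = nn.Parameter(torch.randn(dim, num_components) / dim**0.5)
        self.out = nn.Linear(2 * num_components, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, dim)
        q = self.to_q(x)
        k = self.to_k(x)
        # Simple feature map (here: L2 normalization) standing in for phi_q / phi_k.
        phi_q = F.normalize(q, dim=-1)
        phi_k = F.normalize(k, dim=-1)
        # Primal projection scores; no N x N attention matrix is materialized.
        e_score = phi_q @ self.w_e   # (batch, seq_len, num_components)
        r_score = phi_k @ self.w_r   # (batch, seq_len, num_components)
        return self.out(torch.cat([e_score, r_score], dim=-1))

# Usage example with placeholder sizes:
# layer = PrimalAttentionSketch(dim=64, num_components=16)
# out = layer(torch.randn(2, 197, 64))   # (2, 197, 64)
```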

If our project is helpful for your research, please consider citing:

@article{chen2023primal,
  title={Primal-Attention: Self-attention through Asymmetric Kernel SVD in Primal Representation},
  author={Chen, Yingyi and Tao, Qinghua and Tonin, Francesco and Suykens, Johan A.K.},
  journal={Advances in Neural Information Processing Systems},
  year={2023}
}

Table of Contents

Please refer to the individual folders for detailed instructions on each experiment. Note that we specify a separate environment for each task; the environment for reinforcement learning is the most involved to set up.

Please feel free to contact chenyingyi076@gmail.com for any discussion.

Spectrum Analysis



Figure 2. Spectrum analysis of the canonical self-attention matrix and Primal-Attention matrix on ImageNet-1K.
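As a rough illustration of how such a spectrum can be inspected, the snippet below computes the leading singular values of an attention matrix with torch.linalg.svdvals. The sequence length 197 (a ViT-style 196 patches plus a class token) and the random inputs are placeholder assumptions, not the setup used for Figure 2.

```python
import torch

def attention_spectrum(attn: torch.Tensor, top_k: int = 64) -> torch.Tensor:
    """Return the leading singular values of an attention matrix.

    attn: (seq_len, seq_len) attention matrix from a single head.
    A fast decay means a few components dominate (low rank); a flat
    spectrum means the matrix spreads its energy over many directions.
    """
    # torch.linalg.svdvals returns singular values in descending order.
    s = torch.linalg.svdvals(attn.float())
    return s[:top_k]

# Example with placeholder data: spectrum of a softmax attention map.
q = torch.randn(197, 64)
k = torch.randn(197, 64)
softmax_attn = torch.softmax(q @ k.T / 64**0.5, dim=-1)
print(attention_spectrum(softmax_attn)[:10])
```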

Acknowledgement

This repository builds on the official code of the following projects:

Time Series: Flowformer, mvts_transformer, Autoformer
LRA: LRA, Nystromformer
RL: Decision Transformer, Flowformer
CV: DeiT
NLP: fairseq
