EfficientFormer seems a bit similar https://arxiv.org/pdf/2206.01191v1.pdf
🚀 Feature
Add support for https://arxiv.org/abs/2202.09741
Motivation
Superb results, and the structure is quite different from other supported models, so it would be a good test of whether the existing abstractions hold up.
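For context on the "different structure": the paper (arXiv 2202.09741) builds its blocks around Large Kernel Attention, which decomposes a large (roughly 21×21) convolution into a 5×5 depthwise conv, a 7×7 depthwise conv with dilation 3, and a 1×1 conv, then multiplies the result with the input as an attention map. A back-of-envelope sketch of why the decomposition is cheap (plain Python; the helper names and the channel count are mine, not from the paper):

```python
def depthwise_params(k: int, channels: int) -> int:
    """Spatial parameters of a k x k depthwise conv (bias ignored)."""
    return k * k * channels

def lka_params(channels: int, dw_k: int = 5, dd_k: int = 7) -> int:
    """Parameters of the decomposed block: depthwise conv + dilated
    depthwise conv + 1x1 channel-mixing conv (bias ignored)."""
    return (dw_k * dw_k * channels      # 5x5 depthwise conv
            + dd_k * dd_k * channels    # 7x7 depthwise conv, dilation 3
            + channels * channels)      # 1x1 pointwise conv

def stacked_rf(k1: int, k2: int, d2: int) -> int:
    """Receptive field of a k1 conv followed by a k2 conv with dilation d2."""
    return k1 + (k2 - 1) * d2

C = 64  # example channel count, chosen arbitrarily
print(depthwise_params(21, C))  # naive 21x21 depthwise: 28224 params
print(lka_params(C))            # decomposed block: 8832 params
print(stacked_rf(5, 7, 3))      # effective field: 23 (~ a 21x21 conv)
```

So the decomposed block covers roughly the same receptive field at a small fraction of the spatial parameters, which is the part that doesn't map onto the usual attention abstractions.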
Pitch
Support it in parts, as a follow-up to the PoolFormer integration.
Alternatives
Not doing it
Additional context
Official implementation: https://github.com/Visual-Attention-Network