Issues: facebookresearch/xformers
- #1180: memory_efficient_attention faster than flash attention 2 backend? (opened Dec 19, 2024 by asahni04)
- #1175: [AMD GPU] NotImplementedError: No operator found for memory_efficient_attention_forward with inputs (opened Dec 15, 2024 by Looong01)
- #1174: Does xformers offer any extra speed over PyTorch anymore? And why is my xformers file so big? (opened Dec 14, 2024 by Mescalamba)
- #1170: More precise errors and logger warnings when importing Triton fails (opened Dec 3, 2024 by b-burton)
- #1161: Visual Studio 2022, CUDA 12.4, but it still can't build the wheel and returns this (opened Nov 22, 2024 by zslefour)
- #1159: Incompatibility between xformers FA3 Torch custom-op wrapper and recent flashattn_hopper_cuda (opened Nov 21, 2024 by ohwi)
- #1155: Need to release xformers-0.0.28.post3.whl for manylinux2014_x86_64 (opened Nov 19, 2024 by controlRun)
- #1132: Can't use PowerShell command to build for nightly PyTorch on Windows: filename too long (opened Oct 22, 2024 by Mescalamba)
- #1128: xformers 0.0.20 memory-efficient attention cutlass backward is non-deterministic (opened Oct 13, 2024 by Bavesh-B)
- #1125: xformers.sparse.utils._coo_to_csr is incorrect when n > m (opened Oct 11, 2024 by francois-rozet)
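For context on issue #1125, a correct COO-to-CSR conversion builds a row-pointer array whose length depends only on the number of rows n, never on the number of columns m, so it must also work for tall matrices with n > m. The sketch below is purely illustrative (it is not xformers' `_coo_to_csr` implementation, and the helper name `coo_to_csr_rowptr` is made up for this example); it shows the standard count-then-prefix-sum construction in plain Python.

```python
# Illustrative sketch, NOT xformers' implementation: build the CSR
# row-pointer array from the row indices of a COO-format n x m matrix.
# Note the result has length n + 1 and is independent of m, which is
# the property issue #1125 says breaks when n > m.

def coo_to_csr_rowptr(n, row_indices):
    """Return CSR row pointers for the COO row indices of an n-row matrix."""
    ptr = [0] * (n + 1)
    for r in row_indices:       # count non-zeros per row
        ptr[r + 1] += 1
    for i in range(n):          # prefix-sum the counts into pointers
        ptr[i + 1] += ptr[i]
    return ptr

# 3x2 matrix (n=3 > m=2) with non-zeros in rows 0, 0, and 2:
print(coo_to_csr_rowptr(3, [0, 0, 2]))  # [0, 2, 2, 3]
```

With this layout, the non-zeros of row i occupy the half-open slice `ptr[i]:ptr[i+1]` of the CSR value array, even for empty rows.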
- #1123: Add support for cuDNN attention via the CUDNN_FRONTEND Python API? (opened Oct 10, 2024 by Skylion007)
- #1122: [Bug] Unexpected behavior of memory_efficient_attention with BlockDiagonalMask (opened Oct 10, 2024 by xiangxu-google)
- #1120: Does the memory-efficient attention cutlass kernel support variable sequence-length inputs for q/k/v plus tensor bias? (opened Oct 4, 2024 by ShijunK)
- #1118: Why doesn't xformers 0.0.28.post1 have a pre-compiled wheel for Windows? (opened Oct 1, 2024 by FurkanGozukara)
- #1117: I tried upgrading Visual Studio and CUDA 12.4 and 12.6, but it still can't build the wheel and returns this (opened Sep 29, 2024 by neutronslime)
- #1114: scaled_dot_product_attention output is different from memory_efficient_attention (opened Sep 25, 2024 by aenoca)