Issues: NVIDIA/TransformerEngine
#1379 · AttributeError: module 'transformer_engine' has no attribute 'pytorch' (opened Dec 17, 2024 by carrot0117)
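A likely cause of the AttributeError in #1379 is that the `transformer_engine.pytorch` submodule was never imported: in Python, importing a bare package does not automatically bind its submodules as attributes. A minimal, library-agnostic sketch of this behavior, using the stdlib `xml` package as a stand-in (the usual TE-side fix, `import transformer_engine.pytorch as te`, follows the same pattern):

```python
import importlib

# Importing only the bare package does not necessarily expose its submodules.
pkg = importlib.import_module("xml")

# Explicitly importing the submodule registers it as an attribute
# of the parent package object.
importlib.import_module("xml.dom")
assert hasattr(pkg, "dom")  # now resolvable as pkg.dom
```

The same rule explains why `import transformer_engine` alone can leave `transformer_engine.pytorch` unresolvable.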
#1376 · TypeError: initialize_ub() got an unexpected keyword argument 'tp_size' (opened Dec 13, 2024 by wccccp)
#1365 · TypeError: UbufP2PCommOverlap(): incompatible function arguments (opened Dec 11, 2024 by sallyjunjun)
#1349 · Support more than one shape/attention_params for the DotProductAttention decision cache (opened Nov 29, 2024 by parthmannan)
#1336 · The max error of moe_permute/unpermute.grad can reach 3.6e+00 (opened Nov 15, 2024 by NiuMa-1234)
#1318 · How can I use fp8_gemm to realize the function of torch.mm()? (opened Nov 6, 2024 by duomicoding)
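Issue #1318 asks how an FP8 GEMM can stand in for `torch.mm()`. Conceptually, FP8 GEMMs multiply operands that have been scaled into a narrow dynamic range, then divide the scales back out of the accumulated result. A hypothetical pure-Python sketch of that per-tensor-scaling idea (illustrative only, not TE's actual API, whose entry points differ across versions):

```python
def quantize(mat, amax, bits=8):
    # Map values onto a signed integer grid of the given width,
    # scaled so that `amax` lands on the largest representable value.
    levels = 2 ** (bits - 1) - 1  # 127 for 8 bits
    scale = levels / amax
    return [[round(v * scale) for v in row] for row in mat], scale

def scaled_gemm(a, b):
    # Quantize both operands, multiply in the low-precision domain,
    # then dequantize by dividing out both scale factors.
    amax_a = max(abs(v) for row in a for v in row)
    amax_b = max(abs(v) for row in b for v in row)
    qa, sa = quantize(a, amax_a)
    qb, sb = quantize(b, amax_b)
    n, k, m = len(a), len(b), len(b[0])
    return [[sum(qa[i][t] * qb[t][j] for t in range(k)) / (sa * sb)
             for j in range(m)] for i in range(n)]

a = [[1.0, 2.0], [3.0, 4.0]]
b = [[5.0, 6.0], [7.0, 8.0]]
print(scaled_gemm(a, b))  # close to the exact product [[19, 22], [43, 50]]
```

The small deviation from the exact product is the quantization error inherent to 8-bit storage; real FP8 recipes manage it by tracking `amax` history per tensor.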
#1312 · Linear does not support TP comm overlap for Column Parallel mode (opened Nov 5, 2024 by parthmannan)
#1284 · Installing TE is so hard. [bug, build] (opened Oct 23, 2024 by ZihaoZheng98)
#1261 · Shouldn't memory consumption drop when using fp8? [question] (opened Oct 16, 2024 by JayC1208)
#1249 · TransformerEngine install fails with no clear cause [bug, build] (opened Oct 14, 2024 by sytelus)