None type error with local attention #45
Trying to run the example from the readme using local attention instead of linear attention. I changed the attention_type and added an additional argument to the TransformerEncoderBuilder.from_kwargs call:
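Something like the following, starting from the README's linear-attention example (a sketch; the local_context value here is an arbitrary placeholder):

```python
import torch
from fast_transformers.builders import TransformerEncoderBuilder

# Same builder call as the README example, with the attention type
# switched to "local" and a local_context argument added.
model = TransformerEncoderBuilder.from_kwargs(
    n_layers=8,
    n_heads=8,
    query_dimensions=64,
    value_dimensions=64,
    feed_forward_dimensions=1024,
    attention_type="local",
    local_context=64,  # arbitrary window size for this sketch
).get()

x = torch.rand(10, 1000, 8 * 64)  # (batch, sequence length, model dim)
y = model(x)  # fine on CPU; moving model and x to CUDA triggers the error
```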
The example throws a None type error. It does work with the other attention modules.
Am I doing something wrong?
Is the local_context argument supposed to be an integer?
Thank you.
EDIT: it looks like it fails on CUDA only (PyTorch 1.6 with CUDA 10.1); it works on the CPU.
EDIT2: fixed by passing the --no-cache-dir argument when installing with pip (to force a recompile).
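For anyone else hitting this, that amounts to something like `pip install --no-cache-dir pytorch-fast-transformers` (package name assumed from the project's PyPI listing), which makes pip rebuild the CUDA extensions instead of reusing a cached build.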
Comments

I got similar problems.

Hi, I met the exact same issue. May I ask if you have solved it? @15805383399

I resolved this one with the methods mentioned in #96 😁