Errors: text classification program with multi-head-self-attention #8999
Comments
Are you using paddle.v2 and fluid together? They have two different backends, and you cannot use them in a single network.
I don't think that is where the problem lies. Indeed, I want to use multi-head attention in my model; however, no API documentation for it can be found here.
@gmcather Thanks for posting your source code; that is very helpful. In addition, could you let us know how you ran the experiment? Did you run a Docker image? If so, could you please post how you got the image (from Docker Hub?) and the command line you used? Thanks!
It seems that you want to implement this model in Fluid.
Yes, I'm trying to reproduce the multi-head self-attention from the paper "Attention Is All You Need". The slight difference is that I want to use it in a text classification model instead of NMT.
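Since the original Fluid code was not included above, here is a minimal framework-agnostic NumPy sketch of the mechanism being discussed: multi-head scaled dot-product self-attention followed by mean pooling and a linear classifier, as one might use it for text classification. All function and variable names here are illustrative, not part of any Paddle API, and the random weights stand in for learned parameters.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(x, wq, wk, wv, wo, num_heads):
    """Multi-head scaled dot-product self-attention over one sequence.

    x:              (seq_len, d_model) token representations
    wq, wk, wv, wo: (d_model, d_model) projection matrices
    """
    seq_len, d_model = x.shape
    d_head = d_model // num_heads

    # Project, then split into heads: (num_heads, seq_len, d_head)
    def project(w):
        return (x @ w).reshape(seq_len, num_heads, d_head).transpose(1, 0, 2)

    q, k, v = project(wq), project(wk), project(wv)

    # Scaled dot-product attention, computed per head
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d_head)  # (h, s, s)
    attn = softmax(scores, axis=-1)
    heads = attn @ v                                      # (h, s, d_head)

    # Concatenate heads and apply the output projection
    concat = heads.transpose(1, 0, 2).reshape(seq_len, d_model)
    return concat @ wo

rng = np.random.default_rng(0)
seq_len, d_model, num_heads, num_classes = 6, 16, 4, 3

x = rng.normal(size=(seq_len, d_model))
wq, wk, wv, wo = (rng.normal(size=(d_model, d_model)) * 0.1 for _ in range(4))

ctx = multi_head_self_attention(x, wq, wk, wv, wo, num_heads)

# For classification (rather than NMT decoding): mean-pool over time,
# then project to class logits with a placeholder classifier matrix.
w_cls = rng.normal(size=(d_model, num_classes)) * 0.1
probs = softmax(ctx.mean(axis=0) @ w_cls)
print(ctx.shape, probs.shape)  # prints: (6, 16) (3,)
```

The only structural difference from the NMT setting is the pooling-plus-classifier head at the end; the attention block itself is unchanged.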
I am trying to write a text classification program with multi-head self-attention, but the program threw an error that is hard to diagnose. Please help, thank you!
Here is the code: