Visualize self-attention matrix #178
Hi there 👋, thank you so much for your attention to PyPOTS! You can follow me on GitHub to receive the latest news about PyPOTS. If you find PyPOTS helpful to your work, please star ⭐️ this repository. Your star is your recognition, which can help more people notice PyPOTS and grow the PyPOTS community. It matters and is definitely a kind of contribution to the community.

I have received your message and will respond ASAP. Thank you for your patience! 😃

Best,
Hey @gugababa, thanks for starting this issue! Your request is similar to the one @vemuribv asked for in #177. You both want something from the representations learned by the models rather than the final results, which could be useful for analyzing the models' behavior. Sounds reasonable and necessary, so let's make it!

Could you please make a PR to add a function that helps visualize the SAITS model's attention matrix? I will adjust the framework API to let the model return its attention matrix for your function to do the visualization task. After your PR gets merged, you will be listed among the PyPOTS contributors: https://pypots.com/about/#all-contributors

What do you think? 😃
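A helper along those lines might start from something like the sketch below. Everything here is an assumption for illustration: the function name `aggregate_attention` and the input shape `(n_samples, n_heads, n_steps, n_steps)` are guesses, not part of PyPOTS' actual API, and the real returned shape is discussed later in this thread.

```python
import numpy as np

def aggregate_attention(attn_weights: np.ndarray) -> np.ndarray:
    """Collapse per-sample, per-head attention weights into a single
    (n_steps, n_steps) map suitable for plotting as a heatmap.

    attn_weights is assumed to have shape
    (n_samples, n_heads, n_steps, n_steps); this layout is a guess,
    not the confirmed PyPOTS output format.
    """
    assert attn_weights.ndim == 4, "expected (n_samples, n_heads, n_steps, n_steps)"
    # Average over samples (axis 0) and heads (axis 1), leaving a
    # square step-by-step similarity map.
    return attn_weights.mean(axis=(0, 1))

# Example with dummy weights: 8 samples, 4 heads, 24 time steps.
dummy = np.random.rand(8, 4, 24, 24)
attn_map = aggregate_attention(dummy)
print(attn_map.shape)  # (24, 24)
```

The resulting 2-D map could then be handed to any plotting backend (e.g. a heatmap) by the visualization function proposed above.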
Hi Wenjie,
That sounds good to me! If I have any questions about the current API, I
will let you know. I'll create the function soon.
Best,
Anshu
Cool! Let me know if you have any questions, Anshu ;-)
Best Regards,
Wenjie
Hi Wenjie,
I was just wondering what the shape of the attention weights is, so I can apply the correct assertions. I assume that each attention layer returns its own attention weights, and that the weights across the heads of a layer are averaged, based on what I see in the API. Are the attention weights given for each batch as well? In addition, I am writing the function to be compatible with TensorBoard's image summary function so it can be written to the final TensorBoard file.
Let me know as soon as you can, no rush. :) Thanks again!
Best,
Anshu
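On the TensorBoard side, the image summary API expects a rank-4 tensor of shape `(k, height, width, channels)`. A minimal sketch of converting one 2-D attention map into that layout could look like the following; the min-max scaling to `uint8` is just one plausible choice, not a requirement.

```python
import numpy as np

def attention_to_image(attn_map: np.ndarray) -> np.ndarray:
    """Turn a (n_steps, n_steps) attention map into a
    (1, n_steps, n_steps, 1) uint8 array, the rank-4 layout that
    TensorBoard's image summary expects."""
    lo, hi = attn_map.min(), attn_map.max()
    # Min-max scale to [0, 1]; the epsilon guards against a
    # constant map causing division by zero.
    scaled = (attn_map - lo) / (hi - lo + 1e-12)
    img = (scaled * 255).astype(np.uint8)
    # Add a leading batch axis and a trailing channel axis.
    return img[np.newaxis, :, :, np.newaxis]

img = attention_to_image(np.random.rand(24, 24))
print(img.shape, img.dtype)  # (1, 24, 24, 1) uint8
```

The returned array could then be passed to `tf.summary.image` (or an equivalent writer) to land in the TensorBoard event file.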
Hi Anshu,
Self-attention calculates the similarities between time steps, and the attention weight map represents those similarities. For each attention layer, the shape of the attention weights is
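To make that shape concrete, here is a minimal scaled dot-product attention in NumPy. The shapes used here are illustrative assumptions: it produces weights of shape `(n_heads, n_steps, n_steps)`, where each row is a distribution over time steps, but the exact shape SAITS returns may differ (e.g. with an extra batch dimension in front).

```python
import numpy as np

def scaled_dot_product_attention(q: np.ndarray, k: np.ndarray) -> np.ndarray:
    """q, k: (n_heads, n_steps, d_k). Returns attention weights of
    shape (n_heads, n_steps, n_steps); row i says how much step i
    attends to every step, and each row sums to 1."""
    d_k = q.shape[-1]
    # Pairwise similarity scores between time steps, scaled by sqrt(d_k).
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d_k)
    # Numerically stable softmax over the last axis.
    scores -= scores.max(axis=-1, keepdims=True)
    exp = np.exp(scores)
    return exp / exp.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
q = rng.standard_normal((4, 24, 16))  # 4 heads, 24 steps, d_k = 16
k = rng.standard_normal((4, 24, 16))
weights = scaled_dot_product_attention(q, k)
print(weights.shape)  # (4, 24, 24)
```

Because each row sums to 1, summing a column gives a rough "how much is this time step attended to" score, which is the kind of per-point signal the MRI use case below is after.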
1. Feature description
Hi! Would it be possible to add an option to visualize the final (and intermediate) self-attention maps/matrices for the SAITS model? Thank you!
2. Motivation
My work is currently using the SAITS model to impute a spectrum to reduce the acquisition time of an MRI scan. I would like to find the optimal acquisition protocol by determining the attention of each point, as points that have a higher attention score are more likely to be included in the acquisition protocol.
3. Your contribution
I can contribute to the code base and try to add this feature myself, although this may take some time as I will have to parse through the repo.