[NN] Refactor the Code Structure of GT #5100
Conversation
To trigger regression tests:
This PR is quite large. Is it possible to break it into multiple smaller PRs?
Most of the modifications just move existing utilities around for better code structure.
python/dgl/nn/pytorch/gt/lap_enc.py (Outdated)
-------
>>> import dgl
>>> from dgl import LaplacianPE
>>> from dgl.nn import LapPosEncoder
Why did you want to break the consistency of using Laplacian for PE and PosEncoder?
+1
My consideration here is to keep the naming consistent with SpatialEncoder, DegreeEncoder, etc. However, LaplacianPosEncoder seems too long. Any suggestions?
Alternatively, you can rename LaplacianPE to LapPE. To maintain backward compatibility, consider having an alias for now and adding a deprecation warning.
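A minimal sketch of the alias-plus-deprecation-warning pattern suggested above; the class bodies and the k parameter are illustrative placeholders, not the actual DGL transform:

import warnings

class LapPE:
    # New name of the transform (illustrative stand-in for the real class).
    def __init__(self, k):
        self.k = k

class LaplacianPE(LapPE):
    # Deprecated alias kept temporarily for backward compatibility.
    def __init__(self, *args, **kwargs):
        warnings.warn(
            "LaplacianPE is deprecated; use LapPE instead.",
            DeprecationWarning,
            stacklevel=2,
        )
        super().__init__(*args, **kwargs)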
python/dgl/nn/pytorch/gt/lap_enc.py (Outdated)
    self.pe_encoder = nn.TransformerEncoder(
        encoder_layer, num_layers=num_layer
    )
elif self.model_type == "DeepSet":
Is this different from SetTransformerEncoder here?
I've done a pass. For a sanity check, did you try the refactored code on an end-to-end example?
I've done a pass. There are still two files I did not review; please check them against the comments on the existing files.
import torch as th
import torch.nn as nn

from .... import to_homogeneous
Can we use an absolute import path instead of a relative one?
Absolute imports are indeed recommended over relative ones. However, relative imports are currently the convention in this codebase; refactoring all of the code may need further discussion. A sketch of the two styles is below.
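For reference, a sketch of the two import styles under discussion, assuming the module lives at python/dgl/nn/pytorch/gt/ so that four leading dots resolve to the top-level dgl package (the relative form is only meaningful inside the package itself):

# Relative import (current convention in this file):
from .... import to_homogeneous

# Absolute import (the alternative raised in the review):
from dgl import to_homogeneous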
python/dgl/nn/pytorch/gt/lap_enc.py (Outdated)
where :math:`N` is the number of nodes in the input graph,
:math:`d` is :attr:`lpe_dim`.
"""
pos_enc = th.cat(
What does enc in pos_enc stand for, encoding or encoder? Please avoid unconventional abbreviations.
Modified to pos_encoding.
python/dgl/nn/pytorch/gt/lap_enc.py (Outdated)
encoder_layer = nn.TransformerEncoderLayer(
    d_model=lpe_dim, nhead=n_head, batch_first=True
)
self.pe_encoder = nn.TransformerEncoder(
What does pe in pe_encoder stand for?
pe stands for positional encoding, which is a common abbreviation in this context.
I'm good.
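As a self-contained illustration of the pattern in the diff above, here is a sketch that stacks standard PyTorch Transformer encoder layers over per-node positional features; the dimensions and variable names are placeholders, not the PR's actual values:

import torch
import torch.nn as nn

# Placeholder hyperparameters (assumed for illustration only).
lpe_dim, n_head, num_layer = 16, 4, 2

encoder_layer = nn.TransformerEncoderLayer(
    d_model=lpe_dim, nhead=n_head, batch_first=True
)
pe_encoder = nn.TransformerEncoder(encoder_layer, num_layers=num_layer)

# With batch_first=True the input is (batch, sequence, feature);
# here: (num_nodes, k eigenvectors, lpe_dim).
pos_encoding = pe_encoder(torch.randn(8, 5, lpe_dim))
print(pos_encoding.shape)  # torch.Size([8, 5, 16])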
Please apply the lint suggestion.
Description
BiasedMHA: replace division by self.scaling with multiplication by self.scaling (see the sketch below).
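A hedged sketch of why the operator matters, assuming the common convention (not shown in this excerpt) that self.scaling is defined as head_dim ** -0.5; multiplying by it gives the standard scaled dot-product normalization, whereas dividing by it would inflate the attention logits:

import torch

head_dim = 64
scaling = head_dim ** -0.5  # assumed convention: 1 / sqrt(d_k)

q = torch.randn(2, 8, head_dim)

# Multiplying by head_dim ** -0.5 is equivalent to q / sqrt(d_k),
# the usual scaled dot-product attention normalization.
q_scaled = q * scaling
assert torch.allclose(q_scaled, q / head_dim ** 0.5)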
Checklist
Please feel free to remove inapplicable items for your PR.