Link prediction task and graph neural networks #237
A GNN for KG would be interesting to explore! I have not seen it on the leaderboard, but one related idea is to use a GNN as the encoder (R-GCN paper, https://arxiv.org/abs/1703.06103). On a side note, getting a near-perfect score of 0.97 MRR on WikiKG90M does not mean the problem is solved. Our task only asks the model to rank the correct tail among 1,000 candidate entities, but in practice the model needs to rank against all 90M entities, which is far more challenging. Also, all the winning solutions exploit the candidate tail entity distribution, which boosts performance a bit but is not practical when ranking against all 90M entities. We have asked the winners to clarify this limitation in their updated reports. In short, the WikiKG90M-LSC task can be made harder and more practically relevant. There is a lot to be done in this space. We will keep the community updated if we refresh the task.
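For concreteness, here is a minimal sketch of the encoder-decoder idea from the R-GCN paper: an R-GCN encoder contextualizes entity embeddings via message passing, and a DistMult decoder scores (head, relation, tail) triples. The class name, dimensions, and layer count below are illustrative assumptions, not the paper's exact configuration, and it assumes `torch_geometric` is installed.

```python
# Sketch only: R-GCN encoder + DistMult decoder for KG link prediction.
# Hyperparameters are illustrative and not tuned for any OGB dataset.
import torch
import torch.nn as nn
from torch_geometric.nn import RGCNConv


class RGCNLinkPredictor(nn.Module):
    def __init__(self, num_entities, num_relations, hidden_dim=200, num_bases=30):
        super().__init__()
        self.entity_emb = nn.Embedding(num_entities, hidden_dim)
        self.relation_emb = nn.Embedding(num_relations, hidden_dim)  # DistMult relation vectors
        self.conv1 = RGCNConv(hidden_dim, hidden_dim, num_relations, num_bases=num_bases)
        self.conv2 = RGCNConv(hidden_dim, hidden_dim, num_relations, num_bases=num_bases)

    def encode(self, edge_index, edge_type):
        # Message passing over the KG structure to contextualize entity embeddings.
        x = self.entity_emb.weight
        x = torch.relu(self.conv1(x, edge_index, edge_type))
        return self.conv2(x, edge_index, edge_type)

    def score(self, z, head, rel, tail):
        # DistMult decoder: score(h, r, t) = sum(z_h * w_r * z_t).
        return (z[head] * self.relation_emb(rel) * z[tail]).sum(dim=-1)
```

A full-graph forward pass like this is the main practical obstacle at WikiKG90M scale; any real attempt would need neighborhood sampling or a similar scaling strategy, which may be part of why GNN encoders are absent from the leaderboard.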
Hi all,
Big thanks for organizing the Large-Scale Challenge! It surfaced interesting insights about deep learning on graphs, especially for link prediction: the top-ranking models (which already reach a near-perfect 0.97+ MRR) are all variants of knowledge graph embedding approaches rather than GNNs. This is exactly the opposite of the node classification and graph classification tracks, where the top solutions all used GNNs.
I'm wondering whether anybody in the community has tried a GNN-based approach on WikiKG90M. The same question goes for the other knowledge graph datasets on the OGB leaderboard (WikiKG2, BioKG).
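For contrast, the shallow knowledge graph embedding models mentioned above score a triple directly from learned entity and relation vectors, with no message passing. The sketch below uses a TransE-style scorer purely as an illustration; it is an assumption for exposition, not the specific model any winning team used.

```python
# Sketch only: a shallow TransE-style KG embedding scorer, shown for contrast
# with the GNN-encoder approach. Dimensions are illustrative.
import torch
import torch.nn as nn


class TransEScorer(nn.Module):
    def __init__(self, num_entities, num_relations, dim=200):
        super().__init__()
        self.entity_emb = nn.Embedding(num_entities, dim)
        self.relation_emb = nn.Embedding(num_relations, dim)

    def score(self, head, rel, tail):
        # TransE: plausible triples satisfy h + r ≈ t, so the score is the
        # negative L1 distance between (h + r) and t.
        h = self.entity_emb(head)
        r = self.relation_emb(rel)
        t = self.entity_emb(tail)
        return -torch.norm(h + r - t, p=1, dim=-1)
```

Because there is no message passing, scoring a triple only touches the three embeddings involved, which is part of why such shallow models scale to 90M entities more comfortably than full-graph GNN encoders.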