Code for recreating the results in our Transactions in GIS paper as well as our K-CAP 2019 paper:

SE-KGE: A Location-Aware Knowledge Graph Embedding Model for Geographic Question Answering and Spatial Semantic Lifting
Please visit my Homepage for more information.
The location-aware entity encoder architecture (figure omitted).

Dependencies:
- Python 2.7+
- Torch 1.0.1+
- numpy 1.16.0+
- matplotlib 2.2.4+
- sklearn 0.20.3+
- geopandas 0.6.1+
- shapely 1.6.4+
- pyproj 2.2.2+
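The pinned versions above could be collected into a pip requirements file. This is only a sketch: the PyPI package names for Torch and sklearn (`torch`, `scikit-learn`) are assumptions, not taken from the repository, and the Python 2.7 interpreter itself cannot be pinned this way.

```text
# Hypothetical requirements.txt mirroring the dependency list above.
# PyPI names for Torch and sklearn are assumed (torch, scikit-learn).
# Python 2.7+ must be provided by the environment, not by pip.
torch>=1.0.1
numpy>=1.16.0
matplotlib>=2.2.4
scikit-learn>=0.20.3
geopandas>=0.6.1
shapely>=1.6.4
pyproj>=2.2.2
```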
To set up the code, run `python setup.py`. Note that the first three dependencies are required for model training and testing; the rest are used for visualization, which is optional.
You can download the GeoQA dataset from here. Extract it and put the contents in `graphqa/dbgeo/`.
This code is implemented in Python 2.7. All code is in `graphqa/netquery/`.
For each baseline in Table 3:
- GQE_{diag}: run `graphqa/dbgeo_geoqa_gqe_diag.sh`
- GQE: run `graphqa/dbgeo_geoqa_gqe.sh`
- CGA: run `graphqa/dbgeo_geoqa_cga.sh`
- SE-KGE_{direct}: run `graphqa/dbgeo_geoqa_direct.sh`
- SE-KGE_{pt}: run `graphqa/dbgeo_geoqa_direct.sh`
- SE-KGE_{space}: run `graphqa/dbgeo_geoqa_space.sh`
- SE-KGE_{full}: run `graphqa/dbgeo_geoqa_full.sh`
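The Table 3 scripts share a common naming pattern, so a small wrapper can launch an experiment by model name. This helper is hypothetical (not part of the repository) and assumes each script follows the `graphqa/dbgeo_geoqa_<name>.sh` pattern shown above:

```shell
#!/usr/bin/env bash
# Hypothetical convenience wrapper around the per-model scripts listed above.
# Assumes each Table 3 script follows the graphqa/dbgeo_geoqa_<name>.sh pattern.
run_geoqa() {
  local name="$1"
  local script="graphqa/dbgeo_geoqa_${name}.sh"
  if [ ! -f "$script" ]; then
    # Fail clearly if the chosen model name has no matching script.
    echo "no such script: ${script}" >&2
    return 1
  fi
  bash "$script"
}

# Example usage: run_geoqa full   # launches graphqa/dbgeo_geoqa_full.sh
```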
For each baseline in Table 5:
- SE-KGE_{space}: run `graphqa/dbgeo_spa_sem_lift_space.sh`
- SE-KGE_{ssl}: run `graphqa/dbgeo_spa_sem_lift_ssl.sh`
If you find our work useful in your research, please consider citing our papers:
@article{mai2020se,
title={{SE}-{KGE}: A Location-Aware Knowledge Graph Embedding Model for Geographic Question Answering and Spatial Semantic Lifting},
author={Mai, Gengchen and Janowicz, Krzysztof and Cai, Ling and Zhu, Rui and Regalia, Blake and Yan, Bo and Shi, Meilin and Lao, Ni},
journal={Transactions in GIS},
year={2020},
doi={10.1111/tgis.12629}
}
@inproceedings{mai2019contextual,
title={Contextual Graph Attention for Answering Logical Queries over Incomplete Knowledge Graphs},
author={Mai, Gengchen and Janowicz, Krzysztof and Yan, Bo and Zhu, Rui and Cai, Ling and Lao, Ni},
booktitle={Proceedings of the 10th International Conference on Knowledge Capture},
pages={171--178},
year={2019}
}
The location encoder component in the SE-KGE model is based on Space2Vec. Read our ICLR 2020 paper for a comprehensive understanding:
@inproceedings{space2vec_iclr2020,
title={Multi-Scale Representation Learning for Spatial Feature Distributions using Grid Cells},
author={Mai, Gengchen and Janowicz, Krzysztof and Yan, Bo and Zhu, Rui and Cai, Ling and Lao, Ni},
booktitle={The Eighth International Conference on Learning Representations},
year={2020},
organization={openreview}
}
Note that a part of our code is based on the code of Hamilton et al.'s NeurIPS 2018 paper:
@inproceedings{hamilton2018embedding,
title={Embedding logical queries on knowledge graphs},
author={Hamilton, Will and Bajaj, Payal and Zitnik, Marinka and Jurafsky, Dan and Leskovec, Jure},
booktitle={Advances in Neural Information Processing Systems},
pages={2026--2037},
year={2018}
}