Encoder-decoder model with attention mechanism
We used GloVe pre-trained vectors to initialize the word embeddings.
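GloVe releases its vectors as plain text: one token per line followed by its vector components. A minimal loader sketch (the in-memory sample and the `vocab` filter are illustrative, not part of our code):

```python
import io

def load_glove(fileobj, vocab):
    """Parse GloVe's text format into {token: vector} for tokens in vocab."""
    embeddings = {}
    for line in fileobj:
        parts = line.rstrip().split(" ")
        token, values = parts[0], parts[1:]
        if token in vocab:  # keep only words in our vocabulary to save memory
            embeddings[token] = [float(x) for x in values]
    return embeddings

# Tiny in-memory example standing in for a real GloVe file
sample = io.StringIO("the 0.1 0.2 0.3\ncat -0.5 0.0 0.9\n")
vecs = load_glove(sample, vocab={"the", "cat"})
```

In practice the resulting vectors are copied into the embedding matrix before training; words missing from GloVe are initialized randomly.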
- Encoder: bidirectional GRU-RNN.
- Decoder: unidirectional GRU-RNN with beam search.
- Attention: Bahdanau (additive) attention.
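At each decoding step, Bahdanau attention scores every encoder state against the current decoder state with an additive form, score(s, h) = vᵀ tanh(W₁h + W₂s), and mixes the encoder states into a context vector. A minimal NumPy sketch of that scoring step (shapes and weight names are illustrative, not our TensorFlow graph):

```python
import numpy as np

def bahdanau_attention(dec_state, enc_outputs, W1, W2, v):
    """Additive attention: score_t = v^T tanh(W1 h_t + W2 s)."""
    # enc_outputs: (T, H) encoder states, dec_state: (H,) decoder state
    scores = np.tanh(enc_outputs @ W1 + dec_state @ W2) @ v  # (T,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()          # softmax over the T source positions
    context = weights @ enc_outputs   # (H,) weighted sum of encoder states
    return context, weights

# Illustrative random shapes: 5 source positions, hidden size 4
rng = np.random.default_rng(0)
T, H = 5, 4
enc = rng.normal(size=(T, H))
dec = rng.normal(size=H)
W1, W2 = rng.normal(size=(H, H)), rng.normal(size=(H, H))
v = rng.normal(size=H)
ctx, w = bahdanau_attention(dec, enc, W1, W2, v)
```

The context vector is then concatenated with the decoder input/state to predict the next summary token; in TensorFlow 1.x this machinery is provided ready-made by the seq2seq attention wrappers.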
The data included in our GitHub repository is a reduced dataset extracted from the dataset available at harvardnlp/sent-summary.
We used the ROUGE metric, from the py-rouge package.
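As a rough illustration of what ROUGE-1 measures (unigram overlap between a candidate summary and a reference), here is a minimal pure-Python sketch; this is not the py-rouge API, which should be used for actual evaluation:

```python
from collections import Counter

def rouge_1(reference, candidate):
    """ROUGE-1: precision/recall/F1 over overlapping unigram counts."""
    ref = Counter(reference.lower().split())
    cand = Counter(candidate.lower().split())
    overlap = sum((ref & cand).values())  # clipped unigram matches
    recall = overlap / max(sum(ref.values()), 1)
    precision = overlap / max(sum(cand.values()), 1)
    f1 = 2 * precision * recall / (precision + recall) if overlap else 0.0
    return {"p": precision, "r": recall, "f": f1}

scores = rouge_1("the cat sat on the mat", "the cat on the mat")
```

Higher-order variants (ROUGE-2, ROUGE-L) extend the same idea to bigrams and longest common subsequences.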
- Python 3
- Tensorflow version 1.x
- pip install -r requirements.txt
If running on Google Colab, every dependency is installed in the notebook itself.
To use our implementation, simply open the notebook text_summarization_feats.ipynb and run the cells.
- D. Bahdanau, K. Cho, Y. Bengio, Neural machine translation by jointly learning to align and translate, arXiv preprint arXiv:1409.0473.
- C.-Y. Lin, ROUGE: A package for automatic evaluation of summaries, in: Text Summarization Branches Out, Association for Computational Linguistics, Barcelona, Spain, 2004, pp. 74–81. URL: https://www.aclweb.org/anthology/W04-1013.
- R. Nallapati, B. Zhou, C. Gulcehre, B. Xiang, et al., Abstractive text summarization using sequence-to-sequence RNNs and beyond, arXiv preprint arXiv:1602.06023.