This repository contains the TensorFlow code developed for the following paper, submitted to SIAM SDM 2019:
If you use this code, please consider citing the following paper:
@article{keneshloo2018transferrl,
title={Deep Transfer Reinforcement Learning for Text Summarization},
author={Keneshloo, Yaser and Ramakrishnan, Naren and Reddy, Chandan K.},
journal={arXiv preprint arXiv:},
year={2018}
}
Deep neural networks are data-hungry models and thus face difficulties when trained on small datasets. Transfer learning is a method that could potentially help in such situations. Although transfer learning has achieved great success in image processing, its effect in the text domain is yet to be well established, especially due to several intricacies that arise in the context of document analysis and understanding. In this paper, we study the problem of transfer learning for text summarization and discuss why the existing state-of-the-art models for this problem fail to generalize well to other (unseen) datasets. We propose a reinforcement learning framework based on the self-critic policy gradient method, which solves this problem and achieves good generalization and state-of-the-art results on a variety of datasets. Through an extensive set of experiments, we also show the ability of our proposed framework to fine-tune the text summarization model with only a few training samples. To the best of our knowledge, this is the first work that studies transfer learning in text summarization and provides a generic solution that works well on unseen data.
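As a rough illustration of the self-critic policy gradient objective mentioned above, here is a minimal NumPy sketch. The function name, array shapes, and reward values are our own assumptions for illustration, not the paper's actual implementation: a sampled summary's log-probability is weighted by how much its reward (e.g. ROUGE) exceeds that of the greedy-decoded baseline.

```python
import numpy as np

def self_critic_loss(sample_log_probs, sample_rewards, greedy_rewards):
    """Hypothetical sketch of a self-critic policy-gradient loss.

    sample_rewards: reward (e.g. ROUGE) of sampled summaries.
    greedy_rewards: reward of the greedy (baseline) summaries.
    Sampled summaries that beat the greedy baseline have their
    log-probability reinforced; worse ones are penalized.
    """
    advantage = sample_rewards - greedy_rewards
    # Minimize the negative advantage-weighted log-likelihood.
    return float(np.mean(-advantage * sample_log_probs))

# Example: one sampled summary that beats the greedy baseline,
# so the loss pushes up its log-probability.
loss = self_critic_loss(np.array([-2.0]), np.array([1.0]), np.array([0.5]))
```

Because the baseline is the model's own greedy output rather than a learned critic, no extra value network is needed, which is one reason this objective is attractive for fine-tuning with few samples.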
- Use Python 2.7
Python requirements can be installed as follows:

```shell
pip install -r python_requirements.txt
```
- Use TensorFlow 1.10 or newer
- CUDA 8 or 9
- CUDNN 6 or 7
https://github.com/abisee/cnn-dailymail
We have provided helper scripts to download and pre-process the CNN/Daily Mail and Newsroom datasets. Please refer to this link to access them.
Using our processed version of these datasets yielded a large improvement in ROUGE scores, so we strongly suggest using these pre-processed files for all training.
Download our best-performing model from here.
To train our best-performing model, or to decode using our pre-trained model, please check the following file: