#ReVal: A Simple and Effective Machine Translation Evaluation Metric Based on Recurrent Neural Networks
Please refer to the following paper for details about this metric and the generation of training data: Rohit Gupta, Constantin Orasan, and Josef van Genabith. 2015. ReVal: A Simple and Effective Machine Translation Evaluation Metric Based on Recurrent Neural Networks. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP ’15, Lisbon, Portugal.
This code is available at GitHub.
The Tree-Structured LSTM code used in this metric implementation is obtained from [here](https://github.com/stanfordnlp/treelstm), described in Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks by Kai Sheng Tai, Richard Socher, and Christopher Manning.
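To give a feel for what the Tree-LSTM component computes, here is a minimal sketch of the child-sum Tree-LSTM node update from the Tai et al. paper. This is an illustrative NumPy toy, not the repo's Torch implementation; all function and weight names are hypothetical.

```python
# Minimal sketch of a child-sum Tree-LSTM node update (Tai et al.).
# NOT the repo's Torch code; names and dimensions here are hypothetical.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def child_sum_treelstm_node(x, children, W, U, b, dim):
    """One node update.
    x: input vector of shape (in_dim,);
    children: list of (h, c) pairs from child nodes (empty for leaves);
    W: dict of (dim, in_dim) input weights, U: dict of (dim, dim)
    recurrent weights, b: dict of (dim,) biases, keyed by gate name."""
    h_children = [h for h, _ in children]
    c_children = [c for _, c in children]
    # Child hidden states are summed -- hence "child-sum".
    h_tilde = np.sum(h_children, axis=0) if children else np.zeros(dim)

    i = sigmoid(W['i'] @ x + U['i'] @ h_tilde + b['i'])   # input gate
    o = sigmoid(W['o'] @ x + U['o'] @ h_tilde + b['o'])   # output gate
    u = np.tanh(W['u'] @ x + U['u'] @ h_tilde + b['u'])   # candidate value
    # One forget gate per child, conditioned on that child's hidden state.
    f = [sigmoid(W['f'] @ x + U['f'] @ h_k + b['f']) for h_k in h_children]

    c = i * u + sum(f_k * c_k for f_k, c_k in zip(f, c_children))
    h = o * np.tanh(c)
    return h, c
```

Leaves pass an empty `children` list; internal nodes pass the `(h, c)` pairs of their children, so the update composes bottom-up over a parse tree.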
#Installation and Running
- Java >= 8 (for Stanford CoreNLP utilities)
- Python >= 2.7
If you do not have Lua, install Lua and luarocks first, then install the required rocks. For example:

luarocks install nngraph
First download the required data and libraries by running the following script:
./download_and_preprocess.sh
This will download the GloVe vectors, the Stanford Parser, and the Stanford POS Tagger:
- GloVe word vectors (Common Crawl 840B) -- warning: this is a ~2GB download!
- Stanford Parser
- Stanford POS Tagger
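The GloVe release downloaded above is a plain-text file with one token per line followed by its space-separated float components. As a hedged illustration (the exact file name fetched by the script is an assumption), vectors in that format can be loaded like this:

```python
# Hedged sketch: GloVe files (e.g. glove.840B.300d.txt -- file name assumed)
# are plain text, one token per line followed by its float components.
import numpy as np

def load_glove(path, vocab=None):
    """Load GloVe vectors into a dict token -> np.ndarray.
    If vocab is given, keep only those tokens to save memory."""
    vectors = {}
    with open(path, encoding='utf-8') as f:
        for line in f:
            parts = line.rstrip('\n').split(' ')
            token, values = parts[0], parts[1:]
            if vocab is not None and token not in vocab:
                continue
            vectors[token] = np.asarray(values, dtype=np.float32)
    return vectors
```

Passing a `vocab` set restricted to your corpus keeps memory manageable, since the full 840B release contains millions of tokens.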
To run the metric (currently it only evaluates translations into English), run:
python ReVal.py -r sample_reference.txt -t sample_translation.txt
Replace sample_reference.txt and sample_translation.txt with your own reference and translation files.
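Both inputs are assumed to be plain-text files with one segment per line, where line i of the translation file is scored against line i of the reference file (this line-aligned format is an assumption based on the sample files; the helper name below is hypothetical). A quick sanity check before running ReVal.py:

```python
# Hedged sketch: verify that reference and translation files are
# line-aligned before scoring. check_aligned is a hypothetical helper,
# not part of ReVal itself.
def check_aligned(reference_path, translation_path):
    """Return the segment count, or raise if the files are misaligned."""
    with open(reference_path, encoding='utf-8') as ref, \
         open(translation_path, encoding='utf-8') as hyp:
        ref_lines = [line.strip() for line in ref]
        hyp_lines = [line.strip() for line in hyp]
    if len(ref_lines) != len(hyp_lines):
        raise ValueError(
            "segment count mismatch: %d references vs %d translations"
            % (len(ref_lines), len(hyp_lines)))
    return len(ref_lines)
```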
For training the metric you need to download the training data. If you plan to replicate all the results given in the paper, you will also need the SICK data.
Preprocess (use -h for help) and train using:
python scripts/preprocess-training-data.py -t training/qsetl_train.txt -d training/qsetl_dev.txt
th relatedness/trainingscript.lua --dim <LSTM_memory_dimension (default: 150)> --epochs <number_of_training_epochs (default: 10)>