This repository contains a Jupyter notebook that uses a range of neural models for sentence representation and sentiment classification on the Stanford Sentiment Treebank (SST) dataset:
- Bag-of-words (BOW)
- Continuous bag-of-words (CBOW; a minimal sketch follows this list)
- Deep continuous bag-of-words (Deep CBOW)
- LSTM
- Binary Tree-LSTM
- Child-sum Tree-LSTM
- LSTM gate and activation variants (a CIFG cell sketch also follows this list):
  - Vanilla LSTM
  - No input gate (NIG) LSTM
  - No output gate (NOG) LSTM
  - No forget gate (NFG) LSTM
  - No input activation function (NIAF) LSTM
  - No output activation function (NOAF) LSTM
  - No peephole (NP) LSTM
  - Coupled input and forget gate (CIFG) LSTM
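As a concrete illustration of the simplest model above, here is a minimal CBOW classifier. It is a sketch, assuming PyTorch (the notebook's framework and hyperparameters may differ); `vocab_size`, `embed_dim`, and `num_classes` are placeholder arguments. The sentence representation is simply the mean of the word embeddings, fed to a linear output layer:

```python
import torch
import torch.nn as nn

class CBOW(nn.Module):
    """Continuous bag-of-words: average the word embeddings, then classify."""

    def __init__(self, vocab_size, embed_dim, num_classes):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.out = nn.Linear(embed_dim, num_classes)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) integer tensor
        vectors = self.embed(token_ids)   # (batch, seq_len, embed_dim)
        sentence = vectors.mean(dim=1)    # order-insensitive sentence vector
        return self.out(sentence)         # (batch, num_classes) logits
```

For SST's fine-grained five-class labels this would be instantiated as `CBOW(vocab_size, embed_dim, num_classes=5)`.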
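The gate and activation variants all follow the same recipe: start from the standard LSTM equations and remove or tie a single component. As one example, the sketch below shows a CIFG cell, again assuming PyTorch; peephole connections are omitted for brevity, and the forget gate is replaced by one minus the input gate:

```python
import torch
import torch.nn as nn

class CIFGCell(nn.Module):
    """LSTM cell with the forget gate coupled to the input gate (f = 1 - i)."""

    def __init__(self, input_dim, hidden_dim):
        super().__init__()
        # A single affine map yields the input gate, output gate, and candidate;
        # no separate forget-gate parameters are needed.
        self.linear = nn.Linear(input_dim + hidden_dim, 3 * hidden_dim)

    def forward(self, x, state):
        h, c = state
        i, o, g = self.linear(torch.cat([x, h], dim=-1)).chunk(3, dim=-1)
        i = torch.sigmoid(i)         # input gate
        o = torch.sigmoid(o)         # output gate
        g = torch.tanh(g)            # candidate cell update
        c = (1.0 - i) * c + i * g    # coupled gates: forget = 1 - input
        h = o * torch.tanh(c)
        return h, c
```

The other ablations are analogous: removing a gate amounts to fixing it to 1, and removing an activation function amounts to replacing the tanh with the identity.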
The models are evaluated on classification accuracy. For each model, we additionally report accuracy separately on long and short sentences, and on all subtrees of the original sentence dataset. Word embeddings are initialized from pretrained Word2Vec vectors; a sketch of the embedding loading and length-split evaluation follows.
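This is a sketch of how the embedding initialization and the length-split evaluation could look, assuming PyTorch and gensim; the file name, the `vocab` mapping, and the 20-token cut-off are illustrative assumptions, not taken from the notebook:

```python
import torch
from gensim.models import KeyedVectors

def load_pretrained(embed, vocab, path="GoogleNews-vectors-negative300.bin"):
    """Copy pretrained Word2Vec vectors into an nn.Embedding, row by row.

    `vocab` maps tokens to embedding rows; `path` is a hypothetical file name.
    Words missing from the Word2Vec vocabulary keep their random initialization.
    """
    w2v = KeyedVectors.load_word2vec_format(path, binary=True)
    for word, idx in vocab.items():
        if word in w2v:
            embed.weight.data[idx] = torch.tensor(w2v[word])

def accuracy_by_length(model, data, threshold=20):
    """Accuracy overall and split into short/long sentences.

    `data` is assumed to yield (token_ids, label) pairs with token_ids a
    1-D LongTensor; `threshold` is an arbitrary short/long cut-off.
    """
    counts = {"all": [0, 0], "short": [0, 0], "long": [0, 0]}
    model.eval()
    with torch.no_grad():
        for token_ids, label in data:
            pred = model(token_ids.unsqueeze(0)).argmax(dim=-1).item()
            length = "short" if token_ids.numel() <= threshold else "long"
            for bucket in ("all", length):
                counts[bucket][0] += int(pred == label)
                counts[bucket][1] += 1
    return {k: hit / max(n, 1) for k, (hit, n) in counts.items()}
```

The subtree evaluation can reuse the same loop, treating each labelled subtree as its own example.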
References:
- Stanford Sentiment Treebank: Socher et al., 2013
- Word2Vec: Mikolov et al., 2013
- Bag-of-words: Mikolov et al., 2013
- Continuous bag-of-words: Mikolov et al., 2013
- Deep continuous bag-of-words: Le and Mikolov, 2014
- LSTM: Hochreiter and Schmidhuber, 1997
- Binary Tree-LSTM: Tai et al., 2015
- Child-sum Tree-LSTM: Tai et al., 2015
- LSTM variants: Gers et al., 2000