Recurrent Neural Networks for multiclass, multilabel classification of texts. The models learn to tag small texts with 169 different tags from arXiv.
classifier.py implements a standard BLSTM network with attention.
hatt_classifier.py contains an implementation of Hierarchical Attention Networks for Document Classification (Yang et al., 2016).
The neural networks were built using Keras and TensorFlow.
The best-performing model is the attention BLSTM, which achieves a micro F1 score of 0.67 on the test set. The Hierarchical Attention Network reaches a micro F1 score of 0.65.
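For illustration, a model in this spirit can be wired up in Keras roughly as follows. This is a minimal sketch, not the exact architecture in classifier.py; MAX_LEN, VOCAB_SIZE, and the LSTM width are placeholder assumptions.

```python
from tensorflow.keras import layers, models

MAX_LEN = 200       # assumed padded abstract length
VOCAB_SIZE = 50000  # assumed vocabulary size
EMBED_DIM = 300     # matches the 300-d pretrained embeddings
NUM_TAGS = 169

inp = layers.Input(shape=(MAX_LEN,))
emb = layers.Embedding(VOCAB_SIZE, EMBED_DIM)(inp)
h = layers.Bidirectional(layers.LSTM(128, return_sequences=True))(emb)

# Additive attention: score every timestep, normalize the scores with a
# softmax over time, then collapse the sequence into a weighted context vector.
u = layers.Dense(1, activation='tanh')(h)          # (batch, T, 1)
u = layers.Flatten()(u)                            # (batch, T)
alpha = layers.Activation('softmax', name='attention')(u)
context = layers.Dot(axes=1)([alpha, h])           # weighted sum -> (batch, 256)

# 169 independent sigmoids: each tag is a separate yes/no decision,
# which is what makes the problem multilabel rather than just multiclass.
out = layers.Dense(NUM_TAGS, activation='sigmoid')(context)

model = models.Model(inp, out)
model.compile(optimizer='adam', loss='binary_crossentropy')
```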
I am using 500k paper abstracts from arXiv. To download your own data, refer to the arXiv OAI-PMH API.
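The OAI-PMH interface is paginated via resumption tokens. Below is a rough harvesting sketch; the `cs` set and the 5-second delay are assumptions, so adjust them to whatever subset you want:

```python
import time
import requests
import xml.etree.ElementTree as ET

BASE = 'http://export.arxiv.org/oai2'
NS = {'oai': 'http://www.openarchives.org/OAI/2.0/'}
DC_DESC = '{http://purl.org/dc/elements/1.1/}description'

params = {'verb': 'ListRecords', 'metadataPrefix': 'oai_dc', 'set': 'cs'}
while True:
    root = ET.fromstring(requests.get(BASE, params=params).content)
    for desc in root.iter(DC_DESC):
        print(desc.text)                  # the abstract lives in dc:description
    token = root.find('.//oai:resumptionToken', NS)
    if token is None or not token.text:
        break                             # no token means the set is exhausted
    params = {'verb': 'ListRecords', 'resumptionToken': token.text}
    time.sleep(5)                         # be polite; arXiv throttles rapid OAI requests
```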
Pretrained word embeddings (GloVe or Word2Vec) can be used. You can download GoogleNews-vectors-negative300.bin (Word2Vec) or the GloVe embeddings.
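Both formats load in a few lines. A sketch, assuming gensim is available for the binary Word2Vec file (it is not in requirements.txt) and the standard glove.6B.300d.txt download for GloVe:

```python
import numpy as np
from gensim.models import KeyedVectors

# Word2Vec: the GoogleNews vectors ship as a binary file
w2v = KeyedVectors.load_word2vec_format(
    'GoogleNews-vectors-negative300.bin', binary=True)

# GloVe: plain text, one "word v1 v2 ... v300" entry per line
glove = {}
with open('glove.6B.300d.txt', encoding='utf-8') as f:
    for line in f:
        parts = line.rstrip().split(' ')
        glove[parts[0]] = np.asarray(parts[1:], dtype='float32')
```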
- To train your own model, first prepare your data set with the data_prep.py script. The preprocessing lower-cases the text, tokenizes it, and removes very short words. The preprocessed files and label files should be saved in a /data folder (a preprocessing sketch follows this list).
- You can now run classifier.py or hatt_classifier.py to build and train the models.
- The trained models are exported to JSON and their weights to HDF5 (.h5) for later use (see the save/load sketch below).
- You can use utils.visualize_attention to visualize the attention weights (a plotting sketch follows this list).
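The preprocessing described above roughly corresponds to the following sketch; the NLTK tokenizer and the 3-character cutoff for "very short" words are assumptions, not necessarily what data_prep.py does:

```python
import os
from nltk.tokenize import word_tokenize   # requires nltk.download('punkt')

MIN_WORD_LEN = 3   # assumption: "very short" = fewer than 3 characters

def preprocess(text):
    tokens = word_tokenize(text.lower())   # lower-case, then tokenize
    return [t for t in tokens if len(t) >= MIN_WORD_LEN]

os.makedirs('data', exist_ok=True)
with open('data/abstracts_prep.txt', 'w', encoding='utf-8') as out:
    out.write(' '.join(preprocess('We propose a novel method for ...')) + '\n')
```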
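Exporting and restoring uses the standard Keras serialization calls; the file names and the toy stand-in model below are placeholders:

```python
from tensorflow.keras.models import Sequential, model_from_json
from tensorflow.keras.layers import Dense

model = Sequential([Dense(169, activation='sigmoid', input_shape=(256,))])

# export: architecture as JSON, weights as HDF5
with open('model.json', 'w') as f:
    f.write(model.to_json())
model.save_weights('model.h5')

# restore both pieces for later use
with open('model.json') as f:
    restored = model_from_json(f.read())
restored.load_weights('model.h5')
```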
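utils.visualize_attention is this repository's own helper, so its signature may differ; as a generic alternative, the attention weights can be read out of the softmax layer (named 'attention' in the first sketch) and plotted with matplotlib:

```python
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.keras.models import Model

# `model` is the attention BLSTM from the first sketch; `tokens` holds one
# abstract's words and x_sample its padded token ids (an all-zero stub here).
att_model = Model(model.input, model.get_layer('attention').output)

tokens = ['we', 'propose', 'a', 'novel', 'method']
x_sample = np.zeros((1, 200))
weights = att_model.predict(x_sample)[0]

plt.bar(range(len(tokens)), weights[:len(tokens)])
plt.xticks(range(len(tokens)), tokens, rotation=45)
plt.tight_layout()
plt.show()
```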
The project depends on:

- Python
- NLTK
- NumPy
- Pandas
- SciPy
- OpenCV
- scikit-learn
- TensorFlow
- Keras
Run `pip install -r requirements.txt` to install the requirements.