iNLTK aims to provide out-of-the-box support for various NLP tasks that an application developer might need for Indic languages. The paper for the iNLTK library has been accepted at EMNLP-2020's NLP-OSS workshop. Here's the link to the paper.
Check out detailed docs, along with installation instructions, at https://inltk.readthedocs.io
Language | Code |
---|---|
Hindi | hi |
Punjabi | pa |
Gujarati | gu |
Kannada | kn |
Malayalam | ml |
Oriya | or |
Marathi | mr |
Bengali | bn |
Tamil | ta |
Urdu | ur |
Nepali | ne |
Sanskrit | sa |
English | en |
Telugu | te |
Language | Script | Code |
---|---|---|
Hinglish (Hindi+English) | Latin | hi-en |
Tanglish (Tamil+English) | Latin | ta-en |
Manglish (Malayalam+English) | Latin | ml-en |
Note: English model has been directly taken from fast.ai
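The codes in the tables above are the identifiers an application passes to iNLTK's functions. As an illustrative sketch, the mapping below is copied from the tables, wrapped in a small lookup helper (`code_for` is a hypothetical helper name, not part of iNLTK; the `setup`/`tokenize` calls in the trailing comment follow the iNLTK docs):

```python
# Language-name -> iNLTK code mapping, copied from the tables above.
LANG_CODES = {
    "Hindi": "hi", "Punjabi": "pa", "Gujarati": "gu", "Kannada": "kn",
    "Malayalam": "ml", "Oriya": "or", "Marathi": "mr", "Bengali": "bn",
    "Tamil": "ta", "Urdu": "ur", "Nepali": "ne", "Sanskrit": "sa",
    "English": "en", "Telugu": "te",
    # Code-mixed languages, written in Latin script:
    "Hinglish": "hi-en", "Tanglish": "ta-en", "Manglish": "ml-en",
}

def code_for(language: str) -> str:
    """Return the iNLTK code for a language name, e.g. 'Hindi' -> 'hi'."""
    try:
        return LANG_CODES[language]
    except KeyError:
        raise ValueError(f"Language not supported by iNLTK: {language}")

# These codes are what iNLTK's functions take, e.g. (per the docs):
#   from inltk.inltk import setup, tokenize
#   setup(code_for("Hindi"))            # one-time pretrained-model download
#   tokenize("...", code_for("Hindi"))
```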
Language | Repository | Dataset used for Classification | Results on using complete training set | Percentage Decrease in Training set size | Results on using reduced training set without Paraphrases | Results on using reduced training set with Paraphrases |
---|---|---|---|---|---|---|
Hindi | NLP for Hindi | IIT Patna Movie Reviews | Accuracy: 57.74<br>MCC: 37.23 | 80% (2480 -> 496) | Accuracy: 47.74<br>MCC: 20.50 | Accuracy: 56.13<br>MCC: 34.39 |
Bengali | NLP for Bengali | Bengali News Articles (Soham Articles) | Accuracy: 90.71<br>MCC: 87.92 | 99% (11284 -> 112) | Accuracy: 69.88<br>MCC: 61.56 | Accuracy: 74.06<br>MCC: 65.08 |
Gujarati | NLP for Gujarati | iNLTK Headlines Corpus - Gujarati | Accuracy: 91.05<br>MCC: 86.09 | 90% (5269 -> 526) | Accuracy: 80.88<br>MCC: 70.18 | Accuracy: 81.03<br>MCC: 70.44 |
Malayalam | NLP for Malayalam | iNLTK Headlines Corpus - Malayalam | Accuracy: 95.56<br>MCC: 93.29 | 90% (5036 -> 503) | Accuracy: 82.38<br>MCC: 73.47 | Accuracy: 84.29<br>MCC: 76.36 |
Marathi | NLP for Marathi | iNLTK Headlines Corpus - Marathi | Accuracy: 92.40<br>MCC: 85.23 | 95% (9672 -> 483) | Accuracy: 84.13<br>MCC: 68.59 | Accuracy: 84.55<br>MCC: 69.11 |
Tamil | NLP for Tamil | iNLTK Headlines Corpus - Tamil | Accuracy: 95.22<br>MCC: 92.70 | 95% (5346 -> 267) | Accuracy: 86.25<br>MCC: 79.42 | Accuracy: 89.84<br>MCC: 84.63 |
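The table reports accuracy alongside the Matthews correlation coefficient (MCC), which stays informative on imbalanced datasets where accuracy alone can mislead. A minimal sketch of binary MCC from confusion-matrix counts (illustrative only; the repositories above compute it with their own evaluation code):

```python
from math import sqrt

def mcc(tp: int, tn: int, fp: int, fn: int) -> float:
    """Matthews correlation coefficient from binary confusion-matrix counts.

    Ranges from -1 (total disagreement) to +1 (perfect prediction);
    0 means no better than chance. Unlike accuracy, it stays at 0 for
    a classifier that only ever predicts the majority class.
    """
    denom = sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return 0.0 if denom == 0 else (tp * tn - fp * fn) / denom

# Majority-class classifier on a 90/10 split: accuracy 0.90, but MCC 0.0.
print(mcc(90, 0, 10, 0))
```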
For more details on the implementation, or to reproduce the results, check out the respective repositories.
If you would like to add support for a language of your choice to iNLTK, please start by checking/raising an issue here.
Please check out the steps I mentioned here for Telugu to begin with. They should be largely similar for other languages as well.
If you would like to take iNLTK's models and fine-tune them with your own dataset, or build your own custom models on top of them, please check out the repository in the table above for the language of your choice. These repositories contain links to datasets, pretrained models, classifiers and all of the relevant code.
If you wish for a particular functionality in iNLTK, start by checking/raising an issue here.
Shout out if you want to help :)
- Add Maithili support
Shout out if you want to lead :)
- Add NER support for all languages
- Add Textual Entailment support for all languages
- Work on a unified model for all the languages
- Add POS tagging support for all languages
- Add translations - to and from languages in iNLTK + English
- By Jeremy Howard on Twitter
- By Sebastian Ruder on Twitter
- By Vincent Boucher, By Philip Vollet, By Steve Nouri on LinkedIn
- By Kanimozhi, By Soham, By Imaad on LinkedIn
- iNLTK was trending on GitHub in May 2019
If you use this library in your research, please consider citing:
@inproceedings{arora-2020-inltk,
title = "i{NLTK}: Natural Language Toolkit for Indic Languages",
author = "Arora, Gaurav",
booktitle = "Proceedings of Second Workshop for NLP Open Source Software (NLP-OSS)",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.nlposs-1.10",
doi = "10.18653/v1/2020.nlposs-1.10",
pages = "66--71",
abstract = "We present iNLTK, an open-source NLP library consisting of pre-trained language models and out-of-the-box support for Data Augmentation, Textual Similarity, Sentence Embeddings, Word Embeddings, Tokenization and Text Generation in 13 Indic Languages. By using pre-trained models from iNLTK for text classification on publicly available datasets, we significantly outperform previously reported results. On these datasets, we also show that by using pre-trained models and data augmentation from iNLTK, we can achieve more than 95{\%} of the previous best performance by using less than 10{\%} of the training data. iNLTK is already being widely used by the community and has 40,000+ downloads, 600+ stars and 100+ forks on GitHub. The library is available at https://github.com/goru001/inltk.",
}