Simple library and command-line utility for extracting summaries from HTML pages or plain texts. The package also contains a simple evaluation framework for text summaries. Implemented summarization methods:
- Luhn - heuristic method, reference
- Edmundson - heuristic method with previous statistical research, reference
- Latent Semantic Analysis, LSA - one of the algorithms from http://scholar.google.com/citations?user=0fTuW_YAAAAJ&hl=en (I think the author is using more advanced algorithms now). Steinberger, J. and Ježek, K. Using latent semantic analysis in text summarization and summary evaluation. In Proceedings of ISIM '04. 2004. pp. 93-100.
- LexRank - Unsupervised approach inspired by the PageRank and HITS algorithms, reference
- TextRank - Unsupervised approach, also using the PageRank algorithm, reference
- SumBasic - Method that is often used as a baseline in the literature. Source: Read about SumBasic
- KL-Sum - Method that greedily adds sentences to a summary so long as it decreases the KL Divergence. Source: Read about KL-Sum
- Reduction - Graph-based summarization, where a sentence's salience is computed as the sum of the weights of its edges to other sentences. The weight of an edge between two sentences is computed in the same manner as in TextRank.
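In the library, each of these methods maps to a summarizer class with the same call interface. The import paths below are a sketch based on the package layout and may differ slightly between sumy versions:

# Sketch: mapping the method names above to sumy summarizer classes
# (import paths are assumptions; check your installed version).
from sumy.summarizers.luhn import LuhnSummarizer
from sumy.summarizers.edmundson import EdmundsonSummarizer
from sumy.summarizers.lsa import LsaSummarizer
from sumy.summarizers.lex_rank import LexRankSummarizer
from sumy.summarizers.text_rank import TextRankSummarizer
from sumy.summarizers.sum_basic import SumBasicSummarizer
from sumy.summarizers.kl import KLSummarizer
from sumy.summarizers.reduction import ReductionSummarizer

SUMMARIZERS = {
    "luhn": LuhnSummarizer,
    "edmundson": EdmundsonSummarizer,  # needs bonus/stigma/null words set before use
    "lsa": LsaSummarizer,
    "lex-rank": LexRankSummarizer,
    "text-rank": TextRankSummarizer,
    "sum-basic": SumBasicSummarizer,
    "kl": KLSummarizer,
    "reduction": ReductionSummarizer,
}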
Here are some other summarizers:
- https://github.com/thavelick/summarize/ - Python, TF (very simple)
- Reduction - Python, TextRank (simple)
- Open Text Summarizer - C, TF without normalization
- Simple program that summarizes text - Python, TF without normalization
- Intro to Computational Linguistics - Java, LexRank
- Sumtract: Second project for UW LING 572 - Python
- TextTeaser - Scala
- PyTeaser - TextTeaser port in Python
- Automatic Document Summarizer - Java, Bipartite HITS (no sources)
- Pythia - Python, LexRank & Centroid
- SWING - Ruby
- Topic Networks - R, topic models & bipartite graphs
- Almus: Automatic Text Summarizer - Java, LSA (without source code)
- Musutelsa - Java, LSA (always freezes)
- http://mff.bajecni.cz/index.php - C++
- MEAD - Perl, various methods + evaluation framework
Make sure you have Python 2.7 or 3.3+ and pip installed (Windows, Linux). Then simply run (preferred way):
$ [sudo] pip install sumy
Or for the latest version from the repository:
$ [sudo] pip install git+https://github.com/miso-belica/sumy.git
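You can then do a quick sanity check of the installation (the __version__ attribute is an assumption; if it is missing in your version, a plain import is enough):

$ python -c 'import sumy; print(sumy.__version__)'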
Sumy contains a command-line utility for quick summarization of documents.
$ sumy lex-rank --length=10 --url=http://en.wikipedia.org/wiki/Automatic_summarization # what's summarization?
$ sumy luhn --language=czech --url=http://www.zdrojak.cz/clanky/automaticke-zabezpeceni/
$ sumy edmundson --language=czech --length=3% --url=http://cs.wikipedia.org/wiki/Bitva_u_Lipan
$ sumy --help # for more info
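The utility can also read local files instead of URLs. A hypothetical example, assuming a plain-text document.txt in the current directory and that your version supports the --file option (see sumy --help for the options actually available):

$ sumy luhn --length=5 --file=document.txt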
Various evaluation methods for a chosen summarization method can be executed with the commands below:
$ sumy_eval lex-rank reference_summary.txt --url=http://en.wikipedia.org/wiki/Automatic_summarization
$ sumy_eval lsa reference_summary.txt --language=czech --url=http://www.zdrojak.cz/clanky/automaticke-zabezpeceni/
$ sumy_eval edmundson reference_summary.txt --language=czech --url=http://cs.wikipedia.org/wiki/Bitva_u_Lipan
$ sumy_eval --help # for more info
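The evaluation metrics can also be used from Python. The snippet below is only a rough sketch: it assumes the co-selection metrics (precision, recall, f_score) are exported from sumy.evaluation, so verify the import paths against your installed version. The general parser/summarizer pattern it uses is explained in the library example in the next section.

# -*- coding: utf-8 -*-
from sumy.parsers.plaintext import PlaintextParser
from sumy.nlp.tokenizers import Tokenizer
from sumy.summarizers.lsa import LsaSummarizer
# Assumption: co-selection metrics are exported from sumy.evaluation.
from sumy.evaluation import precision, recall, f_score

tokenizer = Tokenizer("english")
document = PlaintextParser.from_string(
    "Sumy is a summarization library. It implements several extraction methods. "
    "It also ships a small evaluation framework.",
    tokenizer,
).document
# Human-written reference summary to compare the machine summary against.
reference = PlaintextParser.from_string(
    "Sumy is a summarization library.", tokenizer
).document.sentences

summary = LsaSummarizer()(document, 1)
print(precision(summary, reference))
print(recall(summary, reference))
print(f_score(summary, reference))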
Or you can use sumy as a library in your project. Create a file sumy_example.py (don't name it sumy.py) with the code below to test it.
# -*- coding: utf-8 -*-
from __future__ import absolute_import
from __future__ import division, print_function, unicode_literals
from sumy.parsers.html import HtmlParser
from sumy.parsers.plaintext import PlaintextParser
from sumy.nlp.tokenizers import Tokenizer
from sumy.summarizers.lsa import LsaSummarizer as Summarizer
from sumy.nlp.stemmers import Stemmer
from sumy.utils import get_stop_words
LANGUAGE = "english"
SENTENCES_COUNT = 10
if __name__ == "__main__":
    url = "https://en.wikipedia.org/wiki/Automatic_summarization"
    parser = HtmlParser.from_url(url, Tokenizer(LANGUAGE))
    # or for plain text files
    # parser = PlaintextParser.from_file("document.txt", Tokenizer(LANGUAGE))

    stemmer = Stemmer(LANGUAGE)
    summarizer = Summarizer(stemmer)
    summarizer.stop_words = get_stop_words(LANGUAGE)

    for sentence in summarizer(parser.document, SENTENCES_COUNT):
        print(sentence)
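If your input is already a plain string rather than a URL or file, PlaintextParser.from_string can be used instead. A minimal variation of the example above (the sample text and sentence count are arbitrary):

# -*- coding: utf-8 -*-
from sumy.parsers.plaintext import PlaintextParser
from sumy.nlp.tokenizers import Tokenizer
from sumy.summarizers.lsa import LsaSummarizer
from sumy.nlp.stemmers import Stemmer
from sumy.utils import get_stop_words

TEXT = (
    "Automatic summarization shortens a text while keeping its main points. "
    "Extractive methods select the most informative sentences from the source. "
    "Abstractive methods generate new sentences instead."
)

parser = PlaintextParser.from_string(TEXT, Tokenizer("english"))
summarizer = LsaSummarizer(Stemmer("english"))
summarizer.stop_words = get_stop_words("english")

# Print a two-sentence extractive summary of the sample text.
for sentence in summarizer(parser.document, 2):
    print(sentence)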