diff --git a/docs/notebooks/soft_cosine_tutorial.ipynb b/docs/notebooks/soft_cosine_tutorial.ipynb index 5a9e868ff2..43bb5d30a8 100644 --- a/docs/notebooks/soft_cosine_tutorial.ipynb +++ b/docs/notebooks/soft_cosine_tutorial.ipynb @@ -6,22 +6,23 @@ "source": [ "# Finding similar documents with Word2Vec and Soft Cosine Measure \n", "\n", - "Soft Cosine Measure (SCM) is a promising new tool in machine learning that allows us to submit a query and return the most relevant documents. In **part 1**, we will show how you can compute SCM between two documents using `softcossim`. In **part 2**, we will use `SoftCosineSimilarity` to retrieve documents most similar to a query. Part 1 is optional if you only want use `SoftCosineSimilarity`, but is also useful in it's own merit.\n", + "Soft Cosine Measure (SCM) is a promising new tool in machine learning that allows us to submit a query and return the most relevant documents. In **part 1**, we will show how you can compute SCM between two documents using `softcossim`. In **part 2**, we will use `SoftCosineSimilarity` to retrieve documents most similar to a query and compare the performance against other similarity measures.\n", "\n", - "First, however, we go through the basics of what soft cosine measure is.\n", + "First, however, we go through the basics of what Soft Cosine Measure is.\n", "\n", "## Soft Cosine Measure basics\n", "\n", - "Soft Cosine Measure (SCM) is a method that allows us to assess the similarity between two documents in a meaningful way, even when they have no words in common. It uses a measure of similarity between words, which can be derived [2] using [word2vec](http://rare-technologies.com/word2vec-tutorial/) [3] vector embeddings of words. 
It has been shown to outperform many of the state-of-the-art methods in the semantic text similarity task in the context of community question answering [2].\n", + "Soft Cosine Measure (SCM) is a method that allows us to assess the similarity between two documents in a meaningful way, even when they have no words in common. It uses a measure of similarity between words, which can be derived [2] using [word2vec][] [3] vector embeddings of words. It has been shown to outperform many of the state-of-the-art methods in the semantic text similarity task in the context of community question answering [2].\n", "\n", - "SCM is illustrated below for two very similar sentences. The sentences have no words in common, but by matching the relevant words, SCM is able to accurately measure the similarity between the two sentences. The method also uses the bag-of-words vector representation of the documents (simply put, the word's frequencies in the documents). The intution behind the method is that we compute standard cosine similarity assuming that the document vectors are expressed in a non-orthogonal basis, where the angle between two basis vectors is derived from the angle between the word2vec embeddings of the corresponding words.\n", + "[word2vec]: https://radimrehurek.com/gensim/models/word2vec.html\n", "\n", - "![Soft Cosine Measure](soft_cosine_tutorial.png)\n", + "SCM is illustrated below for two very similar sentences. The sentences have no words in common, but by modeling synonymy, SCM is able to accurately measure the similarity between the two sentences. The method also uses the bag-of-words vector representation of the documents (simply put, the word's frequencies in the documents). 
The intuition behind the method is that we compute standard cosine similarity assuming that the document vectors are expressed in a non-orthogonal basis, where the angle between two basis vectors is derived from the angle between the word2vec embeddings of the corresponding words.\n", "\n", "![Soft Cosine Measure](soft_cosine_tutorial.png)\n", "\n", - "This method was introduced in the article \"Soft Measure and Soft Cosine Measure: Measure of Features in Vector Space Model\" by Grigori Sidorov, Alexander Gelbukh, Helena Gomez-Adorno, and David Pinto ([link to PDF](http://www.scielo.org.mx/pdf/cys/v18n3/v18n3a7.pdf)).\n", + "This method was perhaps first introduced in the article “Soft Similarity and Soft Cosine Measure: Similarity of Features in Vector Space Model” by Grigori Sidorov, Alexander Gelbukh, Helena Gomez-Adorno, and David Pinto ([link to PDF](http://www.scielo.org.mx/pdf/cys/v18n3/v18n3a7.pdf)).\n", + "In this tutorial, we will learn how to use Gensim's SCM functionality, which consists of the `softcossim` function for one-off computation, and the `SoftCosineSimilarity` class for corpus-based similarity queries.\n", - "In this tutorial, we will learn how to use Gensim's SCM functionality, which consists of the `softcossim` method for distance computation, and the `SoftCosineSimilarity` class for corpus based similarity queries.\n", "\n", "> **Note**:\n", ">\n", @@ -29,9 +30,9 @@ ">\n", "\n", "## Running this notebook\n", - "You can download this [iPython Notebook](http://ipython.org/notebook.html), and run it on your own computer, provided you have installed Gensim, PyEMD, NLTK, Matplotlib, and downloaded the necessary data.\n", + "You can download this [Jupyter notebook](http://jupyter.org/), and run it on your own computer, provided you have installed the `gensim`, `jupyter`, `sklearn`, `pyemd`, `wmd`, and `wget` Python packages.\n", "\n", - "The notebook was run on an Ubuntu machine with an Intel core i7-6700HQ CPU 3.10GHz (4 cores) and 16 GB
memory. Running the entire notebook on this machine takes about 6 minutes." + "The notebook was run on an Ubuntu machine with an Intel core i7-6700HQ CPU 3.10GHz (4 cores) and 16 GB memory. Assuming all resources required by the notebook have already been downloaded, running the entire notebook on this machine takes about 30 minutes." ] }, { @@ -42,7 +43,7 @@ "source": [ "# Initialize logging.\n", "import logging\n", - "logging.basicConfig(format='%(asctime)s | %(levelname)s : %(message)s')" + "logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)" ] }, { @@ -51,9 +52,11 @@ "source": [ "## Part 1: Computing the Soft Cosine Measure\n", "\n", - "To use SCM, we need some word embeddings first of all. You could train a word2vec (see tutorial [here](http://rare-technologies.com/word2vec-tutorial/)) model on some corpus, but we will start by downloading some pre-trained word2vec embeddings. Download the GoogleNews-vectors-negative300.bin.gz embeddings [here](https://code.google.com/archive/p/word2vec/) (warning: 1.5 GB, file is not needed for part 2). Training your own embeddings can be beneficial, but to simplify this tutorial, we will be using pre-trained embeddings at first.\n", + "To use SCM, we need some word embeddings first of all. You could train a [word2vec][] (see tutorial [here](http://rare-technologies.com/word2vec-tutorial/)) model on some corpus, but we will use pre-trained word2vec embeddings.\n", "\n", - "Let's take some sentences to compute the similarity between." + "[word2vec]: https://radimrehurek.com/gensim/models/word2vec.html\n", + "\n", + "Let's create some sentences to compare." 
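Since the basics section above describes the measure only in prose, here is a minimal NumPy sketch of the quantity that `softcossim` computes. The vectors `x`, `y` and the word similarity matrix `S` are made-up toy values for illustration, not data from this notebook:

```python
import numpy as np

# Toy bag-of-words vectors over a 3-word vocabulary. The two "documents"
# share no words, so their ordinary cosine similarity is exactly zero.
x = np.array([1.0, 1.0, 0.0])
y = np.array([0.0, 0.0, 1.0])

# Made-up symmetric word-to-word similarity matrix, ones on the diagonal.
S = np.array([[1.0, 0.2, 0.7],
              [0.2, 1.0, 0.5],
              [0.7, 0.5, 1.0]])

def soft_cosine(x, y, S):
    # x^T S y / (sqrt(x^T S x) * sqrt(y^T S y))
    return (x @ S @ y) / (np.sqrt(x @ S @ x) * np.sqrt(y @ S @ y))

print('similarity = %.4f' % soft_cosine(x, y, S))  # similarity = 0.7746
```

Setting `S` to the identity matrix recovers ordinary cosine similarity, which is zero for these two vectors; the off-diagonal word similarities are what make the soft measure nonzero.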
] }, { @@ -88,6 +91,13 @@ "[nltk_data] Downloading package stopwords to /home/witiko/nltk_data...\n", "[nltk_data] Package stopwords is already up-to-date!\n" ] + }, + { + "name": "stderr", + "output_type": "stream", + "text": [ + "2018-02-05 10:47:42,975 : INFO : built Dictionary(11 unique tokens: ['president', 'fruit', 'greets', 'obama', 'illinois']...) from 3 documents (total 11 corpus positions)\n" + ] } ], "source": [ @@ -118,7 +128,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "Now, as mentioned earlier, we will be using some downloaded pre-trained embeddings. We load these into a Gensim Word2Vec model class. Note that the embeddings we have chosen here require a lot of memory. We will use the embeddings to construct a term similarity matrix that will be used by the `softcossim` method." + "Now, as we mentioned earlier, we will be using some downloaded pre-trained embeddings. Note that the embeddings we have chosen here require a lot of memory. We will use the embeddings to construct a term similarity matrix that will be used by the `softcossim` function." 
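For readers unfamiliar with the term similarity matrix mentioned above: its entries are word-to-word similarities derived from the embeddings. A dense toy construction (purely illustrative — gensim's `similarity_matrix` method actually returns a sparse matrix over the dictionary's vocabulary) looks roughly like this:

```python
import numpy as np

# Made-up embeddings: one 4-dimensional vector per word of a 3-word vocabulary.
rng = np.random.default_rng(42)
embeddings = rng.normal(size=(3, 4))

# Normalize rows so that dot products between them are cosine similarities.
embeddings /= np.linalg.norm(embeddings, axis=1, keepdims=True)

# Dense term similarity matrix: S[i, j] = cosine(word_i, word_j).
S = embeddings @ embeddings.T

print(np.allclose(np.diag(S), 1.0), np.allclose(S, S.T))  # True True
```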
] }, { @@ -126,37 +136,40 @@ "execution_count": 4, "metadata": {}, "outputs": [ + { + "name": "stderr", + "output_type": "stream", + "text": [ + "2018-02-05 10:49:29,393 : INFO : constructed a term similarity matrix with 91.735537 % nonzero elements\n" + ] + }, { "name": "stdout", "output_type": "stream", "text": [ - "Cell took 107.69 seconds to run.\n" + "CPU times: user 1min 39s, sys: 3.06 s, total: 1min 42s\n", + "Wall time: 1min 47s\n" ] } ], "source": [ "%%time\n", - "import os\n", - "\n", - "from gensim.models import KeyedVectors\n", - "if not os.path.exists('/data/GoogleNews-vectors-negative300.bin.gz'):\n", - " raise ValueError(\"SKIP: You need to download the google news model\")\n", - " \n", - "model = KeyedVectors.load_word2vec_format('/data/GoogleNews-vectors-negative300.bin.gz', binary=True)\n", - "similarity_matrix = model.similarity_matrix(dictionary)\n", - "del model" + "import gensim.downloader\n", + "\n", + "w2v_model = gensim.downloader.load(\"word2vec-google-news-300\")\n", + "similarity_matrix = w2v_model.similarity_matrix(dictionary)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ - "So let's compute SCM using the `softcossim` method." + "So let's compute SCM using the `softcossim` function." ] }, { "cell_type": "code", - "execution_count": 6, + "execution_count": 5, "metadata": {}, "outputs": [ { @@ -169,6 +182,7 @@ ], "source": [ "from gensim.matutils import softcossim\n", + "\n", "similarity = softcossim(sentence_obama, sentence_president, similarity_matrix)\n", "print('similarity = %.4f' % similarity)" ] @@ -182,7 +196,7 @@ }, { "cell_type": "code", - "execution_count": 7, + "execution_count": 6, "metadata": {}, "outputs": [ { @@ -203,359 +217,351 @@ "metadata": {}, "source": [ "## Part 2: Similarity queries using `SoftCosineSimilarity`\n", + "You can use SCM to get the most similar documents to a query, using the SoftCosineSimilarity class. 
Its interface is similar to what is described in the [Similarity Queries](https://radimrehurek.com/gensim/tut3.html) Gensim tutorial.\n", "\n", - "You can use SCM to get the most similar documents to a query, using the `SoftCosineSimilarity` class. Its interface is similar to what is described in the [Similarity Queries](https://radimrehurek.com/gensim/tut3.html) Gensim tutorial.\n", + "### Qatar Living unannotated dataset\n", + "Contestants solving the community question answering task in the [SemEval 2016][semeval16] and [2017][semeval17] competitions had an unannotated dataset of 189,941 questions and 1,894,456 comments from the [Qatar Living][ql] discussion forums. As our first step, we will use the same dataset to build a corpus.\n", "\n", - "### Yelp data\n", - "\n", - "Let's try similarity queries using some real world data. For that we'll be using Yelp reviews, available at http://www.yelp.com/dataset_challenge. Specifically, we will be using reviews of a single restaurant, namely the [Mon Ami Gabi](http://en.yelp.be/biz/mon-ami-gabi-las-vegas-2).\n", - "\n", - "To get the Yelp data, you need to register by name and email address. The data is 3.6 GB.\n", - "\n", - "This time around, we are going to train the Word2Vec embeddings on the data ourselves. One restaurant is not enough to train Word2Vec properly, so we use 6 restaurants for that, but only run queries against one of them. In addition to the Mon Ami Gabi, mentioned above, we will be using:\n", - "\n", - "* [Earl of Sandwich](http://en.yelp.be/biz/earl-of-sandwich-las-vegas).\n", - "* [Wicked Spoon](http://en.yelp.be/biz/wicked-spoon-las-vegas).\n", - "* [Serendipity 3](http://en.yelp.be/biz/serendipity-3-las-vegas).\n", - "* [Bacchanal Buffet](http://en.yelp.be/biz/bacchanal-buffet-las-vegas-7).\n", - "* [The Buffet](http://en.yelp.be/biz/the-buffet-las-vegas-6).\n", - "\n", - "The restaurants we chose were those with the highest number of reviews in the Yelp dataset. 
Incidentally, they all are on the Las Vegas Boulevard. The corpus we trained Word2Vec on has 27028 documents (reviews), and the corpus we used for `SoftCosineSimilarity` has 6978 documents.\n", - "\n", - "Below a JSON file with Yelp reviews is read line by line, the text is extracted, tokenized, and stopwords and punctuation are removed.\n" + "[semeval16]: http://alt.qcri.org/semeval2016/task3/\n", + "[semeval17]: http://alt.qcri.org/semeval2017/task3/\n", + "[ql]: http://www.qatarliving.com/forum" ] }, { "cell_type": "code", - "execution_count": 8, + "execution_count": 7, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ - "[nltk_data] Downloading package punkt to /home/witiko/nltk_data...\n", - "[nltk_data] Package punkt is already up-to-date!\n", "[nltk_data] Downloading package stopwords to /home/witiko/nltk_data...\n", - "[nltk_data] Package stopwords is already up-to-date!\n" - ] - } - ], - "source": [ - "# Pre-processing a document.\n", - "from nltk.corpus import stopwords\n", - "from nltk import download, word_tokenize\n", - "download('punkt') # Download data for tokenizer.\n", - "download('stopwords') # Download stopwords list.\n", - "stop_words = stopwords.words('english')\n", - "\n", - "def preprocess(doc):\n", - " doc = doc.lower() # Lower the text.\n", - " doc = word_tokenize(doc) # Split into words.\n", - " doc = [w for w in doc if not w in stop_words] # Remove stopwords.\n", - " doc = [w for w in doc if w.isalpha()] # Remove numbers and punctuation.\n", - " return doc" - ] - }, - { - "cell_type": "code", - "execution_count": 9, - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "Cell took 103.94 seconds to run.\n" + "[nltk_data] Package stopwords is already up-to-date!\n", + "Number of documents: 3\n", + "CPU times: user 1min 59s, sys: 6.06 s, total: 2min 5s\n", + "Wall time: 2min 22s\n" ] } ], "source": [ "%%time\n", + "from itertools import chain\n", "import json\n", + 
"import gzip\n", + "from re import sub\n", + "from os.path import isfile\n", "\n", - "# Business IDs of the restaurants.\n", - "ids = ['4JNXUYY8wbaaDmk3BPzlWw', # Mon Ami Gabi\n", - " 'Ffhe2cmRyloz3CCdRGvHtA', # Earl of Sandwich\n", - " 'K7lWdNUhCbcnEvI0NhGewg', # Wicked Spoon\n", - " 'eoHdUeQDNgQ6WYEnP2aiRw', # Serendipity 3\n", - " 'RESDUcs7fIiihp38-d6_6g', # Bacchanal Buffet\n", - " '2weQS-RnoOBhb1KsHKyoSQ'] # The Buffet\n", - "\n", - "w2v_corpus = [] # Documents to train word2vec on (all 6 restaurants).\n", - "scm_corpus = [] # Documents to run queries against (only one restaurant).\n", - "documents = [] # scm_corpus, with no pre-processing (so we can see the original documents).\n", - "with open('/data/review.json') as data_file:\n", - " for line in data_file:\n", - " json_line = json.loads(line)\n", - " \n", - " if json_line['business_id'] not in ids:\n", - " # Not one of the 6 restaurants.\n", - " continue\n", - " \n", - " # Pre-process document.\n", - " text = json_line['text'] # Extract text from JSON object.\n", - " text = preprocess(text)\n", - " \n", - " # Add to corpus for training Word2Vec.\n", - " w2v_corpus.append(text)\n", - " \n", - " if json_line['business_id'] == ids[0]:\n", - " # Add to corpus for similarity queries.\n", - " scm_corpus.append(text)\n", - " documents.append(json_line['text'])" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Below is a plot with a histogram of document lengths and includes the average document length as well. Note that these are the pre-processed documents, meaning stopwords are removed, punctuation is removed, etc. Document lengths have a high impact on the running time of SCM, so when comparing running times with this experiment, the number of documents in query corpus (about 7000) and the length of the documents (about 59 words on average) should be taken into account." 
- ] - }, - { - "cell_type": "code", - "execution_count": 10, - "metadata": {}, - "outputs": [ - { - "data": { - "image/png": "iVBORw0KGgoAAAANSUhEUgAAAgQAAAGPCAYAAAAjncgQAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDIuMS4xLCBodHRwOi8vbWF0cGxvdGxpYi5vcmcvAOZPmwAAIABJREFUeJzt3XmcHVWd///X2xAgECDBEMJiiMgiAyg/RDEOOEFBosgXR3RUUMFBQFEEwUFAZQLjhiKCiiPB+RLJDKAsjsIouyzDJkFRUDbZZMlCkF8gEAKE8/2jqkPl0knfdG6nu5PX8/GoR98659Spc2519/3cU6eqUkpBkiSt3F7V3w2QJEn9z4BAkiQZEEiSJAMCSZKEAYEkScKAQJIkYUCgPpbkwSRT+rsdK7oko5L8NMnsJCXJ4b2ooySZ1AfNW+kkmZLkwf5uR0+SjKuP+9H93Rb1PwMCtS3J/vU/j7cuJv/iTvwTTPK2JJOSjFjWulYi3wT2BL4NfAy4pH+bs2JJsk9vgqyBYrC3X8vHKv3dAK3wtgReWspt3gb8KzAF+P873aAV1ATg0lLKif3dkBXUPsA2wCn93ZBeGuzt13JgQKA+VUqZ399tWFpJVgVeKqW82N9tWQqjgTn93QhJg5enDNSnuptDkOSQJLcneSbJnCS3JTm4zptENewN8EB9iqIkmdDY/lNJ7kjyXJIZSU5Psm43+/5MkvuTzEvy2yRvT3J1kqsbZSbU9e9bn6b4KzAP2DjJqkmOT3JLkicb9byvm32VJD9K8oEkf6rL3pxkuzr/wCT31m2+Jsmmbb5/4+q5AU/Udd7S3H/XaRxgLWC/rverhzpXS/LdJI8neTrJL5NsvJiyb0zyqyRP1cfr6iQ7d1NunSTfrt/v+UkeTXJ2ko2a7UwyrmW7rvd/QiPt6iR3Jdm2fq+erev9UJ2/U5Kb6vfj7iS7d9OeDZL8uP79mJ/kziSfXsy+P5Lk2CSP1MfnyiSbNdsD7AFs0vh97NU93+uh+1vqtj+Z5Lwkr20p09X/v0tyVd3/R5Mc1U19m9TH75kks5KckmT35nvabvvr39H76vfrliRvbslfv35PH67LzEzy6yRb9+a90MDjCIF6Y50ko7pJH9rThkkOAE4Dzgd+UG+zNdVpgtOBC4EtgI8Anwdm15veWW//ZeDfgKvq8q8DPgPsmGTHrhGJ+p//D4D/Bb4LbAL8N/A34JFumnYs1amNU4EAc4G1gYOBc4EzgdWphl5/nuQ9pZRft9TxNqp/vKcBBTgGuDjJV4HPAf8OjAC+SHU65O09vFejgRuoPuy/BzwOfBS4MMm+pZRzgGup5gz8GPgtMHlJddZ+XNdzdl3/BOB/utn/VsB1wDNUQdpzwIHAFUl2K6VcW5dbE7iGakh6CjANeDXwHmAz4NE22tRqnbpNPwPOAz4F/FeSUA17/wg4B/gCcF6S15RS5tTtGQ3cBAwBfgjMAt4J/DDJq0spX23Z11HAAuCker9HAf8F7Fjnf61O35jqd7JXUk3c+zrV7/6ZwEjgs8D1Sd5YSnm8pf+/Bn5e9/8DwIlJbu/6vavf96uADal+bx8D9gV2adl1O+3/EDCc6m+qUL0HFybZtJTyQl3mfGBbqr+rB4D1gH+g+nv909K+HxqASikuLm0twP5U/yyWtDzYss2DwJTG+s+BO3rYzxfqusa1pK8HzAeuAIZ0067P1uurUgUSvwOGNsrtV5e7upE2oU77K7Bmy/6GAKu1pK0K3AFc0ZJegOeB1zXSDqrTZwHrNNK/Xqdv1sP7cHJdbkIjbRjwZ2B6S9/mNt/nJdT5xrrOH7akn1WnT2qkXVj3afNG2qj6vZ3WSJtUb/
vBbvaXlmPUeky73v9mH6+u0z7WSNuyTnsJ+PtG+rvq9E820iYDM4D1WvZ1BvAsMKJl33cCqzbKfa5O36aRdjEtv9s9vM9TmuWBscALwHEt5V5HFWh9vZv+f7zl9246cH4j7Yi63N6NtNXr/rS+p922HxhXl50NjGyk/586/b31+oh6/Qvtvgcug2/xlIF643PAbt0sN7ex7Ryq4fg391jylXal+sd4aillQSN9KjCT6ts5wA5U31DPKC9/u4HqW9+Ti6n7rFLKM82EUsqC8vKIw6qpTkusTfWt/E3d1PGbUsp9jfWu9+PCUn97bUnv6bTBHsDvSilXN9o0j+pb7xhg+x6278576p8/aEn/XnMlyRBgd+CiUsq9jf3Ppvqwe1OS9evkDwB/KqWc17qzUkpvH6c6j+p4ddVzN9UE03tKKdc3yi3yXtYjCB+gGl0oqS7HHFWPaF1GFVDtyKLOKqU831i/rllnh7yfakT2py1tmgPcziu/1c8D/rNrpW7fb1vaNJEq8LmwUe45qsBnaV1QSmn+bbS+B/OogsMJ6eb0nFYMnjJQb9xSSrmpNTHVZU1jetj2RKrh298muR+4HPhpKeU3bex3k/rn3c3EUsqCJPdSfdtplvtLS7kXs/jLIu/rLjHJJ6mGWbeiOpWwsLpuiv+1Zb0rCHh4MekjF9OWLpvQ+GffcGf9cxztBWGtdRZa3hvgnpb19YA1aHmvu9n/TKpvub9Yynb05NFSSuvVKXNoeS9LKXOqGGDhe7le/fqf66U7o1vWW49b1wdjT8dnaWxR/7xrMfn3t6x31/8ngTc01jcB7u8m6Go9tu1Y5D0opTzZfF9LKfOTfJHqtMrMJDcDvwKmllJaf781SBkQaLkqpdyZZEuqb6q7A+8FDk7yw1LKZ/qxafNaE5LsS/Vt6yKqQGYW8CLwCaq5BK0WdJO2pPQsJn1FtLiRgiGLSe/te9k16nkO8H8XU7b1fPfyOD5d7Xo31e9Qq9bfv+X9O9Pj/koppyT5BbAX1YjgV4Bjk7y3OYqlwcuAQMtdKeVZqglK5ydZhWoI+pAkXy+lPMriPzweqn9uSeMbbZJXAZsDv28ptxnVCERXuVWovtX+sc2mfpDqm9tezW9hST7R5vbL6iGqvrZ6ff3zwV7WGar35s+N9C1ayj1Odb69nf3fRzWhcEm6vnW33mxqk9aCy+hx4GlglVLKFR2st7enPrp0jUD9tZTy5yWWbN9DwLZJ0jJKsFk3ZZe1/VUlpTxANanzlFRXptwGfIlq3oMGOecQaLlK8urmeqmu9b+9Xu36sOg6l986ZHs51XnMz9VBQJd9gfWpJk5BNcv9CeDAJENbyi3NMHDXt6aF35JSXS74j0tRx7K4GNg+jcv8kqwOfJrq3PGtvaiz68qIz7akH9pcqedoXALsmeR1jf2vSzU5c1opZWadfD6wdZIPtu6sPqcPL38gvr2RN4Rq4mXH1O0+H3hfkjd20571eln1M8CIRn+W1gVUv0/HdVfHYq7a6cmlVKfo3t+oZ3WqK0FaLVP7k6yRZFgzrZTyCNWo2YhGuQ2SvL7l706DhCMEWt4uSzKL6nLAGVTfZg6l+tbedW56Wv3zG0nOpgoCriqlzEryb1SXHV6W5L+pJj19FvgD1eV0lFKeT3U/g+8DVyX5GdU30U9QfTC1+23pl1T/bH+Z5JfARsAhVOfVt+td95fKiVSXX/5PkuZlh38H7Ft6ceOkUsptSc4BPp1kHeB6qgltrSMEAF+mmsX/v0lO4+XLDkdQTdzr8m1gb+CcJO+iClRGUA2PHwdcU0r5U5KbqI7pulSXf36YvvkfdDTVFQQ3JjmD6hTBSKpj9o9UM/GX1jSqS/NOqc+fv1RKObfdjUsp99eXHX6b6n4A/001SfK1VEPwP6W6WmNpnE71uz+1nqTbddnhc1277VT7qX4/rkpyHtX7OZ/qtN9WVFcFdfkGVcD4Wno3gqV+ZECg5e10qvPvh1PN2H+M6lzvV7
smUZVSpiU5hurD90yqkaxdgFmllK8mmU0VRHyH6p/qFOCY0rgrYinlB/W3oSOp/gn/gepe/9/j5X+YS1RK+Ul9Tfunqa5w+AvVBMPNWA4BQR0A/T1VYHAI1SS/O6guM/v5MlT9z1TBxb5UH0ZXUV3R0Dph784kO1H9k/8i1XGYBhxY6nsQ1OWeSfJ2qg+091N9IMyiujfBvY0q96U6/kdTHbf/AH5D47ROJ9Tv245U57jfR3X8/kYVcB7Zy2p/SHUN/kepfvdCdX+KpWnXSfXk1yOogq1XUd0T4yqqew0slVLK3CTvoAp8P0d16elZwI1UoyTN3/Nlbf/DVFd9vJPq77dQnbY7oJSyuLkaGmRSen1VkDS41KcZHqe6DLC7YVVp0Kuv9vkusHE9J0dqi3MItEJKsno350s/DqyLE6C0gmg9r1/PITgYuNdgQEvLUwZaUb0V+G59zvMJqpv4HEA15L7Uw7PSAHVhqudv3EZ1e+KPUl0Fsm+/tkqDkgGBVlQPUp33/BzVqMDfqM6vHt1yVzppMLsU+CRVADCE6lLSD5dSftqvrdKg5BwCSZLkHAJJkrQCnTIYNWpUGTduXH83o8+98MfqJntD3/CGHkpKklZkt9566+xSSm9vtvUKK0xAMG7cOKZNm9ZzwUHu0Y1eA8BGK0FfJUmLl+Shnku1z1MGkiTJgECSJBkQSJIkDAgkSRIGBJIkCQMCSZJEmwFBkmOS3JLkqSSPJ7koyTYtZZJkUpLHksxLcnWSrVvKjEwyNcmcepmaZERLmW2TXFPX8WiS47p5SI0kSeqgdkcIJlA9T/ttwDuAF4ErkqzbKHMU1bPGDwXeTPU89MuTrNUoczbVQ2Ym1sv2wNSuzCRrUz0bfWZdx2HAv1A9P1ySJPWRtm5MVErZvbme5GPAHODvgYvqb/CHA98spVxQl9mPKijYBzg9yVZUQcBOpZQb6zIHA9cl2bKUcjfVAzrWAPYrpcwD7kjyeuCIJCcXH7wgSVKf6O0cgrXqbZ+s118LjAEu6ypQf6BfSzWqADAemAvc0KjneuCZljLX1dt2uRTYEBjXy7ZKkqQe9DYgOJXq+ds31utj6p8zW8rNbOSNAR5vfsuvX89qKdNdHc19LJTkoCTTkkx7/PHHe9MPSZJELwKCJCcDOwF7l1IWdL5J7SulTC6l7FBK2WG99Tr2fAdJklY6SxUQJPku8BHgHaWU+xtZM+qf67dssn4jbwawXvOKgfr16JYy3dXR3IckSeqwtgOCJKfycjBwV0v2A1Qf2Ls1yq8O7MzLcwZuBIZTzRPoMh5Ys6XMzvW2XXYDHgMebLetfWHDjceSpM+WDTce25/dkySt5Nq6yiDJacDHgPcBTybpOp8/t5Qyt5RSkpwCHJvkLuAe4MtUkwjPBiil3JnkEqorDg6qtz8duLi+woC67L8CU5J8FdgCOBo4vr+vMJj+6MPseNwlfVb/zSdM7LO6JUnqSVsBAXBI/fPKlvTjgUn1628Bw4DTgJHAzcC7SilPN8rvA3yf6soBgF8Cn+3KLKXMSbJbXcc0qqsYvgOc3GY7JUlSL7R7H4Ie7xRYf4OfxMsBQndlngQ+2kM9twNvb6ddkiSpM3yWgSRJMiCQJEkGBJIkCQMCSZKEAYEkScKAQJIkYUAgSZIwIJAkSRgQSJIkDAgkSRIGBJIkCQMCSZKEAYEkScKAQJIkYUAgSZIwIJAkSRgQSJIkDAgkSRIGBJIkCQMCSZKEAYEkScKAQJIkYUAgSZIwIJAkSRgQSJIkDAgkSRIGBJIkCQMCSZJEmwFBkrcn+WWSR5OUJPu35JfFLKc1ykzpJv+mlnpWS/L9JLOTPFPvc+OO9FSSJC1WuyMEw4E7gMOAed3kb9Cy7Fmn/6yl3BUt5d7Tkn8KsDfwEWBnYG3g4iRD2mynJEnqhVXaKVRK+RXwK6i+6XeTP6O5nmQv4J5SyjUtRee3lm1ssw5wAPCJUsrlddrHgIeAXYFL22mrJElaeh
2fQ5BkOPBh4IxusndKMivJPUnOSDK6kfcmYChwWVdCKeVh4E7gbZ1upyRJellfTCrcB1gV+ElL+iXAx4F3AkcCbwGuSrJanT8GWADMbtluZp33CkkOSjItybTHH3+8Q82XJGnl09Ypg6V0IPCLUsoin9CllHMbq7cnuZXqdMAewIW92VEpZTIwGWDVVVctSXrXYkmSVnIdDQiSbAfsABzbU9lSymNJHgE2r5NmAEOAUUAzmFgfuK6n+l544QV2PO6SpW5zu24+YWKf1S1JUn/r9CmDg4AHqK4mWKIko4CNgOl10q3AC8BujTIbA1sBN3S4nZIkqaGtEYJ6ouBm9eqrgLH1aMDfSil/rcusAewLfKuUUrrZfhJwAVUAMA74BjAL+DlAKWVOkv8AvpVkFvAEcDLwR9oIMCRJUu+1O0KwA/D7ehkGHF+/PqFR5kPAmsCZ3Wy/ANgW+AVwD9WEw7uB8aWUpxvlDqcKEH4KXA/MBfYspSxos52SJKkX2r0PwdXAEmfslVLOpPtggFLKPGD3NvYzHzi0XiRJ0nLiswwkSZIBgSRJMiCQJEkYEEiSJAwIJEkSBgSSJAkDAkmShAGBJEnCgECSJGFAIEmSMCCQJEkYEEiSJAwIJEkSBgSSJAkDAkmShAGBJEnCgECSJGFAIEmSMCCQJEkYEEiSJAwIJEkSBgSSJAkDAkmShAGBJEnCgECSJGFAIEmSMCCQJEkYEEiSJNoMCJK8PckvkzyapCTZvyV/Sp3eXG5qKbNaku8nmZ3kmbq+jVvKjE1yUZ0/O8n3kqy6zL2UJElL1O4IwXDgDuAwYN5iylwBbNBY3tOSfwqwN/ARYGdgbeDiJEMA6p//A6xV538E+ADwnTbbKEmSemmVdgqVUn4F/Aqq0YDFFJtfSpnRXUaSdYADgE+UUi6v0z4GPATsClwKvAvYGtiklPJwXeYo4MdJvlRKeardTkmSpKXTyTkEOyWZleSeJGckGd3IexMwFLisK6H+0L8TeFudNB64sysYqF0KrFZvL0mS+khbIwRtuAS4EHgAGAd8FbgqyZtKKfOBMcACYHbLdjPrPOqfM1vyZ9fbjaEbSQ4CDupA+yVJWql1JCAopZzbWL09ya1UpwP2oAoU+kQpZTIwGSBJ6av9SJK0ouuTyw5LKY8BjwCb10kzgCHAqJai69d5XWXWb8kfVW/X7dwESZLUGX0SECQZBWwETK+TbgVeAHZrlNkY2Aq4oU66Ediq5VLE3YD59faSJKmPtHXKIMlwYLN69VXA2CTbAX+rl0nABVQBwDjgG8As4OcApZQ5Sf4D+FaSWcATwMnAH6kuV4RqwuGfgLOSHAm8Gvg2cIZXGEiS1LfaHSHYAfh9vQwDjq9fn0A16W9b4BfAPcBPgLuB8aWUpxt1HE4VIPwUuB6YC+xZSlkAUP/cA3i2zv8pVZDxhd53T5IktaPd+xBcDWQJRXZvo475wKH1srgyfwXe206bJElS5/gsA0mSZEAgSZIMCCRJEgYEkiQJAwJJkoQBgSRJwoBAkiRhQCBJkjAgkCRJGBBIkiQMCCRJEgYEkiQJAwJJkoQBgSRJwoBAkiRhQCBJkjAgkCRJGBBIkiQMCCRJEgYEkiQJAwJJkoQBgSRJwoBAkiRhQCBJkjAgkCRJGBBIkiQMCCRJEgYEkiQJAwJJkkSbAUGStyf5ZZJHk5Qk+zfyhiY5MckfkzyTZHqSs5OMbanj6nrb5nJuS5mRSaYmmVMvU5OM6EhPB7gMGUqSHpeF5dso21w23HjsEvYuSVrZrdJmueHAHcBZ9dK0BrA98DXgNmAd4DvAJUneUEp5sVH2TODYxvq8lrrOBsYCE+v1HwNTgT3bbOegVRa8wI7HXdJzwR9/EqC9sg03nzCx50KSpJVWWwFBKeVXwK8AkkxpyZsD7NZMS3Iw8CdgK+D2RtazpZQZ3e0jyVZUgcBOpZQbG/Vcl2TLUsrd7bRVkiQtvb6aQ7B2/fPJlvQPJ5md5E9JTk
qyViNvPDAXuKGRdj3wDPC27naS5KAk05JM61TDJUlaGbV7yqBtSValOmVwUSnlkUbW2cBDwGPA1sA3gDcA76rzxwCPl1JK1wallJJkVp33CqWUycDker+luzKSJKlnHQ0IkqwC/CcwAvg/zbz6w7vL7UnuB25Osn0p5XedbIckSVo6HTtlUAcD51B9639nKeWJHjaZBiwANq/XZwDrpTGVvn49us6TJEl9pCMBQZKhwE+pgoFdFjdxsMW2wBBger1+I9XVDOMbZcYDa7LovAJJktRhbZ0ySDIc2KxefRUwNsl2wN+o5gScB7yZ6vLAkqTrnP+cUsq8JK8D9qW6UmE28HdU8wx+TzVxkFLKnUkuAU5PclC9/enAxV5hIElS32p3hGAHqg/v3wPDgOPr1ycAGwN7ARsCt1J94+9aPlRv/zzwTuBS4G7ge8BlwK6llAWN/ewD/KEud2n9+mO965okSWpXu/chuBrIEoosKY9SysPAP7SxnyeBj7bTJkmS1Dk+y0CSJBkQSJIkAwJJkoQBgSRJwoBAkiRhQCBJkjAgkCRJGBBIkiQMCCRJEgYEkiQJAwJJkoQBgSRJwoBAkiRhQCBJkjAgkCRJGBBIkiQMCCRJEgYEkiQJAwJJkoQBgSRJwoBAkiRhQCBJkjAgkCRJGBBIkiQMCCRJEgYEkiQJAwJJkoQBgSRJos2AIMnbk/wyyaNJSpL9W/KTZFKSx5LMS3J1kq1byoxMMjXJnHqZmmRES5ltk1xT1/FokuOSZJl7KUmSlqjdEYLhwB3AYcC8bvKPAo4EDgXeDMwCLk+yVqPM2cD2wMR62R6Y2pWZZG3gcmBmXcdhwL8AR7TfHUmS1BurtFOolPIr4FcASaY08+pv8IcD3yylXFCn7UcVFOwDnJ5kK6ogYKdSyo11mYOB65JsWUq5G9gXWAPYr5QyD7gjyeuBI5KcXEopy9xbSZLUrU7MIXgtMAa4rCuh/kC/FnhbnTQemAvc0NjueuCZljLX1dt2uRTYEBjXgXZKkqTF6ERAMKb+ObMlfWYjbwzwePNbfv16VkuZ7upo7mMRSQ5KMi3JtF62XZIk0eYpg4GqlDIZmAyQxFMKkiT1UidGCGbUP9dvSV+/kTcDWK95xUD9enRLme7qaO5DkiT1gU4EBA9QfWDv1pWQZHVgZ16eM3Aj1ZUK4xvbjQfWbCmzc71tl92Ax4AHO9BOSZK0GO3eh2B4ku2SbFdvM7ZeH1vPBTgF+GKS9yfZBphCNYnwbIBSyp3AJVRXHIxPMh44Hbi4vsKAuuyzwJQk2yR5P3A04BUGkiT1sXZHCHYAfl8vw4Dj69cn1PnfAr4LnAZMAzYA3lVKebpRxz7AH6iuHLi0fv2xrsxSyhyqEYEN6zpOA74DnNyLfkmSpKXQ7n0IrgYWe8fA+hv8pHpZXJkngY/2sJ/bgbe30yZJktQ5PstAkiQZEEiSJAMCSZKEAYEkScKAQJIkYUAgSZIwIJAkSRgQSJIkDAgkSRIGBJIkCQMCSZKEAYEkScKAQJIkYUAgSZIwIJAkSRgQSB2T5BXLj370o0XK/OxnP2O77bZjjTXWYJNNNuHb3/522/WXUnj3u99NEs4///yF6Q8++CAHHHAAm266KcOGDWPTTTflmGOOYd68eR3rm6QV3yr93QBpRXLGGWfw3ve+d+H6Ouuss/D1r3/9a/bZZx++973vMXHiRO68804OPPBAhg0bxmc/+9ke6/7Od77Dq171yhj+rrvuYsGCBfz7v/87m2++OXfeeScHHXQQTzzxBJMnT+5MxySt8BwhUJ+YMGECn/70pznyyCNZd911WW+99Tj11FOZP38+n/nMZxgxYgRjx45l6tSpi2z36KOP8uEPf5iRI0cycuRI9thjD+69996F+ffddx977bUXY8aMYc0112T77bfn4osvXqSOcePG8dWvfpWDDz6Ytddem4033nipvokvixEjRjBmzJiFy7BhwxbmTZ06lT333JNDDjmETT
fdlD322INjjjmGE088kVLKEuu95ZZbOPXUUznzzDNfkTdx4kSmTJnC7rvvvrDeL33pS1xwwQUd75+kFZcBgfrMf/3Xf7HWWmtx8803c/TRR3P44Yfzvve9jy222IJp06ax33778clPfpLp06cD8Oyzz7LLLruw+uqrc80113DjjTeywQYbsOuuu/Lss88CMHfuXN797ndz+eWX84c//IG9996b97///dx1112L7Pu73/0u2267Lb/73e/44he/yFFHHcWNN9642LZed911DB8+fInL17/+9R77fNhhhzFq1Cje/OY386Mf/YiXXnppYd78+fNZffXVFyk/bNgwHnnkER566KHF1vn000+zzz77MHnyZEaPHt1jGwCeeuopRo4c2VZZSQJPGagPbb311kyaNAmAI444gm9+85sMHTqUww47DIDjjjuOE088keuvv54PfOADnHvuuZRSOPPMM0kCwOmnn87o0aO5+OKL+ad/+ife+MY38sY3vnHhPr70pS9x0UUXcf755/PlL395Yfq73vWuhcPwhx56KN/73ve48sorGT9+fLdt3WGHHbjtttuW2J911113ifknnHACu+yyC8OHD+fKK6/kyCOPZPbs2Qvbtfvuu3P44Ydz2WWXseuuu/KXv/yF73znOwBMnz6dcePGdVvvpz71KSZOnMi73/3uJe6/y0MPPcRJJ53Escce21Z5SQIDAvWhN7zhDQtfJ2H06NFsu+22C9OGDh3KyJEjmTVrFgC33norDzzwAGuttdYi9Tz77LPcd999ADzzzDMcf/zxXHzxxUyfPp0XXniB5557bpF9te4bYMMNN1y4n+4MGzaMzTbbrHcdrX3lK19Z+Hq77bZjwYIFfO1rX1sYEBx44IELT3m88MILrL322hx22GFMmjSp27kBUJ1m+MMf/sC0adPaasPMmTOZOHEiu+22G5///OeXqT+SVi6eMlCfGTp06CLrSbpN6xpWf+mll9huu+247bbbFlnuueceDj74YAC+8IUvcN555/Fv//ZvXHPNNdx222285S1v4fnnn+9x383h+1adOmXQtOOOO/LUU08xc+bMhW048cQTmTt3Lg899BAzZszgLW95CwCbbrppt3VceeWV/PnPf2b48OGsssoqrLJKFcN/6EMfYqeddlqk7IwZM9hll13YZpttmDp16sJRFklqhyMEGjC23357zjnnHEaNGsWIESO6LfO///u/fPzjH2fvvfcG4LnnnuNhmOg8AAAT7ElEQVS+++5jiy22WKZ9d+KUQavbbruN1Vdf/RV9GTJkCBtttBEA55xzDuPHj2e99dbrto6vfe1rfOELX1gkbdttt+Wkk05ir732Wpg2ffp0dtllF7beemvOOeechYGDJLXL/xoaMPbdd9+FH3QnnHACY8eO5eGHH+YXv/gFn/rUp9h8883ZYost+PnPf85ee+3F0KFDOf7443nuueeWed/LesrgoosuYsaMGYwfP55hw4bxm9/8huOOO46DDjqI1VZbDYDZs2dz3nnnMWHCBObPn8+ZZ57JeeedxzXXXLOwnt/+9rd8/OMf56yzzuItb3kLG2200cLgoek1r3nNwlGFxx57jAkTJrDhhhtyyimnMHv27IXl1ltvPYYMGdLrfklaeRgQrCQyZGifDiFvsNFreOyRvy5THWussQbXXnstRx99NB/84AeZM2cOG264IbvsssvCGfMnn3wyBxxwADvvvDMjR47k8MMP70hAsKyGDh3KD3/4Q4444gheeuklNt10U0444QQ+85nPLFLurLPO4l/+5V8opTB+/HiuvvrqhacNoJovcffddy+8qqIdl112Gffeey/33nsvY8eOXSTvgQceWOxkRUlqSk/XPw8WScqOx13SZ/XffMJEBkL9F/z4kwDs/ckf90n9vXXzCRN7vJZektQ5SW4tpezQqfqcVChJkgwIJElShwKCJA8mKd0s/1PnT+omb0ZLHanLPZZkXpKrk2zdifZJkqQl69
QIwZuBDRrL9kABftYoc3dLmW1b6jgKOBI4tK5vFnB5krWQJEl9qiNXGZRSHm+uJzkAeIpFA4IXSymLjAo0ygc4HPhmKeWCOm0/qqBgH+D0TrRTkiR1r+NzCOoP9wOA/yylNB/Ivml9OuCBJOcmad6a7bXAGOCyroR622uBt3W6jZIkaVF9MalwN6oP+DMaaTcD+wMTgQOpPvxvSPLqOn9M/XNmS10zG3mvkOSgJNOStHejd0mS1K2+uDHRgcAtpZQ/dCWUUn7dLJDkJuB+YD/g5N7uqJQyGZhc1+lF8JIk9VJHRwiSjAb2YtHRgVcopcwF/gRsXid1zS1Yv6Xo+o08SZLURzp9ymB/YD5wzpIKJVkdeD0wvU56gOqDf7eWMjsDN3S4jZIkqUXHThnUkwk/CZxbjwA0804CLgL+CowGvgKsCfwEoJRSkpwCHJvkLuAe4MvAXODsTrVRkiR1r5NzCCZQnQL4aDd5G1ONGowCHgduAt5aSnmoUeZbwDDgNGAk1UTEd5VSnu5gGyVJUjc6FhCUUn4DdPs4vVLKh9vYvgCT6kWSJC1HPstAkiQZEEiSJAMCSZKEAYEkScKAQJIkYUAgSZIwIJAkSRgQSJIkDAgkSRIGBJIkCQMCSZKEAYEkScKAQJIkYUAgSZIwIJAkSRgQSJIkDAgkSRIGBJIkCQMCSZKEAYEkScKAQJIkYUAgSZIwIJAkSRgQSJIkDAgkSRIGBJIkCQMCSZKEAYEkSaJDAUGSSUlKyzKjkZ+6zGNJ5iW5OsnWLXWMTDI1yZx6mZpkRCfaJ0mSlqyTIwR3Axs0lm0beUcBRwKHAm8GZgGXJ1mrUeZsYHtgYr1sD0ztYPskSdJirNLBul4spcxoTUwS4HDgm6WUC+q0/aiCgn2A05NsRRUE7FRKubEuczBwXZItSyl3d7CdkiSpRSdHCDatTwk8kOTcJJvW6a8FxgCXdRUspcwDrgXeVieNB+YCNzTqux54plFGkiT1kU4FBDcD+1N9yz+QKgC4Icmr69cAM1u2mdnIGwM8XkopXZn161mNMq+Q5KAk05JM60QnJElaWXXklEEp5dfN9SQ3AfcD+wE3dWIfi9nvZGByvc/SQ3FJkrQYfXLZYSllLvAnYHOga17B+i3F1m/kzQDWq+cbAAvnHoxulJEkSX2kTwKCJKsDrwemAw9Qfajv1pK/My/PGbgRGE41l6DLeGBNFp1XIEmS+kBHThkkOQm4CPgr1bf6r1B9mP+klFKSnAIcm+Qu4B7gy1STCM8GKKXcmeQSqisODqqrPR242CsMJEnqe5267HBj4BxgFPA41byBt5ZSHqrzvwUMA04DRlJNQnxXKeXpRh37AN8HLq3Xfwl8tkPtkyRJS9CpSYUf7iG/AJPqZXFlngQ+2on2aPnLkKE0poB03AYbvYbHHvlrn9UvSSu7Tt6YSCuxsuAFdjzukj6r/+YTJvZZ3ZIkH24kSZIwIJAkSRgQSJIkDAgkSRIGBJIkCQMCSZKEAYEkScKAQJIkYUAgSZIwIJAkSRgQSJIkDAgkSRIGBJIkCQMCSZKEAYEkScKAQJIkYUAgSZIwIJAkSRgQSJIkDAgkSRIGBJIkCQMCSZKEAYEkScKAQJIkYUAgSZIwIJAkSRgQSJIkDAg0SGTIUJL02bLhxmP7u4uS1K9W6UQlSY4B3g9sCcwHbgKOKaXc0SgzBdivZdObSylvbZRZDTgJ+AgwDLgSOKSU8kgn2qnBqyx4gR2Pu6TP6r/5hIl9VrckDQadGiGYAPwQeBvwDuBF4Iok67aUuwLYoLG8pyX/FGBvqoBgZ2Bt4OIkQzrUTkmS1I2OjBCUUnZvrif5GDAH+HvgokbW/FLKjO7qSLIOcADwiVLK5Y16HgJ2BS7tRFslSdIr9dUcgrXqup9sSd8pyawk9yQ5I8noRt6bgKHAZV0JpZSHgTupRh4kSVIf6cgIQTdOBW4DbmykXQJcCDwAjAO+ClyV5E2llPnAGGABMLulrp
l13iskOQg4qKMtlyRpJdTxgCDJycBOwE6llAVd6aWUcxvFbk9yK9XpgD2oAoWlVkqZDEyu91t63WhJklZyHT1lkOS7VBMC31FKuX9JZUspjwGPAJvXSTOAIcColqLr13mSJKmPdCwgSHIqLwcDd7VRfhSwETC9TroVeAHYrVFmY2Ar4IZOtVOSJL1Sp+5DcBrwMeB9wJNJus75zy2lzE0yHJgEXEAVAIwDvgHMAn4OUEqZk+Q/gG8lmQU8AZwM/JHqckVJktRHOjWH4JD655Ut6cdTBQILgG2BjwMjqIKC3wD/VEp5ulH+cKp7GPyUl29M9PHmXARJktR5nboPQXrInwfsvqQydbn5wKH1IkmSlhOfZSBJkgwIJEmSAYEkScKAQJIkYUAgAZAhQ0nSZ8uGG4/t7y5K0hL11bMMpEGlLHiBHY+7pM/qv/mEiX1WtyR1giMEkiTJgECSJBkQSJIkDAgkSRIGBJIkCQMCSZKEAYEkScKAQJIkYUAgSZIwIJCWi76+NbK3R5a0rLx1sbQc9PWtkcHbI0taNo4QSJIkAwJJkmRAIEmSMCCQJEkYEEiSJAwIpBVGX1/a6GWN0orNyw6lFURfX9roZY3Sis0RAkltcQRCWrE5QiCpLY5ASCs2RwgkSZIBgSRJGqABQZJDkjyQ5LkktybZub/bJEnSimzABQRJPgScCnwd+P+AG4BfJ3HGkbQC6+tJi6usNsxJkdISDMRJhUcAU0opZ9TrhyaZCHwaOKb/miWpLy2PSYt9Wf9vv7YnSfqs/g02eg2PPfLXPqtfGlABQZJVgTcBJ7VkXQa8bfm3SJLaM9ivwthw47FMf/ThPqu/rwOawd7+gSCllP5uw0JJNgQeBf6hlHJtI/04YN9SypYt5Q8CDqpXtwHuWF5t7SOjgNn93YgOsB8Dx4rQB1gx+rEi9AHsx0CyZSllrU5VNqBGCJZWKWUyMBkgybRSyg793KRlsiL0AezHQLIi9AFWjH6sCH0A+zGQJJnWyfoG2qTC2cACYP2W9PWBGcu/OZIkrRwGVEBQSnkeuBXYrSVrN6qrDSRJUh8YiKcMTgamJvktcD3wKWBD4Ec9bDe5rxu2HKwIfQD7MZCsCH2AFaMfK0IfwH4MJB3tw4CaVNglySHAUcAGVBMFP9+cZChJkjprQAYEkiRp+RpQcwgkSVL/GPQBwWB77kGSSUlKyzKjkZ+6zGNJ5iW5OsnW/dzmtyf5ZZJH6/bu35LfY5uTjEwyNcmcepmaZMQA68eUbo7NTS1lVkvy/SSzkzxT17fxcuzDMUluSfJUkseTXJRkm5YyA/p4tNmHwXAsPpPkj3U/nkpyY5I9GvkD+jgsRT8G/LFoVf+OlSQ/aKQNiuPRaEt3fejTYzGoA4IM3uce3E01P6Jr2baRdxRwJHAo8GZgFnB5ko7dfKIXhlPN5TgMmNdNfjttPhvYHphYL9sDU/uwzd3pqR8AV7DosXlPS/4pwN7AR4CdgbWBi5MM6YsGd2MC8EOqO3e+A3gRuCLJuo0yA/14TKDnPsDAPxaPAF+keu92AK4C/jvJG+r8gX4cuvTUDxj4x2KhJG+lumHdH1uyBsvxWFIfoC+PRSll0C7AzcAZLWn3At/o77Ytoc2TgDsWkxdgOvClRtow4Gng4P5ue92eucD+S9NmYCugAH/fKLNTnbblQOhHnTYFuHgJ26wDPE9118yutNcALwG791M/hlPdu2PPwXo8WvswWI9F3Ya/AQcPxuPQXT8G27Go23IfsAtwNfCDOn3QHI/F9WF5HItBO0KQl597cFlL1mB47sGm9bDVA0nOTbJpnf5aYAyNPpVS5gHXMnD71E6bx1N9ADfvJXE98AwDr187JZmV5J4kZyQZ3ch7EzCURfv6MHAn/dePtahG+p6s1wfj8WjtQ5dBcyySDEnyYarg5gYG53Horh9dBsuxmAycX0r5TUv6YDoei+tDlz47FgPxPgTtGgUMAWa2pM8Edl3+zWnbzcD+wF
3AaODLwA31uawxdZnu+rTR8mrgUmqnzWOAx0sdrgKUUkqSWY3tB4JLgAuBB4BxwFeBq5K8qZQyn6qtC3jl/c9n0n/9OBW4DbixXh+Mx6O1DzBIjkWSbanavTrVh8k/llJuT9L1z3dQHIfF9aPOHizH4kBgM+Cj3WQPir+LHvoAfXwsBnNAMCiVUn7dXK8nhNwP7Afc1O1GWi5KKec2Vm9PcivwELAH1R/hgJLkZKohzZ1KKQv6uz29sbg+DKJjcTewHdVQ7QeAnySZ0K8t6p1u+1FKuWMwHIskW1LNJduplPJCf7enN9rpQ18fi0F7yoAV5LkHpZS5wJ+AzXm53YOpT+20eQawXvLyw+Lr16MZuP2ilPIY1YSrzeukGVSjUqNaii7345Pku1STht5RSrm/kTVojscS+vAKA/VYlFKeL6X8pZRyaynlGKqRjs8ziI4DLLEf3ZUdiMdifL3/PyV5McmLwD8Ah9Svn2i0aXFt7O/jscQ+JFmtdYNOH4tBGxCUFeS5B0lWB15PNeHlAaqDtltL/s4M3D610+Ybqc5Jjm9sNx5Yk4HbL5KMohpOnF4n3Qq8wKJ93ZhqMtJy60eSU3n5g/SuluxBcTx66EN35QfksejGq4DVGCTHYQm6+vEKA/RY/DfV1VrbNZZpwLn163sY+Mejpz4837pBx4/F8po52UezMT9Uv0mfrDt8KtX5r036u21LaPNJVFHfa4EdgYuBp7raTHX5zxzg/cA29S/DY8Ba/djm4Y1f0GeB4+rXY9ttM/Br4HaqP7Dx9euLBko/6ryT6raNo7o07kaq6LvZj3+v03alutT1N1TfpoYspz6cVv++vIPqnGDXMrxRZkAfj576MIiOxTepPlDGUf0j/wbVbO53D4bj0E4/BsuxWEy/rmbRGfqD4ngsrg/L41j0Syc7/IYdAjwIzKeKjt7e323qob1dv4TPA48CFwB/18gP1aWJ04HngGuAbfq5zROoLr1pXaa022ZgJPCfVB8ET9WvRwyUflBdgnQp1bXJz1Odl5sCvKaljtWA71MNQT4LXNRapo/70F37CzBpaX6H+vN49NSHQXQsptRtm1+39Qoal3YN9OPQTj8Gy7FYTL+uZtGAYFAcj8X1YXkcC59lIEmSBu8cAkmS1DkGBJIkyYBAkiQZEEiSJAwIJEkSBgSSJAkDAkn9KMm4JCXJDv3dFmllZ0AgrQSSTEly8creBkmLZ0AgSZIMCKSVXZJ1kkxOMivJ00muaQ7hJ9k/ydwk70xyR5JnkvwmyWtb6jkmycy67FlJ/jXJg3XeJKpHfO9RnyIoLY8J3iTJ5UmeTfLnJK0PLZPUxwwIpJVY/XjX/6F6Ytp7qR6Gci1wVZINGkVXA44B/pnq4SojgB816vkw8K/Al4DtgTuBIxrbnwT8jOo++RvUS/Ppa18Dvge8EbgFODfJ8E71U1LPDAikldsuVE98/EAp5bellL+UUr4C3A98rFFuFeAzdZk/Un3AT2g8O/4wqodd/biUck8p5RvAzV0bl1LmAvOA+aWUGfXSfJzrd0spF5VS7gWOBdat2yVpOTEgkFZubwLWAB6vh/rnJplL9XjY1zXKzS+l3N1YfwxYlerpcACvB37bUvfNtO+PLXUDjF6K7SUto1X6uwGS+tWrgJnAzt3kPdV4/WJLXtdjUjv1peKFhRWXUuqBB7+wSMuRAYG0cvsdsD7wUinl/mWo5y7gzcD/baS9paXM88CQZdiHpD5kQCCtPNZO0npe/i/A9cAvkhxF9cE+BpgIXFFKua7Nuk8FzkxyC3Ad8I/AjsCTjTIPAu9OsiXwBDCntx2R1HkGBNLKY2fg9y1pFwDvAb4KnEF13n4mVZBwVrsVl1LOTbIp8E2qOQkXUl2FsFej2BnABGAaMJxqQuODS98NSX0hpZSeS0nSUkryc2CVUsqe/d0WST1zhEDSMkuyBvBp4BKqCYh7U40O7N2f7ZLUPkcIJC2zJMOAi6hubDQMuBc4sZRydr
82TFLbDAgkSZLX+UqSJAMCSZKEAYEkScKAQJIkYUAgSZIwIJAkScD/A2rEPCgAhvsjAAAAAElFTkSuQmCC\n",
-      "text/plain": [
-       ""
-      ]
-     },
-     "metadata": {},
-     "output_type": "display_data"
-    }
-   ],
-   "source": [
-    "%%time\n",
-    "from matplotlib import cycler, pyplot as plt\n",
-    "%matplotlib inline\n",
-    "\n",
-    "# Document lengths.\n",
-    "lens = [len(doc) for doc in scm_corpus]\n",
-    "\n",
-    "# Plot.\n",
-    "plt.rc('figure', figsize=(8,6))\n",
-    "plt.rc('font', size=14)\n",
-    "plt.rc('lines', linewidth=2)\n",
-    "plt.rc('axes', prop_cycle=cycler('color', ('#377eb8','#e41a1c','#4daf4a',\n",
-    "                                           '#984ea3','#ff7f00','#ffff33')))\n",
-    "# Histogram.\n",
-    "plt.hist(lens, bins=20, edgecolor=\"k\")\n",
-    "# Average length.\n",
-    "avg_len = sum(lens) / float(len(lens))\n",
-    "plt.axvline(avg_len, color='#e41a1c')\n",
-    "plt.title('Histogram of document lengths.')\n",
-    "plt.xlabel('Length')\n",
-    "plt.xlim((0, 450))\n",
-    "plt.text(100, 800, 'mean = %.2f' % avg_len)\n",
-    "plt.show()"
+    "from gensim.utils import simple_preprocess\n",
+    "from nltk.corpus import stopwords\n",
+    "from nltk import download\n",
+    "import wget\n",
+    "\n",
+    "download(\"stopwords\")  # Download stopwords list.\n",
+    "stopwords = set(stopwords.words(\"english\"))\n",
+    "\n",
+    "def preprocess(doc):\n",
+    "    doc = sub(r'<img[^<>]+(>|$)', \" image_token \", doc)\n",
+    "    doc = sub(r'<[^<>]+(>|$)', \" \", doc)\n",
+    "    doc = sub(r'\\[img_assist[^]]*?\\]', \" \", doc)\n",
+    "    doc = sub(r'http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\\(\\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+', \" url_token \", doc)\n",
+    "    return [token for token in simple_preprocess(doc, min_len=0, max_len=float(\"inf\")) if token not in stopwords]\n",
+    "\n",
+    "if not isfile(\"semeval-2016_2017-task3-subtaskA-unannotated-english.json.gz\"):  # TODO: Replace with a gensim-data call.\n",
+    "    
wget.download(\"https://github.com/Witiko/semeval-2016_2017-task3-subtaskA-unannotated-english/releases/download/2018-01-29/semeval-2016_2017-task3-subtaskA-unannotated-english.json.gz\")\n",
+    "with gzip.open(\"semeval-2016_2017-task3-subtaskA-unannotated-english.json.gz\", \"rt\") as json_file:\n",
+    "    json_data = json.loads(json_file.read())\n",
+    "    corpus = list(chain(*[\n",
+    "        chain(\n",
+    "            [preprocess(thread[\"RelQuestion\"][\"RelQSubject\"]), preprocess(thread[\"RelQuestion\"][\"RelQBody\"])],\n",
+    "            [preprocess(relcomment[\"RelCText\"]) for relcomment in thread[\"RelComments\"]])\n",
+    "        for thread in json_data]))\n",
+    "\n",
+    "print(\"Number of documents: %d\" % len(corpus))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-    "Now we want to initialize the similarity class with a corpus and a word2vec model (which provides the embeddings and the `softcossim` method itself)."
+    "Using the corpus we have just built, we will now construct a [dictionary][], a [TF-IDF model][tfidf], a [word2vec model][word2vec], and a term similarity matrix.\n",
+    "\n",
+    "[dictionary]: https://radimrehurek.com/gensim/corpora/dictionary.html\n",
+    "[tfidf]: https://radimrehurek.com/gensim/models/tfidfmodel.html\n",
+    "[word2vec]: https://radimrehurek.com/gensim/models/word2vec.html"
   ]
  },
  {
   "cell_type": "code",
-   "execution_count": 11,
+   "execution_count": 8,
   "metadata": {
-    "scrolled": false
+    "scrolled": true
   },
   "outputs": [
+    {
+     "name": "stderr",
+     "output_type": "stream",
+     "text": [
+      "2018-02-05 10:52:53,477 : INFO : built Dictionary(462807 unique tokens: ['reclarify', 'depeneded', 'autralia', 'cloudnight', 'openmoko']...) 
from 2274338 documents (total 40096354 corpus positions)\n", + "2018-02-05 10:56:50,633 : INFO : training on a 200481770 raw words (192577574 effective words) took 224.3s, 858402 effective words/s\n", + "2018-02-05 11:13:14,895 : INFO : constructed a term similarity matrix with 0.003564 % nonzero elements\n" + ] + }, { "name": "stdout", "output_type": "stream", "text": [ - "Cell took 41.35 seconds to run.\n" + "Number of unique words: 462807\n", + "CPU times: user 1h 2min 21s, sys: 12min 56s, total: 1h 15min 17s\n", + "Wall time: 21min 27s\n" ] } ], "source": [ "%%time\n", + "from gensim.corpora import Dictionary\n", + "from gensim.models import TfidfModel\n", "from gensim.models import Word2Vec\n", + "from multiprocessing import cpu_count\n", "\n", - "# Train Word2Vec on all the restaurants.\n", - "model = Word2Vec(w2v_corpus, workers=3, size=100)\n", + "dictionary = Dictionary(corpus)\n", + "tfidf = TfidfModel(dictionary=dictionary)\n", + "w2v_model = Word2Vec(corpus, workers=cpu_count(), min_count=5, size=300, seed=12345)\n", + "similarity_matrix = w2v_model.wv.similarity_matrix(dictionary, tfidf, nonzero_limit=100)\n", "\n", - "# Initialize SoftCosineSimilarity.\n", - "from gensim import corpora\n", - "from gensim.similarities import SoftCosineSimilarity\n", - "num_best = 10\n", - "dictionary = corpora.Dictionary(scm_corpus)\n", - "scm_corpus = [dictionary.doc2bow(document) for document in scm_corpus]\n", - "similarity_matrix = model.wv.similarity_matrix(dictionary)\n", - "instance = SoftCosineSimilarity(scm_corpus, similarity_matrix, num_best=num_best)" + "print(\"Number of unique words: %d\" % len(dictionary))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ - "The `num_best` parameter decides how many results the queries return. Now let's try making a query. 
The output is a list of indeces and similarities of documents in the corpus, sorted by similarity.\n", - "\n", - "Note that the output format is slightly different when `num_best` is `None` (i.e. not assigned). In this case, you get an array of similarities, corresponding to each of the documents in the corpus.\n", - "\n", - "The query below is taken directly from one of the reviews in the corpus. Let's see if there are other reviews that are similar to this one." + "Next, we will load the validation and test datasets that were used by the SemEval 2016 and 2017 contestants. The datasets contain 208 original questions posted by the forum members. For each question, there is a list of 10 threads with a human annotation denoting whether or not the thread is relevant to the original question. Our task will be to order the threads so that relevant threads rank above irrelevant threads." ] }, { "cell_type": "code", - "execution_count": 12, + "execution_count": 9, "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "Cell took 47.43 seconds to run.\n" - ] - } - ], + "outputs": [], "source": [ - "%%time\n", - "sent = 'Yummy! Great view of the Bellagio Fountain show.'\n", - "query = dictionary.doc2bow(preprocess(sent))\n", - "\n", - "sims = instance[query] # A query is simply a \"look-up\" in the similarity class." + "# TODO: Replace with a gensim-data call.\n", + "if not isfile(\"semeval-2016_2017-task3-subtaskB-english.json.gz\"):\n", + " wget.download(\"https://github.com/Witiko/semeval-2016_2017-task3-subtaskB-english/releases/download/2018-01-29/semeval-2016_2017-task3-subtaskB-english.json.gz\")\n", + "with gzip.open(\"semeval-2016_2017-task3-subtaskB-english.json.gz\", \"rt\") as json_file:\n", + " datasets = json.loads(json_file.read())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ - "The query and the most similar documents, together with the similarities, are printed below. 
We see that the retrieved documents are discussing the same thing as the query, although using different words. The query talks about the food being \"yummy\", while the second best result talk about it being \"good\"." + "Finally, we will perform an evaluation to compare three unsupervised similarity measures – the Soft Cosine Measure, two different implementations of the [Word Mover's Distance][wmd], and standard cosine similarity. We will use the [Mean Average Precision (MAP)][map] as an evaluation measure and 10-fold cross-validation to get an estimate of the variance of MAP for each similarity measure.\n", + "\n", + "[wmd]: http://vene.ro/blog/word-movers-distance-in-python.html\n", + "[map]: https://medium.com/@pds.bangalore/mean-average-precision-abd77d0b9a7e" ] }, { "cell_type": "code", - "execution_count": 13, + "execution_count": 10, "metadata": {}, + "outputs": [], + "source": [ + "from math import isnan\n", + "from time import time\n", + "\n", + "from gensim.similarities import MatrixSimilarity, WmdSimilarity, SoftCosineSimilarity\n", + "import numpy as np\n", + "from sklearn.model_selection import KFold\n", + "from wmd import WMD\n", + "\n", + "def produce_test_data(dataset):\n", + " for orgquestion in datasets[dataset]:\n", + " query = preprocess(orgquestion[\"OrgQSubject\"]) + preprocess(orgquestion[\"OrgQBody\"])\n", + " documents = [\n", + " preprocess(thread[\"RelQuestion\"][\"RelQSubject\"]) + preprocess(thread[\"RelQuestion\"][\"RelQBody\"])\n", + " for thread in orgquestion[\"Threads\"]]\n", + " relevance = [\n", + " thread[\"RelQuestion\"][\"RELQ_RELEVANCE2ORGQ\"] in (\"PerfectMatch\", \"Relevant\")\n", + " for thread in orgquestion[\"Threads\"]]\n", + " yield query, documents, relevance\n", + "\n", + "def cossim(query, documents):\n", + " # Compute cosine similarity between the query and the documents.\n", + " query = tfidf[dictionary.doc2bow(query)]\n", + " index = MatrixSimilarity(\n", + " tfidf[[dictionary.doc2bow(document) for document in 
documents]],\n", + " num_features=len(dictionary))\n", + " similarities = index[query]\n", + " return similarities\n", + "\n", + "def softcossim(query, documents):\n", + " # Compute Soft Cosine Measure between the query and the documents.\n", + " query = tfidf[dictionary.doc2bow(query)]\n", + " index = SoftCosineSimilarity(\n", + " tfidf[[dictionary.doc2bow(document) for document in documents]],\n", + " similarity_matrix)\n", + " similarities = index[query]\n", + " return similarities\n", + "\n", + "def wmd_gensim(query, documents):\n", + " # Compute Word Mover's Distance as implemented in PyEMD by William Mayner\n", + " # between the query and the documents.\n", + " index = WmdSimilarity(documents, w2v_model)\n", + " similarities = index[query]\n", + " return similarities\n", + "\n", + "def wmd_relax(query, documents):\n", + " # Compute Word Mover's Distance as implemented in WMD by Source{d}\n", + " # between the query and the documents.\n", + " words = [word for word in set(chain(query, *documents)) if word in w2v_model.wv]\n", + " indices, words = zip(*sorted((\n", + " (index, word) for (index, _), word in zip(dictionary.doc2bow(words), words))))\n", + " query = dict(tfidf[dictionary.doc2bow(query)])\n", + " query = [\n", + " (new_index, query[dict_index])\n", + " for new_index, dict_index in enumerate(indices)\n", + " if dict_index in query]\n", + " documents = [dict(tfidf[dictionary.doc2bow(document)]) for document in documents]\n", + " documents = [[\n", + " (new_index, document[dict_index])\n", + " for new_index, dict_index in enumerate(indices)\n", + " if dict_index in document] for document in documents]\n", + " embeddings = np.array([w2v_model.wv[word] for word in words], dtype=np.float32)\n", + " nbow = dict(((index, (None, *zip(*document))) for index, document in enumerate(documents)))\n", + " nbow[\"query\"] = (None, *zip(*query))\n", + " distances = WMD(embeddings, nbow, vocabulary_min=1).nearest_neighbors(\"query\")\n", + " similarities = [-distance 
for _, distance in sorted(distances)]\n", + " return similarities\n", + "\n", + "strategies = {\n", + " \"cossim\" : cossim,\n", + " \"softcossim\": softcossim,\n", + " \"wmd-gensim\": wmd_gensim,\n", + " \"wmd-relax\": wmd_relax}\n", + "\n", + "def evaluate(split, strategy):\n", + " # Perform a single round of evaluation.\n", + " results = []\n", + " start_time = time()\n", + " for query, documents, relevance in split:\n", + " similarities = strategies[strategy](query, documents)\n", + " assert len(similarities) == len(documents)\n", + " precision = [\n", + " (num_correct + 1) / (num_total + 1) for num_correct, num_total in enumerate(\n", + " num_total for num_total, (_, relevant) in enumerate(\n", + " sorted(zip(similarities, relevance), reverse=True)) if relevant)]\n", + " average_precision = np.mean(precision) if precision else 0.0\n", + " results.append(average_precision)\n", + " return (np.mean(results) * 100, time() - start_time)\n", + "\n", + "def crossvalidate(args):\n", + " # Perform a cross-validation.\n", + " dataset, strategy = args\n", + " test_data = np.array(list(produce_test_data(dataset)))\n", + " kf = KFold(n_splits=10)\n", + " samples = []\n", + " for _, test_index in kf.split(test_data):\n", + " samples.append(evaluate(test_data[test_index], strategy))\n", + " return (np.mean(samples, axis=0), np.std(samples, axis=0))" + ] + }, + { + "cell_type": "code", + "execution_count": 11, + "metadata": { + "scrolled": true + }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ - "Query:\n", - "Yummy! Great view of the Bellagio Fountain show.\n", - "\n", - "sim = 1.0000\n", - "Yummy! Great view of the Bellagio Fountain show.\n", - "\n", - "sim = 0.8114\n", - "Food was good. Awesome service. Great view of the water show at the Bellagio\n", - "\n", - "sim = 0.7813\n", - "This is a great place to eat after a show. Great atmosphere with the Bellagio Fountain across the street. 
The food is really good.\n", - "\n", - "sim = 0.7719\n", - "Love this place! It has a great atmosphere, the food is consistently good, if you sit on the patio you can watch the Fountain show of the Bellagio.\n", - "\n", - "sim = 0.7680\n", - "Solid food; great service. Beautiful view of the Bellagio fountains across the street.\n", - "\n", - "sim = 0.7627\n", - "Nice French food with a great view if the Bellagio fountains\n", - "\n", - "sim = 0.7597\n", - "Great environment, great service and great food with relatively affordable price! What can be better than enjoying a glass of sweet Frangria under the sun while watching the fountain show at the Bellagio right across the street during your vacay?\n", - "\n", - "sim = 0.7585\n", - "Amazing food, amazing service, great view of Bellagio fountains\n", - "\n", - "sim = 0.7569\n", - "Consistently good food with a view of the fountains at bellagio.\n", - "\n", - "sim = 0.7565\n", - "Great food with a great view! Time it right with the bellagio fountains!\n" + "CPU times: user 1.49 s, sys: 1.28 s, total: 2.77 s\n", + "Wall time: 1min 42s\n" ] } ], "source": [ - "# Print the query and the retrieved documents, together with their similarities.\n", - "print('Query:')\n", - "print(sent)\n", - "for i in range(num_best):\n", - " print()\n", - " print('sim = %.4f' % sims[i][1])\n", - " print(documents[sims[i][0]])" + "%%time\n", + "from multiprocessing import Pool\n", + "\n", + "args_list = [\n", + " (dataset, technique)\n", + " for dataset in (\"2016-test\", \"2017-test\")\n", + " for technique in (\"softcossim\", \"wmd-gensim\", \"wmd-relax\", \"cossim\")]\n", + "with Pool() as pool:\n", + " results = pool.map(crossvalidate, args_list)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ - "Let us now remove the word \"yummy\" from the query. We can see that\n", - "\n", - "> Food was good. Awesome service. 
Great view of the water show at the Bellagio\n",
-    "\n",
-    "drops from the second to the seventh place even though it does not actually contain the word \"yummy\"."
+    "The table below shows the pointwise estimates of the means and standard deviations of the MAP scores and elapsed times. Baselines and winners for each year are displayed in bold. We can see that the Soft Cosine Measure gives a strong performance on both the 2016 and the 2017 datasets."
   ]
  },
  {
   "cell_type": "code",
-   "execution_count": 14,
+   "execution_count": 12,
   "metadata": {},
   "outputs": [
    {
-     "name": "stdout",
-     "output_type": "stream",
-     "text": [
-      "Query:\n",
-      "Great view of the Bellagio Fountain show.\n",
-      "\n",
-      "sim = 0.9591\n",
-      "Yummy! Great view of the Bellagio Fountain show.\n",
-      "\n",
-      "sim = 0.8020\n",
-      "Solid food; great service. Beautiful view of the Bellagio fountains across the street.\n",
-      "\n",
-      "sim = 0.7899\n",
-      "Nice French food with a great view if the Bellagio fountains\n",
-      "\n",
-      "sim = 0.7820\n",
-      "Food was good. Awesome service. Great view of the water show at the Bellagio\n",
-      "\n",
-      "sim = 0.7797\n",
-      "This is a great place to eat after a show. Great atmosphere with the Bellagio Fountain across the street. The food is really good.\n",
-      "\n",
-      "sim = 0.7684\n",
-      "Love this place! It has a great atmosphere, the food is consistently good, if you sit on the patio you can watch the Fountain show of the Bellagio.\n",
-      "\n",
-      "sim = 0.7648\n",
-      "Consistently good food with a view of the fountains at bellagio.\n",
-      "\n",
-      "sim = 0.7641\n",
-      "Great food with a great view! Time it right with the bellagio fountains!\n",
-      "\n",
-      "sim = 0.7631\n",
-      "Great environment, great service and great food with relatively affordable price! 
What can be better than enjoying a glass of sweet Frangria under the sun while watching the fountain show at the Bellagio right across the street during your vacay?\n", - "\n", - "sim = 0.7519\n", - "They have very unique thin steaks that have great flavor. Directly across from the fountains at the Bellagio so you get a great view with dinner as well.\n", - "Cell took 46.75 seconds to run.\n" - ] + "data": { + "text/markdown": [ + "\n", + "\n", + "Dataset | Strategy | MAP score | Elapsed time (sec)\n", + ":---|:---|:---|---:\n", + "2016-test|softcossim|77.29 ±10.35|0.20 ±0.06\n", + "2016-test|**Winner (UH-PRHLT-primary)**|76.70 ±0.00|\n", + "2016-test|cossim|76.45 ±10.40|0.48 ±0.07\n", + "2016-test|wmd-gensim|76.07 ±11.52|8.36 ±2.05\n", + "2016-test|**Baseline 1 (IR)**|74.75 ±0.00|\n", + "2016-test|wmd-relax|73.01 ±10.33|0.97 ±0.16\n", + "2016-test|**Baseline 2 (random)**|46.98 ±0.00|\n", + "\n", + "\n", + "Dataset | Strategy | MAP score | Elapsed time (sec)\n", + ":---|:---|:---|---:\n", + "2017-test|**Winner (SimBow-primary)**|47.22 ±0.00|\n", + "2017-test|softcossim|46.06 ±18.00|0.15 ±0.03\n", + "2017-test|cossim|44.38 ±14.71|0.43 ±0.07\n", + "2017-test|wmd-gensim|44.20 ±16.02|9.78 ±1.80\n", + "2017-test|**Baseline 1 (IR)**|41.85 ±0.00|\n", + "2017-test|wmd-relax|41.24 ±14.87|1.00 ±0.26\n", + "2017-test|**Baseline 2 (random)**|29.81 ±0.00|" + ], + "text/plain": [ + "" + ] + }, + "metadata": {}, + "output_type": "display_data" } ], "source": [ - "%%time\n", - "sent = 'Great view of the Bellagio Fountain show.'\n", - "query = dictionary.doc2bow(preprocess(sent))\n", - "\n", - "sims = instance[query] # A query is simply a \"look-up\" in the similarity class.\n", - "\n", - "print('Query:')\n", - "print(sent)\n", - "for i in range(num_best):\n", - " print()\n", - " print('sim = %.4f' % sims[i][1])\n", - " print(documents[sims[i][0]])" + "from IPython.display import display, Markdown\n", + "\n", + "output = []\n", + "baselines = [\n", + " ((\"2016-test\", \"**Winner 
(UH-PRHLT-primary)**\"), ((76.70, 0), (0, 0))),\n", + " ((\"2016-test\", \"**Baseline 1 (IR)**\"), ((74.75, 0), (0, 0))),\n", + " ((\"2016-test\", \"**Baseline 2 (random)**\"), ((46.98, 0), (0, 0))),\n", + " ((\"2017-test\", \"**Winner (SimBow-primary)**\"), ((47.22, 0), (0, 0))),\n", + " ((\"2017-test\", \"**Baseline 1 (IR)**\"), ((41.85, 0), (0, 0))),\n", + " ((\"2017-test\", \"**Baseline 2 (random)**\"), ((29.81, 0), (0, 0)))]\n", + "table_header = [\"Dataset | Strategy | MAP score | Elapsed time (sec)\", \":---|:---|:---|---:\"]\n", + "for row, ((dataset, technique), ((mean_map_score, mean_duration), (std_map_score, std_duration))) \\\n", + " in enumerate(sorted(chain(zip(args_list, results), baselines), key=lambda x: (x[0][0], -x[1][0][0]))):\n", + " if row % (len(strategies) + 3) == 0:\n", + " output.extend(chain([\"\\n\"], table_header))\n", + " map_score = \"%.02f ±%.02f\" % (mean_map_score, std_map_score)\n", + " duration = \"%.02f ±%.02f\" % (mean_duration, std_duration) if mean_duration else \"\"\n", + " output.append(\"%s|%s|%s|%s\" % (dataset, technique, map_score, duration))\n", + "\n", + "display(Markdown('\\n'.join(output)))" ] }, { @@ -564,13 +570,9 @@ "source": [ "## References\n", "\n", - "1. Grigori Sidorov et al. [*Soft Similarity and Soft Cosine Measure: Similarity of Features in Vector Space Model*][1], 2014.\n", - "* Delphine Charlet and Geraldine Damnati, [*SimBow at SemEval-2017 Task 3: Soft-Cosine Semantic Similarity between Questions for Community Question Answering*][2], 2017.\n", - "* Thomas Mikolov et al. 
[*Efficient Estimation of Word Representations in Vector Space*][3], 2013.\n",
-    "\n",
-    "  [1]: http://www.scielo.org.mx/pdf/cys/v18n3/v18n3a7.pdf (Soft Measure and Soft Cosine Measure: Measure of Features in Vector Space Model)\n",
-    "  [2]: http://www.aclweb.org/anthology/S17-2051 (Simbow at semeval-2017 task 3: Soft-cosine semantic measure between questions for community question answering)\n",
-    "  [3]: https://github.com/witiko-masters-thesis/thesis/blob/master/main.pdf (Vector Space Representations in Information Retrieval)"
+    "1. Grigori Sidorov et al. *Soft Similarity and Soft Cosine Measure: Similarity of Features in Vector Space Model*, 2014. ([link to PDF](http://www.scielo.org.mx/pdf/cys/v18n3/v18n3a7.pdf))\n",
+    "2. Delphine Charlet and Geraldine Damnati. *SimBow at SemEval-2017 Task 3: Soft-Cosine Semantic Similarity between Questions for Community Question Answering*, 2017. ([link to PDF](http://www.aclweb.org/anthology/S17-2051))\n",
+    "3. Tomas Mikolov et al. *Efficient Estimation of Word Representations in Vector Space*, 2013. ([link to PDF](https://arxiv.org/pdf/1301.3781.pdf))"
   ]
  }
 ],
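As a reviewer's note on the diff above: the `softcossim` cells all reduce to the soft cosine formula from reference [1], which is easy to sanity-check in isolation. Below is a minimal NumPy sketch; the toy vocabulary, the bag-of-words vectors, and the hand-made word similarity matrix `S` are illustrative assumptions, not values from the notebook (there, `S` is built from word2vec embeddings via `similarity_matrix`).

```python
import numpy as np

# Toy vocabulary ["play", "game", "weather"]; the two bag-of-words vectors
# share no terms, so ordinary (hard) cosine similarity between them is zero.
x = np.array([1.0, 0.0, 0.0])  # sentence containing only "play"
y = np.array([0.0, 1.0, 0.0])  # sentence containing only "game"

# Hand-made word similarity matrix S (illustrative values; the notebook
# derives this matrix from word2vec embeddings instead).
S = np.array([
    [1.0, 0.8, 0.1],
    [0.8, 1.0, 0.1],
    [0.1, 0.1, 1.0],
])

def soft_cosine(x, y, S):
    # softcossim(x, y) = x^T S y / (sqrt(x^T S x) * sqrt(y^T S y))
    return (x @ S @ y) / (np.sqrt(x @ S @ x) * np.sqrt(y @ S @ y))

print(soft_cosine(x, y, S))          # 0.8: related words contribute
print(soft_cosine(x, y, np.eye(3)))  # 0.0: identity S reduces to hard cosine
```

With `S` set to the identity matrix the measure degrades to plain cosine similarity, which is why two sentences with no words in common score zero there while scoring 0.8 under the non-orthogonal basis.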