CBOW, or Continuous Bag of Words, uses embeddings to train a neural network where the context is represented by multiple words for a given target word.
For example, we could use “cat” and “tree” as context words for “climbed” as the target word. This calls for a modification of the neural network architecture: the input-to-hidden connections are replicated C times, where C is the number of context words, and a divide-by-C (averaging) operation is added in the hidden-layer neurons.
The CBOW architecture is pretty simple and contains (see the sketch after this list):
- the word embeddings as inputs (looked up by word index, idx)
- the linear model as the hidden layer
- the log_softmax as the output
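A minimal PyTorch sketch of these three pieces (class and variable names are illustrative, not taken from the original code; the divide-by-C step shows up as the mean over the context embeddings):

```python
import torch.nn as nn
import torch.nn.functional as F

class CBOW(nn.Module):
    def __init__(self, vocab_size, embedding_dim=100):
        super().__init__()
        # word embeddings, looked up by word index (idx)
        self.embeddings = nn.Embedding(vocab_size, embedding_dim)
        # linear model acting as the hidden layer / output projection
        self.linear = nn.Linear(embedding_dim, vocab_size)

    def forward(self, context_idxs):
        # (C, embedding_dim) -> mean over the C context words = divide by C
        hidden = self.embeddings(context_idxs).mean(dim=0)
        # log_softmax over the vocabulary as the output
        return F.log_softmax(self.linear(hidden), dim=-1).unsqueeze(0)
```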
sentence = "we are about to study the idea of computational"
The input is the context words around the centered target word, [context <--, context <-, target, context ->, context -->], in the form ([context], target):
[(['we', 'are', 'to', 'study'], 'about'), (['are', 'about', 'study', 'the'], 'to'), (['about', 'to', 'the', 'idea'], 'study'), (['to', 'study', 'idea', 'of'], 'the'), (['study', 'the', 'of', 'computational'], 'idea')]
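One way to produce exactly those pairs, assuming two context words on each side of the target (a sketch; the variable names are not from the original code):

```python
sentence = "we are about to study the idea of computational"
tokens = sentence.split()

window = 2  # context words on each side of the target
data = []
for i in range(window, len(tokens) - window):
    context = tokens[i - window:i] + tokens[i + 1:i + window + 1]
    data.append((context, tokens[i]))

print(data)
# [(['we', 'are', 'to', 'study'], 'about'), (['are', 'about', 'study', 'the'], 'to'), ...]
```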
Verification after training, using the first training pair:
# (['we', 'are', 'to', 'study'], 'about')
word = predict(['we', 'are', 'to', 'study'])
# word == 'about'
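A sketch of what such a predict() could look like, assuming the CBOW model and the tokens from the sketches above (word_to_ix is a hypothetical vocabulary mapping, not the repository's own helper):

```python
import torch

# hypothetical vocabulary built from the training tokens
vocab = sorted(set(tokens))
word_to_ix = {w: i for i, w in enumerate(vocab)}
ix_to_word = {i: w for w, i in word_to_ix.items()}

model = CBOW(vocab_size=len(vocab), embedding_dim=100)

def predict(context):
    # embed the context words, run the model, return the most likely target word
    context_idxs = torch.tensor([word_to_ix[w] for w in context], dtype=torch.long)
    log_probs = model(context_idxs)
    return ix_to_word[int(log_probs.argmax(dim=-1))]

print(predict(['we', 'are', 'to', 'study']))  # should print 'about' after training
```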
Training on train-nn.txt, embedded_size=100, windowed_sz=4, input data.size=3298
<< loss   : 98.180715
<< success: 95.451788
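A rough idea of the training loop behind such numbers, pairing the log_softmax output with NLLLoss (hyperparameters are illustrative, not those used for train-nn.txt; model, data, and word_to_ix come from the sketches above):

```python
import torch
import torch.nn as nn
import torch.optim as optim

loss_fn = nn.NLLLoss()                      # pairs with the log_softmax output
optimizer = optim.SGD(model.parameters(), lr=0.01)

for epoch in range(100):
    total_loss = 0.0
    for context, target in data:
        context_idxs = torch.tensor([word_to_ix[w] for w in context], dtype=torch.long)
        target_idx = torch.tensor([word_to_ix[target]], dtype=torch.long)

        optimizer.zero_grad()
        log_probs = model(context_idxs)     # shape (1, vocab_size)
        loss = loss_fn(log_probs, target_idx)
        loss.backward()
        optimizer.step()
        total_loss += loss.item()
```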
Related implementations and further reading:
- NLPTools with text preprocessing
- nlp-starter-continuous-bag-of-words-cbow
- SentEval by Facebook
- InferSent by Facebook
- Mini-Word2Vec
- CBOW Is Not All You Need: Combining CBOW with the Compositional Matrix Space Model (word2mat)
- Context encoders as a simple but powerful extension of word2vec
- Corrected CBOW Performs as well as Skip-gram, with the C4 dataset (Colossal Clean Crawled Corpus) to download; repo: https://github.com/bloomberg/koan
- CBOW: nlp-starter-logsoftmax-nlloss-cross-entropy