Pretty cool stuff.
Reading the code, I'm wondering why there are so many levels of indirection between indexes and the word2vec sentence matrices.
The flow looks like: parsing -> building an "alphabet" that maps words to indexes -> encoding questions/answers as sequences of alphabet indexes -> building an alphabet-index-to-word2vec mapping.
This also requires an nn lookup layer that maps each index to its word2vec vector before the convolution.
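Roughly what I mean, as a sketch (all names here are mine, mirroring the steps above, not quoting the repo):

```python
import numpy as np

# Hypothetical sketch of the indirection described above; names are
# illustrative, not from the repo.
corpus_tokens = ["what", "is", "the", "answer", "what", "is"]
w2v = {w: np.random.rand(300) for w in set(corpus_tokens)}  # stand-in for a real word2vec model

alphabet = {}                                   # word -> index, built at parse time
for word in corpus_tokens:
    alphabet.setdefault(word, len(alphabet))

question_idx = [alphabet[w] for w in ["what", "is", "the", "answer"]]

lookup = np.zeros((len(alphabet), 300))         # alphabet index -> word2vec row
for word, idx in alphabet.items():
    if word in w2v:
        lookup[idx] = w2v[word]

# the nn lookup layer before the convolution then effectively does:
sentence_matrix = lookup[question_idx]          # shape: (sentence_len, 300)
```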
Is there a reason to bother with indexes at all, rather than transforming everything straight into a word2vec matrix, either at parsing time or just before the feed-forward phase?
It seems the code would then be more tolerant of being fed new document pairs containing words that exist in the word2vec model but not in the "alphabet" mapping.
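To make the suggestion concrete, here is a minimal sketch of the direct route, assuming gensim for loading the vectors (the repo may load them differently; the path is just an example). Unknown words fall back to zeros, which is where the tolerance to unseen words comes from:

```python
import numpy as np
from gensim.models import KeyedVectors

# Assumption: vectors loaded via gensim; any word2vec loader would do.
w2v = KeyedVectors.load_word2vec_format("GoogleNews-vectors-negative300.bin",
                                        binary=True)

def sentence_matrix(tokens, w2v, dim=300):
    """Map tokens straight to their word2vec vectors, no alphabet needed.

    Words missing from the model get a zero vector, so new document
    pairs containing words unseen at training time still produce a
    valid matrix for the convolution.
    """
    return np.stack([w2v[t] if t in w2v else np.zeros(dim) for t in tokens])

m = sentence_matrix("what is the answer".split(), w2v)  # shape: (4, 300)
```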