[MRG] Refactor phrases #2976
Conversation
More detailed timing info, all measured on
@gojomo @mpenkov seems better across the board; please review and let's include this in 4.0.0. I'll update the unit tests next.
    return None, None

    phrase = self.delimiter.join([word_a] + in_between + [word_b])
    # XXX: Why do we care about *all* phrase tokens? Why not just score the start+end bigram?
@alexgarel @gojomo do you remember why we score bigrams on `word_a + in_between + word_b`, instead of just `word_a + word_b`? What's the point of including the common words in the score calculation?
In other words:
- the "bigram" count of the phrase "eye of the beholder" is now `['eye', 'of', 'the', 'beholder']`
- why not `['eye', 'beholder']` instead? (the phrases "eye of the beholder", "eye beholder", "eye the beholder", "eye of beholder" etc. would all then share the same bigram count)
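To make the question concrete, here's a minimal sketch of the two alternatives (values purely illustrative; only the `delimiter.join` line mirrors the snippet quoted above):

```python
delimiter = "_"
word_a, word_b = "eye", "beholder"
in_between = ["of", "the"]  # common_words tokens found between the two content words

# Current behaviour: the in-between tokens become part of the joined phrase,
# so each wording variant gets its own count.
phrase_all_tokens = delimiter.join([word_a] + in_between + [word_b])
print(phrase_all_tokens)   # eye_of_the_beholder

# Alternative raised above: score only the start+end bigram, so
# "eye of the beholder", "eye the beholder", "eye beholder", etc.
# would all share the same count.
phrase_bigram_only = delimiter.join([word_a, word_b])
print(phrase_bigram_only)  # eye_beholder
```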
I'd guess that tracks back to feature #1258, with an abortive implementation in #1567, arriving in #1568. It sounded useful to me in some situations, but hard enough to understand, and twisty enough in its effect on existing code, that I suggested a couple times it be a separate class, rather than options on `Phrases`.
(And relatedly: my hunch is this has no effect unless that non-default option is activated, because `in_between` is then always empty.)
Your hunch is correct – `common_words` is empty by default. I'm tempted to change the default to "all English articles + prepositions" though; it seems pretty useful.
I'm leaving the logic of `['eye', 'of', 'the', 'beholder']` vs `['eye', 'beholder']` unchanged. Scoring just the bigram makes more sense IMO, but I don't know if anyone's using `common_words`, and if they are, let's avoid surprises.
The effect of `common_words` on the code base is now minimal, both in lines-of-code and speed, so a separate class is not needed.
Re: defaults
Despite my own perspective as an English monoglot, I'd hate to change a language-agnostic algorithm that previously had no English-specific initialization to become English-specific by default – even in a major version bump, with lots of warning. (On the other hand, providing handy switches/presets that make it easy to enable an English-specific mode, and/or other languages, sounds useful.)
Implemented in #2979. Not sure which default to use… I agree language-agnostic is nice, but English is the overwhelmingly common use-case.
I eyeballed the exported phrases and common words do make a huge difference in quality.
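For illustration, a rough usage sketch of the opt-in preset idea. It assumes the 4.x API shape where the parameter is called `connector_words` and an English frozenset preset is exposed – treat those exact names as assumptions in the context of this discussion; the point is only that English-specific behaviour stays opt-in:

```python
from gensim.models.phrases import Phrases, ENGLISH_CONNECTOR_WORDS

# Toy corpus: each sentence is a list of lowercase tokens.
sentences = [
    ["eye", "of", "the", "beholder"],
    ["beauty", "is", "in", "the", "eye", "of", "the", "beholder"],
    ["the", "eye", "of", "the", "beholder", "decides"],
]

# Language-agnostic default: no connector words at all.
plain = Phrases(sentences, min_count=1, threshold=0.1)

# Opt-in English preset: articles, prepositions etc. are treated as connectors.
english = Phrases(sentences, min_count=1, threshold=0.1,
                  connector_words=ENGLISH_CONNECTOR_WORDS)

print(plain[["eye", "of", "the", "beholder"]])
print(english[["eye", "of", "the", "beholder"]])
```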
- removed the backward-compatibility tests for loading old models, because the wrappers use plain pickle and so don't support SaveLoad overrides
2ea2634 to aaa79dd
Following #2973, I set out to improve the documentation for our phrases module. While at it, I saw many inefficiencies and cumbersome constructs there, so I ended up with a larger refactor.
Benefits of this PR:
Not in this PR:
I also added migration code for `load()`, so old models continue to work.
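For readers wondering what that migration looks like, here is a hedged sketch of the general pattern (not the exact code in this PR; the `common_terms` → `common_words` attribute mapping below is an assumption used for illustration):

```python
from gensim.utils import SaveLoad

class Phrases(SaveLoad):
    """Illustrative stub only; the real class lives in gensim.models.phrases."""

    @classmethod
    def load(cls, *args, **kwargs):
        model = super(Phrases, cls).load(*args, **kwargs)
        # Hypothetical migration step: models saved by older gensim versions
        # stored the stop-word set under a different attribute name, so map it
        # onto the attribute the refactored class expects.
        if not hasattr(model, "common_words"):
            model.common_words = getattr(model, "common_terms", frozenset())
        return model
```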