CJK analyzer tokenization issues #34285
Comments
Pinging @elastic/es-search-aggs
@romseygeek Could you take a look at this one?
Thanks for the very detailed issue @Trey314159. It seems that all of these issues are solved by using some combination of the ICU normalizers and tokenizers.
The advantage of the cc @jimczi
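As a rough illustration of such a chain, a minimal sketch assuming the analysis-icu plugin is installed (the index and analyzer names here are made up for the example; the plugin provides the `icu_normalizer` char filter and `icu_tokenizer` by name):

```
curl -XPUT 'localhost:9200/icu_example' -H 'Content-Type: application/json' -d'
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_icu_analyzer": {
          "type": "custom",
          "char_filter": ["icu_normalizer"],
          "tokenizer": "icu_tokenizer"
        }
      }
    }
  }
}'
```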
Thanks for the feedback. I agree that not having the desired normalization for these relatively common invisible characters in the standard analyzer is a reasonable design choice, but I can't imagine anyone would expect that searching for hyphenation would fail to find hyphenation (the latter written with an embedded soft hyphen), because the two look exactly the same. Opening issues in Lucene for
I opened https://issues.apache.org/jira/browse/LUCENE-8526 for the
As explained by Steve in the Lucene issue, I was wrong about the
What do you think of Alan's idea to provide an icu_analyzer out of the box?
It would be nice to have some documentation in the ICU plugin that explains these normalization issues. @Trey314159, is this something you would be interested in contributing?
Paraphrasing @romseygeek, I agree that pointing people to analysis-icu for CJK text and providing an icu_analyzer out of the box could be a good thing.
Maybe? I wouldn't want to commit to any really big project, but if you want some examples with explanation, I could probably put something together, covering at least the issues I regularly run into. Would it go here? Would it be best to issue a pull request there, or just provide some descriptive text to someone else who better knows where to put it?
Yes, a pull request there would be great.
The ICU plugin provides the building blocks of an analysis chain, but doesn't actually have a prebuilt analyzer. It would be better for users if there was a simple analyzer that they could use out of the box, and also something we can point to from the CJK Analyzer docs as a superior alternative. Relates to #34285
The ICU analyzer will be in 6.6, and docs on CJKAnalyzer point to it as an alternative. Can we close this one out now, or is there more to do?
I'm going to close this now, as we have a working alternative in the icu_analyzer.
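For anyone landing here, a sketch of exercising the new analyzer (assuming 6.6+ with the analysis-icu plugin installed; the sample text is one of the middle-dot examples from this issue):

```
curl -XGET 'localhost:9200/_analyze?pretty' -H 'Content-Type: application/json' -d'
{
  "analyzer": "icu_analyzer",
  "text": "경승지·산악·협곡"
}'
```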
It makes sense to me to report these all together, but I can split these into separate bugs if that's better.
Elasticsearch version (curl -XGET 'localhost:9200'):

Plugins installed: [analysis-icu, analysis-nori]

JVM version (java -version):
openjdk version "1.8.0_181"
OpenJDK Runtime Environment (build 1.8.0_181-8u181-b13-1~deb9u1-b13)
OpenJDK 64-Bit Server VM (build 25.181-b13, mixed mode)

OS version (uname -a if on a Unix-like system):
Linux vagrantes6 4.9.0-6-amd64 #1 SMP Debian 4.9.82-1+deb9u3 (2018-03-02) x86_64 GNU/Linux
Description of the problem including expected versus actual behavior:
I've uncovered a number of oddities in tokenization in the CJK analyzer. All examples are from Korean Wikipedia or Korean Wiktionary (including non-CJK examples). In rough order of importance:
A. Mixed-script tokens (Korean plus non-CJK characters such as numbers or Latin letters) are treated as one long token rather than being broken up into bigrams. For example,
안녕은하철도999극장판2.1981년8월8일.일본개봉작1999년재더빙video판
is tokenized as a single token.

B. Middle dots (·, U+00B7) can be used as list separators in Korean. When they are, the text is not broken up into bigrams. For example,
경승지·산악·협곡·해협·곶·심연·폭포·호수·급류
is tokenized as a single token. I'm not sure whether this is a special case of (A) or not.
Work-around: use a character filter to convert middle dots to spaces before CJK; a sketch follows below.
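A sketch of that work-around. Since the built-in cjk analyzer can't take a char_filter directly, this rebuilds its components as a custom analyzer; the analyzer and filter names are illustrative, and the plain stop filter only approximates cjk's built-in stopword list:

```
curl -XPUT 'localhost:9200/cjk_middledot_example' -H 'Content-Type: application/json' -d'
{
  "settings": {
    "analysis": {
      "char_filter": {
        "middle_dot_to_space": {
          "type": "mapping",
          "mappings": ["\\u00B7=>\\u0020"]
        }
      },
      "analyzer": {
        "cjk_no_middle_dots": {
          "type": "custom",
          "char_filter": ["middle_dot_to_space"],
          "tokenizer": "standard",
          "filter": ["cjk_width", "lowercase", "cjk_bigram", "stop"]
        }
      }
    }
  }
}'
```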
C. The CJK analyzer eats encircled numbers (①②③), "dingbat" circled numbers (➀➁➂), parenthesized numbers (⑴⑵⑶), fractions (¼ ⅓ ⅜ ½ ⅔ ¾), superscript numbers (¹²³), and subscript numbers (₁₂₃). They just disappear.
Work-around: use the icu_normalizer before CJK to convert these to ASCII numbers; a sketch follows below.
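A sketch of that work-around under the same assumptions as above, using the icu_normalizer char filter from the analysis-icu plugin (its default NFKC_CF normalization folds most of these characters to plain digits, though coverage of the dingbat forms may vary):

```
curl -XPUT 'localhost:9200/cjk_icu_example' -H 'Content-Type: application/json' -d'
{
  "settings": {
    "analysis": {
      "analyzer": {
        "cjk_icu_normalized": {
          "type": "custom",
          "char_filter": ["icu_normalizer"],
          "tokenizer": "standard",
          "filter": ["cjk_width", "lowercase", "cjk_bigram", "stop"]
        }
      }
    }
  }
}'
```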
D. Soft hyphens (U+00AD), zero-width non-joiners (U+200C), and left-to-right and right-to-left marks (U+200E and U+200F) are left in tokens. They should be stripped out. Examples: hyphenation (soft hyphen), بازیهای (zero-width non-joiner), and הארץ (left-to-right mark).
Work-around: use a character filter to strip these characters before CJK; a sketch follows below.
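A sketch of that work-around, mapping each of the four characters to the empty string (names illustrative, same rebuilt cjk chain as above):

```
curl -XPUT 'localhost:9200/cjk_strip_example' -H 'Content-Type: application/json' -d'
{
  "settings": {
    "analysis": {
      "char_filter": {
        "strip_invisibles": {
          "type": "mapping",
          "mappings": [
            "\\u00AD=>",
            "\\u200C=>",
            "\\u200E=>",
            "\\u200F=>"
          ]
        }
      },
      "analyzer": {
        "cjk_no_invisibles": {
          "type": "custom",
          "char_filter": ["strip_invisibles"],
          "tokenizer": "standard",
          "filter": ["cjk_width", "lowercase", "cjk_bigram", "stop"]
        }
      }
    }
  }
}'
```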
Steps to reproduce:
A. Mixed Korean–Non-CJK characters
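A minimal reproduction against a local node (expected: a sequence of bigrams; actual: one long token):

```
curl -XGET 'localhost:9200/_analyze?pretty' -H 'Content-Type: application/json' -d'
{
  "analyzer": "cjk",
  "text": "안녕은하철도999극장판2.1981년8월8일.일본개봉작1999년재더빙video판"
}'
```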
B. Middle dots as lists
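Likewise for the middle-dot list (expected: bigrams per list item; actual: one token):

```
curl -XGET 'localhost:9200/_analyze?pretty' -H 'Content-Type: application/json' -d'
{
  "analyzer": "cjk",
  "text": "경승지·산악·협곡·해협·곶·심연·폭포·호수·급류"
}'
```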
C. Unicode numerical characters disappear
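For the numeric characters (expected: number tokens; actual: they disappear from the output):

```
curl -XGET 'localhost:9200/_analyze?pretty' -H 'Content-Type: application/json' -d'
{
  "analyzer": "cjk",
  "text": "① ➀ ⑴ ¼ ¹ ₁"
}'
```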
D. Soft hyphens, zero-width non-joiners, left-to-right and right-to-left marks (note that these are usually invisible)
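For the invisible characters, a JSON \u escape makes the soft hyphen explicit (expected: hyphenation as a clean token; actual: the soft hyphen remains inside the token):

```
curl -XGET 'localhost:9200/_analyze?pretty' -H 'Content-Type: application/json' -d'
{
  "analyzer": "cjk",
  "text": "hyphen\u00ADation"
}'
```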