Website & API Doc site generator using DocFx script #206

Merged · 66 commits · Feb 26, 2019

Commits
0e78fa3
Initial commit of powershell build to create an API doc site with docfx
Shazwazza May 16, 2017
98daf25
Updates styles, etc... for the docs
Shazwazza May 16, 2017
beec80a
updates build script to serve website
Shazwazza May 16, 2017
11c9cc4
updates build to properly serve with an option to not clean cache files
Shazwazza May 16, 2017
6359774
Merge remote-tracking branch 'lucene/master' into docfx-apidocs
Shazwazza Jun 30, 2017
2d362b2
adds index file for api docs
Shazwazza Jun 30, 2017
d4a90da
fixes a couple of crefs
Shazwazza Jun 30, 2017
2cf2aff
creates custom docs files
Shazwazza Jun 30, 2017
789237e
updates script to ensure it only includes csproj's that are in the sl…
Shazwazza Jun 30, 2017
d440348
Adds wiki example docs, fixes up some toc, adds logging to build, fix…
Shazwazza Jul 7, 2017
856faf5
Removes use of custom template files since we can just use the built …
Shazwazza Jul 7, 2017
710b193
Adds test files, fixing up some doc refs
Shazwazza Jul 7, 2017
7a84897
Fixes namespace overwrite issue, adds solution for custom markdown pl…
Shazwazza Jul 7, 2017
248a08f
fixes exposed name of plugin
Shazwazza Jul 7, 2017
c31809e
Moves source code for docs formatter to the 'src' folder
Shazwazza Jul 10, 2017
713c911
Merge remote-tracking branch 'lucene/master' into docfx-apidocs
Shazwazza Jul 17, 2017
a4f2717
Updates build script to ensure the custom DocFx plugin is built for t…
Shazwazza Jul 17, 2017
e5a30e1
Merge remote-tracking branch 'lucene/master' into docfx-apidocs
Shazwazza Aug 24, 2017
f311b11
Updates to latest docfx version
Shazwazza Aug 24, 2017
0efb8ab
Splitting build into separate projects so we can browse APIs per proj…
Shazwazza Sep 7, 2017
c9da684
Gets projects all building separately, added a custom toc, and now we…
Shazwazza Sep 7, 2017
eba9e22
updates build, ignore and toc
Shazwazza Sep 7, 2017
9fcf112
OK, gets projects -> namespace api docs working but the breadcrumb is…
Shazwazza Sep 7, 2017
928ba72
turns it into a 3 level toc for now which is better than before, awai…
Shazwazza Sep 7, 2017
307e6ba
Merge remote-tracking branch 'lucene/master' into docfx-apidocs
Shazwazza Sep 27, 2017
44fecfc
updates to latest docfx including the references in the docs plugin p…
Shazwazza Sep 27, 2017
07abde8
Gets CLI docs building and included as a header link and adds toc fil…
Shazwazza Sep 27, 2017
a685ced
fixes some csproj refs
Shazwazza Sep 27, 2017
981d175
adds the Kuromoji package
Shazwazza Sep 27, 2017
e3cd5bd
Gets more building, includes the markdown docs for use as the namespa…
Shazwazza Sep 27, 2017
6753ff7
removes the replicator from the docs since that was erroring for some…
Shazwazza Sep 27, 2017
eebcb6e
Moves the docfx build yml files to a better temporary folder making i…
Shazwazza Sep 28, 2017
3c94355
fixes the sln file since there was a duplicate project declared
Shazwazza Sep 28, 2017
9a3a7f7
fixes toc references
Shazwazza Sep 28, 2017
b57e572
ensure the docfx log location is absolute
Shazwazza Sep 28, 2017
0bf4876
Adds demo, removes old unused doc example files, updates and includes…
Shazwazza Oct 6, 2017
92466a6
Merge remote-tracking branch 'LUCENE/master' into docfx-apidocs
Shazwazza Nov 10, 2017
e399009
re-organizes the files that are included as files vs namespace overri…
Shazwazza Dec 22, 2017
03e5691
Get the correct /api URI paths for the generated api docs
Shazwazza Jan 8, 2018
dbdb0a2
fix whitespace for the @lucene.experimental thing to work
Shazwazza Jan 8, 2018
3a75ab7
Updates build to include TestFramework, updates index to match the ja…
Shazwazza Jan 8, 2018
c367ead
Gets the index page back to normal with the deep links to API docs, f…
Shazwazza Jan 8, 2018
ae1358b
removes duplicate entry
Shazwazza Jan 8, 2018
4774cea
removes the test framework docs from building because this causes col…
Shazwazza Jan 15, 2018
498b34e
Gets the website up and running with a nice template, updates styles …
Shazwazza Jun 5, 2018
4cce528
moves the quick start into a partial
Shazwazza Jun 5, 2018
f5dd33b
Gets most info and links all ready for the website
Shazwazza Jun 5, 2018
4410d78
Updates more docs for the website and fixes some invalid links
Shazwazza Jun 6, 2018
c7847af
commits whitespace changes as a result of a slightly different doc co…
Shazwazza Jun 6, 2018
3fe7a94
Revert "commits whitespace changes as a result of a slightly differen…
Shazwazza Jun 6, 2018
adbd663
Updates docs based on the new output of the converter
Shazwazza Jun 6, 2018
641f61d
Gets more docs converted properly with the converter
Shazwazza Jun 7, 2018
4181930
Updates the doc converter to append yaml headers correctly
Shazwazza Jun 7, 2018
d30666f
Fixes most of the xref links
Shazwazza Jun 7, 2018
544731a
Fixes link parsing in the doc converter
Shazwazza Jun 7, 2018
7ca803a
removes breadcrumb from download doc, more xrefs fixed
Shazwazza Aug 20, 2018
efb0b00
Attempting to modify the markdig markdown engine to process special t…
Shazwazza Aug 21, 2018
80c5f20
Revert "Attempting to modify the markdig markdown engine to process s…
Shazwazza Aug 21, 2018
2dd11c0
Gets the DFM markdown engine running again so the @lucene.experimenta…
Shazwazza Aug 21, 2018
99ef2ff
Updates some website info
Shazwazza Feb 19, 2019
1634932
Adds separate docs page to link to the various docs for different ver…
Shazwazza Feb 19, 2019
e288834
fix typo
Shazwazza Feb 19, 2019
7d05a79
bumps the date, small change to the source code doc
Shazwazza Feb 19, 2019
6ee616c
Gets the download page all working for the different versions with ch…
Shazwazza Feb 22, 2019
9ad9d77
Fixing links to the download-package
Tasteful Feb 24, 2019
61ab984
Merge pull request #1 from Tasteful/patch-1
Shazwazza Feb 24, 2019
11 changes: 10 additions & 1 deletion .gitignore
@@ -49,4 +49,13 @@ release/
.tools/

# NUnit test result file produced by nunit3-console.exe
-[Tt]est[Rr]esult.xml
+[Tt]est[Rr]esult.xml
+websites/**/_site/*
+websites/**/tools/*
+websites/**/_exported_templates/*
+websites/**/api/.manifest
+websites/**/docfx.log
+websites/**/lucenetemplate/plugins/*
+websites/apidocs/api/**/*.yml
+websites/apidocs/api/**/*.manifest
+!websites/apidocs/api/toc.yml
11 changes: 10 additions & 1 deletion Lucene.Net.sln
@@ -112,6 +112,15 @@ Project("{9A19103F-16F7-4668-BE54-9A1E7A4F7556}") = "Lucene.Net.Tests.Join", "sr…
EndProject
Project("{9A19103F-16F7-4668-BE54-9A1E7A4F7556}") = "Lucene.Net.Tests.Memory", "src\Lucene.Net.Tests.Memory\Lucene.Net.Tests.Memory.csproj", "{3BE7B6EA-8DBC-45E2-947C-1CA7E63B5603}"
EndProject
+Project("{2150E333-8FDC-42A3-9474-1A3956D46DE8}") = "apidocs", "apidocs", "{58FD6E39-F30F-4566-90E5-B7C9D6BC0660}"
+ProjectSection(SolutionItems) = preProject
+apidocs\docfx.filter.yml = apidocs\docfx.filter.yml
+apidocs\docfx.json = apidocs\docfx.json
+apidocs\docs.ps1 = apidocs\docs.ps1
+apidocs\index.md = apidocs\index.md
+apidocs\toc.yml = apidocs\toc.yml
+EndProjectSection
+EndProject
Project("{9A19103F-16F7-4668-BE54-9A1E7A4F7556}") = "Lucene.Net.Tests.Misc", "src\Lucene.Net.Tests.Misc\Lucene.Net.Tests.Misc.csproj", "{F8DDC5B7-A621-4B67-AB4B-BBE083C05BB8}"
EndProject
Project("{9A19103F-16F7-4668-BE54-9A1E7A4F7556}") = "Lucene.Net.Tests.Queries", "src\Lucene.Net.Tests.Queries\Lucene.Net.Tests.Queries.csproj", "{AC750DC0-05A3-4F96-8CC5-CFC8FD01D4CF}"
@@ -357,8 +366,8 @@ Global
HideSolutionNode = FALSE
EndGlobalSection
GlobalSection(NestedProjects) = preSolution
-{EFB2E31A-5917-49D5-A808-FE5061A550B4} = {8CA61D33-3590-4024-A304-7B1F75B50653}
{4DF7EACE-2B25-43F6-B558-8520BF20BD76} = {8CA61D33-3590-4024-A304-7B1F75B50653}
+{EFB2E31A-5917-49D5-A808-FE5061A550B4} = {8CA61D33-3590-4024-A304-7B1F75B50653}
{119BBACD-D4DB-4E3B-922F-3DA83E0B29E2} = {4DF7EACE-2B25-43F6-B558-8520BF20BD76}
{CF3A74CA-FEFD-4F41-961B-CC8CF8D96286} = {8CA61D33-3590-4024-A304-7B1F75B50653}
{4B054831-5275-44E2-A4D4-CA0B19BEE19A} = {8CA61D33-3590-4024-A304-7B1F75B50653}
2 changes: 1 addition & 1 deletion src/Lucene.Net.Analysis.Common/Analysis/Cjk/package.md
@@ -16,7 +16,7 @@
limitations under the License.
-->

-<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
+

Analyzer for Chinese, Japanese, and Korean, which indexes bigrams.
This analyzer generates bigram terms, which are overlapping groups of two adjacent Han, Hiragana, Katakana, or Hangul characters.
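
Aside for reviewers, not part of the diff: the bigram behavior this file documents is easy to see from the 4.8 API. A minimal sketch; the field name, sample text, and printed output are illustrative assumptions:

```csharp
// Sketch only: print the bigram tokens CJKAnalyzer produces.
using System;
using Lucene.Net.Analysis.Cjk;
using Lucene.Net.Analysis.TokenAttributes;
using Lucene.Net.Util;

public static class CjkDemo
{
    public static void Main()
    {
        var analyzer = new CJKAnalyzer(LuceneVersion.LUCENE_48);
        using (var ts = analyzer.GetTokenStream("body", "日本語テキスト"))
        {
            var term = ts.AddAttribute<ICharTermAttribute>();
            ts.Reset();
            while (ts.IncrementToken())
                Console.WriteLine(term.ToString()); // e.g. 日本, 本語, 語テ, テキ, キス, スト
            ts.End();
        }
    }
}
```
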
2 changes: 1 addition & 1 deletion src/Lucene.Net.Analysis.Common/Analysis/Cn/package.md
@@ -16,7 +16,7 @@
limitations under the License.
-->

-<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
+

Analyzer for Chinese, which indexes unigrams (individual chinese characters).

8 changes: 4 additions & 4 deletions src/Lucene.Net.Analysis.Common/Analysis/Compound/package.md
@@ -74,8 +74,8 @@ filter available:

#### HyphenationCompoundWordTokenFilter

-The [](xref:Lucene.Net.Analysis.Compound.HyphenationCompoundWordTokenFilter
-HyphenationCompoundWordTokenFilter) uses hyphenation grammars to find
+The [
+HyphenationCompoundWordTokenFilter](xref:Lucene.Net.Analysis.Compound.HyphenationCompoundWordTokenFilter) uses hyphenation grammars to find
potential subwords that a worth to check against the dictionary. It can be used
without a dictionary as well but then produces a lot of "nonword" tokens.
The quality of the output tokens is directly connected to the quality of the
@@ -101,8 +101,8 @@ Credits for the hyphenation code go to the

#### DictionaryCompoundWordTokenFilter

-The [](xref:Lucene.Net.Analysis.Compound.DictionaryCompoundWordTokenFilter
-DictionaryCompoundWordTokenFilter) uses a dictionary-only approach to
+The [
+DictionaryCompoundWordTokenFilter](xref:Lucene.Net.Analysis.Compound.DictionaryCompoundWordTokenFilter) uses a dictionary-only approach to
find subwords in a compound word. It is much slower than the one that
uses the hyphenation grammars. You can use it as a first start to
see if your dictionary is good or not because it is much simpler in design.
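
Aside, not part of the diff: a minimal sketch of the dictionary-only approach described above. The three-word inline dictionary and sample compound are made-up stand-ins for a real dictionary:

```csharp
// Sketch only: decompound a German token against a tiny inline dictionary.
using System;
using System.IO;
using Lucene.Net.Analysis.Compound;
using Lucene.Net.Analysis.Core;
using Lucene.Net.Analysis.TokenAttributes;
using Lucene.Net.Analysis.Util;
using Lucene.Net.Util;

public static class CompoundDemo
{
    public static void Main()
    {
        var dict = new CharArraySet(LuceneVersion.LUCENE_48,
            new[] { "donau", "dampf", "schiff" }, true); // true = ignore case
        var tokenizer = new WhitespaceTokenizer(LuceneVersion.LUCENE_48,
            new StringReader("Donaudampfschiff"));
        using (var ts = new DictionaryCompoundWordTokenFilter(
            LuceneVersion.LUCENE_48, tokenizer, dict))
        {
            var term = ts.AddAttribute<ICharTermAttribute>();
            ts.Reset();
            while (ts.IncrementToken())
                Console.WriteLine(term.ToString()); // original token, then matched subwords
            ts.End();
        }
    }
}
```
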
11 changes: 4 additions & 7 deletions src/Lucene.Net.Analysis.Common/Analysis/Payloads/package.md
@@ -15,11 +15,8 @@
See the License for the specific language governing permissions and
limitations under the License.
-->
-<HTML>
-<HEAD>
-<TITLE>org.apache.lucene.analysis.payloads</TITLE>
-</HEAD>
-<BODY>
+
+
+
Provides various convenience classes for creating payloads on Tokens.
-</BODY>
-</HTML>
+
15 changes: 6 additions & 9 deletions src/Lucene.Net.Analysis.Common/Analysis/Sinks/package.md
@@ -15,13 +15,10 @@
See the License for the specific language governing permissions and
limitations under the License.
-->
-<HTML>
-<HEAD>
-<TITLE>org.apache.lucene.analysis.sinks</TITLE>
-</HEAD>
-<BODY>
-[](xref:Lucene.Net.Analysis.Sinks.TeeSinkTokenFilter) and implementations
-of [](xref:Lucene.Net.Analysis.Sinks.TeeSinkTokenFilter.SinkFilter) that
+
+
+
+<xref:Lucene.Net.Analysis.Sinks.TeeSinkTokenFilter> and implementations
+of <xref:Lucene.Net.Analysis.Sinks.TeeSinkTokenFilter.SinkFilter> that
might be useful.
-</BODY>
-</HTML>
+
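
Aside, not part of the diff: a rough sketch of the tee/sink pattern these types implement, under the assumption that the 4.8 port keeps the Java workflow (consume the tee first, then replay the sink):

```csharp
// Sketch only: one tokenizer feeding a primary consumer and a replayable sink.
using System;
using System.IO;
using Lucene.Net.Analysis.Core;
using Lucene.Net.Analysis.Sinks;
using Lucene.Net.Analysis.TokenAttributes;
using Lucene.Net.Util;

public static class SinksDemo
{
    public static void Main()
    {
        var source = new WhitespaceTokenizer(LuceneVersion.LUCENE_48,
            new StringReader("one two three"));
        var tee = new TeeSinkTokenFilter(source);
        var sink = tee.NewSinkTokenStream();

        tee.Reset();
        tee.ConsumeAllTokens(); // drain the primary stream; tokens are cached in the sink
        tee.End();

        var term = sink.AddAttribute<ICharTermAttribute>();
        sink.Reset();
        while (sink.IncrementToken())
            Console.WriteLine(term.ToString()); // replays one, two, three
        sink.End();
    }
}
```
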
@@ -16,7 +16,7 @@
limitations under the License.
-->

-[](xref:Lucene.Net.Analysis.TokenFilter) and [](xref:Lucene.Net.Analysis.Analyzer) implementations that use Snowball
+<xref:Lucene.Net.Analysis.TokenFilter> and <xref:Lucene.Net.Analysis.Analyzer> implementations that use Snowball
stemmers.

This project provides pre-compiled version of the Snowball stemmers based on revision 500 of the Tartarus Snowball repository, together with classes integrating them with the Lucene search engine.
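
Aside, not part of the diff: a minimal SnowballFilter sketch; the stemmer name "English" and the sample tokens are illustrative assumptions:

```csharp
// Sketch only: Snowball stemming over a trivial token stream.
using System;
using System.IO;
using Lucene.Net.Analysis.Core;
using Lucene.Net.Analysis.Snowball;
using Lucene.Net.Analysis.TokenAttributes;
using Lucene.Net.Util;

public static class SnowballDemo
{
    public static void Main()
    {
        var tokenizer = new LowerCaseTokenizer(LuceneVersion.LUCENE_48,
            new StringReader("running runs ran"));
        using (var ts = new SnowballFilter(tokenizer, "English"))
        {
            var term = ts.AddAttribute<ICharTermAttribute>();
            ts.Reset();
            while (ts.IncrementToken())
                Console.WriteLine(term.ToString()); // run, run, ran
            ts.End();
        }
    }
}
```
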
@@ -16,4 +16,4 @@
limitations under the License.
-->

-Backwards-compatible implementation to match [](xref:Lucene.Net.Util.Version.LUCENE_31)
+Backwards-compatible implementation to match [#LUCENE_31](xref:Lucene.Net.Util.Version)
@@ -16,4 +16,4 @@
limitations under the License.
-->

-Backwards-compatible implementation to match [](xref:Lucene.Net.Util.Version.LUCENE_34)
+Backwards-compatible implementation to match [#LUCENE_34](xref:Lucene.Net.Util.Version)
@@ -16,4 +16,4 @@
limitations under the License.
-->

-Backwards-compatible implementation to match [](xref:Lucene.Net.Util.Version.LUCENE_36)
+Backwards-compatible implementation to match [#LUCENE_36](xref:Lucene.Net.Util.Version)
@@ -16,4 +16,4 @@
limitations under the License.
-->

-Backwards-compatible implementation to match [](xref:Lucene.Net.Util.Version.LUCENE_40)
+Backwards-compatible implementation to match [#LUCENE_40](xref:Lucene.Net.Util.Version)
38 changes: 19 additions & 19 deletions src/Lucene.Net.Analysis.Common/Analysis/Standard/package.md
@@ -20,40 +20,40 @@

The `org.apache.lucene.analysis.standard` package contains three fast grammar-based tokenizers constructed with JFlex:

-* [](xref:Lucene.Net.Analysis.Standard.StandardTokenizer):
+* <xref:Lucene.Net.Analysis.Standard.StandardTokenizer>:
as of Lucene 3.1, implements the Word Break rules from the Unicode Text
Segmentation algorithm, as specified in
[Unicode Standard Annex #29](http://unicode.org/reports/tr29/).
Unlike `UAX29URLEmailTokenizer`, URLs and email addresses are
**not** tokenized as single tokens, but are instead split up into
tokens according to the UAX#29 word break rules.

-[](xref:Lucene.Net.Analysis.Standard.StandardAnalyzer StandardAnalyzer) includes
-[](xref:Lucene.Net.Analysis.Standard.StandardTokenizer StandardTokenizer),
-[](xref:Lucene.Net.Analysis.Standard.StandardFilter StandardFilter),
-[](xref:Lucene.Net.Analysis.Core.LowerCaseFilter LowerCaseFilter)
-and [](xref:Lucene.Net.Analysis.Core.StopFilter StopFilter).
+[StandardAnalyzer](xref:Lucene.Net.Analysis.Standard.StandardAnalyzer) includes
+[StandardTokenizer](xref:Lucene.Net.Analysis.Standard.StandardTokenizer),
+[StandardFilter](xref:Lucene.Net.Analysis.Standard.StandardFilter),
+[LowerCaseFilter](xref:Lucene.Net.Analysis.Core.LowerCaseFilter)
+and [StopFilter](xref:Lucene.Net.Analysis.Core.StopFilter).
When the `Version` specified in the constructor is lower than
-3.1, the [](xref:Lucene.Net.Analysis.Standard.ClassicTokenizer ClassicTokenizer)
+3.1, the [ClassicTokenizer](xref:Lucene.Net.Analysis.Standard.ClassicTokenizer)
implementation is invoked.
-* [](xref:Lucene.Net.Analysis.Standard.ClassicTokenizer ClassicTokenizer):
+* [ClassicTokenizer](xref:Lucene.Net.Analysis.Standard.ClassicTokenizer):
this class was formerly (prior to Lucene 3.1) named
`StandardTokenizer`. (Its tokenization rules are not
based on the Unicode Text Segmentation algorithm.)
-[](xref:Lucene.Net.Analysis.Standard.ClassicAnalyzer ClassicAnalyzer) includes
-[](xref:Lucene.Net.Analysis.Standard.ClassicTokenizer ClassicTokenizer),
-[](xref:Lucene.Net.Analysis.Standard.StandardFilter StandardFilter),
-[](xref:Lucene.Net.Analysis.Core.LowerCaseFilter LowerCaseFilter)
-and [](xref:Lucene.Net.Analysis.Core.StopFilter StopFilter).
+[ClassicAnalyzer](xref:Lucene.Net.Analysis.Standard.ClassicAnalyzer) includes
+[ClassicTokenizer](xref:Lucene.Net.Analysis.Standard.ClassicTokenizer),
+[StandardFilter](xref:Lucene.Net.Analysis.Standard.StandardFilter),
+[LowerCaseFilter](xref:Lucene.Net.Analysis.Core.LowerCaseFilter)
+and [StopFilter](xref:Lucene.Net.Analysis.Core.StopFilter).

-* [](xref:Lucene.Net.Analysis.Standard.UAX29URLEmailTokenizer UAX29URLEmailTokenizer):
+* [UAX29URLEmailTokenizer](xref:Lucene.Net.Analysis.Standard.UAX29URLEmailTokenizer):
implements the Word Break rules from the Unicode Text Segmentation
algorithm, as specified in
[Unicode Standard Annex #29](http://unicode.org/reports/tr29/).
URLs and email addresses are also tokenized according to the relevant RFCs.

-[](xref:Lucene.Net.Analysis.Standard.UAX29URLEmailAnalyzer UAX29URLEmailAnalyzer) includes
-[](xref:Lucene.Net.Analysis.Standard.UAX29URLEmailTokenizer UAX29URLEmailTokenizer),
-[](xref:Lucene.Net.Analysis.Standard.StandardFilter StandardFilter),
-[](xref:Lucene.Net.Analysis.Core.LowerCaseFilter LowerCaseFilter)
-and [](xref:Lucene.Net.Analysis.Core.StopFilter StopFilter).
+[UAX29URLEmailAnalyzer](xref:Lucene.Net.Analysis.Standard.UAX29URLEmailAnalyzer) includes
+[UAX29URLEmailTokenizer](xref:Lucene.Net.Analysis.Standard.UAX29URLEmailTokenizer),
+[StandardFilter](xref:Lucene.Net.Analysis.Standard.StandardFilter),
+[LowerCaseFilter](xref:Lucene.Net.Analysis.Core.LowerCaseFilter)
+and [StopFilter](xref:Lucene.Net.Analysis.Core.StopFilter).
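
Aside, not part of the diff: a small sketch of the UAX#29 behavior described above; the sample text and output are assumptions:

```csharp
// Sketch only: StandardAnalyzer keeps "lucene.apache.org" as a single token
// because UAX#29 does not break on a full stop between letters.
using System;
using Lucene.Net.Analysis.Standard;
using Lucene.Net.Analysis.TokenAttributes;
using Lucene.Net.Util;

public static class StandardDemo
{
    public static void Main()
    {
        var analyzer = new StandardAnalyzer(LuceneVersion.LUCENE_48);
        using (var ts = analyzer.GetTokenStream("f", "Visit lucene.apache.org for more."))
        {
            var term = ts.AddAttribute<ICharTermAttribute>();
            ts.Reset();
            while (ts.IncrementToken())
                Console.WriteLine(term.ToString()); // e.g. visit, lucene.apache.org, more
            ts.End();
        }
    }
}
```
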
@@ -16,4 +16,4 @@
limitations under the License.
-->

-Custom [](xref:Lucene.Net.Util.AttributeImpl) for indexing collation keys as index terms.
+Custom <xref:Lucene.Net.Util.AttributeImpl> for indexing collation keys as index terms.
4 changes: 2 additions & 2 deletions src/Lucene.Net.Analysis.Common/Collation/package.md
@@ -28,8 +28,8 @@
very slow.)

* Effective Locale-specific normalization (case differences, diacritics, etc.).
-([](xref:Lucene.Net.Analysis.Core.LowerCaseFilter) and
-[](xref:Lucene.Net.Analysis.Miscellaneous.ASCIIFoldingFilter) provide these services
+(<xref:Lucene.Net.Analysis.Core.LowerCaseFilter> and
+<xref:Lucene.Net.Analysis.Miscellaneous.ASCIIFoldingFilter> provide these services
in a generic way that doesn't take into account locale-specific needs.)

## Example Usages
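
The file's own example section is collapsed in this view. Purely as illustration of the generic alternative the text mentions (LowerCaseFilter plus ASCIIFoldingFilter), a minimal sketch with made-up input:

```csharp
// Sketch only: the generic, locale-unaware normalization mentioned above.
using System;
using System.IO;
using Lucene.Net.Analysis;
using Lucene.Net.Analysis.Core;
using Lucene.Net.Analysis.Miscellaneous;
using Lucene.Net.Analysis.TokenAttributes;
using Lucene.Net.Util;

public static class FoldingDemo
{
    public static void Main()
    {
        using (TokenStream ts = new ASCIIFoldingFilter(
            new LowerCaseFilter(LuceneVersion.LUCENE_48,
                new WhitespaceTokenizer(LuceneVersion.LUCENE_48,
                    new StringReader("Über Straße")))))
        {
            var term = ts.AddAttribute<ICharTermAttribute>();
            ts.Reset();
            while (ts.IncrementToken())
                Console.WriteLine(term.ToString()); // uber, strasse
            ts.End();
        }
    }
}
```
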
11 changes: 8 additions & 3 deletions src/Lucene.Net.Analysis.Common/overview.md
@@ -1,4 +1,9 @@
-<!--
+---
+uid: Lucene.Net.Analysis.Common
+summary: *content
+---
+
+<!--
Licensed to the Apache Software Foundation (ASF) under one or more
contributor license agreements. See the NOTICE file distributed with
this work for additional information regarding copyright ownership.
@@ -17,6 +22,6 @@

Analyzers for indexing content in different languages and domains.

-For an introduction to Lucene's analysis API, see the [](xref:Lucene.Net.Analysis) package documentation.
+For an introduction to Lucene's analysis API, see the <xref:Lucene.Net.Analysis> package documentation.

-This module contains concrete components ([](xref:Lucene.Net.Analysis.CharFilter)s, [](xref:Lucene.Net.Analysis.Tokenizer)s, and ([](xref:Lucene.Net.Analysis.TokenFilter)s) for analyzing different types of content. It also provides a number of [](xref:Lucene.Net.Analysis.Analyzer)s for different languages that you can use to get started quickly.
+This module contains concrete components (<xref:Lucene.Net.Analysis.CharFilter>s, <xref:Lucene.Net.Analysis.Tokenizer>s, and (<xref:Lucene.Net.Analysis.TokenFilter>s) for analyzing different types of content. It also provides a number of <xref:Lucene.Net.Analysis.Analyzer>s for different languages that you can use to get started quickly.
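
Aside, not part of the diff: a hedged sketch of the composition this overview describes, wiring a Tokenizer and TokenFilters into an Analyzer; the specific component choices are arbitrary:

```csharp
// Sketch only: a Tokenizer produces the stream, TokenFilters decorate it,
// and an Analyzer ties them together per field.
using System.IO;
using Lucene.Net.Analysis;
using Lucene.Net.Analysis.Core;
using Lucene.Net.Analysis.Standard;
using Lucene.Net.Util;

public sealed class MyAnalyzer : Analyzer
{
    protected override TokenStreamComponents CreateComponents(
        string fieldName, TextReader reader)
    {
        var source = new StandardTokenizer(LuceneVersion.LUCENE_48, reader);
        TokenStream result = new StandardFilter(LuceneVersion.LUCENE_48, source);
        result = new LowerCaseFilter(LuceneVersion.LUCENE_48, result);
        return new TokenStreamComponents(source, result);
    }
}
```
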
@@ -16,4 +16,4 @@
limitations under the License.
-->

-Custom [](xref:Lucene.Net.Util.AttributeImpl) for indexing collation keys as index terms.
+Custom <xref:Lucene.Net.Util.AttributeImpl> for indexing collation keys as index terms.
21 changes: 12 additions & 9 deletions src/Lucene.Net.Analysis.ICU/overview.md
@@ -1,4 +1,9 @@
-<!--
+---
+uid: Lucene.Net.Analysis.Icu
+summary: *content
+---
+
+<!--
Licensed to the Apache Software Foundation (ASF) under one or more
contributor license agreements. See the NOTICE file distributed with
this work for additional information regarding copyright ownership.
@@ -16,18 +21,16 @@
-->
<!-- :Post-Release-Update-Version.LUCENE_XY: - several mentions in this file -->

-<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
-<title>
-Apache Lucene ICU integration module
-</title>
+
+

This module exposes functionality from
[ICU](http://site.icu-project.org/) to Apache Lucene. ICU4J is a Java
library that enhances Java's internationalization support by improving
performance, keeping current with the Unicode Standard, and providing richer
APIs.

-For an introduction to Lucene's analysis API, see the [](xref:Lucene.Net.Analysis) package documentation.
+For an introduction to Lucene's analysis API, see the <xref:Lucene.Net.Analysis> package documentation.

This module exposes the following functionality:

@@ -84,8 +87,8 @@ For an introduction to Lucene's analysis API, see the [](xref:Lucene.Net.Analysi
very slow.)

* Effective Locale-specific normalization (case differences, diacritics, etc.).
-([](xref:Lucene.Net.Analysis.Core.LowerCaseFilter) and
-[](xref:Lucene.Net.Analysis.Miscellaneous.ASCIIFoldingFilter) provide these services
+(<xref:Lucene.Net.Analysis.Core.LowerCaseFilter> and
+<xref:Lucene.Net.Analysis.Miscellaneous.ASCIIFoldingFilter> provide these services
in a generic way that doesn't take into account locale-specific needs.)

## Example Usages
@@ -266,7 +269,7 @@ For an introduction to Lucene's analysis API, see the [](xref:Lucene.Net.Analysi

# [Backwards Compatibility]()

-This module exists to provide up-to-date Unicode functionality that supports the most recent version of Unicode (currently 6.3). However, some users who wish for stronger backwards compatibility can restrict [](xref:Lucene.Net.Analysis.Icu.ICUNormalizer2Filter) to operate on only a specific Unicode Version by using a {@link com.ibm.icu.text.FilteredNormalizer2}.
+This module exists to provide up-to-date Unicode functionality that supports the most recent version of Unicode (currently 6.3). However, some users who wish for stronger backwards compatibility can restrict <xref:Lucene.Net.Analysis.Icu.ICUNormalizer2Filter> to operate on only a specific Unicode Version by using a {@link com.ibm.icu.text.FilteredNormalizer2}.

## Example Usages

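
The module's example usages are collapsed above; as a stand-in, a minimal ICUFoldingFilter sketch (sample input and output are assumptions):

```csharp
// Sketch only: ICU-based folding (case, accents, compatibility forms).
using System;
using System.IO;
using Lucene.Net.Analysis.Core;
using Lucene.Net.Analysis.Icu;
using Lucene.Net.Analysis.TokenAttributes;
using Lucene.Net.Util;

public static class IcuDemo
{
    public static void Main()
    {
        var tokenizer = new WhitespaceTokenizer(LuceneVersion.LUCENE_48,
            new StringReader("Résumé Façade"));
        using (var ts = new ICUFoldingFilter(tokenizer))
        {
            var term = ts.AddAttribute<ICharTermAttribute>();
            ts.Reset();
            while (ts.IncrementToken())
                Console.WriteLine(term.ToString()); // resume, facade
            ts.End();
        }
    }
}
```
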
13 changes: 8 additions & 5 deletions src/Lucene.Net.Analysis.Kuromoji/overview.md
@@ -1,4 +1,9 @@
-<!--
+---
+uid: Lucene.Net.Analysis.Kuromoji
+summary: *content
+---
+
+<!--
Licensed to the Apache Software Foundation (ASF) under one or more
contributor license agreements. See the NOTICE file distributed with
this work for additional information regarding copyright ownership.
@@ -15,12 +20,10 @@
limitations under the License.
-->

-<title>
-Apache Lucene Kuromoji Analyzer
-</title>
+

Kuromoji is a morphological analyzer for Japanese text.

This module provides support for Japanese text analysis, including features such as part-of-speech tagging, lemmatization, and compound word analysis.

-For an introduction to Lucene's analysis API, see the [](xref:Lucene.Net.Analysis) package documentation.
+For an introduction to Lucene's analysis API, see the <xref:Lucene.Net.Analysis> package documentation.
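
Aside, not part of the diff: a minimal JapaneseAnalyzer sketch; the sample text and its expected segmentation are the usual Kuromoji demo, not output captured from this build:

```csharp
// Sketch only: morphological segmentation of Japanese text.
using System;
using Lucene.Net.Analysis.Ja;
using Lucene.Net.Analysis.TokenAttributes;
using Lucene.Net.Util;

public static class KuromojiDemo
{
    public static void Main()
    {
        var analyzer = new JapaneseAnalyzer(LuceneVersion.LUCENE_48);
        using (var ts = analyzer.GetTokenStream("f", "関西国際空港"))
        {
            var term = ts.AddAttribute<ICharTermAttribute>();
            ts.Reset();
            while (ts.IncrementToken())
                Console.WriteLine(term.ToString()); // e.g. 関西, 国際, 空港
            ts.End();
        }
    }
}
```
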
13 changes: 8 additions & 5 deletions src/Lucene.Net.Analysis.Phonetic/overview.md
@@ -1,4 +1,9 @@
-<!--
+---
+uid: Lucene.Net.Analysis.Phonetic
+summary: *content
+---
+
+<!--
Licensed to the Apache Software Foundation (ASF) under one or more
contributor license agreements. See the NOTICE file distributed with
this work for additional information regarding copyright ownership.
@@ -15,12 +20,10 @@
limitations under the License.
-->

-<title>
-analyzers-phonetic
-</title>
+

Analysis for indexing phonetic signatures (for sounds-alike search)

-For an introduction to Lucene's analysis API, see the [](xref:Lucene.Net.Analysis) package documentation.
+For an introduction to Lucene's analysis API, see the <xref:Lucene.Net.Analysis> package documentation.

This module provides analysis components (using encoders from [Apache Commons Codec](http://commons.apache.org/codec/)) that index and search phonetic signatures.
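
Aside, not part of the diff: a sketch of phonetic signature indexing. The Metaphone encoder, the PhoneticFilter signature, and the printed signatures are assumptions to verify against the 4.8 port:

```csharp
// Sketch only: sound-alike tokens via a Metaphone signature.
using System;
using System.IO;
using Lucene.Net.Analysis.Core;
using Lucene.Net.Analysis.Phonetic;
using Lucene.Net.Analysis.Phonetic.Language;
using Lucene.Net.Analysis.TokenAttributes;
using Lucene.Net.Util;

public static class PhoneticDemo
{
    public static void Main()
    {
        var tokenizer = new WhitespaceTokenizer(LuceneVersion.LUCENE_48,
            new StringReader("Smith Smythe"));
        // final argument "inject" = true keeps originals alongside the signatures
        using (var ts = new PhoneticFilter(tokenizer, new Metaphone(), true))
        {
            var term = ts.AddAttribute<ICharTermAttribute>();
            ts.Reset();
            while (ts.IncrementToken())
                Console.WriteLine(term.ToString()); // e.g. Smith, SM0, Smythe, SM0
            ts.End();
        }
    }
}
```
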
2 changes: 1 addition & 1 deletion src/Lucene.Net.Analysis.SmartCn/HHMM/package.md
@@ -16,7 +16,7 @@
limitations under the License.
-->

-<META http-equiv="Content-Type" content="text/html; charset=UTF-8">
+

SmartChineseAnalyzer Hidden Markov Model package.
@lucene.experimental