diff --git a/packages/google-cloud-language/docs/api.rst b/packages/google-cloud-language/docs/api.rst
deleted file mode 100644
index 8720e9fa571e..000000000000
--- a/packages/google-cloud-language/docs/api.rst
+++ /dev/null
@@ -1,40 +0,0 @@
-Language Client API Reference
-=============================
-
-This package includes clients for multiple versions of the Natural Language
-API. By default, you will get ``v1``, the latest GA version.
-
-.. toctree::
-    :maxdepth: 2
-
-    language_v1/services
-    language_v1/types
-
-If you are interested in beta features ahead of the latest GA, you may
-opt in to the v1.1 beta, which is spelled ``v1beta2``. In order to do this,
-you will want to import from ``google.cloud.language_v1beta2`` in lieu of
-``google.cloud.language``.
-
-An API and type reference is provided for the v1.1 beta also:
-
-.. toctree::
-    :maxdepth: 2
-
-    language_v1beta2/services
-    language_v1beta2/types
-
-Migration Guide
----------------
-
-See the guide below for instructions on migrating to the 2.x release of this library.
-
-.. toctree::
-    :maxdepth: 2
-
-    UPGRADING
-
-.. note::
-
-    The client for the beta API is provided on a provisional basis. The API
-    surface is subject to change, and it is possible that this client will be
-    deprecated or removed after its features become GA.
diff --git a/packages/google-cloud-language/docs/index.rst b/packages/google-cloud-language/docs/index.rst
index 368f811d4193..524632b77033 100644
--- a/packages/google-cloud-language/docs/index.rst
+++ b/packages/google-cloud-language/docs/index.rst
@@ -2,10 +2,44 @@
 
 .. include:: multiprocessing.rst
 
+This package includes clients for multiple versions of the Natural Language
+API. By default, you will get ``language_v1``, the latest GA version.
+
+
+API Reference (v1)
+------------------
+.. toctree::
+    :maxdepth: 2
+
+    language_v1/services
+    language_v1/types
+
+API Reference (v1beta2)
+-----------------------
+.. toctree::
+    :maxdepth: 2
+
+    language_v1beta2/services
+    language_v1beta2/types
+
+
+Migration Guide
+---------------
+
+See the guide below for instructions on migrating to the latest version.
+
+.. toctree::
+    :maxdepth: 2
+
+    UPGRADING
+
+
+Changelog
+---------
+
+For a list of all ``google-cloud-language`` releases:
+
 .. toctree::
-    :maxdepth: 2
+    :maxdepth: 2
 
-    usage
-    api
-    changelog
+    changelog
diff --git a/packages/google-cloud-language/docs/usage.rst b/packages/google-cloud-language/docs/usage.rst
deleted file mode 100644
index f2e459342e74..000000000000
--- a/packages/google-cloud-language/docs/usage.rst
+++ /dev/null
@@ -1,199 +0,0 @@
-Using the Language Client
-=========================
-
-Documents
-*********
-
-The Google Natural Language API has the following supported methods:
-
-- `analyzeEntities`_
-- `analyzeSentiment`_
-- `analyzeEntitySentiment`_
-- `annotateText`_
-- `classifyText`_
-
-and each method uses a :class:`~.language_v1.types.Document` for representing
-text.
-
-  .. code-block:: python
-
-    >>> document = language.types.Document(
-    ...     content='Google, headquartered in Mountain View, unveiled the '
-    ...             'new Android phone at the Consumer Electronics Show. '
-    ...             'Sundar Pichai said in his keynote that users love '
-    ...             'their new Android phones.',
-    ...     language='en',
-    ...     type='PLAIN_TEXT',
-    ... )
-
-
-The document's ``language`` defaults to ``None``, which will cause the API to
-auto-detect the language.
-
-In addition, you can construct an HTML document:
-
-  .. code-block:: python
-
-    >>> html_content = """\
-    ... <html>
-    ...   <head>
-    ...     <title>El Tiempo de las Historias</title>
-    ...   </head>
-    ...   <body>
-    ...     <p>La vaca saltó sobre la luna.</p>
-    ...   </body>
-    ... </html>
-    ... """
-    >>> document = language.types.Document(
-    ...     content=html_content,
-    ...     language='es',
-    ...     type='HTML',
-    ... )
-
-The ``language`` argument can be either an ISO-639-1 or a BCP-47 language
-code. The API reference page contains the full list of `supported languages`_.
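For reviewers following the 2.x migration that motivates this patch: the 2.x microgenerated clients accept a plain mapping wherever a proto message is expected, so the deleted ``Document`` examples translate naturally to dicts. A minimal sketch — the ``type_`` spelling (trailing underscore) and the commented client call assume the 2.x ``language_v1`` surface, and the call itself needs credentials, so it is left commented out:

```python
# Plain-dict form of a Document, as accepted by the 2.x clients in place
# of a language_v1.Document proto message.
plain_text_doc = {
    "content": "Google, headquartered in Mountain View, unveiled the new Android phone.",
    "language": "en",       # omit to let the API auto-detect the language
    "type_": "PLAIN_TEXT",  # note the trailing underscore in the 2.x surface
}

html_doc = {
    "content": "<html><body><p>La vaca saltó sobre la luna.</p></body></html>",
    "language": "es",       # an ISO-639-1 or BCP-47 code
    "type_": "HTML",
}

# With credentials configured, either dict can be sent directly, e.g.:
#   from google.cloud import language_v1
#   client = language_v1.LanguageServiceClient()
#   response = client.analyze_sentiment(request={"document": plain_text_doc})
```

The dict form is convenient in samples and tests because it needs no import to construct; the client coerces it into the proto message for you.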
-
-.. _supported languages: https://cloud.google.com/natural-language/docs/languages
-
-
-In addition to supplying the text / HTML content, a document can refer
-to content stored in `Google Cloud Storage`_.
-
-  .. code-block:: python
-
-    >>> document = language.types.Document(
-    ...     gcs_content_uri='gs://my-text-bucket/sentiment-me.txt',
-    ...     type=language.enums.Document.Type.PLAIN_TEXT,
-    ... )
-
-.. _analyzeEntities: https://cloud.google.com/natural-language/docs/reference/rest/v1/documents/analyzeEntities
-.. _analyzeSentiment: https://cloud.google.com/natural-language/docs/reference/rest/v1/documents/analyzeSentiment
-.. _analyzeEntitySentiment: https://cloud.google.com/natural-language/docs/reference/rest/v1/documents/analyzeEntitySentiment
-.. _annotateText: https://cloud.google.com/natural-language/docs/reference/rest/v1/documents/annotateText
-.. _classifyText: https://cloud.google.com/natural-language/docs/reference/rest/v1/documents/classifyText
-.. _Google Cloud Storage: https://cloud.google.com/storage/
-
-Analyze Entities
-****************
-
-The :meth:`~.language_v1.LanguageServiceClient.analyze_entities`
-method finds named entities (i.e. proper names) in the text. This method
-returns a :class:`~.language_v1.types.AnalyzeEntitiesResponse`.
-
-  .. code-block:: python
-
-    >>> document = language.types.Document(
-    ...     content='Michelangelo Caravaggio, Italian painter, is '
-    ...             'known for "The Calling of Saint Matthew".',
-    ...     type=language.enums.Document.Type.PLAIN_TEXT,
-    ... )
-    >>> response = client.analyze_entities(
-    ...     document=document,
-    ...     encoding_type='UTF32',
-    ... )
-    >>> for entity in response.entities:
-    ...     print('=' * 20)
-    ...     print('    name: {0}'.format(entity.name))
-    ...     print('    type: {0}'.format(entity.type))
-    ...     print('metadata: {0}'.format(entity.metadata))
-    ...     print('salience: {0}'.format(entity.salience))
-    ====================
-        name: Michelangelo Caravaggio
-        type: PERSON
-    metadata: {'wikipedia_url': 'https://en.wikipedia.org/wiki/Caravaggio'}
-    salience: 0.7615959
-    ====================
-        name: Italian
-        type: LOCATION
-    metadata: {'wikipedia_url': 'https://en.wikipedia.org/wiki/Italy'}
-    salience: 0.19960518
-    ====================
-        name: The Calling of Saint Matthew
-        type: EVENT
-    metadata: {'wikipedia_url': 'https://en.wikipedia.org/wiki/The_Calling_of_St_Matthew_(Caravaggio)'}
-    salience: 0.038798928
-
-.. note::
-
-    It is recommended to send an ``encoding_type`` argument to Natural
-    Language methods, so they provide useful offsets for the data they return.
-    While the correct value varies by environment, in Python you *usually*
-    want ``UTF32``.
-
-
-Analyze Sentiment
-*****************
-
-The :meth:`~.language_v1.LanguageServiceClient.analyze_sentiment` method
-analyzes the sentiment of the provided text. This method returns a
-:class:`~.language_v1.types.AnalyzeSentimentResponse`.
-
-  .. code-block:: python
-
-    >>> document = language.types.Document(
-    ...     content='Jogging is not very fun.',
-    ...     type='PLAIN_TEXT',
-    ... )
-    >>> response = client.analyze_sentiment(
-    ...     document=document,
-    ...     encoding_type='UTF32',
-    ... )
-    >>> sentiment = response.document_sentiment
-    >>> print(sentiment.score)
-    -1
-    >>> print(sentiment.magnitude)
-    0.8
-
-.. note::
-
-    It is recommended to send an ``encoding_type`` argument to Natural
-    Language methods, so they provide useful offsets for the data they return.
-    While the correct value varies by environment, in Python you *usually*
-    want ``UTF32``.
-
-
-Analyze Entity Sentiment
-************************
-
-The :meth:`~.language_v1.LanguageServiceClient.analyze_entity_sentiment`
-method is effectively the amalgamation of
-:meth:`~.language_v1.LanguageServiceClient.analyze_entities` and
-:meth:`~.language_v1.LanguageServiceClient.analyze_sentiment`.
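A note for reviewers: the repeated ``encoding_type`` advice in the deleted docs can be made concrete without calling the API at all. Python ``str`` indices count Unicode code points, which correspond one-to-one with UTF-32 code units, while UTF-8 and UTF-16 offsets drift as soon as non-ASCII characters appear. A self-contained illustration (the sample text is made up):

```python
# Why the docs recommend encoding_type='UTF32' for Python: the API reports
# entity offsets in units of the encoding you request, and Python string
# indices count Unicode code points, which match UTF-32 units one-to-one.
text = "Caffè 🚀 Caravaggio"
target = "Caravaggio"

# Offset in code points (== UTF-32 units): usable directly with text[...].
utf32_offset = text.index(target)

# The same position measured in UTF-8 bytes and UTF-16 code units differs,
# because 'è' is 2 UTF-8 bytes and '🚀' is 4 bytes / a surrogate pair.
utf8_offset = len(text[:utf32_offset].encode("utf-8"))
utf16_offset = len(text[:utf32_offset].encode("utf-16-le")) // 2

print(utf32_offset, utf8_offset, utf16_offset)  # prints: 8 12 9
assert text[utf32_offset:utf32_offset + len(target)] == target
```

With ``UTF8`` or ``UTF16`` offsets you would have to re-encode and slice bytes to recover the entity text; with ``UTF32`` the offset is a plain string index.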
-This method returns a
-:class:`~.language_v1.types.AnalyzeEntitySentimentResponse`.
-
-.. code-block:: python
-
-    >>> document = language.types.Document(
-    ...     content='Mona said that jogging is very fun.',
-    ...     type='PLAIN_TEXT',
-    ... )
-    >>> response = client.analyze_entity_sentiment(
-    ...     document=document,
-    ...     encoding_type='UTF32',
-    ... )
-    >>> entities = response.entities
-    >>> entities[0].name
-    'Mona'
-    >>> entities[1].name
-    'jogging'
-    >>> entities[1].sentiment.magnitude
-    0.8
-    >>> entities[1].sentiment.score
-    0.8
-
-.. note::
-
-    It is recommended to send an ``encoding_type`` argument to Natural
-    Language methods, so they provide useful offsets for the data they return.
-    While the correct value varies by environment, in Python you *usually*
-    want ``UTF32``.
-
-
-Annotate Text
-*************
-
-The :meth:`~.language_v1.LanguageServiceClient.annotate_text` method
-analyzes a document and is intended for users who are familiar with
-machine learning and need in-depth text features to build upon. This method
-returns a :class:`~.language_v1.types.AnnotateTextResponse`.
diff --git a/packages/google-cloud-language/noxfile.py b/packages/google-cloud-language/noxfile.py
index f041f1f5a4ae..2a2001c49998 100644
--- a/packages/google-cloud-language/noxfile.py
+++ b/packages/google-cloud-language/noxfile.py
@@ -175,7 +175,7 @@ def cover(session):
     test runs (not system test runs), and then erases coverage data.
""" session.install("coverage", "pytest-cov") - session.run("coverage", "report", "--show-missing", "--fail-under=98") + session.run("coverage", "report", "--show-missing", "--fail-under=100") session.run("coverage", "erase") diff --git a/packages/google-cloud-language/owlbot.py b/packages/google-cloud-language/owlbot.py deleted file mode 100644 index 11b0c990ec45..000000000000 --- a/packages/google-cloud-language/owlbot.py +++ /dev/null @@ -1,51 +0,0 @@ -# Copyright 2018 Google LLC -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -"""This script is used to synthesize generated parts of this library.""" - -import synthtool as s -from synthtool import gcp -from synthtool.languages import python - -common = gcp.CommonTemplates() -default_version = "v1" - -for library in s.get_staging_dirs(default_version): - # Work around generator issue https://github.com/googleapis/gapic-generator-python/issues/902 - s.replace(library / f"google/cloud/language_{library.name}/types/language_service.py", - r"""Represents the input to API methods. 
- Attributes:""", - r"""Represents the input to API methods.\n - Attributes:""") - - s.move(library, excludes=["docs/index.rst", "README.rst", "setup.py"]) - -s.remove_staging_dirs() - -# ---------------------------------------------------------------------------- -# Add templated files -# ---------------------------------------------------------------------------- -templated_files = common.py_library(cov_level=98, samples=True, microgenerator=True,) - -s.move(templated_files, excludes=['.coveragerc']) - -s.shell.run(["nox", "-s", "blacken"], hide_output=False) - -# ---------------------------------------------------------------------------- -# Samples templates -# ---------------------------------------------------------------------------- - -python.py_samples(skip_readmes=True) - -s.shell.run(["nox", "-s", "blacken"], hide_output=False)
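For readers unfamiliar with synthtool, the deleted ``s.replace(...)`` call above performs a regex substitution over the generated file, and — assuming it delegates to ``re.sub`` as its documentation describes — the literal ``\n`` in the replacement raw string is expanded by ``re.sub``'s replacement-template processing into a real newline. The net effect is simply a blank line inserted between the docstring summary and the ``Attributes:`` heading, which Sphinx needs in order to render the generated docstrings (gapic-generator-python issue #902). Reproduced here with ``re.sub`` directly, no synthtool required:

```python
import re

# A docstring fragment shaped like the generator's broken output: the
# summary line runs straight into the "Attributes:" block.
generated = "Represents the input to API methods.\n    Attributes:"

# The same pattern/replacement pair as the deleted owlbot.py workaround.
# In the replacement template, the raw-string "\n" is expanded by re.sub
# into a newline, yielding a blank line after the summary.
patched = re.sub(
    r"""Represents the input to API methods.
    Attributes:""",
    r"""Represents the input to API methods.\n
    Attributes:""",
    generated,
)

print(repr(patched))
```

Running this shows the blank line that the workaround introduces between the summary and ``Attributes:``.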