diff --git a/sdk/textanalytics/azure-ai-textanalytics/samples/README.md b/sdk/textanalytics/azure-ai-textanalytics/samples/README.md index 3b675e31efa0..5fe2f816e3a5 100644 --- a/sdk/textanalytics/azure-ai-textanalytics/samples/README.md +++ b/sdk/textanalytics/azure-ai-textanalytics/samples/README.md @@ -1,41 +1,49 @@ --- -topic: sample +page_type: sample languages: - python products: - azure - azure-ai-textanalytics +urlFragment: textanalytics-samples --- # Samples for Azure Text Analytics client library for Python These code samples show common scenario operations with the Azure Text Analytics client library. -The async versions of the samples (the python sample files appended with `_async`) show asynchronous operations -with Text Analytics and require Python 3.5 or later. +The async versions of the samples require Python 3.5 or later. -Authenticate the client with a Cognitive Services/Text Analytics subscription key or a token credential from [azure-identity](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/identity/azure-identity): -* [sample_authentication.py](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/textanalytics/azure-ai-textanalytics/samples/sample_authentication.py) ([async version](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/textanalytics/azure-ai-textanalytics/samples/async_samples/sample_authentication_async.py)) +You can authenticate your client with a Cognitive Services/Text Analytics API key or through Azure Active Directory with a token credential from [azure-identity][azure_identity]: +* See [sample_authentication.py][sample_authentication] and [sample_authentication_async.py][sample_authentication_async] for how to authenticate in the above cases. -In a batch of documents: -* Detect language: [sample_detect_language.py](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/textanalytics/azure-ai-textanalytics/samples/sample_detect_language.py) ([async version](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/textanalytics/azure-ai-textanalytics/samples/async_samples/sample_detect_language_async.py)) -* Recognize entities: [sample_recognize_entities.py](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/textanalytics/azure-ai-textanalytics/samples/sample_recognize_entities.py) ([async version](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/textanalytics/azure-ai-textanalytics/samples/async_samples/sample_recognize_entities_async.py)) -* Recognize linked entities: [sample_recognize_linked_entities.py](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/textanalytics/azure-ai-textanalytics/samples/sample_recognize_linked_entities.py) ([async version](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/textanalytics/azure-ai-textanalytics/samples/async_samples/sample_recognize_linked_entities_async.py)) -* Recognize personally identifiable information: [sample_recognize_pii_entities.py](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/textanalytics/azure-ai-textanalytics/samples/sample_recognize_pii_entities.py) ([async version](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/textanalytics/azure-ai-textanalytics/samples/async_samples/sample_recognize_pii_entities_async.py)) -* Extract key phrases: [sample_extract_key_phrases.py](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/textanalytics/azure-ai-textanalytics/samples/sample_extract_key_phrases.py) ([async 
version](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/textanalytics/azure-ai-textanalytics/samples/async_samples/sample_extract_key_phrases_async.py)) -* Analyze sentiment: [sample_analyze_sentiment.py](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/textanalytics/azure-ai-textanalytics/samples/sample_analyze_sentiment.py) ([async version](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/textanalytics/azure-ai-textanalytics/samples/async_samples/sample_analyze_sentiment_async.py)) +These sample programs show common scenarios for the Text Analytics client's offerings. + +|**File Name**|**Description**| +|----------------|-------------| +|[sample_detect_language.py][detect_language] and [sample_detect_language_async.py][detect_language_async]|Detect language in documents| +|[sample_recognize_entities.py][recognize_entities] and [sample_recognize_entities_async.py][recognize_entities_async]|Recognize named entities in documents| +|[sample_recognize_linked_entities.py][recognize_linked_entities] and [sample_recognize_linked_entities_async.py][recognize_linked_entities_async]|Recognize linked entities in documents| +|[sample_recognize_pii_entities.py][recognize_pii_entities] and [sample_recognize_pii_entities_async.py][recognize_pii_entities_async]|Recognize personally identifiable information in documents| +|[sample_extract_key_phrases.py][extract_key_phrases] and [sample_extract_key_phrases_async.py][extract_key_phrases_async]|Extract key phrases from documents| +|[sample_analyze_sentiment.py][analyze_sentiment] and [sample_analyze_sentiment_async.py][analyze_sentiment_async]|Analyze the sentiment of documents| +|[sample_alternative_document_input.py][sample_alternative_document_input] and [sample_alternative_document_input_async.py][sample_alternative_document_input_async]|Pass documents to an endpoint using dicts| ## Prerequisites * Python 2.7, or 3.5 or later is required to use this package (3.5 or later if using asyncio) -* You must have an [Azure subscription](https://azure.microsoft.com/free/) and an -[Azure Text Analytics account](https://docs.microsoft.com/azure/cognitive-services/cognitive-services-apis-create-account?tabs=singleservice%2Cwindows) to run these samples. +* You must have an [Azure subscription][azure_subscription] and an +[Azure Text Analytics account][azure_text_analytics_account] to run these samples. ## Setup -1. Install the Azure Text Analytics client library for Python with [pip](https://pypi.org/project/pip/): +1. Install the Azure Text Analytics client library for Python with [pip][pip]: ```bash pip install azure-ai-textanalytics --pre ``` +* If authenticating with Azure Active Directory, make sure you have [azure-identity][azure_identity_pip] installed: + ```bash + pip install azure-identity + ``` 2. Clone or download this sample repository 3. Open the sample folder in Visual Studio Code or your IDE of choice. @@ -48,5 +56,34 @@ pip install azure-ai-textanalytics --pre ## Next steps -Check out the [API reference documentation](https://westus.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v3-0-Preview-1/operations/Languages) to learn more about +Check out the [API reference documentation][api_reference_documentation] to learn more about what you can do with the Azure Text Analytics client library. 
+ +|**Advanced Sample File Name**|**Description**| +|----------------|-------------| +|[sample_get_detailed_diagnostics_information.py][get_detailed_diagnostics_information] and [sample_get_detailed_diagnostics_information_async.py][get_detailed_diagnostics_information_async]|Get the request batch statistics, model version, and raw response through a callback| + +[azure_identity]: https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/identity/azure-identity +[sample_authentication]: https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/textanalytics/azure-ai-textanalytics/samples/sample_authentication.py +[sample_authentication_async]: https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/textanalytics/azure-ai-textanalytics/samples/async_samples/sample_authentication_async.py +[detect_language]: https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/textanalytics/azure-ai-textanalytics/samples/sample_detect_language.py +[detect_language_async]: https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/textanalytics/azure-ai-textanalytics/samples/async_samples/sample_detect_language_async.py +[recognize_entities]: https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/textanalytics/azure-ai-textanalytics/samples/sample_recognize_entities.py +[recognize_entities_async]: https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/textanalytics/azure-ai-textanalytics/samples/async_samples/sample_recognize_entities_async.py +[recognize_linked_entities]: https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/textanalytics/azure-ai-textanalytics/samples/sample_recognize_linked_entities.py +[recognize_linked_entities_async]: https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/textanalytics/azure-ai-textanalytics/samples/async_samples/sample_recognize_linked_entities_async.py +[recognize_pii_entities]: https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/textanalytics/azure-ai-textanalytics/samples/sample_recognize_pii_entities.py +[recognize_pii_entities_async]: https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/textanalytics/azure-ai-textanalytics/samples/async_samples/sample_recognize_pii_entities_async.py +[extract_key_phrases]: https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/textanalytics/azure-ai-textanalytics/samples/sample_extract_key_phrases.py +[extract_key_phrases_async]: https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/textanalytics/azure-ai-textanalytics/samples/async_samples/sample_extract_key_phrases_async.py +[analyze_sentiment]: https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/textanalytics/azure-ai-textanalytics/samples/sample_analyze_sentiment.py +[analyze_sentiment_async]: https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/textanalytics/azure-ai-textanalytics/samples/async_samples/sample_analyze_sentiment_async.py +[get_detailed_diagnostics_information]: https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/textanalytics/azure-ai-textanalytics/samples/sample_get_detailed_diagnostics_information.py +[get_detailed_diagnostics_information_async]: https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/textanalytics/azure-ai-textanalytics/samples/async_samples/sample_get_detailed_diagnostics_information_async.py +[sample_alternative_document_input]: https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/textanalytics/azure-ai-textanalytics/samples/sample_alternative_document_input.py +[sample_alternative_document_input_async]: 
https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/textanalytics/azure-ai-textanalytics/samples/async_samples/sample_alternative_document_input_async.py +[pip]: https://pypi.org/project/pip/ +[azure_subscription]: https://azure.microsoft.com/free/ +[azure_text_analytics_account]: https://docs.microsoft.com/azure/cognitive-services/cognitive-services-apis-create-account?tabs=singleservice%2Cwindows +[azure_identity_pip]: https://pypi.org/project/azure-identity/ +[api_reference_documentation]: https://aka.ms/azsdk-python-textanalytics-ref-docs \ No newline at end of file diff --git a/sdk/textanalytics/azure-ai-textanalytics/samples/async_samples/sample_alternative_document_input_async.py b/sdk/textanalytics/azure-ai-textanalytics/samples/async_samples/sample_alternative_document_input_async.py new file mode 100644 index 000000000000..7bfb10b33ef1 --- /dev/null +++ b/sdk/textanalytics/azure-ai-textanalytics/samples/async_samples/sample_alternative_document_input_async.py @@ -0,0 +1,66 @@ +# coding: utf-8 + +# ------------------------------------------------------------------------- +# Copyright (c) Microsoft Corporation. All rights reserved. +# Licensed under the MIT License. See License.txt in the project root for +# license information. +# -------------------------------------------------------------------------- + +""" +FILE: sample_alternative_document_input_async.py + +DESCRIPTION: + This sample shows an alternative way to pass in the input documents. + Here we specify our own IDs and the text language along with the text. + +USAGE: + python sample_alternative_document_input_async.py + + Set the environment variables with your own values before running the sample: + 1) AZURE_TEXT_ANALYTICS_ENDPOINT - the endpoint to your Cognitive Services resource. + 2) AZURE_TEXT_ANALYTICS_KEY - your Text Analytics subscription key +""" + +import os +import asyncio + + +class AlternativeDocumentInputSampleAsync(object): + + endpoint = os.getenv("AZURE_TEXT_ANALYTICS_ENDPOINT") + key = os.getenv("AZURE_TEXT_ANALYTICS_KEY") + + async def alternative_document_input(self): + from azure.ai.textanalytics.aio import TextAnalyticsClient + from azure.ai.textanalytics import TextAnalyticsApiKeyCredential + text_analytics_client = TextAnalyticsClient(endpoint=self.endpoint, credential=TextAnalyticsApiKeyCredential(self.key)) + + documents = [ + {"id": "0", "language": "en", "text": "I had the best day of my life."}, + {"id": "1", "language": "en", + "text": "This was a waste of my time. The speaker put me to sleep."}, + {"id": "2", "language": "es", "text": "No tengo dinero ni nada que dar..."}, + {"id": "3", "language": "fr", + "text": "L'hôtel n'était pas très confortable. 
L'éclairage était trop sombre."} + ] + async with text_analytics_client: + result = await text_analytics_client.detect_language(documents) + + for idx, doc in enumerate(result): + if not doc.is_error: + print("Document text: {}".format(documents[idx])) + print("Language detected: {}".format(doc.primary_language.name)) + print("ISO6391 name: {}".format(doc.primary_language.iso6391_name)) + print("Confidence score: {}\n".format(doc.primary_language.score)) + if doc.is_error: + print(doc.id, doc.error) + + +async def main(): + sample = AlternativeDocumentInputSampleAsync() + await sample.alternative_document_input() + + +if __name__ == '__main__': + loop = asyncio.get_event_loop() + loop.run_until_complete(main()) diff --git a/sdk/textanalytics/azure-ai-textanalytics/samples/async_samples/sample_analyze_sentiment_async.py b/sdk/textanalytics/azure-ai-textanalytics/samples/async_samples/sample_analyze_sentiment_async.py index 3daac5758bb7..8d901b861319 100644 --- a/sdk/textanalytics/azure-ai-textanalytics/samples/async_samples/sample_analyze_sentiment_async.py +++ b/sdk/textanalytics/azure-ai-textanalytics/samples/async_samples/sample_analyze_sentiment_async.py @@ -10,15 +10,15 @@ FILE: sample_analyze_sentiment_async.py DESCRIPTION: - This sample demonstrates how to analyze sentiment in a batch of documents. + This sample demonstrates how to analyze sentiment in documents. An overall and per-sentence sentiment is returned. USAGE: python sample_analyze_sentiment_async.py Set the environment variables with your own values before running the sample: - 1) AZURE_TEXT_ANALYTICS_ENDPOINT - the endpoint to your cognitive services resource. - 2) AZURE_TEXT_ANALYTICS_KEY - your text analytics subscription key + 1) AZURE_TEXT_ANALYTICS_ENDPOINT - the endpoint to your Cognitive Services resource. + 2) AZURE_TEXT_ANALYTICS_KEY - your Text Analytics subscription key """ import os @@ -51,63 +51,24 @@ async def analyze_sentiment_async(self): print("Document text: {}".format(documents[idx])) print("Overall sentiment: {}".format(doc.sentiment)) # [END batch_analyze_sentiment_async] - print("Overall confidence scores: positive={0:.3f}; neutral={1:.3f}; negative={2:.3f} \n".format( + print("Overall confidence scores: positive={}; neutral={}; negative={} \n".format( doc.confidence_scores.positive, doc.confidence_scores.neutral, doc.confidence_scores.negative, )) for idx, sentence in enumerate(doc.sentences): print("Sentence {} sentiment: {}".format(idx+1, sentence.sentiment)) - print("Sentence confidence scores: positive={0:.3f}; neutral={1:.3f}; negative={2:.3f}".format( + print("Sentence confidence scores: positive={}; neutral={}; negative={}".format( sentence.confidence_scores.positive, sentence.confidence_scores.neutral, sentence.confidence_scores.negative, )) - print("Offset: {}".format(sentence.grapheme_offset)) - print("Length: {}\n".format(sentence.grapheme_length)) print("------------------------------------") - async def alternative_scenario_analyze_sentiment_async(self): - """This sample demonstrates how to retrieve batch statistics, the - model version used, and the raw response returned from the service. - - It additionally shows an alternative way to pass in the input documents - using a list[TextDocumentInput] and supplying your own IDs and language hints along - with the text. 
- """ - from azure.ai.textanalytics.aio import TextAnalyticsClient - from azure.ai.textanalytics import TextAnalyticsApiKeyCredential - text_analytics_client = TextAnalyticsClient(endpoint=self.endpoint, credential=TextAnalyticsApiKeyCredential(self.key)) - - documents = [ - {"id": "0", "language": "en", "text": "I had the best day of my life."}, - {"id": "1", "language": "en", - "text": "This was a waste of my time. The speaker put me to sleep."}, - {"id": "2", "language": "es", "text": "No tengo dinero ni nada que dar..."}, - {"id": "3", "language": "fr", - "text": "L'hôtel n'était pas très confortable. L'éclairage était trop sombre."} - ] - - extras = [] - - def callback(resp): - extras.append(resp.statistics) - extras.append(resp.model_version) - extras.append(resp.raw_response) - - async with text_analytics_client: - result = await text_analytics_client.analyze_sentiment( - documents, - show_stats=True, - model_version="latest", - raw_response_hook=callback - ) - async def main(): sample = AnalyzeSentimentSampleAsync() await sample.analyze_sentiment_async() - await sample.alternative_scenario_analyze_sentiment_async() if __name__ == '__main__': diff --git a/sdk/textanalytics/azure-ai-textanalytics/samples/async_samples/sample_authentication_async.py b/sdk/textanalytics/azure-ai-textanalytics/samples/async_samples/sample_authentication_async.py index 28e833cc6c74..2983d4615f57 100644 --- a/sdk/textanalytics/azure-ai-textanalytics/samples/async_samples/sample_authentication_async.py +++ b/sdk/textanalytics/azure-ai-textanalytics/samples/async_samples/sample_authentication_async.py @@ -10,11 +10,11 @@ FILE: sample_authentication_async.py DESCRIPTION: - This sample demonstrates how to authenticate with the text analytics service. + This sample demonstrates how to authenticate to the Text Analytics service. There are two supported methods of authentication: - 1) Use a cognitive services/text analytics API key with TextAnalyticsApiKeyCredential - 2) Use a token credential to authenticate with Azure Active Directory + 1) Use a Cognitive Services/Text Analytics API key with TextAnalyticsApiKeyCredential + 2) Use a token credential from azure-identity to authenticate with Azure Active Directory See more details about authentication here: https://docs.microsoft.com/azure/cognitive-services/authentication @@ -23,8 +23,8 @@ python sample_authentication_async.py Set the environment variables with your own values before running the sample: - 1) AZURE_TEXT_ANALYTICS_ENDPOINT - the endpoint to your cognitive services resource. - 2) AZURE_TEXT_ANALYTICS_KEY - your cognitive services/text analytics API key + 1) AZURE_TEXT_ANALYTICS_ENDPOINT - the endpoint to your Cognitive Services resource. + 2) AZURE_TEXT_ANALYTICS_KEY - your Cognitive Services/Text Analytics API key 3) AZURE_CLIENT_ID - the client ID of your active directory application. 4) AZURE_TENANT_ID - the tenant ID of your active directory application. 5) AZURE_CLIENT_SECRET - the secret of your active directory application. @@ -37,6 +37,7 @@ class AuthenticationSampleAsync(object): async def authentication_with_api_key_credential_async(self): + print("\n.. 
authentication_with_api_key_credential_async") # [START create_ta_client_with_key_async] from azure.ai.textanalytics.aio import TextAnalyticsClient from azure.ai.textanalytics import TextAnalyticsApiKeyCredential @@ -54,9 +55,10 @@ async def authentication_with_api_key_credential_async(self): print("Confidence score: {}".format(result[0].primary_language.score)) async def authentication_with_azure_active_directory_async(self): - """DefaultAzureCredential will use the values from the environment + """DefaultAzureCredential will use the values from these environment variables: AZURE_CLIENT_ID, AZURE_TENANT_ID, AZURE_CLIENT_SECRET """ + print("\n.. authentication_with_azure_active_directory_async") # [START create_ta_client_with_aad_async] from azure.ai.textanalytics.aio import TextAnalyticsClient from azure.identity.aio import DefaultAzureCredential diff --git a/sdk/textanalytics/azure-ai-textanalytics/samples/async_samples/sample_detect_language_async.py b/sdk/textanalytics/azure-ai-textanalytics/samples/async_samples/sample_detect_language_async.py index 8ae20b0b55df..ea3d01f0a4f8 100644 --- a/sdk/textanalytics/azure-ai-textanalytics/samples/async_samples/sample_detect_language_async.py +++ b/sdk/textanalytics/azure-ai-textanalytics/samples/async_samples/sample_detect_language_async.py @@ -17,8 +17,8 @@ python sample_detect_language_async.py Set the environment variables with your own values before running the sample: - 1) AZURE_TEXT_ANALYTICS_ENDPOINT - the endpoint to your cognitive services resource. - 2) AZURE_TEXT_ANALYTICS_KEY - your text analytics subscription key + 1) AZURE_TEXT_ANALYTICS_ENDPOINT - the endpoint to your Cognitive Services resource. + 2) AZURE_TEXT_ANALYTICS_KEY - your Text Analytics subscription key """ import os @@ -55,46 +55,10 @@ async def detect_language_async(self): print(doc.id, doc.error) # [END batch_detect_language_async] - async def alternative_scenario_detect_language_async(self): - """This sample demonstrates how to retrieve batch statistics, the - model version used, and the raw response returned from the service. - - It additionally shows an alternative way to pass in the input documents - using a list[DetectLanguageInput] and supplying your own IDs and country hints along - with the text. 
- """ - from azure.ai.textanalytics.aio import TextAnalyticsClient - from azure.ai.textanalytics import TextAnalyticsApiKeyCredential - text_analytics_client = TextAnalyticsClient(endpoint=self.endpoint, credential=TextAnalyticsApiKeyCredential(self.key)) - - documents = [ - {"id": "0", "country_hint": "US", "text": "This is a document written in English."}, - {"id": "1", "country_hint": "MX", "text": "Este es un document escrito en Español."}, - {"id": "2", "country_hint": "CN", "text": "这是一个用中文写的文件"}, - {"id": "3", "country_hint": "DE", "text": "Dies ist ein Dokument in englischer Sprache."}, - {"id": "4", "country_hint": "SE", "text": "Detta är ett dokument skrivet på engelska."} - ] - - extras = [] - - def callback(resp): - extras.append(resp.statistics) - extras.append(resp.model_version) - extras.append(resp.raw_response) - - async with text_analytics_client: - result = await text_analytics_client.detect_language( - documents, - show_stats=True, - model_version="latest", - raw_response_hook=callback - ) - async def main(): sample = DetectLanguageSampleAsync() await sample.detect_language_async() - await sample.alternative_scenario_detect_language_async() if __name__ == '__main__': diff --git a/sdk/textanalytics/azure-ai-textanalytics/samples/async_samples/sample_extract_key_phrases_async.py b/sdk/textanalytics/azure-ai-textanalytics/samples/async_samples/sample_extract_key_phrases_async.py index 33d36aa024dd..1a5330bd3e3e 100644 --- a/sdk/textanalytics/azure-ai-textanalytics/samples/async_samples/sample_extract_key_phrases_async.py +++ b/sdk/textanalytics/azure-ai-textanalytics/samples/async_samples/sample_extract_key_phrases_async.py @@ -16,8 +16,8 @@ python sample_extract_key_phrases_async.py Set the environment variables with your own values before running the sample: - 1) AZURE_TEXT_ANALYTICS_ENDPOINT - the endpoint to your cognitive services resource. - 2) AZURE_TEXT_ANALYTICS_KEY - your text analytics subscription key + 1) AZURE_TEXT_ANALYTICS_ENDPOINT - the endpoint to your Cognitive Services resource. + 2) AZURE_TEXT_ANALYTICS_KEY - your Text Analytics subscription key """ import os @@ -50,45 +50,10 @@ async def extract_key_phrases_async(self): print(doc.id, doc.error) # [END batch_extract_key_phrases_async] - async def alternative_scenario_extract_key_phrases_async(self): - """This sample demonstrates how to retrieve batch statistics, the - model version used, and the raw response returned from the service. - - It additionally shows an alternative way to pass in the input documents - using a list[TextDocumentInput] and supplying your own IDs and language hints along - with the text. 
- """ - from azure.ai.textanalytics.aio import TextAnalyticsClient - from azure.ai.textanalytics import TextAnalyticsApiKeyCredential - text_analytics_client = TextAnalyticsClient(endpoint=self.endpoint, credential=TextAnalyticsApiKeyCredential(self.key)) - - documents = [ - {"id": "0", "language": "en", - "text": "Redmond is a city in King County, Washington, United States, located 15 miles east of Seattle."}, - {"id": "1", "language": "en", - "text": "I need to take my cat to the veterinarian."}, - {"id": "2", "language": "en", "text": "I will travel to South America in the summer."} - ] - extras = [] - - def callback(resp): - extras.append(resp.statistics) - extras.append(resp.model_version) - extras.append(resp.raw_response) - - async with text_analytics_client: - result = await text_analytics_client.extract_key_phrases( - documents, - show_stats=True, - model_version="latest", - raw_response_hook=callback - ) - async def main(): sample = ExtractKeyPhrasesSampleAsync() await sample.extract_key_phrases_async() - await sample.alternative_scenario_extract_key_phrases_async() if __name__ == '__main__': diff --git a/sdk/textanalytics/azure-ai-textanalytics/samples/async_samples/sample_get_detailed_diagnostics_information_async.py b/sdk/textanalytics/azure-ai-textanalytics/samples/async_samples/sample_get_detailed_diagnostics_information_async.py new file mode 100644 index 000000000000..d5bff03a8f1e --- /dev/null +++ b/sdk/textanalytics/azure-ai-textanalytics/samples/async_samples/sample_get_detailed_diagnostics_information_async.py @@ -0,0 +1,71 @@ +# coding: utf-8 + +# ------------------------------------------------------------------------- +# Copyright (c) Microsoft Corporation. All rights reserved. +# Licensed under the MIT License. See License.txt in the project root for +# license information. +# -------------------------------------------------------------------------- + +""" +FILE: sample_get_detailed_diagnostics_information_async.py + +DESCRIPTION: + This sample demonstrates how to retrieve batch statistics, the + model version used, and the raw response returned from the service. + +USAGE: + python sample_get_detailed_diagnostics_information_async.py + + Set the environment variables with your own values before running the sample: + 1) AZURE_TEXT_ANALYTICS_ENDPOINT - the endpoint to your Cognitive Services resource. + 2) AZURE_TEXT_ANALYTICS_KEY - your Text Analytics subscription key +""" + +import os +import asyncio +import logging + +_LOGGER = logging.getLogger(__name__) + +class GetDetailedDiagnosticsInformationSampleAsync(object): + + endpoint = os.getenv("AZURE_TEXT_ANALYTICS_ENDPOINT") + key = os.getenv("AZURE_TEXT_ANALYTICS_KEY") + + async def get_detailed_diagnostics_information_async(self): + from azure.ai.textanalytics.aio import TextAnalyticsClient + from azure.ai.textanalytics import TextAnalyticsApiKeyCredential + text_analytics_client = TextAnalyticsClient(endpoint=self.endpoint, credential=TextAnalyticsApiKeyCredential(self.key)) + + documents = [ + "I had the best day of my life.", + "This was a waste of my time. The speaker put me to sleep.", + "No tengo dinero ni nada que dar...", + "L'hôtel n'était pas très confortable. L'éclairage était trop sombre." 
+ ] + + def callback(resp): + _LOGGER.info("document_count: {}".format(resp.statistics["document_count"])) + _LOGGER.info("valid_document_count: {}".format(resp.statistics["valid_document_count"])) + _LOGGER.info("erroneous_document_count: {}".format(resp.statistics["erroneous_document_count"])) + _LOGGER.info("transaction_count: {}".format(resp.statistics["transaction_count"])) + _LOGGER.info("model_version: {}".format(resp.model_version)) + _LOGGER.info("raw_response: {}".format(resp.raw_response)) + + async with text_analytics_client: + result = await text_analytics_client.analyze_sentiment( + documents, + show_stats=True, + model_version="latest", + raw_response_hook=callback + ) + + +async def main(): + sample = GetDetailedDiagnosticsInformationSampleAsync() + await sample.get_detailed_diagnostics_information_async() + + +if __name__ == '__main__': + loop = asyncio.get_event_loop() + loop.run_until_complete(main()) diff --git a/sdk/textanalytics/azure-ai-textanalytics/samples/async_samples/sample_recognize_entities_async.py b/sdk/textanalytics/azure-ai-textanalytics/samples/async_samples/sample_recognize_entities_async.py index 52c6c25f9770..ec0c6b5d2d50 100644 --- a/sdk/textanalytics/azure-ai-textanalytics/samples/async_samples/sample_recognize_entities_async.py +++ b/sdk/textanalytics/azure-ai-textanalytics/samples/async_samples/sample_recognize_entities_async.py @@ -16,8 +16,8 @@ python sample_recognize_entities_async.py Set the environment variables with your own values before running the sample: - 1) AZURE_TEXT_ANALYTICS_ENDPOINT - the endpoint to your cognitive services resource. - 2) AZURE_TEXT_ANALYTICS_KEY - your text analytics subscription key + 1) AZURE_TEXT_ANALYTICS_ENDPOINT - the endpoint to your Cognitive Services resource. + 2) AZURE_TEXT_ANALYTICS_KEY - your Text Analytics subscription key """ import os @@ -49,47 +49,13 @@ async def recognize_entities_async(self): print("\nDocument text: {}".format(documents[idx])) for entity in doc.entities: print("Entity: \t", entity.text, "\tCategory: \t", entity.category, - "\tConfidence Score: \t", round(entity.score, 3)) + "\tConfidence Score: \t", entity.score) # [END batch_recognize_entities_async] - async def alternative_scenario_recognize_entities_async(self): - """This sample demonstrates how to retrieve batch statistics, the - model version used, and the raw response returned from the service. - - It additionally shows an alternative way to pass in the input documents - using a list[TextDocumentInput] and supplying your own IDs and language hints along - with the text. 
- """ - from azure.ai.textanalytics.aio import TextAnalyticsClient - from azure.ai.textanalytics import TextAnalyticsApiKeyCredential - text_analytics_client = TextAnalyticsClient(endpoint=self.endpoint, credential=TextAnalyticsApiKeyCredential(self.key)) - - documents = [ - {"id": "0", "language": "en", "text": "Microsoft was founded by Bill Gates and Paul Allen."}, - {"id": "1", "language": "de", "text": "I had a wonderful trip to Seattle last week."}, - {"id": "2", "language": "es", "text": "I visited the Space Needle 2 times."}, - ] - - extras = [] - - def callback(resp): - extras.append(resp.statistics) - extras.append(resp.model_version) - extras.append(resp.raw_response) - - async with text_analytics_client: - result = await text_analytics_client.recognize_entities( - documents, - show_stats=True, - model_version="latest", - raw_response_hook=callback - ) - async def main(): sample = RecognizeEntitiesSampleAsync() await sample.recognize_entities_async() - await sample.alternative_scenario_recognize_entities_async() if __name__ == '__main__': diff --git a/sdk/textanalytics/azure-ai-textanalytics/samples/async_samples/sample_recognize_linked_entities_async.py b/sdk/textanalytics/azure-ai-textanalytics/samples/async_samples/sample_recognize_linked_entities_async.py index eea122ae016c..be294a6fb205 100644 --- a/sdk/textanalytics/azure-ai-textanalytics/samples/async_samples/sample_recognize_linked_entities_async.py +++ b/sdk/textanalytics/azure-ai-textanalytics/samples/async_samples/sample_recognize_linked_entities_async.py @@ -18,8 +18,8 @@ python sample_recognize_linked_entities_async.py Set the environment variables with your own values before running the sample: - 1) AZURE_TEXT_ANALYTICS_ENDPOINT - the endpoint to your cognitive services resource. - 2) AZURE_TEXT_ANALYTICS_KEY - your text analytics subscription key + 1) AZURE_TEXT_ANALYTICS_ENDPOINT - the endpoint to your Cognitive Services resource. + 2) AZURE_TEXT_ANALYTICS_KEY - your Text Analytics subscription key """ import os @@ -54,50 +54,15 @@ async def recognize_linked_entities_async(self): print("Url: {}".format(entity.url)) print("Data Source: {}".format(entity.data_source)) for match in entity.matches: - print("Score: {0:.3f}".format(match.score)) - print("Offset: {}".format(match.grapheme_offset)) - print("Length: {}\n".format(match.grapheme_length)) + print("Confidence Score: {}".format(match.score)) + print("Entity as appears in request: {}".format(match.text)) print("------------------------------------------") # [END batch_recognize_linked_entities_async] - async def alternative_scenario_recognize_linked_entities_async(self): - """This sample demonstrates how to retrieve batch statistics, the - model version used, and the raw response returned from the service. - - It additionally shows an alternative way to pass in the input documents - using a list[TextDocumentInput] and supplying your own IDs and language hints along - with the text. - """ - from azure.ai.textanalytics.aio import TextAnalyticsClient - from azure.ai.textanalytics import TextAnalyticsApiKeyCredential - text_analytics_client = TextAnalyticsClient(endpoint=self.endpoint, credential=TextAnalyticsApiKeyCredential(self.key)) - - documents = [ - {"id": "0", "language": "en", "text": "Microsoft moved its headquarters to Bellevue, Washington in January 1979."}, - {"id": "1", "language": "en", "text": "Steve Ballmer stepped down as CEO of Microsoft and was succeeded by Satya Nadella."}, - {"id": "2", "language": "es", "text": "Microsoft superó a Apple Inc. 
como la compañía más valiosa que cotiza en bolsa en el mundo."}, - ] - - extras = [] - - def callback(resp): - extras.append(resp.statistics) - extras.append(resp.model_version) - extras.append(resp.raw_response) - - async with text_analytics_client: - result = await text_analytics_client.recognize_linked_entities( - documents, - show_stats=True, - model_version="latest", - raw_response_hook=callback - ) - async def main(): sample = RecognizeLinkedEntitiesSampleAsync() await sample.recognize_linked_entities_async() - await sample.alternative_scenario_recognize_linked_entities_async() if __name__ == '__main__': loop = asyncio.get_event_loop() diff --git a/sdk/textanalytics/azure-ai-textanalytics/samples/async_samples/sample_recognize_pii_entities_async.py b/sdk/textanalytics/azure-ai-textanalytics/samples/async_samples/sample_recognize_pii_entities_async.py index 69b49804d3bc..8ac60e319478 100644 --- a/sdk/textanalytics/azure-ai-textanalytics/samples/async_samples/sample_recognize_pii_entities_async.py +++ b/sdk/textanalytics/azure-ai-textanalytics/samples/async_samples/sample_recognize_pii_entities_async.py @@ -16,8 +16,8 @@ python sample_recognize_pii_entities_async.py Set the environment variables with your own values before running the sample: - 1) AZURE_TEXT_ANALYTICS_ENDPOINT - the endpoint to your cognitive services resource. - 2) AZURE_TEXT_ANALYTICS_KEY - your text analytics subscription key + 1) AZURE_TEXT_ANALYTICS_ENDPOINT - the endpoint to your Cognitive Services resource. + 2) AZURE_TEXT_ANALYTICS_KEY - your Text Analytics subscription key """ import os @@ -53,44 +53,10 @@ async def recognize_pii_entities_async(self): print("Confidence Score: {}\n".format(entity.score)) # [END batch_recognize_pii_entities_async] - async def alternative_scenario_recognize_pii_entities_async(self): - """This sample demonstrates how to retrieve batch statistics, the - model version used, and the raw response returned from the service. - - It additionally shows an alternative way to pass in the input documents - using a list[TextDocumentInput] and supplying your own IDs and language hints along - with the text. 
- """ - from azure.ai.textanalytics.aio import TextAnalyticsClient - from azure.ai.textanalytics import TextAnalyticsApiKeyCredential - text_analytics_client = TextAnalyticsClient(endpoint=self.endpoint, credential=TextAnalyticsApiKeyCredential(self.key)) - - documents = [ - {"id": "0", "language": "en", "text": "The employee's SSN is 555-55-5555."}, - {"id": "1", "language": "en", "text": "Your ABA number - 111000025 - is the first 9 digits in the lower left hand corner of your personal check."}, - {"id": "2", "language": "en", "text": "Is 998.214.865-68 your Brazilian CPF number?"} - ] - - extras = [] - - def callback(resp): - extras.append(resp.statistics) - extras.append(resp.model_version) - extras.append(resp.raw_response) - - async with text_analytics_client: - result = await text_analytics_client.recognize_pii_entities( - documents, - show_stats=True, - model_version="latest", - raw_response_hook=callback - ) - async def main(): sample = RecognizePiiEntitiesSampleAsync() await sample.recognize_pii_entities_async() - await sample.alternative_scenario_recognize_pii_entities_async() if __name__ == '__main__': loop = asyncio.get_event_loop() diff --git a/sdk/textanalytics/azure-ai-textanalytics/samples/sample_alternative_document_input.py b/sdk/textanalytics/azure-ai-textanalytics/samples/sample_alternative_document_input.py new file mode 100644 index 000000000000..a64d37d0c328 --- /dev/null +++ b/sdk/textanalytics/azure-ai-textanalytics/samples/sample_alternative_document_input.py @@ -0,0 +1,60 @@ +# coding: utf-8 + +# ------------------------------------------------------------------------- +# Copyright (c) Microsoft Corporation. All rights reserved. +# Licensed under the MIT License. See License.txt in the project root for +# license information. +# -------------------------------------------------------------------------- + +""" +FILE: sample_alternative_document_input.py + +DESCRIPTION: + This sample shows an alternative way to pass in the input documents. + Here we specify our own IDs and the text language along with the text. + +USAGE: + python sample_alternative_document_input.py + + Set the environment variables with your own values before running the sample: + 1) AZURE_TEXT_ANALYTICS_ENDPOINT - the endpoint to your Cognitive Services resource. + 2) AZURE_TEXT_ANALYTICS_KEY - your Text Analytics subscription key +""" + +import os +import logging + +_LOGGER = logging.getLogger(__name__) + +class AlternativeDocumentInputSample(object): + endpoint = os.getenv("AZURE_TEXT_ANALYTICS_ENDPOINT") + key = os.getenv("AZURE_TEXT_ANALYTICS_KEY") + + def alternative_document_input(self): + from azure.ai.textanalytics import TextAnalyticsClient, TextAnalyticsApiKeyCredential + text_analytics_client = TextAnalyticsClient(endpoint=self.endpoint, credential=TextAnalyticsApiKeyCredential(self.key)) + + documents = [ + {"id": "0", "language": "en", "text": "I had the best day of my life."}, + {"id": "1", "language": "en", + "text": "This was a waste of my time. The speaker put me to sleep."}, + {"id": "2", "language": "es", "text": "No tengo dinero ni nada que dar..."}, + {"id": "3", "language": "fr", + "text": "L'hôtel n'était pas très confortable. 
L'éclairage était trop sombre."} + ] + + result = text_analytics_client.detect_language(documents) + + for idx, doc in enumerate(result): + if not doc.is_error: + print("Document text: {}".format(documents[idx])) + print("Language detected: {}".format(doc.primary_language.name)) + print("ISO6391 name: {}".format(doc.primary_language.iso6391_name)) + print("Confidence score: {}\n".format(doc.primary_language.score)) + if doc.is_error: + print(doc.id, doc.error) + + +if __name__ == '__main__': + sample = AlternativeDocumentInputSample() + sample.alternative_document_input() diff --git a/sdk/textanalytics/azure-ai-textanalytics/samples/sample_analyze_sentiment.py b/sdk/textanalytics/azure-ai-textanalytics/samples/sample_analyze_sentiment.py index 0fff36a70460..52464dd939ae 100644 --- a/sdk/textanalytics/azure-ai-textanalytics/samples/sample_analyze_sentiment.py +++ b/sdk/textanalytics/azure-ai-textanalytics/samples/sample_analyze_sentiment.py @@ -10,15 +10,15 @@ FILE: sample_analyze_sentiment.py DESCRIPTION: - This sample demonstrates how to analyze sentiment in a batch of documents. + This sample demonstrates how to analyze sentiment in documents. An overall and per-sentence sentiment is returned. USAGE: python sample_analyze_sentiment.py Set the environment variables with your own values before running the sample: - 1) AZURE_TEXT_ANALYTICS_ENDPOINT - the endpoint to your cognitive services resource. - 2) AZURE_TEXT_ANALYTICS_KEY - your text analytics subscription key + 1) AZURE_TEXT_ANALYTICS_ENDPOINT - the endpoint to your Cognitive Services resource. + 2) AZURE_TEXT_ANALYTICS_KEY - your Text Analytics subscription key """ import os @@ -47,58 +47,21 @@ def analyze_sentiment(self): print("Document text: {}".format(documents[idx])) print("Overall sentiment: {}".format(doc.sentiment)) # [END batch_analyze_sentiment] - print("Overall confidence scores: positive={0:.3f}; neutral={1:.3f}; negative={2:.3f} \n".format( + print("Overall confidence scores: positive={}; neutral={}; negative={} \n".format( doc.confidence_scores.positive, doc.confidence_scores.neutral, doc.confidence_scores.negative, )) for idx, sentence in enumerate(doc.sentences): print("Sentence {} sentiment: {}".format(idx+1, sentence.sentiment)) - print("Sentence confidence scores: positive={0:.3f}; neutral={1:.3f}; negative={2:.3f}".format( + print("Sentence confidence scores: positive={}; neutral={}; negative={}".format( sentence.confidence_scores.positive, sentence.confidence_scores.neutral, sentence.confidence_scores.negative, )) - print("Offset: {}".format(sentence.grapheme_offset)) - print("Length: {}\n".format(sentence.grapheme_length)) print("------------------------------------") - def alternative_scenario_analyze_sentiment(self): - """This sample demonstrates how to retrieve batch statistics, the - model version used, and the raw response returned from the service. - - It additionally shows an alternative way to pass in the input documents - using a list[TextDocumentInput] and supplying your own IDs and language hints along - with the text. - """ - from azure.ai.textanalytics import TextAnalyticsClient, TextAnalyticsApiKeyCredential - text_analytics_client = TextAnalyticsClient(endpoint=self.endpoint, credential=TextAnalyticsApiKeyCredential(self.key)) - - documents = [ - {"id": "0", "language": "en", "text": "I had the best day of my life."}, - {"id": "1", "language": "en", - "text": "This was a waste of my time. 
The speaker put me to sleep."}, - {"id": "2", "language": "es", "text": "No tengo dinero ni nada que dar..."}, - {"id": "3", "language": "fr", - "text": "L'hôtel n'était pas très confortable. L'éclairage était trop sombre."} - ] - - extras = [] - - def callback(resp): - extras.append(resp.statistics) - extras.append(resp.model_version) - extras.append(resp.raw_response) - - result = text_analytics_client.analyze_sentiment( - documents, - show_stats=True, - model_version="latest", - raw_response_hook=callback - ) - if __name__ == '__main__': sample = AnalyzeSentimentSample() sample.analyze_sentiment() - sample.alternative_scenario_analyze_sentiment() diff --git a/sdk/textanalytics/azure-ai-textanalytics/samples/sample_authentication.py b/sdk/textanalytics/azure-ai-textanalytics/samples/sample_authentication.py index 1be013eab9f1..87fd2972ee51 100644 --- a/sdk/textanalytics/azure-ai-textanalytics/samples/sample_authentication.py +++ b/sdk/textanalytics/azure-ai-textanalytics/samples/sample_authentication.py @@ -10,11 +10,11 @@ FILE: sample_authentication.py DESCRIPTION: - This sample demonstrates how to authenticate with the text analytics service. + This sample demonstrates how to authenticate to the Text Analytics service. There are two supported methods of authentication: - 1) Use a cognitive services/text analytics API key with TextAnalyticsApiKeyCredential - 2) Use a token credential to authenticate with Azure Active Directory + 1) Use a Cognitive Services/Text Analytics API key with TextAnalyticsApiKeyCredential + 2) Use a token credential from azure-identity to authenticate with Azure Active Directory See more details about authentication here: https://docs.microsoft.com/azure/cognitive-services/authentication @@ -23,8 +23,8 @@ python sample_authentication.py Set the environment variables with your own values before running the sample: - 1) AZURE_TEXT_ANALYTICS_ENDPOINT - the endpoint to your cognitive services/text analytics resource. - 2) AZURE_TEXT_ANALYTICS_KEY - your text analytics API key + 1) AZURE_TEXT_ANALYTICS_ENDPOINT - the endpoint to your Cognitive Services/Text Analytics resource. + 2) AZURE_TEXT_ANALYTICS_KEY - your Text Analytics API key 3) AZURE_CLIENT_ID - the client ID of your active directory application. 4) AZURE_TENANT_ID - the tenant ID of your active directory application. 5) AZURE_CLIENT_SECRET - the secret of your active directory application. @@ -36,6 +36,7 @@ class AuthenticationSample(object): def authentication_with_api_key_credential(self): + print("\n.. authentication_with_api_key_credential") # [START create_ta_client_with_key] from azure.ai.textanalytics import TextAnalyticsClient, TextAnalyticsApiKeyCredential endpoint = os.getenv("AZURE_TEXT_ANALYTICS_ENDPOINT") @@ -51,9 +52,10 @@ def authentication_with_api_key_credential(self): print("Confidence score: {}".format(result[0].primary_language.score)) def authentication_with_azure_active_directory(self): - """DefaultAzureCredential will use the values from the environment + """DefaultAzureCredential will use the values from these environment variables: AZURE_CLIENT_ID, AZURE_TENANT_ID, AZURE_CLIENT_SECRET """ + print("\n.. 
authentication_with_azure_active_directory") # [START create_ta_client_with_aad] from azure.ai.textanalytics import TextAnalyticsClient from azure.identity import DefaultAzureCredential
diff --git a/sdk/textanalytics/azure-ai-textanalytics/samples/sample_detect_language.py b/sdk/textanalytics/azure-ai-textanalytics/samples/sample_detect_language.py
index 4330a9596f9f..49bf3467f331 100644
--- a/sdk/textanalytics/azure-ai-textanalytics/samples/sample_detect_language.py
+++ b/sdk/textanalytics/azure-ai-textanalytics/samples/sample_detect_language.py
@@ -17,8 +17,8 @@ python sample_detect_language.py Set the environment variables with your own values before running the sample: - 1) AZURE_TEXT_ANALYTICS_ENDPOINT - the endpoint to your cognitive services resource. - 2) AZURE_TEXT_ANALYTICS_KEY - your text analytics subscription key + 1) AZURE_TEXT_ANALYTICS_ENDPOINT - the endpoint to your Cognitive Services resource. + 2) AZURE_TEXT_ANALYTICS_KEY - your Text Analytics subscription key """ import os
@@ -53,41 +53,7 @@ def detect_language(self): print(doc.id, doc.error) # [END batch_detect_language] - def alternative_scenario_detect_language(self): - """This sample demonstrates how to retrieve batch statistics, the - model version used, and the raw response returned from the service. - - It additionally shows an alternative way to pass in the input documents - using a list[DetectLanguageInput] and supplying your own IDs and country hints along - with the text. - """ - from azure.ai.textanalytics import TextAnalyticsClient, TextAnalyticsApiKeyCredential - text_analytics_client = TextAnalyticsClient(endpoint=self.endpoint, credential=TextAnalyticsApiKeyCredential(self.key)) - - documents = [ - {"id": "0", "country_hint": "US", "text": "This is a document written in English."}, - {"id": "1", "country_hint": "MX", "text": "Este es un document escrito en Español."}, - {"id": "2", "country_hint": "CN", "text": "这是一个用中文写的文件"}, - {"id": "3", "country_hint": "DE", "text": "Dies ist ein Dokument in englischer Sprache."}, - {"id": "4", "country_hint": "SE", "text": "Detta är ett dokument skrivet på engelska."} - ] - - extras = [] - - def callback(resp): - extras.append(resp.statistics) - extras.append(resp.model_version) - extras.append(resp.raw_response) - - result = text_analytics_client.detect_language( - documents, - show_stats=True, - model_version="latest", - raw_response_hook=callback - ) - if __name__ == '__main__': sample = DetectLanguageSample() sample.detect_language() - sample.alternative_scenario_detect_language()
diff --git a/sdk/textanalytics/azure-ai-textanalytics/samples/sample_extract_key_phrases.py b/sdk/textanalytics/azure-ai-textanalytics/samples/sample_extract_key_phrases.py
index a438e3d07b37..8a28bb93a706 100644
--- a/sdk/textanalytics/azure-ai-textanalytics/samples/sample_extract_key_phrases.py
+++ b/sdk/textanalytics/azure-ai-textanalytics/samples/sample_extract_key_phrases.py
@@ -16,8 +16,8 @@ python sample_extract_key_phrases.py Set the environment variables with your own values before running the sample: - 1) AZURE_TEXT_ANALYTICS_ENDPOINT - the endpoint to your cognitive services resource. - 2) AZURE_TEXT_ANALYTICS_KEY - your text analytics subscription key + 1) AZURE_TEXT_ANALYTICS_ENDPOINT - the endpoint to your Cognitive Services resource. 
+ 2) AZURE_TEXT_ANALYTICS_KEY - your Text Analytics subscription key """ import os @@ -46,40 +46,7 @@ def extract_key_phrases(self): print(doc.id, doc.error) # [END batch_extract_key_phrases] - def alternative_scenario_extract_key_phrases(self): - """This sample demonstrates how to retrieve batch statistics, the - model version used, and the raw response returned from the service. - - It additionally shows an alternative way to pass in the input documents - using a list[TextDocumentInput] and supplying your own IDs and language hints along - with the text. - """ - from azure.ai.textanalytics import TextAnalyticsClient, TextAnalyticsApiKeyCredential - text_analytics_client = TextAnalyticsClient(endpoint=self.endpoint, credential=TextAnalyticsApiKeyCredential(self.key)) - - documents = [ - {"id": "0", "language": "en", - "text": "Redmond is a city in King County, Washington, United States, located 15 miles east of Seattle."}, - {"id": "1", "language": "en", - "text": "I need to take my cat to the veterinarian."}, - {"id": "2", "language": "en", "text": "I will travel to South America in the summer."} - ] - extras = [] - - def callback(resp): - extras.append(resp.statistics) - extras.append(resp.model_version) - extras.append(resp.raw_response) - - result = text_analytics_client.extract_key_phrases( - documents, - show_stats=True, - model_version="latest", - raw_response_hook=callback - ) - if __name__ == '__main__': sample = ExtractKeyPhrasesSample() sample.extract_key_phrases() - sample.alternative_scenario_extract_key_phrases() diff --git a/sdk/textanalytics/azure-ai-textanalytics/samples/sample_get_detailed_diagnostics_information.py b/sdk/textanalytics/azure-ai-textanalytics/samples/sample_get_detailed_diagnostics_information.py new file mode 100644 index 000000000000..9b62b9563605 --- /dev/null +++ b/sdk/textanalytics/azure-ai-textanalytics/samples/sample_get_detailed_diagnostics_information.py @@ -0,0 +1,62 @@ +# coding: utf-8 + +# ------------------------------------------------------------------------- +# Copyright (c) Microsoft Corporation. All rights reserved. +# Licensed under the MIT License. See License.txt in the project root for +# license information. +# -------------------------------------------------------------------------- + +""" +FILE: sample_get_detailed_diagnostics_information.py + +DESCRIPTION: + This sample demonstrates how to retrieve batch statistics, the + model version used, and the raw response returned from the service. + +USAGE: + python sample_get_detailed_diagnostics_information.py + + Set the environment variables with your own values before running the sample: + 1) AZURE_TEXT_ANALYTICS_ENDPOINT - the endpoint to your Cognitive Services resource. + 2) AZURE_TEXT_ANALYTICS_KEY - your Text Analytics subscription key +""" + +import os +import logging + +_LOGGER = logging.getLogger(__name__) + +class GetDetailedDiagnosticsInformationSample(object): + endpoint = os.getenv("AZURE_TEXT_ANALYTICS_ENDPOINT") + key = os.getenv("AZURE_TEXT_ANALYTICS_KEY") + + def get_detailed_diagnostics_information(self): + from azure.ai.textanalytics import TextAnalyticsClient, TextAnalyticsApiKeyCredential + text_analytics_client = TextAnalyticsClient(endpoint=self.endpoint, credential=TextAnalyticsApiKeyCredential(self.key)) + + documents = [ + "I had the best day of my life.", + "This was a waste of my time. The speaker put me to sleep.", + "No tengo dinero ni nada que dar...", + "L'hôtel n'était pas très confortable. L'éclairage était trop sombre." 
+ ] + + def callback(resp): + _LOGGER.info("document_count: {}".format(resp.statistics["document_count"])) + _LOGGER.info("valid_document_count: {}".format(resp.statistics["valid_document_count"])) + _LOGGER.info("erroneous_document_count: {}".format(resp.statistics["erroneous_document_count"])) + _LOGGER.info("transaction_count: {}".format(resp.statistics["transaction_count"])) + _LOGGER.info("model_version: {}".format(resp.model_version)) + _LOGGER.info("raw_response: {}".format(resp.raw_response)) + + result = text_analytics_client.analyze_sentiment( + documents, + show_stats=True, + model_version="latest", + raw_response_hook=callback + ) + + +if __name__ == '__main__': + sample = GetDetailedDiagnosticsInformationSample() + sample.get_detailed_diagnostics_information() diff --git a/sdk/textanalytics/azure-ai-textanalytics/samples/sample_recognize_entities.py b/sdk/textanalytics/azure-ai-textanalytics/samples/sample_recognize_entities.py index f998695db50d..b1fb4cdf032a 100644 --- a/sdk/textanalytics/azure-ai-textanalytics/samples/sample_recognize_entities.py +++ b/sdk/textanalytics/azure-ai-textanalytics/samples/sample_recognize_entities.py @@ -16,8 +16,8 @@ python sample_recognize_entities.py Set the environment variables with your own values before running the sample: - 1) AZURE_TEXT_ANALYTICS_ENDPOINT - the endpoint to your cognitive services resource. - 2) AZURE_TEXT_ANALYTICS_KEY - your text analytics subscription key + 1) AZURE_TEXT_ANALYTICS_ENDPOINT - the endpoint to your Cognitive Services resource. + 2) AZURE_TEXT_ANALYTICS_KEY - your Text Analytics subscription key """ import os @@ -45,42 +45,10 @@ def recognize_entities(self): print("\nDocument text: {}".format(documents[idx])) for entity in doc.entities: print("Entity: \t", entity.text, "\tCategory: \t", entity.category, - "\tConfidence Score: \t", round(entity.score, 3)) + "\tConfidence Score: \t", entity.score) # [END batch_recognize_entities] - def alternative_scenario_recognize_entities(self): - """This sample demonstrates how to retrieve batch statistics, the - model version used, and the raw response returned from the service. - - It additionally shows an alternative way to pass in the input documents - using a list[TextDocumentInput] and supplying your own IDs and language hints along - with the text. 
- """ - from azure.ai.textanalytics import TextAnalyticsClient, TextAnalyticsApiKeyCredential - text_analytics_client = TextAnalyticsClient(endpoint=self.endpoint, credential=TextAnalyticsApiKeyCredential(self.key)) - - documents = [ - {"id": "0", "language": "en", "text": "Microsoft was founded by Bill Gates and Paul Allen."}, - {"id": "1", "language": "de", "text": "I had a wonderful trip to Seattle last week."}, - {"id": "2", "language": "es", "text": "I visited the Space Needle 2 times."}, - ] - - extras = [] - - def callback(resp): - extras.append(resp.statistics) - extras.append(resp.model_version) - extras.append(resp.raw_response) - - result = text_analytics_client.recognize_entities( - documents, - show_stats=True, - model_version="latest", - raw_response_hook=callback - ) - if __name__ == '__main__': sample = RecognizeEntitiesSample() sample.recognize_entities() - sample.alternative_scenario_recognize_entities() diff --git a/sdk/textanalytics/azure-ai-textanalytics/samples/sample_recognize_linked_entities.py b/sdk/textanalytics/azure-ai-textanalytics/samples/sample_recognize_linked_entities.py index 3d8e64b34633..c1990e9c631d 100644 --- a/sdk/textanalytics/azure-ai-textanalytics/samples/sample_recognize_linked_entities.py +++ b/sdk/textanalytics/azure-ai-textanalytics/samples/sample_recognize_linked_entities.py @@ -18,8 +18,8 @@ python sample_recognize_linked_entities.py Set the environment variables with your own values before running the sample: - 1) AZURE_TEXT_ANALYTICS_ENDPOINT - the endpoint to your cognitive services resource. - 2) AZURE_TEXT_ANALYTICS_KEY - your text analytics subscription key + 1) AZURE_TEXT_ANALYTICS_ENDPOINT - the endpoint to your Cognitive Services resource. + 2) AZURE_TEXT_ANALYTICS_KEY - your Text Analytics subscription key """ import os @@ -50,45 +50,12 @@ def recognize_linked_entities(self): print("Url: {}".format(entity.url)) print("Data Source: {}".format(entity.data_source)) for match in entity.matches: - print("Score: {0:.3f}".format(match.score)) - print("Offset: {}".format(match.grapheme_offset)) - print("Length: {}\n".format(match.grapheme_length)) + print("Confidence Score: {}".format(match.score)) + print("Entity as appears in request: {}".format(match.text)) print("------------------------------------------") # [END batch_recognize_linked_entities] - def alternative_scenario_recognize_linked_entities(self): - """This sample demonstrates how to retrieve batch statistics, the - model version used, and the raw response returned from the service. - - It additionally shows an alternative way to pass in the input documents - using a list[TextDocumentInput] and supplying your own IDs and language hints along - with the text. - """ - from azure.ai.textanalytics import TextAnalyticsClient, TextAnalyticsApiKeyCredential - text_analytics_client = TextAnalyticsClient(endpoint=self.endpoint, credential=TextAnalyticsApiKeyCredential(self.key)) - - documents = [ - {"id": "0", "language": "en", "text": "Microsoft moved its headquarters to Bellevue, Washington in January 1979."}, - {"id": "1", "language": "en", "text": "Steve Ballmer stepped down as CEO of Microsoft and was succeeded by Satya Nadella."}, - {"id": "2", "language": "es", "text": "Microsoft superó a Apple Inc. 
como la compañía más valiosa que cotiza en bolsa en el mundo."}, - ] - - extras = [] - - def callback(resp): - extras.append(resp.statistics) - extras.append(resp.model_version) - extras.append(resp.raw_response) - - result = text_analytics_client.recognize_linked_entities( - documents, - show_stats=True, - model_version="latest", - raw_response_hook=callback - ) - if __name__ == '__main__': sample = RecognizeLinkedEntitiesSample() sample.recognize_linked_entities() - sample.alternative_scenario_recognize_linked_entities() diff --git a/sdk/textanalytics/azure-ai-textanalytics/samples/sample_recognize_pii_entities.py b/sdk/textanalytics/azure-ai-textanalytics/samples/sample_recognize_pii_entities.py index 0a2c734efa41..3e979c6be7bd 100644 --- a/sdk/textanalytics/azure-ai-textanalytics/samples/sample_recognize_pii_entities.py +++ b/sdk/textanalytics/azure-ai-textanalytics/samples/sample_recognize_pii_entities.py @@ -16,8 +16,8 @@ python sample_recognize_pii_entities.py Set the environment variables with your own values before running the sample: - 1) AZURE_TEXT_ANALYTICS_ENDPOINT - the endpoint to your cognitive services resource. - 2) AZURE_TEXT_ANALYTICS_KEY - your text analytics subscription key + 1) AZURE_TEXT_ANALYTICS_ENDPOINT - the endpoint to your Cognitive Services resource. + 2) AZURE_TEXT_ANALYTICS_KEY - your Text Analytics subscription key """ import os @@ -49,39 +49,7 @@ def recognize_pii_entities(self): print("Confidence Score: {}\n".format(entity.score)) # [END batch_recognize_pii_entities] - def alternative_scenario_recognize_pii_entities(self): - """This sample demonstrates how to retrieve batch statistics, the - model version used, and the raw response returned from the service. - - It additionally shows an alternative way to pass in the input documents - using a list[TextDocumentInput] and supplying your own IDs and language hints along - with the text. - """ - from azure.ai.textanalytics import TextAnalyticsClient, TextAnalyticsApiKeyCredential - text_analytics_client = TextAnalyticsClient(endpoint=self.endpoint, credential=TextAnalyticsApiKeyCredential(self.key)) - - documents = [ - {"id": "0", "language": "en", "text": "The employee's SSN is 555-55-5555."}, - {"id": "1", "language": "en", "text": "Your ABA number - 111000025 - is the first 9 digits in the lower left hand corner of your personal check."}, - {"id": "2", "language": "en", "text": "Is 998.214.865-68 your Brazilian CPF number?"} - ] - - extras = [] - - def callback(resp): - extras.append(resp.statistics) - extras.append(resp.model_version) - extras.append(resp.raw_response) - - result = text_analytics_client.recognize_pii_entities( - documents, - show_stats=True, - model_version="latest", - raw_response_hook=callback - ) - if __name__ == '__main__': sample = RecognizePiiEntitiesSample() sample.recognize_pii_entities() - sample.alternative_scenario_recognize_pii_entities()
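
The new samples pass documents either as plain strings or as dicts carrying `id`, `text`, and a `language`/`country_hint` field. The docstrings removed in this diff also mention the SDK's input model classes (`list[DetectLanguageInput]`, `list[TextDocumentInput]`), which none of the remaining samples demonstrate. Below is a minimal sketch of that form; it assumes `DetectLanguageInput` and `TextDocumentInput` are importable from `azure.ai.textanalytics` and accept the same keyword arguments as the dict keys used in the samples above, so treat it as illustrative rather than as another sample file from this package.

```python
# Minimal sketch: passing the SDK's input model classes instead of dicts.
# Assumption: DetectLanguageInput / TextDocumentInput are exported by
# azure.ai.textanalytics and take id/text plus country_hint/language keyword
# arguments, mirroring the dict keys used in the samples above.
import os

from azure.ai.textanalytics import (
    TextAnalyticsClient,
    TextAnalyticsApiKeyCredential,
    DetectLanguageInput,
    TextDocumentInput,
)

endpoint = os.getenv("AZURE_TEXT_ANALYTICS_ENDPOINT")
key = os.getenv("AZURE_TEXT_ANALYTICS_KEY")
client = TextAnalyticsClient(endpoint=endpoint, credential=TextAnalyticsApiKeyCredential(key))

# detect_language takes an optional per-document country hint.
language_inputs = [
    DetectLanguageInput(id="0", text="This is a document written in English.", country_hint="US"),
    DetectLanguageInput(id="1", text="Este es un documento escrito en Español.", country_hint="MX"),
]
for doc in client.detect_language(language_inputs):
    if not doc.is_error:
        print("{}: {}".format(doc.id, doc.primary_language.name))

# The other operations take an optional per-document language hint.
sentiment_inputs = [
    TextDocumentInput(id="0", text="I had the best day of my life.", language="en"),
    TextDocumentInput(id="1", text="This was a waste of my time. The speaker put me to sleep.", language="en"),
]
for doc in client.analyze_sentiment(sentiment_inputs):
    if not doc.is_error:
        print("{}: {}".format(doc.id, doc.sentiment))
```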