re-gen clu LLC client (Azure#21417)
* regenerate client

* update model names

* tmp commit

* update model names

* fixing tests

* fix samples

* update readme

* fix remaining samples

* fix env key error

* add recorded tests

* update samples

* add additional samples

* async samples

* disable some samples

* update samples readme

* revert setup.py

* fix broken link
mshaban-msft authored Oct 30, 2021
1 parent 91f6d74 commit 4b79413
Showing 49 changed files with 2,361 additions and 1,918 deletions.
44 changes: 22 additions & 22 deletions sdk/cognitivelanguage/azure-ai-language-conversations/README.md
@@ -2,9 +2,9 @@

# Azure Conversational Language Understanding client library for Python
Conversational Language Understanding, or **CLU**, is a cloud-based conversational AI service used mainly in bots to extract useful information from user utterances (natural language processing).
The CLU **analyze api** encompasses two projects; deepstack, and workflow projects.
You can use the "deepstack" project if you want to extract intents (intention behind a user utterance] and custom entities.
You can also use the "workflow" project which orchestrates multiple language apps to get the best response (language apps like Question Answering, Luis, and Deepstack).
The CLU **analyze API** encompasses two project types: conversation and orchestration.
You can use a "conversation" project if you want to extract intents (the intention behind a user utterance) and custom entities.
You can also use an "orchestration" project, which orchestrates multiple language apps (such as Question Answering, Luis, and Conversation) to get the best response.

[Source code][conversationallanguage_client_src] | [Package (PyPI)][conversationallanguage_pypi_package] | [API reference documentation][conversationallanguage_refdocs] | [Product documentation][conversationallanguage_docs] | [Samples][conversationallanguage_samples]

@@ -67,16 +67,16 @@ The `azure-ai-language-conversation` client library provides both synchronous an

The following examples show common scenarios using the `client` [created above](#create-conversationanalysisclient).

### Analyze a conversation with a Deepstack App
If you would like to extract custom intents and entities from a user utterance, you can call the `client.analyze_conversations()` method with your deepstack's project name as follows:
### Analyze a conversation with a Conversation App
If you would like to extract custom intents and entities from a user utterance, you can call the `client.analyze_conversations()` method with your conversation project name as follows:

```python
# import libraries
import os
from azure.core.credentials import AzureKeyCredential

from azure.ai.language.conversations import ConversationAnalysisClient
from azure.ai.language.conversations.models import AnalyzeConversationOptions
from azure.ai.language.conversations.models import ConversationAnalysisOptions

# get secrets
conv_endpoint = os.environ["AZURE_CONVERSATIONS_ENDPOINT"]
@@ -85,7 +85,7 @@ conv_project = os.environ["AZURE_CONVERSATIONS_PROJECT"]

# prepare data
query = "One california maki please."
input = AnalyzeConversationOptions(
input = ConversationAnalysisOptions(
query=query
)

@@ -103,7 +103,7 @@ print("query: {}".format(result.query))
print("project kind: {}\n".format(result.prediction.project_kind))

print("view top intent:")
print("top intent: {}".format(result.prediction.top_intent))
print("\ttop intent: {}".format(result.prediction.top_intent))
print("\tcategory: {}".format(result.prediction.intents[0].category))
print("\tconfidence score: {}\n".format(result.prediction.intents[0].confidence_score))

@@ -114,26 +114,26 @@ for entity in result.prediction.entities:
print("\tconfidence score: {}".format(entity.confidence_score))
```
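One of this commit's bullets is "fix env key error": the samples read their secrets with `os.environ[...]`, which fails with a bare `KeyError` when a variable is unset. A small helper like the one below — hypothetical, not part of the library — turns that into a readable failure message:

```python
import os

def get_required_setting(name: str) -> str:
    """Return the value of a required environment variable.

    Raises RuntimeError with a clear message instead of a bare KeyError.
    """
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"Missing required environment variable: {name}")
    return value

# Demo value so the sketch is self-contained; real samples would have
# AZURE_CONVERSATIONS_ENDPOINT etc. set in the shell.
os.environ["DEMO_CONVERSATIONS_ENDPOINT"] = "https://example.cognitiveservices.azure.com"
endpoint = get_required_setting("DEMO_CONVERSATIONS_ENDPOINT")
print(endpoint)
```

The samples below would use it the same way for the key and project name; the variable names here are illustrative only.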

### Analyze conversation with a Workflow App
### Analyze a conversation with an Orchestration App

If you would like to pass the user utterance to your orchestrator (worflow) app, you can call the `client.analyze_conversations()` method with your workflow's project name. The orchestrator project simply orchestrates the submitted user utterance between your language apps (Luis, Deepstack, and Question Answering) to get the best response according to the user intent. See the next example:
If you would like to pass the user utterance to your orchestrator (workflow) app, you can call the `client.analyze_conversations()` method with your orchestration project name. The orchestrator project simply orchestrates the submitted user utterance between your language apps (Luis, Conversation, and Question Answering) to get the best response according to the user intent. See the next example:

```python
# import libraries
import os
from azure.core.credentials import AzureKeyCredential

from azure.ai.language.conversations import ConversationAnalysisClient
from azure.ai.language.conversations.models import AnalyzeConversationOptions
from azure.ai.language.conversations.models import ConversationAnalysisOptions

# get secrets
conv_endpoint = os.environ["AZURE_CONVERSATIONS_ENDPOINT"]
conv_key = os.environ["AZURE_CONVERSATIONS_KEY"]
workflow_project = os.environ["AZURE_CONVERSATIONS_WORKFLOW_PROJECT")
orchestration_project = os.environ["AZURE_CONVERSATIONS_WORKFLOW_PROJECT"]

# prepare data
query = "How do you make sushi rice?"
input = AnalyzeConversationOptions(
input = ConversationAnalysisOptions(
query=query
)

@@ -142,7 +142,7 @@ client = ConversationAnalysisClient(conv_endpoint, AzureKeyCredential(conv_key))
with client:
result = client.analyze_conversations(
input,
project_name=workflow_project,
project_name=orchestration_project,
deployment_name='production',
)

@@ -151,35 +151,35 @@ print("query: {}".format(result.query))
print("project kind: {}\n".format(result.prediction.project_kind))

print("view top intent:")
print("top intent: {}".format(result.prediction.top_intent))
print("\ttop intent: {}".format(result.prediction.top_intent))
print("\tcategory: {}".format(result.prediction.intents[0].category))
print("\tconfidence score: {}\n".format(result.prediction.intents[0].confidence_score))

print("view Question Answering result:")
print("\tresult: {}\n".format(result.prediction.intents[0].result))
```
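The samples read `intents[0]` on the assumption that the service returns intents sorted by confidence. If you would rather not rely on ordering, picking the top intent is just a `max` over the confidence scores. The `Intent` tuple below is a hypothetical stand-in for the SDK's intent models (e.g. `ConversationIntent`), used only to keep the sketch self-contained:

```python
from typing import NamedTuple, Sequence

class Intent(NamedTuple):
    # Hypothetical stand-in for the SDK's intent models.
    category: str
    confidence_score: float

def top_intent(intents: Sequence[Intent]) -> Intent:
    """Pick the highest-confidence intent without assuming the list is sorted."""
    if not intents:
        raise ValueError("prediction contained no intents")
    return max(intents, key=lambda intent: intent.confidence_score)

prediction = [
    Intent("Order", 0.91),
    Intent("None", 0.09),
]
best = top_intent(prediction)
print(f"top intent: {best.category} ({best.confidence_score})")
```

With the real client you would pass `result.prediction.intents` instead of the hand-built list above.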

### Analyze conversation with a Workflow (Direct) App
### Analyze a conversation with an Orchestration (Direct) App

If you would like to use an orchestrator (workflow) app, and you want to call a specific one of your language apps directly, you can call the `client.analyze_conversations()` method with your workflow's project name and the diirect target name which corresponds to your one of you language apps as follows:
If you would like to use an orchestrator (orchestration) app and call one of your language apps directly, you can call the `client.analyze_conversations()` method with your orchestration project name and the direct target name that corresponds to that language app, as follows:

```python
# import libraries
import os
from azure.core.credentials import AzureKeyCredential

from azure.ai.language.conversations import ConversationAnalysisClient
from azure.ai.language.conversations.models import AnalyzeConversationOptions
from azure.ai.language.conversations.models import ConversationAnalysisOptions

# get secrets
conv_endpoint = os.environ["AZURE_CONVERSATIONS_ENDPOINT"]
conv_key = os.environ["AZURE_CONVERSATIONS_KEY"]
workflow_project = os.environ["AZURE_CONVERSATIONS_WORKFLOW_PROJECT")
orchestration_project = os.environ["AZURE_CONVERSATIONS_WORKFLOW_PROJECT"]

# prepare data
query = "How do you make sushi rice?"
target_intent = "SushiMaking"
input = AnalyzeConversationOptions(
input = ConversationAnalysisOptions(
query=query,
direct_target=target_intent,
parameters={
@@ -198,7 +198,7 @@ client = ConversationAnalysisClient(conv_endpoint, AzureKeyCredential(conv_key))
with client:
result = client.analyze_conversations(
input,
project_name=workflow_project,
project_name=orchestration_project,
deployment_name='production',
)

@@ -207,7 +207,7 @@ print("query: {}".format(result.query))
print("project kind: {}\n".format(result.prediction.project_kind))

print("view top intent:")
print("top intent: {}".format(result.prediction.top_intent))
print("\ttop intent: {}".format(result.prediction.top_intent))
print("\tcategory: {}".format(result.prediction.intents[0].category))
print("\tconfidence score: {}\n".format(result.prediction.intents[0].confidence_score))

@@ -47,7 +47,7 @@ def __init__(

self.endpoint = endpoint
self.credential = credential
self.api_version = "2021-07-15-preview"
self.api_version = "2021-11-01-preview"
kwargs.setdefault('sdk_moniker', 'ai-language-conversations/{}'.format(VERSION))
self._configure(**kwargs)

@@ -24,7 +24,7 @@
from azure.core.rest import HttpRequest, HttpResponse

class ConversationAnalysisClient(ConversationAnalysisClientOperationsMixin):
"""This API accepts a request and mediates among multiple language projects, such as LUIS Generally Available, Question Answering, LUIS Deepstack, and then calls the best candidate service to handle the request. At last, it returns a response with the candidate service's response as a payload.
"""This API accepts a request and mediates among multiple language projects, such as LUIS Generally Available, Question Answering, Conversation, and then calls the best candidate service to handle the request. At last, it returns a response with the candidate service's response as a payload.
In some cases, this API needs to forward requests and responses between the caller and an upstream service.
@@ -41,7 +41,7 @@ def __init__(

self.endpoint = endpoint
self.credential = credential
self.api_version = "2021-07-15-preview"
self.api_version = "2021-11-01-preview"
kwargs.setdefault('sdk_moniker', 'ai-language-conversations/{}'.format(VERSION))
self._configure(**kwargs)

@@ -19,7 +19,7 @@
from .operations import ConversationAnalysisClientOperationsMixin

class ConversationAnalysisClient(ConversationAnalysisClientOperationsMixin):
"""This API accepts a request and mediates among multiple language projects, such as LUIS Generally Available, Question Answering, LUIS Deepstack, and then calls the best candidate service to handle the request. At last, it returns a response with the candidate service's response as a payload.
"""This API accepts a request and mediates among multiple language projects, such as LUIS Generally Available, Question Answering, Conversation, and then calls the best candidate service to handle the request. At last, it returns a response with the candidate service's response as a payload.
In some cases, this API needs to forward requests and responses between the caller and an upstream service.
@@ -26,17 +26,17 @@ class ConversationAnalysisClientOperationsMixin:
@distributed_trace_async
async def analyze_conversations(
self,
analyze_conversation_options: "_models.AnalyzeConversationOptions",
conversation_analysis_options: "_models.ConversationAnalysisOptions",
*,
project_name: str,
deployment_name: str,
**kwargs: Any
) -> "_models.AnalyzeConversationResult":
"""Analyzes the input conversation utterance.
:param analyze_conversation_options: Post body of the request.
:type analyze_conversation_options:
~azure.ai.language.conversations.models.AnalyzeConversationOptions
:param conversation_analysis_options: Post body of the request.
:type conversation_analysis_options:
~azure.ai.language.conversations.models.ConversationAnalysisOptions
:keyword project_name: The name of the project to use.
:paramtype project_name: str
:keyword deployment_name: The name of the specific deployment of the project to use.
@@ -53,7 +53,7 @@ async def analyze_conversations(

content_type = kwargs.pop('content_type', "application/json") # type: Optional[str]

json = self._serialize.body(analyze_conversation_options, 'AnalyzeConversationOptions')
json = self._serialize.body(conversation_analysis_options, 'ConversationAnalysisOptions')

request = build_analyze_conversations_request(
content_type=content_type,
@@ -7,89 +7,99 @@
# --------------------------------------------------------------------------

try:
from ._models_py3 import AnalyzeConversationOptions
from ._models_py3 import AnalysisParameters
from ._models_py3 import AnalyzeConversationResult
from ._models_py3 import AnalyzeParameters
from ._models_py3 import AnswerSpan
from ._models_py3 import BasePrediction
from ._models_py3 import DSTargetIntentResult
from ._models_py3 import DeepStackEntityResolution
from ._models_py3 import DeepstackCallingOptions
from ._models_py3 import DeepstackEntity
from ._models_py3 import DeepstackIntent
from ._models_py3 import DeepstackParameters
from ._models_py3 import DeepstackPrediction
from ._models_py3 import DeepstackResult
from ._models_py3 import DictionaryNormalizedValueResolution
from ._models_py3 import ConversationAnalysisOptions
from ._models_py3 import ConversationCallingOptions
from ._models_py3 import ConversationEntity
from ._models_py3 import ConversationIntent
from ._models_py3 import ConversationParameters
from ._models_py3 import ConversationPrediction
from ._models_py3 import ConversationResult
from ._models_py3 import ConversationTargetIntentResult
from ._models_py3 import Error
from ._models_py3 import ErrorResponse
from ._models_py3 import InnerErrorModel
from ._models_py3 import KnowledgeBaseAnswer
from ._models_py3 import KnowledgeBaseAnswerDialog
from ._models_py3 import KnowledgeBaseAnswerPrompt
from ._models_py3 import KnowledgeBaseAnswers
from ._models_py3 import LUISCallingOptions
from ._models_py3 import LUISParameters
from ._models_py3 import LUISTargetIntentResult
from ._models_py3 import NoneLinkedTargetIntentResult
from ._models_py3 import OrchestratorPrediction
from ._models_py3 import QuestionAnsweringParameters
from ._models_py3 import QuestionAnsweringTargetIntentResult
from ._models_py3 import TargetIntentResult
from ._models_py3 import WorkflowPrediction
except (SyntaxError, ImportError):
from ._models import AnalyzeConversationOptions # type: ignore
from ._models import AnalysisParameters # type: ignore
from ._models import AnalyzeConversationResult # type: ignore
from ._models import AnalyzeParameters # type: ignore
from ._models import AnswerSpan # type: ignore
from ._models import BasePrediction # type: ignore
from ._models import DSTargetIntentResult # type: ignore
from ._models import DeepStackEntityResolution # type: ignore
from ._models import DeepstackCallingOptions # type: ignore
from ._models import DeepstackEntity # type: ignore
from ._models import DeepstackIntent # type: ignore
from ._models import DeepstackParameters # type: ignore
from ._models import DeepstackPrediction # type: ignore
from ._models import DeepstackResult # type: ignore
from ._models import DictionaryNormalizedValueResolution # type: ignore
from ._models import ConversationAnalysisOptions # type: ignore
from ._models import ConversationCallingOptions # type: ignore
from ._models import ConversationEntity # type: ignore
from ._models import ConversationIntent # type: ignore
from ._models import ConversationParameters # type: ignore
from ._models import ConversationPrediction # type: ignore
from ._models import ConversationResult # type: ignore
from ._models import ConversationTargetIntentResult # type: ignore
from ._models import Error # type: ignore
from ._models import ErrorResponse # type: ignore
from ._models import InnerErrorModel # type: ignore
from ._models import KnowledgeBaseAnswer # type: ignore
from ._models import KnowledgeBaseAnswerDialog # type: ignore
from ._models import KnowledgeBaseAnswerPrompt # type: ignore
from ._models import KnowledgeBaseAnswers # type: ignore
from ._models import LUISCallingOptions # type: ignore
from ._models import LUISParameters # type: ignore
from ._models import LUISTargetIntentResult # type: ignore
from ._models import NoneLinkedTargetIntentResult # type: ignore
from ._models import OrchestratorPrediction # type: ignore
from ._models import QuestionAnsweringParameters # type: ignore
from ._models import QuestionAnsweringTargetIntentResult # type: ignore
from ._models import TargetIntentResult # type: ignore
from ._models import WorkflowPrediction # type: ignore

from ._conversation_analysis_client_enums import (
ErrorCode,
InnerErrorCode,
ProjectKind,
ResolutionKind,
TargetKind,
)

__all__ = [
'AnalyzeConversationOptions',
'AnalysisParameters',
'AnalyzeConversationResult',
'AnalyzeParameters',
'AnswerSpan',
'BasePrediction',
'DSTargetIntentResult',
'DeepStackEntityResolution',
'DeepstackCallingOptions',
'DeepstackEntity',
'DeepstackIntent',
'DeepstackParameters',
'DeepstackPrediction',
'DeepstackResult',
'DictionaryNormalizedValueResolution',
'ConversationAnalysisOptions',
'ConversationCallingOptions',
'ConversationEntity',
'ConversationIntent',
'ConversationParameters',
'ConversationPrediction',
'ConversationResult',
'ConversationTargetIntentResult',
'Error',
'ErrorResponse',
'InnerErrorModel',
'KnowledgeBaseAnswer',
'KnowledgeBaseAnswerDialog',
'KnowledgeBaseAnswerPrompt',
'KnowledgeBaseAnswers',
'LUISCallingOptions',
'LUISParameters',
'LUISTargetIntentResult',
'NoneLinkedTargetIntentResult',
'OrchestratorPrediction',
'QuestionAnsweringParameters',
'QuestionAnsweringTargetIntentResult',
'TargetIntentResult',
'WorkflowPrediction',
'ErrorCode',
'InnerErrorCode',
'ProjectKind',
'ResolutionKind',
'TargetKind',
]
@@ -20,7 +20,13 @@ class ErrorCode(with_metaclass(CaseInsensitiveEnumMeta, str, Enum)):
UNAUTHORIZED = "Unauthorized"
FORBIDDEN = "Forbidden"
NOT_FOUND = "NotFound"
PROJECT_NOT_FOUND = "ProjectNotFound"
OPERATION_NOT_FOUND = "OperationNotFound"
AZURE_COGNITIVE_SEARCH_NOT_FOUND = "AzureCognitiveSearchNotFound"
AZURE_COGNITIVE_SEARCH_INDEX_NOT_FOUND = "AzureCognitiveSearchIndexNotFound"
TOO_MANY_REQUESTS = "TooManyRequests"
AZURE_COGNITIVE_SEARCH_THROTTLING = "AzureCognitiveSearchThrottling"
AZURE_COGNITIVE_SEARCH_INDEX_LIMIT_REACHED = "AzureCognitiveSearchIndexLimitReached"
INTERNAL_SERVER_ERROR = "InternalServerError"
SERVICE_UNAVAILABLE = "ServiceUnavailable"

@@ -42,17 +48,11 @@ class ProjectKind(with_metaclass(CaseInsensitiveEnumMeta, str, Enum)):
CONVERSATION = "conversation"
WORKFLOW = "workflow"

class ResolutionKind(with_metaclass(CaseInsensitiveEnumMeta, str, Enum)):
"""The type of an entity resolution.
"""

#: Dictionary normalized entities.
DICTIONARY_NORMALIZED_VALUE = "DictionaryNormalizedValue"

class TargetKind(with_metaclass(CaseInsensitiveEnumMeta, str, Enum)):
"""The type of a target service.
"""

LUIS = "luis"
LUIS_DEEPSTACK = "luis_deepstack"
CONVERSATION = "conversation"
QUESTION_ANSWERING = "question_answering"
NON_LINKED = "non_linked"
