Commit: spec update

GaspardBT committed Sep 13, 2024
1 parent 23b2af0 commit 1d406df
Showing 160 changed files with 1,401 additions and 639 deletions.
48 changes: 33 additions & 15 deletions .speakeasy/gen.lock
@@ -1,12 +1,12 @@
lockVersion: 2.0.0
id: 2d045ec7-2ebb-4f4d-ad25-40953b132161
management:
docChecksum: ad1a7d6946828a089ca3831e257d307d
docChecksum: b504694f524d70325c81c4bd7542c5cf
docVersion: 0.0.2
speakeasyVersion: 1.382.0
generationVersion: 2.404.11
releaseVersion: 1.0.3
configChecksum: 818970b881ec69b05f6660ca354f26f5
releaseVersion: 1.0.4
configChecksum: 713a7028fbef398b9a9d8e2d529f1e9a
repoURL: https://github.com/mistralai/client-python.git
installationURL: https://github.com/mistralai/client-python.git
published: true
@@ -77,8 +77,9 @@ generatedFiles:
- src/mistralai/utils/values.py
- src/mistralai/models/sdkerror.py
- src/mistralai/models/modellist.py
- src/mistralai/models/modelcard.py
- src/mistralai/models/basemodelcard.py
- src/mistralai/models/modelcapabilities.py
- src/mistralai/models/ftmodelcard.py
- src/mistralai/models/httpvalidationerror.py
- src/mistralai/models/validationerror.py
- src/mistralai/models/retrieve_model_v1_models_model_id_getop.py
@@ -129,15 +130,22 @@ generatedFiles:
- src/mistralai/models/assistantmessage.py
- src/mistralai/models/toolcall.py
- src/mistralai/models/functioncall.py
- src/mistralai/models/tooltypes.py
- src/mistralai/models/usageinfo.py
- src/mistralai/models/chatcompletionrequest.py
- src/mistralai/models/toolchoice.py
- src/mistralai/models/functionname.py
- src/mistralai/models/toolchoiceenum.py
- src/mistralai/models/tool.py
- src/mistralai/models/function.py
- src/mistralai/models/responseformat.py
- src/mistralai/models/responseformats.py
- src/mistralai/models/systemmessage.py
- src/mistralai/models/contentchunk.py
- src/mistralai/models/usermessage.py
- src/mistralai/models/textchunk.py
- src/mistralai/models/usermessage.py
- src/mistralai/models/contentchunk.py
- src/mistralai/models/imageurlchunk.py
- src/mistralai/models/imageurl.py
- src/mistralai/models/toolmessage.py
- src/mistralai/models/completionevent.py
- src/mistralai/models/completionchunk.py
@@ -154,13 +162,16 @@ generatedFiles:
- src/mistralai/models/embeddingrequest.py
- src/mistralai/models/security.py
- src/mistralai/models/__init__.py
- docs/models/data.md
- docs/models/modellist.md
- docs/models/modelcard.md
- docs/models/basemodelcard.md
- docs/models/modelcapabilities.md
- docs/models/ftmodelcard.md
- docs/models/httpvalidationerror.md
- docs/models/loc.md
- docs/models/validationerror.md
- docs/models/retrievemodelv1modelsmodelidgetrequest.md
- docs/models/retrievemodelv1modelsmodelidgetresponseretrievemodelv1modelsmodelidget.md
- docs/models/deletemodelout.md
- docs/models/deletemodelv1modelsmodeliddeleterequest.md
- docs/models/ftmodeloutobject.md
@@ -233,28 +244,35 @@ generatedFiles:
- docs/models/chatcompletionchoice.md
- docs/models/assistantmessagerole.md
- docs/models/assistantmessage.md
- docs/models/tooltypes.md
- docs/models/toolcall.md
- docs/models/arguments.md
- docs/models/functioncall.md
- docs/models/tooltypes.md
- docs/models/usageinfo.md
- docs/models/stop.md
- docs/models/messages.md
- docs/models/toolchoice.md
- docs/models/chatcompletionrequesttoolchoice.md
- docs/models/chatcompletionrequest.md
- docs/models/tooltooltypes.md
- docs/models/toolchoice.md
- docs/models/functionname.md
- docs/models/toolchoiceenum.md
- docs/models/tool.md
- docs/models/function.md
- docs/models/responseformats.md
- docs/models/responseformat.md
- docs/models/content.md
- docs/models/responseformats.md
- docs/models/role.md
- docs/models/content.md
- docs/models/systemmessage.md
- docs/models/contentchunk.md
- docs/models/usermessagecontent.md
- docs/models/textchunktype.md
- docs/models/textchunk.md
- docs/models/usermessagerole.md
- docs/models/usermessagecontent.md
- docs/models/usermessage.md
- docs/models/textchunk.md
- docs/models/contentchunk.md
- docs/models/imageurlchunktype.md
- docs/models/imageurlchunkimageurl.md
- docs/models/imageurlchunk.md
- docs/models/imageurl.md
- docs/models/toolmessagerole.md
- docs/models/toolmessage.md
- docs/models/completionevent.md
2 changes: 1 addition & 1 deletion .speakeasy/gen.yaml
@@ -12,7 +12,7 @@ generation:
auth:
oAuth2ClientCredentialsEnabled: true
python:
version: 1.0.3
version: 1.0.4
additionalDependencies:
dev:
pytest: ^8.2.2
29 changes: 14 additions & 15 deletions .speakeasy/workflow.lock
@@ -2,45 +2,44 @@ speakeasyVersion: 1.382.0
sources:
mistral-azure-source:
sourceNamespace: mistral-openapi-azure
sourceRevisionDigest: sha256:becb324b11dfc5155aa0cc420ca312d0af5aecfcbad22fe90066a09561ae4e6a
sourceBlobDigest: sha256:84928a6297c3a838dce719ffa3da1e221cba968ce4a6c74d5c3bb41bf86a7e5d
sourceRevisionDigest: sha256:3dec9e900243dab5f6fecb4780a74e5cd26bf5660d1db0268964689cb4da043a
sourceBlobDigest: sha256:867fabbb7c8662a2f10861eb9505990aea59e4677631291989d06afb3cf497bc
tags:
- latest
mistral-google-cloud-source:
sourceNamespace: mistral-openapi-google-cloud
sourceRevisionDigest: sha256:7fee22ae1a434b8919112c7feae87af7f1378952fcc6bde081deb55f65e5bfc2
sourceBlobDigest: sha256:a4c011f461c73809a7d6cf1c9823d3c51d5050895aad246287ff14ac971efb8c
sourceRevisionDigest: sha256:caf6467696dddae2736fef96d8967b8a02a1e10e405d22d2901d0459172b739c
sourceBlobDigest: sha256:d6650e668064690efa947a29ec1712566252e3283940b67a3c602323df461cf6
tags:
- latest
mistral-openapi:
sourceNamespace: mistral-openapi
sourceRevisionDigest: sha256:421a4bd55fd50ba00d6ebf2db603888009e9996b642b0499110c223fd6ca21c2
sourceBlobDigest: sha256:1c87b4b8287f6a3083167c13ab59c5e7ac180ab7e19ad1532f3f46495cc12a26
sourceRevisionDigest: sha256:4b17326b6b91a95870383242c971996d6c6671d180b40c96f75b09c7cb1cc9c0
sourceBlobDigest: sha256:6e345b52897fc37bf0cae3ff6ddd58a09484cc0f2130387ddf41af6a76eda19f
tags:
- latest
- main
targets:
mistralai-azure-sdk:
source: mistral-azure-source
sourceNamespace: mistral-openapi-azure
sourceRevisionDigest: sha256:becb324b11dfc5155aa0cc420ca312d0af5aecfcbad22fe90066a09561ae4e6a
sourceBlobDigest: sha256:84928a6297c3a838dce719ffa3da1e221cba968ce4a6c74d5c3bb41bf86a7e5d
sourceRevisionDigest: sha256:3dec9e900243dab5f6fecb4780a74e5cd26bf5660d1db0268964689cb4da043a
sourceBlobDigest: sha256:867fabbb7c8662a2f10861eb9505990aea59e4677631291989d06afb3cf497bc
outLocation: ./packages/mistralai_azure
mistralai-gcp-sdk:
source: mistral-google-cloud-source
sourceNamespace: mistral-openapi-google-cloud
sourceRevisionDigest: sha256:7fee22ae1a434b8919112c7feae87af7f1378952fcc6bde081deb55f65e5bfc2
sourceBlobDigest: sha256:a4c011f461c73809a7d6cf1c9823d3c51d5050895aad246287ff14ac971efb8c
sourceRevisionDigest: sha256:caf6467696dddae2736fef96d8967b8a02a1e10e405d22d2901d0459172b739c
sourceBlobDigest: sha256:d6650e668064690efa947a29ec1712566252e3283940b67a3c602323df461cf6
outLocation: ./packages/mistralai_gcp
mistralai-sdk:
source: mistral-openapi
sourceNamespace: mistral-openapi
sourceRevisionDigest: sha256:421a4bd55fd50ba00d6ebf2db603888009e9996b642b0499110c223fd6ca21c2
sourceBlobDigest: sha256:1c87b4b8287f6a3083167c13ab59c5e7ac180ab7e19ad1532f3f46495cc12a26
outLocation: /github/workspace/repo
sourceRevisionDigest: sha256:4b17326b6b91a95870383242c971996d6c6671d180b40c96f75b09c7cb1cc9c0
sourceBlobDigest: sha256:6e345b52897fc37bf0cae3ff6ddd58a09484cc0f2130387ddf41af6a76eda19f
outLocation: /Users/gaspard/public-mistral/client-python
workflow:
workflowVersion: 1.0.0
speakeasyVersion: latest
speakeasyVersion: 1.382.0
sources:
mistral-azure-source:
inputs:
2 changes: 1 addition & 1 deletion .speakeasy/workflow.yaml
@@ -1,5 +1,5 @@
workflowVersion: 1.0.0
speakeasyVersion: latest
speakeasyVersion: 1.382.0
sources:
mistral-azure-source:
inputs:
20 changes: 10 additions & 10 deletions README.md
@@ -450,10 +450,10 @@ if res is not None:

Handling errors in this SDK should largely match your expectations. All operations return a response object or raise an error. If Error objects are specified in your OpenAPI Spec, the SDK will raise the appropriate Error type.

| Error Object | Status Code | Content Type |
| -------------------------- | -------------------------- | -------------------------- |
| models.HTTPValidationError | 422 | application/json |
| models.SDKError | 4xx-5xx | */* |
| Error Object | Status Code | Content Type |
| -------------------------- | ----------- | ---------------- |
| models.HTTPValidationError | 422 | application/json |
| models.SDKError | 4xx-5xx | */* |

### Example
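A minimal sketch of catching these error types, assuming the `chat.complete` call and client setup shown in USAGE.md (the exception attribute names are assumptions, not taken from this diff):

```python
import os
from mistralai import Mistral, models

s = Mistral(api_key=os.getenv("MISTRAL_API_KEY", ""))

try:
    res = s.chat.complete(model="mistral-small-latest", messages=[
        {"role": "user", "content": "Who is the best French painter? Answer in one short sentence."},
    ])
    if res is not None:
        # handle response
        pass
except models.HTTPValidationError as e:
    # 422: the request body failed validation (attribute name assumed)
    print(e.data)
except models.SDKError as e:
    # any other 4xx-5xx response (attribute names assumed)
    print(e.message, e.status_code)
```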

@@ -490,9 +490,9 @@ if res is not None:

You can override the default server globally by passing a server name to the `server: str` optional parameter when initializing the SDK client instance. The selected server will then be used as the default on the operations that use it. This table lists the names associated with the available servers:

| Name | Server | Variables |
| ----- | ------ | --------- |
| `prod` | `https://api.mistral.ai` | None |
| Name | Server | Variables |
| ------ | ------------------------ | --------- |
| `prod` | `https://api.mistral.ai` | None |

#### Example
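A short sketch of selecting the `prod` server by name, assuming the constructor parameters described above:

```python
import os
from mistralai import Mistral

s = Mistral(
    server="prod",  # resolves to https://api.mistral.ai per the table above
    api_key=os.getenv("MISTRAL_API_KEY", ""),
)
```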

@@ -625,9 +625,9 @@ s = Mistral(async_client=CustomClient(httpx.AsyncClient()))

This SDK supports the following security scheme globally:

| Name | Type | Scheme | Environment Variable |
| -------------------- | -------------------- | -------------------- | -------------------- |
| `api_key` | http | HTTP Bearer | `MISTRAL_API_KEY` |
| Name | Type | Scheme | Environment Variable |
| --------- | ---- | ----------- | -------------------- |
| `api_key` | http | HTTP Bearer | `MISTRAL_API_KEY` |

To authenticate with the API the `api_key` parameter must be set when initializing the SDK client instance. For example:
```python
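# A minimal sketch, mirroring the client setup shown in USAGE.md; the key is
# read from the MISTRAL_API_KEY environment variable listed in the table above.
import os
from mistralai import Mistral

s = Mistral(
    api_key=os.getenv("MISTRAL_API_KEY", ""),
)
```
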
12 changes: 2 additions & 10 deletions USAGE.md
@@ -14,10 +14,7 @@ s = Mistral(


res = s.chat.complete(model="mistral-small-latest", messages=[
{
"content": "Who is the best French painter? Answer in one short sentence.",
"role": "user",
},

])

if res is not None:
@@ -39,10 +36,7 @@ async def main():
api_key=os.getenv("MISTRAL_API_KEY", ""),
)
res = await s.chat.complete_async(model="mistral-small-latest", messages=[
{
"content": "Who is the best French painter? Answer in one short sentence.",
"role": "user",
},

])
if res is not None:
# handle response
@@ -116,7 +110,6 @@ s = Mistral(
res = s.agents.complete(messages=[
{
"content": "Who is the best French painter? Answer in one short sentence.",
"role": "user",
},
], agent_id="<value>")

@@ -141,7 +134,6 @@ async def main():
res = await s.agents.complete_async(messages=[
{
"content": "Who is the best French painter? Answer in one short sentence.",
"role": "user",
},
], agent_id="<value>")
if res is not None:
19 changes: 13 additions & 6 deletions docs/models/agentscompletionrequesttoolchoice.md
@@ -1,10 +1,17 @@
# AgentsCompletionRequestToolChoice


## Values
## Supported Types

### `models.ToolChoice`

```python
value: models.ToolChoice = /* values here */
```

### `models.ToolChoiceEnum`

```python
value: models.ToolChoiceEnum = /* values here */
```

| Name | Value |
| ------ | ------ |
| `AUTO` | auto |
| `NONE` | none |
| `ANY` | any |
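Either union member can be supplied wherever this type is accepted; a hedged sketch using the string form, assuming the `agents.complete` call from USAGE.md takes a `tool_choice` parameter (parameter name inferred from the model name above, not confirmed by this diff):

```python
import os
from mistralai import Mistral

s = Mistral(api_key=os.getenv("MISTRAL_API_KEY", ""))

# ToolChoiceEnum form: one of the string values "auto", "none", or "any".
res = s.agents.complete(
    agent_id="<value>",
    messages=[{"role": "user", "content": "Who is the best French painter?"}],
    tool_choice="any",  # parameter name assumed
)
```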
19 changes: 13 additions & 6 deletions docs/models/agentscompletionstreamrequesttoolchoice.md
@@ -1,10 +1,17 @@
# AgentsCompletionStreamRequestToolChoice


## Values
## Supported Types

### `models.ToolChoice`

```python
value: models.ToolChoice = /* values here */
```

### `models.ToolChoiceEnum`

```python
value: models.ToolChoiceEnum = /* values here */
```

| Name | Value |
| ------ | ------ |
| `AUTO` | auto |
| `NONE` | none |
| `ANY` | any |
4 changes: 2 additions & 2 deletions docs/models/assistantmessage.md
@@ -5,7 +5,7 @@

| Field | Type | Required | Description |
| ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `role` | [Optional[models.AssistantMessageRole]](../models/assistantmessagerole.md) | :heavy_minus_sign: | N/A |
| `content` | *OptionalNullable[str]* | :heavy_minus_sign: | N/A |
| `tool_calls` | List[[models.ToolCall](../models/toolcall.md)] | :heavy_minus_sign: | N/A |
| `prefix` | *Optional[bool]* | :heavy_minus_sign: | Set this to `true` when adding an assistant message as prefix to condition the model response. The role of the prefix message is to force the model to start its answer by the content of the message. |
| `role` | [Optional[models.AssistantMessageRole]](../models/assistantmessagerole.md) | :heavy_minus_sign: | N/A |
| `prefix` | *Optional[bool]* | :heavy_minus_sign: | Set this to `true` when adding an assistant message as prefix to condition the model response. The role of the prefix message is to force the model to start its answer by the content of the message. |
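
The `prefix` flag above conditions the model to start its reply with the assistant message's content; a hedged sketch, reusing the `chat.complete` call from USAGE.md:

```python
import os
from mistralai import Mistral

s = Mistral(api_key=os.getenv("MISTRAL_API_KEY", ""))

res = s.chat.complete(model="mistral-small-latest", messages=[
    {"role": "user", "content": "Who is the best French painter? Answer in one short sentence."},
    # Prefix message: the model is forced to begin its answer with this content.
    {"role": "assistant", "content": "The best French painter is", "prefix": True},
])
```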
