release: 1.3.7 #898

Merged 10 commits on Dec 1, 2023
2 changes: 1 addition & 1 deletion .release-please-manifest.json
@@ -1,3 +1,3 @@
{
".": "1.3.6"
".": "1.3.7"
}
24 changes: 24 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,29 @@
# Changelog

## 1.3.7 (2023-12-01)

Full Changelog: [v1.3.6...v1.3.7](https://github.com/openai/openai-python/compare/v1.3.6...v1.3.7)

### Bug Fixes

* **client:** correct base_url setter implementation ([#919](https://github.com/openai/openai-python/issues/919)) ([135d9cf](https://github.com/openai/openai-python/commit/135d9cf2820f1524764bf536a9322830bdcd5875))
* **client:** don't cause crashes when inspecting the module ([#897](https://github.com/openai/openai-python/issues/897)) ([db029a5](https://github.com/openai/openai-python/commit/db029a596c90b1af4ef0bfb1cdf31f54b2f5755d))
* **client:** ensure retried requests are closed ([#902](https://github.com/openai/openai-python/issues/902)) ([e025e6b](https://github.com/openai/openai-python/commit/e025e6bee44ea145d948869ef0c79bac0c376b9f))


### Chores

* **internal:** add tests for proxy change ([#899](https://github.com/openai/openai-python/issues/899)) ([71a13d0](https://github.com/openai/openai-python/commit/71a13d0c70d105b2b97720c72a1003b942cda2ae))
* **internal:** remove unused type var ([#915](https://github.com/openai/openai-python/issues/915)) ([4233bcd](https://github.com/openai/openai-python/commit/4233bcdae5f467f10454fcc008a6e728fa846830))
* **internal:** replace string concatenation with f-strings ([#908](https://github.com/openai/openai-python/issues/908)) ([663a8f6](https://github.com/openai/openai-python/commit/663a8f6dead5aa523d1e8779e75af1dabb1690c4))
* **internal:** replace string concatenation with f-strings ([#909](https://github.com/openai/openai-python/issues/909)) ([caab767](https://github.com/openai/openai-python/commit/caab767156375114078cf8d85031863361326b5f))


### Documentation

* fix typo in readme ([#904](https://github.com/openai/openai-python/issues/904)) ([472cd44](https://github.com/openai/openai-python/commit/472cd44e45a45b0b4f12583a5402e8aeb121d7a2))
* **readme:** update example snippets ([#907](https://github.com/openai/openai-python/issues/907)) ([bbb648e](https://github.com/openai/openai-python/commit/bbb648ef81eb11f81b457e2cbf33a832f4d29a76))

## 1.3.6 (2023-11-28)

Full Changelog: [v1.3.5...v1.3.6](https://github.com/openai/openai-python/compare/v1.3.5...v1.3.6)
14 changes: 8 additions & 6 deletions README.md
@@ -26,11 +26,12 @@ pip install openai
The full API of this library can be found in [api.md](https://www.github.com/openai/openai-python/blob/main/api.md).

```python
import os
from openai import OpenAI

client = OpenAI(
# defaults to os.environ.get("OPENAI_API_KEY")
api_key="My API Key",
# This is the default and can be omitted
api_key=os.environ.get("OPENAI_API_KEY"),
)

chat_completion = client.chat.completions.create(
@@ -54,12 +55,13 @@ so that your API Key is not stored in source control.
Simply import `AsyncOpenAI` instead of `OpenAI` and use `await` with each API call:

```python
import os
import asyncio
from openai import AsyncOpenAI

client = AsyncOpenAI(
# defaults to os.environ.get("OPENAI_API_KEY")
api_key="My API Key",
# This is the default and can be omitted
api_key=os.environ.get("OPENAI_API_KEY"),
)


@@ -96,7 +98,7 @@ stream = client.chat.completions.create(
)
for chunk in stream:
if chunk.choices[0].delta.content is not None:
print(part.choices[0].delta.content)
print(chunk.choices[0].delta.content)
```

The async client uses the exact same interface.
@@ -113,7 +115,7 @@ stream = await client.chat.completions.create(
)
async for chunk in stream:
if chunk.choices[0].delta.content is not None:
print(part.choices[0].delta.content)
print(chunk.choices[0].delta.content)
```

## Module-level client
2 changes: 1 addition & 1 deletion pyproject.toml
@@ -1,6 +1,6 @@
[project]
name = "openai"
version = "1.3.6"
version = "1.3.7"
description = "The official Python library for the openai API"
readme = "README.md"
license = "Apache-2.0"
102 changes: 81 additions & 21 deletions src/openai/_base_client.py
@@ -72,6 +72,7 @@
DEFAULT_TIMEOUT,
DEFAULT_MAX_RETRIES,
RAW_RESPONSE_HEADER,
STREAMED_RAW_RESPONSE_HEADER,
)
from ._streaming import Stream, AsyncStream
from ._exceptions import (
@@ -363,14 +364,21 @@ def _make_status_error_from_response(
self,
response: httpx.Response,
) -> APIStatusError:
err_text = response.text.strip()
body = err_text
if response.is_closed and not response.is_stream_consumed:
# We can't read the response body as it has been closed
# before it was read. This can happen if an event hook
# raises a status error.
body = None
err_msg = f"Error code: {response.status_code}"
else:
err_text = response.text.strip()
body = err_text

try:
body = json.loads(err_text)
err_msg = f"Error code: {response.status_code} - {body}"
except Exception:
err_msg = err_text or f"Error code: {response.status_code}"
try:
body = json.loads(err_text)
err_msg = f"Error code: {response.status_code} - {body}"
except Exception:
err_msg = err_text or f"Error code: {response.status_code}"

return self._make_status_error(err_msg, body=body, response=response)

@@ -534,6 +542,12 @@ def _process_response_data(
except pydantic.ValidationError as err:
raise APIResponseValidationError(response=response, body=data) from err

def _should_stream_response_body(self, *, request: httpx.Request) -> bool:
if request.headers.get(STREAMED_RAW_RESPONSE_HEADER) == "true":
return True

return False
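The new `_should_stream_response_body` hook gates raw-body streaming on a single marker header. A minimal standalone sketch of the same check (using a plain dict where the client uses `httpx` headers, which are case-insensitive — a simplification):

```python
# Header name introduced in _constants.py as part of this change.
STREAMED_RAW_RESPONSE_HEADER = "X-Stainless-Streamed-Raw-Response"


def should_stream_response_body(headers: dict[str, str]) -> bool:
    # Stream the raw body only when the caller explicitly set the
    # marker header to the exact string "true".
    return headers.get(STREAMED_RAW_RESPONSE_HEADER) == "true"
```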

@property
def qs(self) -> Querystring:
return Querystring()
@@ -578,7 +592,7 @@ def base_url(self) -> URL:

@base_url.setter
def base_url(self, url: URL | str) -> None:
self._client.base_url = url if isinstance(url, URL) else URL(url)
self._base_url = self._enforce_trailing_slash(url if isinstance(url, URL) else URL(url))
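The corrected setter routes through `_enforce_trailing_slash`, whose body is not shown in this diff. A hedged sketch of what such a helper presumably does — `httpx` only joins relative request paths onto a base URL correctly when it ends in `/`:

```python
def enforce_trailing_slash(url: str) -> str:
    # Hypothetical equivalent of the private helper referenced above:
    # append "/" unless the URL already ends with one.
    return url if url.endswith("/") else url + "/"
```

For example, `enforce_trailing_slash("https://api.example.com/v1")` yields `"https://api.example.com/v1/"`, and applying it again leaves the result unchanged.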

@lru_cache(maxsize=None)
def platform_headers(self) -> Dict[str, str]:
@@ -606,7 +620,7 @@ def _calculate_retry_timeout(
if response_headers is not None:
retry_header = response_headers.get("retry-after")
try:
retry_after = int(retry_header)
retry_after = float(retry_header)
except Exception:
retry_date_tuple = email.utils.parsedate_tz(retry_header)
if retry_date_tuple is None:
@@ -862,14 +876,21 @@ def _request(
request = self._build_request(options)
self._prepare_request(request)

response = None

try:
response = self._client.send(request, auth=self.custom_auth, stream=stream)
response = self._client.send(
request,
auth=self.custom_auth,
stream=stream or self._should_stream_response_body(request=request),
)
log.debug(
'HTTP Request: %s %s "%i %s"', request.method, request.url, response.status_code, response.reason_phrase
)
response.raise_for_status()
except httpx.HTTPStatusError as err: # thrown on 4xx and 5xx status code
if retries > 0 and self._should_retry(err.response):
err.response.close()
return self._retry_request(
options,
cast_to,
@@ -881,27 +902,39 @@

# If the response is streamed then we need to explicitly read the response
# to completion before attempting to access the response text.
err.response.read()
if not err.response.is_closed:
err.response.read()

raise self._make_status_error_from_response(err.response) from None
except httpx.TimeoutException as err:
if response is not None:
response.close()

if retries > 0:
return self._retry_request(
options,
cast_to,
retries,
stream=stream,
stream_cls=stream_cls,
response_headers=response.headers if response is not None else None,
)

raise APITimeoutError(request=request) from err
except Exception as err:
if response is not None:
response.close()

if retries > 0:
return self._retry_request(
options,
cast_to,
retries,
stream=stream,
stream_cls=stream_cls,
response_headers=response.headers if response is not None else None,
)

raise APIConnectionError(request=request) from err

return self._process_response(
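Several of the hunks above share one pattern: every failed or timed-out response is now explicitly closed before `_retry_request` is invoked, so its connection returns to the pool. A toy model of that pattern, using a fake response class rather than the library's actual types:

```python
class FakeResponse:
    """Stand-in for an httpx.Response in this sketch."""

    def __init__(self, status_code: int) -> None:
        self.status_code = status_code
        self.closed = False

    def close(self) -> None:
        self.closed = True


def request_with_retries(send, max_retries: int = 2) -> FakeResponse:
    response = None
    for attempt in range(max_retries + 1):
        response = send()
        if response.status_code < 500 or attempt == max_retries:
            return response
        # The fix: close the failed response before retrying instead of
        # leaking its connection.
        response.close()
    return response
```

Feeding it a 500 followed by a 200 returns the 200 and leaves the failed 500 response closed.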
@@ -917,7 +950,7 @@ def _retry_request(
options: FinalRequestOptions,
cast_to: Type[ResponseT],
remaining_retries: int,
response_headers: Optional[httpx.Headers] = None,
response_headers: httpx.Headers | None,
*,
stream: bool,
stream_cls: type[_StreamT] | None,
@@ -1303,14 +1336,21 @@ async def _request(
request = self._build_request(options)
await self._prepare_request(request)

response = None

try:
response = await self._client.send(request, auth=self.custom_auth, stream=stream)
response = await self._client.send(
request,
auth=self.custom_auth,
stream=stream or self._should_stream_response_body(request=request),
)
log.debug(
'HTTP Request: %s %s "%i %s"', request.method, request.url, response.status_code, response.reason_phrase
)
response.raise_for_status()
except httpx.HTTPStatusError as err: # thrown on 4xx and 5xx status code
if retries > 0 and self._should_retry(err.response):
await err.response.aclose()
return await self._retry_request(
options,
cast_to,
@@ -1322,19 +1362,39 @@

# If the response is streamed then we need to explicitly read the response
# to completion before attempting to access the response text.
await err.response.aread()
if not err.response.is_closed:
await err.response.aread()

raise self._make_status_error_from_response(err.response) from None
except httpx.ConnectTimeout as err:
if retries > 0:
return await self._retry_request(options, cast_to, retries, stream=stream, stream_cls=stream_cls)
raise APITimeoutError(request=request) from err
except httpx.TimeoutException as err:
if response is not None:
await response.aclose()

if retries > 0:
return await self._retry_request(options, cast_to, retries, stream=stream, stream_cls=stream_cls)
return await self._retry_request(
options,
cast_to,
retries,
stream=stream,
stream_cls=stream_cls,
response_headers=response.headers if response is not None else None,
)

raise APITimeoutError(request=request) from err
except Exception as err:
if response is not None:
await response.aclose()

if retries > 0:
return await self._retry_request(options, cast_to, retries, stream=stream, stream_cls=stream_cls)
return await self._retry_request(
options,
cast_to,
retries,
stream=stream,
stream_cls=stream_cls,
response_headers=response.headers if response is not None else None,
)

raise APIConnectionError(request=request) from err

return self._process_response(
@@ -1350,7 +1410,7 @@ async def _retry_request(
options: FinalRequestOptions,
cast_to: Type[ResponseT],
remaining_retries: int,
response_headers: Optional[httpx.Headers] = None,
response_headers: httpx.Headers | None,
*,
stream: bool,
stream_cls: type[_AsyncStreamT] | None,
1 change: 1 addition & 0 deletions src/openai/_constants.py
@@ -3,6 +3,7 @@
import httpx

RAW_RESPONSE_HEADER = "X-Stainless-Raw-Response"
STREAMED_RAW_RESPONSE_HEADER = "X-Stainless-Streamed-Raw-Response"

# default timeout is 10 minutes
DEFAULT_TIMEOUT = httpx.Timeout(timeout=600.0, connect=5.0)
26 changes: 22 additions & 4 deletions src/openai/_utils/_proxy.py
@@ -18,25 +18,43 @@ class LazyProxy(Generic[T], ABC):
def __init__(self) -> None:
self.__proxied: T | None = None

# Note: we have to special case proxies that themselves return proxies
# to support using a proxy as a catch-all for any random access, e.g. `proxy.foo.bar.baz`

def __getattr__(self, attr: str) -> object:
return getattr(self.__get_proxied__(), attr)
proxied = self.__get_proxied__()
if isinstance(proxied, LazyProxy):
return proxied # pyright: ignore
return getattr(proxied, attr)

@override
def __repr__(self) -> str:
proxied = self.__get_proxied__()
if isinstance(proxied, LazyProxy):
return proxied.__class__.__name__
return repr(self.__get_proxied__())

@override
def __str__(self) -> str:
return str(self.__get_proxied__())
proxied = self.__get_proxied__()
if isinstance(proxied, LazyProxy):
return proxied.__class__.__name__
return str(proxied)

@override
def __dir__(self) -> Iterable[str]:
return self.__get_proxied__().__dir__()
proxied = self.__get_proxied__()
if isinstance(proxied, LazyProxy):
return []
return proxied.__dir__()

@property # type: ignore
@override
def __class__(self) -> type:
return self.__get_proxied__().__class__
proxied = self.__get_proxied__()
if issubclass(type(proxied), LazyProxy):
return type(proxied)
return proxied.__class__

def __get_proxied__(self) -> T:
if not self.should_cache:
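The `isinstance(proxied, LazyProxy)` special cases above exist because a proxy whose target is itself a proxy (a catch-all like `proxy.foo.bar.baz`) would otherwise be dereferenced again and again when tools inspect the module via `repr`, `dir`, and friends, recursing until the interpreter gives up. A minimal reconstruction of the pattern, not the library's actual class:

```python
from abc import ABC, abstractmethod
from typing import Any


class LazyProxy(ABC):
    """Defers loading of a target object until first use."""

    @abstractmethod
    def __load__(self) -> Any: ...

    def __getattr__(self, attr: str) -> object:
        proxied = self.__load__()
        if isinstance(proxied, LazyProxy):
            # A catch-all proxy returns another proxy; hand it back
            # untouched rather than recursing through getattr.
            return proxied
        return getattr(proxied, attr)

    def __repr__(self) -> str:
        proxied = self.__load__()
        if isinstance(proxied, LazyProxy):
            return proxied.__class__.__name__
        return repr(proxied)


class ChainedProxy(LazyProxy):
    # Every attribute access yields yet another proxy.
    def __load__(self) -> Any:
        return ChainedProxy()
```

With the special case in place, `repr(ChainedProxy())` returns the class name instead of recursing, and chained access like `proxy.foo.bar.baz` keeps producing proxies.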
2 changes: 1 addition & 1 deletion src/openai/_utils/_utils.py
@@ -230,7 +230,7 @@ def human_join(seq: Sequence[str], *, delim: str = ", ", final: str = "or") -> s

def quote(string: str) -> str:
"""Add single quotation marks around the given string. Does *not* do any escaping."""
return "'" + string + "'"
return f"'{string}'"


def required_args(*variants: Sequence[str]) -> Callable[[CallableT], CallableT]:
2 changes: 1 addition & 1 deletion src/openai/_version.py
@@ -1,4 +1,4 @@
# File generated from our OpenAPI spec by Stainless.

__title__ = "openai"
__version__ = "1.3.6" # x-release-please-version
__version__ = "1.3.7" # x-release-please-version