
ChatCompletionStreamManager object does not support the asynchronous context manager protocol #1639

Closed · lucashofer opened this issue Aug 12, 2024 · 1 comment · Fixed by #1640

Labels: documentation (Improvements or additions to documentation)

@lucashofer commented:
Confirm this is an issue with the Python library and not an underlying OpenAI API

  • This is an issue with the Python library

Describe the bug

The docs here say that the following should be possible

import openai
import asyncio

async def test_streaming():
    client = openai.OpenAI()

    async with client.beta.chat.completions.stream(
        model='gpt-4o-2024-08-06',
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Tell me a joke."},
        ],
    ) as stream:
        async for event in stream:
            if event.type == 'content.delta':
                print(event.delta, flush=True, end='')
            elif event.type == 'content.done':
                print("\nContent generation complete.")
                break

# Run the streaming test
asyncio.run(test_streaming())

However, this gives

TypeError: 'ChatCompletionStreamManager' object does not support the asynchronous context manager protocol

When I run it without async it works fine, i.e.

import openai

def test_streaming():
    client = openai.OpenAI()

    with client.beta.chat.completions.stream(
        model='gpt-4o-2024-08-06',
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Tell me a joke."},
        ],
    ) as stream:
        for event in stream:
            if event.type == 'content.delta':
                print(event.delta, flush=True, end='')
            elif event.type == 'content.done':
                print("\nContent generation complete.")
                break

# Run the streaming test
test_streaming()

To Reproduce

Run the first code snippet above, which uses the beta async chat completions streaming helper (the one that should also handle the new Pydantic parsing).


OS: macOS

Python version: 3.11-3.12

Library version: 1.40.4

@lucashofer lucashofer added the bug Something isn't working label Aug 12, 2024
@RobertCraigie RobertCraigie added documentation Improvements or additions to documentation and removed bug Something isn't working labels Aug 12, 2024
@RobertCraigie (Collaborator) commented:

Ah @lucashofer, sorry those docs don't make it clear: you have to use AsyncOpenAI() for async requests.

For example:

import openai
import asyncio

async def test_streaming():
    client = openai.AsyncOpenAI()

    async with client.beta.chat.completions.stream(
        model='gpt-4o-2024-08-06',
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Tell me a joke."},
        ],
    ) as stream:
        async for event in stream:
            if event.type == 'content.delta':
                print(event.delta, flush=True, end='')
            elif event.type == 'content.done':
                print("\nContent generation complete.")
                break

# Run the streaming test
asyncio.run(test_streaming())
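For completeness, here is a minimal sketch of the same async pattern combined with the structured-output parsing the report mentions. It assumes, per the helpers docs, that the beta .stream() helper accepts a Pydantic model via response_format and that the stream exposes get_final_completion(); the Joke schema is hypothetical.

import asyncio
import openai
from pydantic import BaseModel

# Hypothetical output schema, purely for illustration.
class Joke(BaseModel):
    setup: str
    punchline: str

async def test_parsed_streaming():
    client = openai.AsyncOpenAI()  # async client, as above

    async with client.beta.chat.completions.stream(
        model='gpt-4o-2024-08-06',
        messages=[{"role": "user", "content": "Tell me a joke."}],
        response_format=Joke,  # assumed: helper accepts a Pydantic model
    ) as stream:
        async for event in stream:
            if event.type == 'content.delta':
                print(event.delta, flush=True, end='')

        # Assumed: the stream accumulates the full response, with
        # .choices[0].message.parsed populated as a Joke instance.
        completion = await stream.get_final_completion()
        print("\n", completion.choices[0].message.parsed)

asyncio.run(test_parsed_streaming())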

stainless-app bot pushed a commit that referenced this issue Aug 12, 2024
megamanics pushed a commit to devops-testbed/openai-python that referenced this issue Aug 14, 2024