Describe the bug
It seems that some evals require a specific context window length; for example, the `make-me-say` eval probably requires 32k. It would be nice if there were a more DX-friendly way to find this out before the run errors in the API call.
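One possible shape for this DX improvement (purely a sketch; `min_context_tokens` is a hypothetical field, not an existing evals registry option) would be to let an eval declare its context requirement in its registry YAML, so `oaieval` could compare it against the chosen model's window and fail fast with a clear message:

```yaml
# Hypothetical registry entry; min_context_tokens is illustrative, not a real field.
make-me-say:
  id: make-me-say.medium.v0
  min_context_tokens: 32768  # oaieval could check this against the model's window before running
```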
To Reproduce
```shell
oaieval gpt-3.5-turbo,gpt-3.5-turbo,gpt-3.5-turbo make-me-say --debug
```

This fails with:

```
This model's maximum context length is 4097 tokens. However, your messages resulted in 4123 tokens. Please reduce the length of the messages.
```
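A pre-flight check could catch this before the API call. The sketch below is hypothetical (not part of oaieval); a real implementation would use `tiktoken` for exact counts, but a crude characters/4 estimate stands in here so the sketch has no extra dependencies. The 4097 limit matches the error above; the 16k value is an assumption for illustration.

```python
# Hypothetical pre-flight context check; not part of oaieval.
MODEL_CONTEXT_LIMITS = {
    "gpt-3.5-turbo": 4097,       # limit reported in the error above
    "gpt-3.5-turbo-16k": 16385,  # assumed value for illustration
}

def estimate_tokens(messages):
    """Very rough estimate: ~4 characters per token, plus per-message overhead."""
    return sum(4 + len(m["content"]) // 4 for m in messages)

def check_context_fits(messages, model):
    """Raise before the API call if the prompt likely exceeds the model's window."""
    limit = MODEL_CONTEXT_LIMITS.get(model)
    if limit is None:
        return  # unknown model: skip the check
    used = estimate_tokens(messages)
    if used > limit:
        raise ValueError(
            f"{model} allows {limit} tokens but the prompt is ~{used} tokens; "
            f"run this eval with a larger-context model"
        )

msgs = [{"role": "user", "content": "x" * 40_000}]
try:
    check_context_fits(msgs, "gpt-3.5-turbo")
except ValueError as e:
    print("pre-flight check failed:", e)
```

With a check like this, the error surfaces immediately with an actionable hint instead of partway through an API call.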
Code snippets
No response
OS
macOS
Python version
Python v3.9.7
Library version
openai-evals 1.0.3