Clarification on Input/Output Length Parameters for gpt-4-1106-preview and gpt-4-0125-preview Models #533
I'm interested in this issue. This value seems to have been discussed in #521. There appears to be confusion due to a discrepancy between how "Max Token" is described and how it is actually used. In this application, "Max Token" is described as "The maximum number of tokens to generate in the chat completion" (BetterChatGPT/public/locales/en/model.json, lines 5 to 6 in ecad41f).
In practice, however, that is not how the value is used (see line 52 in ecad41f, and line 105 in ecad41f). Instead, "Max Token" is used as a parameter in BetterChatGPT/src/hooks/useSubmit.ts, lines 73 to 77 in ecad41f.
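The two roles being conflated can be illustrated with a small sketch. This is not the project's actual code; the names `buildRequestBody`, `trimToBudget`, and the naive word-count tokenizer are hypothetical stand-ins:

```typescript
// Hypothetical sketch of the two distinct roles a "Max Token" value can play.
// Names and the naive token counter are illustrative, not BetterChatGPT's code.

interface ChatMessage {
  role: 'system' | 'user' | 'assistant';
  content: string;
}

// Naive stand-in for a real tokenizer (e.g. tiktoken): ~1 token per word.
const countTokens = (m: ChatMessage): number => m.content.split(/\s+/).length;

// Role 1: an OUTPUT cap, sent to the API as `max_tokens`.
// This matches the description "maximum number of tokens to generate".
function buildRequestBody(messages: ChatMessage[], maxOutputTokens: number) {
  return { model: 'gpt-4-0125-preview', messages, max_tokens: maxOutputTokens };
}

// Role 2: an INPUT budget, applied client-side to drop the oldest messages
// so the prompt fits within a token budget. Nothing is sent to the API.
function trimToBudget(messages: ChatMessage[], budget: number): ChatMessage[] {
  const kept: ChatMessage[] = [];
  let used = 0;
  // Walk from newest to oldest, keeping messages while they still fit.
  for (let i = messages.length - 1; i >= 0; i--) {
    const t = countTokens(messages[i]);
    if (used + t > budget) break;
    kept.unshift(messages[i]);
    used += t;
  }
  return kept;
}
```

The in-app description of "Max Token" matches role 1, while the code reference above suggests the value actually drives role 2, which would explain the confusion.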
I'm not sure the documentation and the actual code match up, especially regarding how many tokens the gpt-4-1106-preview and gpt-4-0125-preview models can handle. The documentation says both models handle the same number of tokens, but the code appears to give them different settings.
BetterChatGPT/src/constants/chat.ts, lines 50 to 51 in ecad41f
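One way to remove the ambiguity is to track the two limits separately per model. Below is a hedged sketch, not BetterChatGPT's actual constant: the structure and names (`ModelLimits`, `clampMaxTokens`) are hypothetical, while the figures come from OpenAI's model documentation:

```typescript
// Hypothetical per-model limits, keeping the context window and the
// maximum completion length as separate fields.
interface ModelLimits {
  contextWindow: number; // total tokens (prompt + completion) the model accepts
  maxOutput: number;     // maximum tokens the model will generate
}

const MODEL_LIMITS: Record<string, ModelLimits> = {
  // Figures from https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo
  'gpt-4-1106-preview': { contextWindow: 128000, maxOutput: 4096 },
  'gpt-4-0125-preview': { contextWindow: 128000, maxOutput: 4096 },
};

// Clamp a user-supplied max_tokens so it never exceeds the model's output cap.
function clampMaxTokens(model: string, requested: number): number {
  const limits = MODEL_LIMITS[model];
  return Math.min(requested, limits.maxOutput);
}
```

With separate fields, both preview models end up with identical settings, which is what the official documentation describes.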
Version / Description / Context window
gpt-4-0125-preview
Description: The latest GPT-4 model intended to reduce cases of “laziness” where the model doesn’t complete a task. Returns a maximum of 4,096 output tokens.
Context window: 128,000 tokens
gpt-4-1106-preview
Description: GPT-4 Turbo model featuring improved instruction following, JSON mode, reproducible outputs, parallel function calling, and more. Returns a maximum of 4,096 output tokens. This preview model is not yet suited for production traffic.
Context window: 128,000 tokens
Reference:
https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo
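Given these figures, the number of prompt tokens a request can carry is the context window minus whatever is reserved for the completion. A small sketch of that arithmetic (an assumption for illustration, not project code):

```typescript
// For a 128,000-token context window with the completion capped at 4,096
// tokens, the prompt may use at most contextWindow - reservedOutput tokens.
function promptBudget(contextWindow: number, reservedOutput: number): number {
  if (reservedOutput > contextWindow) {
    throw new Error('reserved output exceeds the context window');
  }
  return contextWindow - reservedOutput;
}
```

For either preview model this leaves 128,000 − 4,096 = 123,904 tokens for the prompt, identical for both, which is why a difference in the configured values looks like a bug.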