
feat: add .continue repl command #608

Merged
merged 1 commit into main on Jun 17, 2024
Conversation

sigoden
Owner

@sigoden sigoden commented Jun 17, 2024

Add a .continue command to continue an incomplete response that stopped because it exceeded max_output_tokens.

Related to #601

@sigoden sigoden merged commit ba884c9 into main Jun 17, 2024
3 checks passed
@sigoden sigoden deleted the feat branch June 17, 2024 05:19
@blob42

blob42 commented Jun 17, 2024

I think you misunderstood how the continue feature is supposed to work. Continue should try to resume generation by sending back the full list of messages, including the latest assistant message. The LLM will then either continue its reply or emit the end-of-output token.

Continuing after exceeding max_output_tokens is just a special case.
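The continuation flow described above can be sketched as follows. This is a minimal illustration, not aichat's actual implementation: `build_continue_request` is a hypothetical helper, and the message format mirrors a typical chat-completion API.

```python
# Sketch of the "continue" flow: resend the full history, including
# the truncated assistant reply, so the model resumes where it
# stopped (or emits its stop token if the reply was complete).

def build_continue_request(history):
    """Build the payload for a continuation request.

    `history` must end with the (possibly incomplete) assistant
    message so the model picks up from that partial text.
    """
    if not history or history[-1]["role"] != "assistant":
        raise ValueError("history must end with an assistant message")
    return {"messages": list(history)}

history = [
    {"role": "user", "content": "Write a long essay on computing."},
    # Reply cut off after hitting max_output_tokens:
    {"role": "assistant", "content": "The history of computing beg"},
]

payload = build_continue_request(history)
```

The key point is that no new user message is added; the partial assistant message itself is the prompt to continue.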

@sigoden
Owner Author

sigoden commented Jun 17, 2024

The .continue command works as you expect. The description is just an answer for the curious.
