whisper : improve handling of prompts #1981
Conversation
@sindresorhus let me know if this change would work for you.

Thanks for looking into this. That would work, although I still think wrapping it up into a separate function would make for a better API and make it more discoverable for consumers. I would never have thought of using `whisper_tokenize()` for this:

```c
// Some docs
int whisper_token_count(struct whisper_context * ctx, const char * text) {
    return -whisper_tokenize(ctx, text, NULL, 0);
}
```
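As an aside, here is a minimal sketch of how a caller could use the negative return value on its own to size a token buffer. The two-pass pattern and the `tokenize_text` helper below are illustrations only, not part of this PR:

```c
#include <stdlib.h>
#include "whisper.h"

// Sketch: the first call counts the tokens (NULL buffer), the second call
// fills a buffer of exactly that size. Relies on whisper_tokenize() returning
// the negative required count when the buffer is too small, as changed here.
static whisper_token * tokenize_text(struct whisper_context * ctx, const char * text, int * n_out) {
    const int n_needed = -whisper_tokenize(ctx, text, NULL, 0);
    if (n_needed <= 0) {
        *n_out = 0;
        return NULL;
    }

    whisper_token * tokens = malloc(n_needed * sizeof(whisper_token));
    *n_out = whisper_tokenize(ctx, text, tokens, n_needed);

    return tokens;
}
```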
```diff
@@ -207,7 +207,7 @@ void whisper_print_usage(int /*argc*/, char ** argv, const whisper_params & para
     fprintf(stderr, " -nt, --no-timestamps [%-7s] do not print timestamps\n", params.no_timestamps ? "true" : "false");
     fprintf(stderr, " -l LANG, --language LANG [%-7s] spoken language ('auto' for auto-detect)\n", params.language.c_str());
     fprintf(stderr, " -dl, --detect-language [%-7s] exit after automatically detecting language\n", params.detect_language ? "true" : "false");
-    fprintf(stderr, " --prompt PROMPT [%-7s] initial prompt\n", params.prompt.c_str());
+    fprintf(stderr, " --prompt PROMPT [%-7s] initial prompt (max n_text_ctx/2 tokens)\n", params.prompt.c_str());
```
I would document the 224 number here for quick reference.
Same in line 507 in 5c2c07d:

```c
// maximum of whisper_n_text_ctx()/2 tokens are used
```
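For quick reference, a minimal sketch of where the 224 figure comes from, assuming the existing `whisper_n_text_ctx()` accessor (448 is the text context size of the released Whisper models; the `max_prompt_tokens` helper name is just for illustration):

```c
#include "whisper.h"

// The usable prompt budget is half the decoder text context.
// For the released Whisper models n_text_ctx is 448, so this is 224 tokens.
static int max_prompt_tokens(struct whisper_context * ctx) {
    return whisper_n_text_ctx(ctx) / 2;
}
```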
* whisper : improve handling of prompts
* whisper : add whisper_token_count helper
Fixes #1960, #1961, #1962.
Changed `whisper_tokenize()` to return the negative number of required tokens when the token buffer is not big enough. This can be used to determine how many tokens a given text needs.

Also improved the docs by clarifying that the Whisper models can process prompts of at most `n_text_ctx/2` tokens, which is 224 tokens. There is no point in providing longer prompts, as they would be truncated:

whisper.cpp/whisper.cpp, lines 5474 to 5480 in 48a1452
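For downstream users, here is a hedged sketch of how this could be used to warn about over-long prompts before starting transcription. It assumes the `whisper_token_count()` helper added in this PR and the existing `whisper_n_text_ctx()` accessor; the `check_prompt_length` function and its warning wording are illustrative only:

```c
#include <stdio.h>
#include "whisper.h"

// Sketch: report when an initial prompt exceeds the usable budget of
// n_text_ctx/2 tokens (224 for the released models). The prompt itself is
// still passed unchanged; whisper.cpp truncates it internally.
static void check_prompt_length(struct whisper_context * ctx, const char * prompt) {
    const int n_prompt = whisper_token_count(ctx, prompt);
    const int n_max    = whisper_n_text_ctx(ctx) / 2;

    if (n_prompt > n_max) {
        fprintf(stderr, "warning: prompt is %d tokens, but only %d can be used\n", n_prompt, n_max);
    }
}
```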