Count seems to be off by 5-8 tokens #5
That info is mostly in the demo app. I would probably make a new repository for it, but I will add those files to npm so that it is possible, if not ideal.
This is my current code; feel free to make some edits and improve it here in the demo app. TODO: add a check for missing options; right now it needs the functions to exist.
https://github.com/syonfox/GPT-3-Encoder/blob/GPToken/demo_app/streamOne.js
Also, what token estimator are you using? Do you have an example of the tokenized output? We could write a test to compare them to what is expected.
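The comparison test suggested above could be sketched roughly like this. Note that `countTokens` here is a stand-in for whatever estimator is under test (for instance `encode(text).length` from gpt-3-encoder); to keep the sketch self-contained it uses a naive whitespace split, and the `expected` numbers are illustrative, not real API output. In practice the expected counts would come from the non-streaming API response.

```javascript
// Stand-in for the real estimator under test (e.g. gpt-3-encoder's
// encode(text).length). A naive whitespace split so this sketch runs alone.
function countTokens(text) {
  return text.split(/\s+/).filter(Boolean).length;
}

// Expected counts would come from the official API's non-streaming response;
// the numbers below are illustrative only.
const cases = [
  { prompt: "hello world", expected: 2 },
  { prompt: "one two three four", expected: 4 },
];

for (const { prompt, expected } of cases) {
  const got = countTokens(prompt);
  console.log(`"${prompt}": got ${got}, expected ${expected}, diff ${got - expected}`);
}
```

A real version would swap in the actual encoder and a table of prompts with counts recorded from the API, then flag any case where the diff is nonzero.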
I compare it to the result given by the API when not streaming.
Yes, I agree that the most useful and original function is just to get a quick estimate. It would be useful if it were compatible with some of the guts. Anyway, I think the request can give accurate tokens, but a few cents here and there never hurt much.
This is probably due to issue #6 |
Just published https://github.com/drorm/gish; in the screencast you can see the token count displayed while streaming. Thank you.
Came here from openai/openai-node#18; I'm using this to count tokens when streaming.
It seems to be off by 5-8 tokens in either direction compared to the result the official API gives when used without streaming.
I haven't done a very deep analysis, just gave it a few prompts:
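One plausible source of this kind of drift (an assumption, not confirmed in the thread) is encoding each streamed chunk separately: when a chunk boundary falls mid-token, the per-chunk counts no longer match a single pass over the full text. A toy illustration with a trivial tokenizer, not the real BPE encoder:

```javascript
// Toy tokenizer: splits into runs of non-space / space characters.
// A stand-in for a real BPE encoder, used only to show why per-chunk
// token counts can differ from encoding the whole text at once.
function toyEncode(text) {
  return text.match(/\S+|\s+/g) || [];
}

const full = "hello world, streaming tokens";
// Simulated stream chunks that split words mid-token.
const chunks = ["hel", "lo world, str", "eaming tokens"];

const wholeCount = toyEncode(full).length;
const chunkCount = chunks.reduce((n, c) => n + toyEncode(c).length, 0);

console.log(wholeCount, chunkCount); // chunkCount ends up larger
```

With a real BPE vocabulary the same effect occurs whenever a multi-byte merge straddles a chunk boundary, which would account for a small error in either direction.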