
Properly truncate to 8000 tokens when we use OpenAI Embeddings #812

Merged (3 commits) on Oct 8, 2024

Conversation

@vkehfdl1 (Contributor) commented Oct 7, 2024

Closes #811

It would be great if we could set the truncation limit as a parameter; that can be a new issue.

  • Call embedding model before use
  • Adjust embedding token limit to 8000
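The fix described above amounts to clipping the input to at most 8000 tokens before requesting an embedding. A minimal sketch of that idea, using a stand-in whitespace tokenizer in place of a real one (the function name `truncate_by_token` echoes the issue title, but the helper, the codec arguments, and `MAX_EMBED_TOKENS` are illustrative, not taken from the PR's code):

```python
# Illustrative sketch of token-based truncation before embedding.
MAX_EMBED_TOKENS = 8000  # limit adopted in this PR


def truncate_by_token(text, encode, decode, limit=MAX_EMBED_TOKENS):
    """Clip `text` to at most `limit` tokens under the given codec."""
    tokens = encode(text)
    if len(tokens) <= limit:
        return text  # already within the model's limit
    return decode(tokens[:limit])


# Stand-in tokenizer for the sketch; real code would use the
# tokenizer matching the embedding model (e.g. tiktoken's cl100k_base).
encode = lambda s: s.split()
decode = lambda toks: " ".join(toks)

long_text = " ".join(["word"] * 10000)
clipped = truncate_by_token(long_text, encode, decode)
```

With the whitespace codec, `clipped` retains exactly 8000 tokens, while inputs already under the limit pass through unchanged.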

@vkehfdl1 vkehfdl1 requested a review from bwook00 October 7, 2024 16:49
@vkehfdl1 vkehfdl1 enabled auto-merge (squash) October 7, 2024 17:08
@bwook00 (Contributor) left a comment


LGTM

@vkehfdl1 vkehfdl1 merged commit 504dfcb into main Oct 8, 2024
3 checks passed
@vkehfdl1 vkehfdl1 deleted the HotFix/#811 branch October 8, 2024 01:06
Development

Successfully merging this pull request may close these issues.

[BUG] OpenAIEmbedding did not truncate as openai_truncate_by_token
2 participants