[Feature] Add support for log probability values in responses from Gemini models #238
Comments
Any thoughts about it?
I also think it would be extremely helpful if the API could provide the top-k log probabilities of each predicted token.
Really need this as I found Gemini to be hallucinating quite a bit on a RAG application. Btw, your title refers to
Thanks for the reminder 🤗
Many hallucination detection approaches rely on log probability as a key feature. It's one of the most essential elements when building a serious product with an LLM.
This would be extremely helpful!
As all others previously mentioned, it would be great to get access to the logprobs in a similar fashion to how OpenAI does with its models. Based on that, we could then e.g. calculate the perplexity score and various other evaluation metrics.
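To make the perplexity use case concrete, here is a minimal sketch of how perplexity would be computed from per-token log probabilities, assuming the API returned them as natural-log values (the function name and input format are illustrative, not part of any actual API):

```python
import math

def perplexity(token_logprobs):
    """Perplexity of a generated sequence, given each token's
    natural-log probability: exp(-mean(logprobs))."""
    if not token_logprobs:
        raise ValueError("need at least one token logprob")
    avg_logprob = sum(token_logprobs) / len(token_logprobs)
    return math.exp(-avg_logprob)

# A sequence where every token was predicted with probability 0.5
print(perplexity([math.log(0.5)] * 4))  # → 2.0
```

Lower perplexity indicates the model found its own continuation more predictable, which is why it is a common proxy for generation quality.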
Yep, this would make it possible to use Gemini in production.
+1 - useful also for classification tasks. |
+1 |
+1, it is critical. ChatGPT and other major models now all support this feature.
Can you please specify which major models support this feature? I'm also searching for alternatives, and right now I don't see any other players supporting this except OpenAI. No Anthropic, no Mistral...
b/361194489 |
Yes, this would allow evaluating Gemini with threshold-free evaluation metrics. That would be excellent. |
Google's Vertex-AI Just launched this, hopefully that means it's coming soon here, but I don't have a timeline. |
can you share a link? |
They have added a field 'avgLogprobs' to the response documentation in https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/inference#nodejs but I am unable to get a response containing such a field.
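Until the field shows up reliably, a defensive read is safer than assuming it exists. This sketch treats the response as a plain dict and looks for the `avgLogprobs` field named in the Vertex AI docs on the first candidate; the helper name and dict shape are assumptions, not a documented client API:

```python
def get_avg_logprob(response_json):
    """Defensively read avgLogprobs from the first candidate of a
    generateContent-style response dict; None if the field is absent."""
    candidates = response_json.get("candidates") or []
    if candidates:
        return candidates[0].get("avgLogprobs")
    return None

# Hypothetical responses: one with the field, one without
print(get_avg_logprob({"candidates": [{"avgLogprobs": -0.12}]}))  # → -0.12
print(get_avg_logprob({"candidates": [{}]}))                      # → None
```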
Yeah, this API is related, but separate from vertex. Hopefully this API will catch up soon. |
|
This is fixed in the latest version; see #561.
Description of the feature request:
How about adding support for retrieving the log probability of each predicted token from Google models like Gemini?
Something like the same functionality illustrated in this OpenAI post: Using logprobs.
What problem are you trying to solve with this feature?
Access to the log probability of each generated token helps me (and likely other users) gauge the confidence of the model's predictions. This feature would also let users compute the perplexity of generated sentences to better assess the quality of the textual continuation.
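The confidence use case described above can be sketched as follows: convert each token's logprob back to a probability and flag tokens the model was unsure about. The token list, logprob values, and threshold are all hypothetical, chosen only to illustrate the idea:

```python
import math

def token_confidences(tokens, logprobs):
    """Pair each token with its probability (exp of its natural-log prob)."""
    return [(tok, math.exp(lp)) for tok, lp in zip(tokens, logprobs)]

def flag_low_confidence(tokens, logprobs, threshold=0.5):
    """Return the tokens whose predicted probability is below the threshold."""
    return [tok for tok, p in token_confidences(tokens, logprobs) if p < threshold]

# Hypothetical per-token logprobs for a generated answer
tokens = ["Paris", "is", "the", "capitol"]
logprobs = [math.log(0.98), math.log(0.99), math.log(0.97), math.log(0.20)]
print(flag_low_confidence(tokens, logprobs))  # → ['capitol']
```

In a RAG setting, low-confidence spans like this are exactly where hallucination checks or retrieval re-verification would be focused.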
Any other information you'd like to share?
No response