
[Feature] Add Support on Log Probability Value from Returned Response of Gemini Models. #238

Closed
jacklanda opened this issue Mar 14, 2024 · 20 comments
Assignees
Labels
component:python sdk Issue/PR related to Python SDK type:feature request New feature request/enhancement

Comments

@jacklanda

Description of the feature request:

How about providing new support for retrieving the log probability of each predicted token by Google models like Gemini?

Something like the feature illustrated in this OpenAI post: Using Logprobs.

What problem are you trying to solve with this feature?

Getting the returned log probability of each generated token would help me (and likely other users) gauge the confidence of the model's predictions. Further, this feature would let users compute the perplexity of generated sentences to better assess the quality of a textual continuation.
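For concreteness, here is a minimal sketch of the perplexity computation this would enable, assuming per-token log probabilities were available from the response (the input values below are purely illustrative):

```python
import math

def perplexity(token_logprobs):
    """Perplexity of a generation: exp of the negative mean token logprob."""
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

# Illustrative values only -- these would come from the API response.
print(perplexity([-0.05, -0.20, -0.10]))
```

Lower perplexity means the model found its own continuation more predictable, which is a common proxy for generation confidence.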

Any other information you'd like to share?

No response

@jacklanda jacklanda added component:python sdk Issue/PR related to Python SDK type:feature request New feature request/enhancement labels Mar 14, 2024
@jacklanda
Author

Any thoughts about it?

@singhniraj08 singhniraj08 added the status:triaged Issue/PR triaged to the corresponding sub-team label Mar 19, 2024
@Brian-Ckwu

I also think it would be extremely helpful if the API could provide the top-k log probabilities of each predicted token.

@krenova

krenova commented Apr 9, 2024

Really need this, as I found Gemini to be hallucinating quite a bit in a RAG application.

By the way, your title refers to Claude, not Gemini:
[Feature] Add Support on Log Probability Value from Returned Response of Claude Models

@jacklanda jacklanda changed the title [Feature] Add Support on Log Probability Value from Returned Response of Claude Models. [Feature] Add Support on Log Probability Value from Returned Response of Gemini Models. Apr 9, 2024
@jacklanda
Author

Really need this, as I found Gemini to be hallucinating quite a bit in a RAG application.

By the way, your title refers to Claude, not Gemini: [Feature] Add Support on Log Probability Value from Returned Response of Claude Models

Thanks for the reminder 🤗

@simpleusername96

Many hallucination detection approaches rely on log probability as a key feature. It's one of the most essential elements when building a serious product with an LLM.

@anwang427

This would be extremely helpful!

@Said-Apollo

As others have mentioned, it would be great to get access to the logprobs in a similar fashion to how OpenAI exposes them for its models. Based on that, we could then calculate, e.g., the perplexity score and various other evaluation metrics.

@haugmarkus

Yep, this would make it possible to use Gemini in production.

@waveworks-ai

+1 - useful also for classification tasks.

@lavanyanemani96

+1

@luna-b20

+1, it is critical. ChatGPT and other major models all support this feature now.
Please help!

@michaelgfeldman

+1, it is critical. ChatGPT and other major models all support this feature now. Please help!

Can you please specify which major models support this feature? I'm also searching for alternatives. Right now I don't see any other players supporting this besides OpenAI. No Anthropic, no Mistral...

@MarkDaoust
Collaborator

b/361194489

@MFajcik

MFajcik commented Aug 30, 2024

I also think it would be extremely helpful if the API could provide the top-k log probabilities of each predicted token.

Yes, this would allow evaluating Gemini with threshold-free evaluation metrics. That would be excellent.

@MarkDaoust
Collaborator

MarkDaoust commented Aug 30, 2024

Google's Vertex AI just launched this; hopefully that means it's coming here soon, but I don't have a timeline.

@michaelgfeldman

Google's Vertex AI just launched this; hopefully that means it's coming here soon, but I don't have a timeline.

can you share a link?

@haugmarkus

Google's Vertex AI just launched this; hopefully that means it's coming here soon, but I don't have a timeline.

can you share a link?

They have added an 'avgLogprobs' field to the response documentation at https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/inference#nodejs, but I am unable to get a response containing such a field.
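For reference, the docs linked above show the field appearing per candidate. A hypothetical payload shaped like the REST documentation describes (stand-in values, not a real response) could be read like this:

```python
import math

# Hypothetical Vertex AI REST response fragment (stand-in values); a real
# payload carries more fields -- see the model-reference docs linked above.
response = {
    "candidates": [
        {
            "content": {"role": "model", "parts": [{"text": "Paris"}]},
            "avgLogprobs": -0.42,
        }
    ]
}

avg_logprob = response["candidates"][0].get("avgLogprobs")
if avg_logprob is not None:
    # Convert the average log probability back to an average probability.
    print(f"avg token probability: {math.exp(avg_logprob):.3f}")
```

Note this is a single averaged value per candidate, not the per-token logprobs requested in this issue.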

@MarkDaoust
Collaborator

Yeah, this API is related to, but separate from, Vertex. Hopefully this API will catch up soon.

@kauabh

kauabh commented Oct 4, 2024


@MarkDaoust
Collaborator

This is fixed in the latest version:

code: #561
tutorial: https://github.com/google-gemini/cookbook/blob/main/quickstarts/New_in_002.ipynb
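For anyone finding this later: per the linked cookbook, the 002 models accept a `response_logprobs=True` flag (plus a `logprobs=k` count for top-k alternatives) in the generation config, and the per-token values come back under `candidates[0].logprobs_result`. A minimal sketch of reading them, using stand-in dicts in place of the live SDK objects (field names assumed from the cookbook, values illustrative):

```python
import math

# A live call would look roughly like this (requires google-generativeai
# and an API key), per the New_in_002 cookbook:
#
#   model = genai.GenerativeModel("gemini-1.5-flash-002")
#   response = model.generate_content(
#       "What is the capital of France?",
#       generation_config=genai.GenerationConfig(
#           response_logprobs=True, logprobs=3),
#   )
#   chosen = response.candidates[0].logprobs_result.chosen_candidates

# Stand-in for chosen_candidates (illustrative values, not real output):
chosen = [
    {"token": "Paris", "log_probability": -0.01},
    {"token": ".", "log_probability": -0.35},
]

for c in chosen:
    confidence = math.exp(c["log_probability"])  # logprob -> probability
    print(f"{c['token']!r}: p={confidence:.3f}")
```

This is the per-token signal the thread asked for, so perplexity and confidence-based hallucination checks can now be built on top of it.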

@github-actions github-actions bot removed the status:triaged Issue/PR triaged to the corresponding sub-team label Oct 4, 2024