Bug Report
Despite being found and displayed, the LocalDocs collection chunks are not sent alongside the user's question when chatting with an OpenAI Compatible model (aka an external model).
Steps to Reproduce
Create a LocalDocs collection and add a few documents
Add an OpenAI compatible model (in my case, it was an Ollama server running "llama3.2:3b-instruct-q4_0")
Chat with that external model: although sources are found and displayed, they are not part of any message sent to Ollama (checked via a reverse proxy; see the logging-proxy sketch below). As a result, the answer is not grounded in the LocalDocs collection.
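For anyone wanting to reproduce the reverse-proxy check, here is a minimal logging proxy sketch. The listen port and the Ollama address are assumptions for illustration; point GPT4All's OpenAI-compatible base URL at the proxy and it prints every request body before forwarding it, so you can see whether any LocalDocs chunks appear in the messages array. Note it buffers streamed responses, so expect replies to arrive all at once.

```python
# Minimal logging reverse proxy: prints what GPT4All actually sends
# to the OpenAI-compatible endpoint, then forwards it to Ollama.
# Port 8888 and the Ollama base URL below are assumptions.
import http.server
import json
import urllib.request

OLLAMA_BASE = "http://localhost:11434"  # assumed default Ollama address

class LoggingProxy(http.server.BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        # Log the outgoing payload so we can check whether LocalDocs
        # chunks appear anywhere in the request.
        try:
            print(json.dumps(json.loads(body), indent=2))
        except ValueError:
            print(body)
        req = urllib.request.Request(
            OLLAMA_BASE + self.path,
            data=body,
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        # Note: this buffers the whole response, so streaming replies
        # are delivered in one piece rather than token by token.
        with urllib.request.urlopen(req) as resp:
            payload = resp.read()
            self.send_response(resp.status)
            self.send_header("Content-Type",
                             resp.headers.get("Content-Type", "application/json"))
            self.send_header("Content-Length", str(len(payload)))
            self.end_headers()
            self.wfile.write(payload)

if __name__ == "__main__":
    http.server.HTTPServer(("127.0.0.1", 8888), LoggingProxy).serve_forever()
```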
Expected Behavior
The relevant chunks found in the LocalDocs collection should be sent alongside the user query to produce a grounded answer.
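To make that concrete, this is roughly the shape of request I would expect to see at the proxy when grounding works. The prompt template and the placement of chunks in a system message are assumptions for illustration, not confirmed from the GPT4All source; the endpoint is Ollama's OpenAI-compatible one.

```python
# Sketch of an expected grounded request: retrieved LocalDocs chunks
# (contents invented here) precede the user question. The exact
# template GPT4All uses is an assumption.
import json
import urllib.request

expected_payload = {
    "model": "llama3.2:3b-instruct-q4_0",
    "messages": [
        {
            "role": "system",
            "content": (
                "Use the following excerpts to answer the question.\n"
                "---\n"
                "<chunk 1 from the LocalDocs collection>\n"
                "<chunk 2 from the LocalDocs collection>"
            ),
        },
        {"role": "user", "content": "What does the document say about X?"},
    ],
}

req = urllib.request.Request(
    "http://localhost:11434/v1/chat/completions",  # Ollama's OpenAI-compatible endpoint
    data=json.dumps(expected_payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["choices"][0]["message"]["content"])
```

In the failing case, the messages array reaching Ollama contains only the raw user question, with no system message carrying the retrieved chunks.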
I should also mention that as soon as I switch to a "locally installed" model, the chunks are used. Maybe there is some kind of mechanism that strips the chunks from the message for security reasons? Or perhaps it is related to differences in the model config files?
Your Environment
GPT4All version: 3.4.2
Operating System: Windows 11
Chat model used (if applicable): "llama3.2:3b-instruct-q4_0" (served by Ollama)
If you open your LocalDocs database with the LocalDocs Inspector, you can easily search it (no SQL required) for the particular chunks of text you're interested in; maybe they're not in there at all, or they are in there but not processed.
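If you do prefer raw SQL over the Inspector, a sketch like the following can peek into the database directly. The file name and the table/column guesses (chunks, text) are assumptions about the schema, so list the real tables first and adjust.

```python
# Sketch for inspecting the LocalDocs SQLite database directly.
# The db path and the "chunks"/"text" names are assumptions; check
# the actual schema in your GPT4All data directory first.
import sqlite3

db_path = "localdocs_v2.db"  # assumed name/location
con = sqlite3.connect(db_path)

# List the actual tables so the guesses below can be corrected.
for (name,) in con.execute("SELECT name FROM sqlite_master WHERE type='table'"):
    print(name)

# Search for a phrase you expect to have been indexed (hypothetical schema;
# this raises OperationalError if the table/column guesses are wrong).
needle = "%your phrase here%"
try:
    for row in con.execute("SELECT * FROM chunks WHERE text LIKE ? LIMIT 5", (needle,)):
        print(row)
except sqlite3.OperationalError as exc:
    print("Adjust table/column names to the real schema:", exc)
con.close()
```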