How to use local docs? #3098
-
They are an addition to the information the model already has from its training. You can try using prompt engineering to squeeze as much as you can out of the files, for example:

"There is an external file that you can now access. Before you start building a reply to any of my queries or statements, make sure that the only information you use for replying that is related to the subject (or subjects) of my query or statement is information that exists in that file, and make sure you avoid using any internal knowledge related to the subject (or subjects) of that query or statement."

That prompt needs adjustment, but it should work with a single file, for instance one file in one selected collection that contains absolutely no information about the Wizard of Oz. You can then verify the behavior by asking a question about the Wizard of Oz: the model should reply that it knows nothing about him, because there is no information about him in the file.
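If you want to experiment with the same idea outside the desktop app, here is a minimal sketch using the gpt4all Python bindings. The model name, the path books/my_book.txt, and the delimiter text are placeholder assumptions, and pasting the whole file into the system prompt only works for small files; in the desktop app, LocalDocs retrieves relevant snippets for you instead, and the restriction instruction would go into the model's System Prompt setting.

```python
from gpt4all import GPT4All

# Placeholders: use whichever model you have installed and your own document.
MODEL = "Meta-Llama-3-8B-Instruct.Q4_0.gguf"
DOC_PATH = "books/my_book.txt"

with open(DOC_PATH, encoding="utf-8") as f:
    doc_text = f.read()

# System prompt that restricts answers to the supplied document,
# mirroring the instruction suggested above.
system_prompt = (
    "You have access to one external document, shown below. "
    "Answer only from the information in that document and avoid using "
    "any internal knowledge about its subjects. If the document does not "
    "cover the question, say that you don't know.\n\n"
    f"--- DOCUMENT START ---\n{doc_text}\n--- DOCUMENT END ---"
)

model = GPT4All(MODEL)
with model.chat_session(system_prompt=system_prompt):
    # Control question the document does not cover: the model should admit
    # it has no information instead of answering from its training data.
    print(model.generate("Who is the Wizard of Oz?", max_tokens=256))
```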
-
I've read the documentation (https://docs.gpt4all.io/gpt4all_desktop/localdocs.html#how-it-works) and the related topic (#1766), but I still don't understand the approach.
I've added the books to a collection; they were embedded and are ready to use. I selected LocalDocs in the UI.
Fine, what is the next step? Should I still select one of the existing models, which will then use these books along with its own data, or can I somehow use this collection on its own with GPT4All instead of one of the existing models?
So, in a nutshell, can I have my own model based on my books, or are they just an addition?
If they are just an addition, what model would you recommend to avoid "skewing" of the answers by the model itself? Which model would be most neutral toward my LocalDocs, so that I could rely on the information in them rather than in the model (let's assume they could contradict each other)?