
privateGPT

Ask questions about your documents without an internet connection, using the power of LLMs. 100% private: no data leaves your execution environment at any point, whether you are ingesting documents or asking questions.

Built with LangChain, GPT4All, and LlamaCpp.


Environment Setup

To set up your environment to run the code, first install all requirements:

pip install -r requirements.txt

Then download the two models and place them in a directory of your choice.

  • LLM: defaults to ggml-gpt4all-j-v1.3-groovy.bin. If you prefer a different GPT4All-J compatible model, just download it and reference it in your .env file.
  • Embeddings: defaults to ggml-model-q4_0.bin. If you prefer a different compatible embeddings model, just download it and reference it in your .env file.

Rename example.env to .env and edit the variables appropriately.

MODEL_TYPE: supports LlamaCpp or GPT4All
PERSIST_DIRECTORY: the folder where you want your vectorstore stored
LLAMA_EMBEDDINGS_MODEL: absolute path to your LlamaCpp-supported embeddings model
MODEL_PATH: path to your GPT4All or LlamaCpp supported LLM
MODEL_N_CTX: maximum token limit for both the embeddings and LLM models

Note: because of the way LangChain loads the LLaMA embeddings, you need to specify the absolute path of your embeddings model binary. This means it will not work if you use a home-directory shortcut (e.g. ~/ or $HOME/).
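For reference, a filled-in .env might look like the following (values are illustrative: point the paths at wherever you placed your models, and adjust the context size to your model's limit):

```
MODEL_TYPE=GPT4All
PERSIST_DIRECTORY=db
LLAMA_EMBEDDINGS_MODEL=/home/user/models/ggml-model-q4_0.bin
MODEL_PATH=models/ggml-gpt4all-j-v1.3-groovy.bin
MODEL_N_CTX=1000
```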

Test dataset

This repo uses a State of the Union transcript as an example.

Instructions for ingesting your own dataset

Put all of your .txt, .pdf, or .csv files into the source_documents directory.

Run the following command to ingest all the data.

python ingest.py

It will create a db folder containing the local vectorstore. Ingestion will take time, depending on the size of your documents. You can ingest as many documents as you want; all of them will be accumulated in the local embeddings database. If you want to start from an empty database, delete the db folder.

Note: during the ingest process no data leaves your local environment. You could ingest without an internet connection.
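For a sense of what ingest.py is doing internally, the pipeline can be sketched roughly as follows. This is a condensed, illustrative version built on the LangChain APIs of this era, not the exact script; the loader choice, chunk sizes, and model path are assumptions:

```python
from langchain.document_loaders import TextLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import LlamaCppEmbeddings
from langchain.vectorstores import Chroma

# Load a single .txt document (the real script walks all of source_documents
# and picks a loader per file type).
docs = TextLoader("source_documents/state_of_the_union.txt").load()

# Split into overlapping chunks so each embedding covers a manageable span.
splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)
chunks = splitter.split_documents(docs)

# Embed locally with LlamaCpp; no network calls are made.
embeddings = LlamaCppEmbeddings(model_path="/home/user/models/ggml-model-q4_0.bin")

# Store the vectors in the local Chroma database under the db folder.
db = Chroma.from_documents(chunks, embeddings, persist_directory="db")
db.persist()
```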

Ask questions to your documents, locally!

To ask a question, run:

python privateGPT.py

Then wait for the script to prompt for your input:

> Enter a query:

Type your question and hit Enter. You'll need to wait 20-30 seconds (depending on your machine) while the LLM consumes the prompt and prepares the answer. Once done, it will print the answer and the 4 sources it used as context from your documents; you can then ask another question without re-running the script. Just wait for the prompt again.

Note: you could turn off your internet connection, and the script inference would still work. No data gets out of your local environment.

Type exit to finish the script.
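Under the hood, the question-answering loop amounts to a retrieval chain along these lines. This is an illustrative sketch, not the exact privateGPT.py: the model paths and context size are assumptions from the .env example above, and the retriever's k=4 mirrors the four sources printed with each answer:

```python
from langchain.llms import GPT4All
from langchain.embeddings import LlamaCppEmbeddings
from langchain.vectorstores import Chroma
from langchain.chains import RetrievalQA

# Reopen the persisted vectorstore created by ingest.py.
embeddings = LlamaCppEmbeddings(model_path="/home/user/models/ggml-model-q4_0.bin")
db = Chroma(persist_directory="db", embedding_function=embeddings)

# Local LLM; the whole chain runs offline.
llm = GPT4All(model="models/ggml-gpt4all-j-v1.3-groovy.bin", n_ctx=1000)

# Retrieve the 4 most similar chunks and stuff them into the prompt as context.
qa = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=db.as_retriever(search_kwargs={"k": 4}),
    return_source_documents=True,
)

result = qa("What did the president say about the economy?")
print(result["result"])
```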

PrivateGPT GUI

Another way to ask questions is to use the GUI built for privateGPT. Run:

python gui.py

You'll see a GUI with a field to enter your query. Click Submit Query; you'll need to wait 20-30 seconds (depending on your machine), the same as with the shell version. You can keep entering questions without re-running the GUI; all queries and answers will be listed in order in the Answer output box. To clear the fields, click the Reset button.


Note: as with the shell version, you can turn off your internet connection and inference will still work. No data leaves your local environment.

How does it work?

By selecting the right local models and leveraging the power of LangChain, you can run the entire pipeline locally, without any data leaving your environment, and with reasonable performance.

  • ingest.py uses LangChain tools to parse the documents and create embeddings locally using LlamaCppEmbeddings. It then stores the result in a local vector database using the Chroma vector store.
  • privateGPT.py uses a local LLM based on GPT4All-J or LlamaCpp to understand questions and create answers. The context for the answers is extracted from the local vector store using a similarity search to locate the right pieces of context from the docs (see the snippet after this list).
  • The GPT4All-J wrapper was introduced in LangChain 0.0.162.
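The retrieval step in isolation looks roughly like this. It is an illustrative fragment that assumes the db folder produced by ingest.py and the embeddings model path from your .env:

```python
from langchain.embeddings import LlamaCppEmbeddings
from langchain.vectorstores import Chroma

# Reopen the persisted store and fetch the 4 chunks most similar to a query;
# these chunks become the context handed to the local LLM.
db = Chroma(
    persist_directory="db",
    embedding_function=LlamaCppEmbeddings(model_path="/home/user/models/ggml-model-q4_0.bin"),
)
for doc in db.similarity_search("What did the president say about the economy?", k=4):
    print(doc.metadata["source"], doc.page_content[:80])
```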

Disclaimer

This is a test project to validate the feasibility of a fully private solution for question answering using LLMs and vector embeddings. It is not production ready, and it is not meant to be used in production. The model selection is not optimized for performance but for privacy; it is possible to use different models and vectorstores to improve performance.
