- Install Python 3.9+ and pip before setting up the project-specific dependencies.
- Install all requirements.
/usr/bin/pip3 install --user gpt4all langchain langchain-openai beautifulsoup4 chromadb faiss-cpu langchainhub gradio pypdf sentence-transformers text-generation
- Run the Hugging Face Text Generation Inference (TGI) server locally.
mkdir -p $HOME/huggingface/data
model="meta-llama/Llama-2-7b-chat-hf"
volume="$HOME/huggingface/data"
token="<Your own HF token>"
docker run --gpus all --shm-size 1g -e HUGGING_FACE_HUB_TOKEN=$token -p 8080:80 -v $volume:/data ghcr.io/huggingface/text-generation-inference:1.1.0 --model-id $model
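Once the container is up, a quick way to verify it is the text-generation client installed with the requirements above. A minimal smoke test, assuming the port mapping from the docker command (the prompt and token count are just examples):

```python
from text_generation import Client

# Talk to the local TGI server started above (mapped to port 8080).
client = Client("http://localhost:8080")
response = client.generate("What is Kubernetes?", max_new_tokens=100)
print(response.generated_text)
```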
- Simple AI chatbot using GPT4All directly.
/usr/bin/python3 chat1.py
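For reference, a minimal sketch of a direct GPT4All chat loop like the one in chat1.py; the model name is an assumption and is downloaded on first use:

```python
from gpt4all import GPT4All

# Example model; chat1.py may pin a different one. The file is
# downloaded and cached automatically on first use.
model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")
with model.chat_session():
    while True:
        prompt = input("You: ")
        if prompt.lower() in ("quit", "exit"):
            break
        print("Bot:", model.generate(prompt, max_tokens=200))
```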
- Simple AI chatbot using GPT4All via the LangChain framework.
/usr/bin/python3 chat2.py
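The LangChain variant wraps the same model behind LangChain's LLM interface. A sketch, assuming the langchain import paths pulled in by the requirements above and a locally downloaded model file (the path is an assumption):

```python
from langchain.llms import GPT4All
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

# The model path is an assumption; point it at any downloaded
# GPT4All model file.
llm = GPT4All(model="./models/orca-mini-3b-gguf2-q4_0.gguf")
prompt = PromptTemplate.from_template("Answer briefly: {question}")
chain = LLMChain(llm=llm, prompt=prompt)
print(chain.run(question="What is a Kubernetes pod?"))
```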
- Simple AI chatbot using the GPT4All API server (which implements the same HTTP APIs as OpenAI) via LangChain's OpenAI chat integration. As a prerequisite, run the GPT4All API server in Docker.
/usr/bin/python3 chat3.py
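Because the GPT4All server speaks the OpenAI HTTP API, chat3.py can point LangChain's stock OpenAI chat client at localhost. A sketch; the port (4891) and model name are assumptions based on the GPT4All API server defaults:

```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    base_url="http://localhost:4891/v1",  # local GPT4All server, not api.openai.com
    api_key="not-needed",                 # the local server does not check the key
    model="gpt4all-j-v1.3-groovy",        # assumption: whatever model the server loaded
)
print(llm.invoke("What is a Kubernetes service?").content)
```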
- Contextual Q&A chatbot over sample PDF data (e.g. Kubernetes documentation) using GPT4All directly.
Build the corpus and the vector index first. The sample data location is hard-coded to data/k8s-docs but can be updated to any directory containing PDF data.
/usr/bin/python3 qa1_build.py
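One plausible shape for the build step, using the PDF loader, splitter, and FAISS packages from the requirements above (directory and model names are examples, not necessarily what qa1_build.py uses):

```python
from langchain.document_loaders import PyPDFDirectoryLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import FAISS

# Load every PDF under the sample data directory and split into
# overlapping chunks sized for retrieval.
docs = PyPDFDirectoryLoader("data/k8s-docs").load()
chunks = RecursiveCharacterTextSplitter(
    chunk_size=1000, chunk_overlap=100
).split_documents(docs)

# Embed with a sentence-transformers model and persist the index.
embeddings = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")
FAISS.from_documents(chunks, embeddings).save_local("k8s-index")
```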
Build the corpus and index (previous step) before running the query. The query is hard-coded but can be updated to prompt the user for a query at runtime.
/usr/bin/python3 qa1_query.py
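The query side then reloads the persisted index and hands the retrieved chunks to GPT4All. A sketch under the same assumptions (the index path, embedding model, and GPT4All model path are all examples):

```python
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import FAISS
from langchain.llms import GPT4All
from langchain.chains import RetrievalQA

# Must match the embedding model used at build time.
embeddings = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")
db = FAISS.load_local("k8s-index", embeddings)

qa = RetrievalQA.from_chain_type(
    llm=GPT4All(model="./models/orca-mini-3b-gguf2-q4_0.gguf"),
    retriever=db.as_retriever(),
)
print(qa.run("How do I scale a Kubernetes deployment?"))  # hard-coded query
```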
- Contextual Q&A chatbot over web data using the GPT4All server via LangChain.
/usr/bin/python3 qa2.py
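A sketch of the web-data variant, combining a web loader, Chroma, and the OpenAI-compatible GPT4All server (the URL, port, and embedding model are illustrative assumptions):

```python
from langchain.document_loaders import WebBaseLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import Chroma
from langchain.chains import RetrievalQA
from langchain_openai import ChatOpenAI

# Fetch and chunk a web page (example URL).
docs = WebBaseLoader("https://kubernetes.io/docs/concepts/").load()
chunks = RecursiveCharacterTextSplitter(
    chunk_size=1000, chunk_overlap=100
).split_documents(docs)

# In-memory Chroma index over the chunks.
db = Chroma.from_documents(
    chunks, HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")
)

# Answer questions via the local GPT4All API server (port assumed).
qa = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(base_url="http://localhost:4891/v1", api_key="not-needed"),
    retriever=db.as_retriever(),
)
print(qa.run("What is a Kubernetes namespace?"))
```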