In this project, I've implemented an LLM-powered chatbot over custom data, using LangChain and Retrieval-Augmented Generation (RAG).
i. Add your OpenAI API key to the constants.py file (a minimal example of this file follows the list).
ii. Add supported files to the docs folder, then run index.py to index them in the vector database.
iii. You do not need to re-upload everything each time; simply add new files to docs and run index.py again to index the additional data.
iv. Run main.py and you are good to go.
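Step i above assumes constants.py exposes your key to the rest of the code. Here is a minimal sketch of such a file, using a hypothetical variable name APIKEY (check index.py and main.py for the exact name the project imports):

```python
# constants.py
# APIKEY is a hypothetical variable name -- use whatever name the scripts actually import.
APIKEY = "sk-..."  # your OpenAI API key
```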
To get started, you can clone this repository to your local machine using the following command:
git clone https://github.com/javaidiqba11/custom-rags-llm-langchain.git
To install the requirements, run the command:
pip install -r requirements.txt
Create a folder named docs and add files of the supported types listed below to it (a sketch of how these formats can be loaded follows the list).
Supported Document Types:
PDF (.pdf)
Text (.txt)
Word (.docx)
Excel (.xlsx)
PowerPoint (.pptx)
HTML (.html)
Markdown (.md)
EPUB (.epub)
Image (.jpg, .jpeg, .png)
CSV (.csv)
JSON (.json)
XML (.xml)
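One common way to cover the formats above is to map each file extension to a LangChain document loader. The sketch below is illustrative only: the loader classes exist in LangChain (older releases import them from langchain.document_loaders, newer ones from langchain_community.document_loaders), but index.py may choose different ones.

```python
# Illustrative mapping from file extension to a LangChain loader class.
from langchain.document_loaders import (
    PyPDFLoader,
    TextLoader,
    Docx2txtLoader,
    UnstructuredExcelLoader,
    UnstructuredPowerPointLoader,
    UnstructuredHTMLLoader,
    UnstructuredMarkdownLoader,
    CSVLoader,
)

LOADER_BY_EXTENSION = {
    ".pdf": PyPDFLoader,
    ".txt": TextLoader,
    ".docx": Docx2txtLoader,
    ".xlsx": UnstructuredExcelLoader,
    ".pptx": UnstructuredPowerPointLoader,
    ".html": UnstructuredHTMLLoader,
    ".md": UnstructuredMarkdownLoader,
    ".csv": CSVLoader,
    # .epub, image, .json, and .xml files also have loaders
    # (e.g. UnstructuredEPubLoader); omitted here for brevity.
}
```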
To index your documents, run the command:
python index.py
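For context, index.py's job is to load the files in docs, split them into chunks, embed the chunks, and store them in a vector database. Below is a minimal sketch of that pipeline, assuming OpenAI embeddings, a persisted Chroma store, and the hypothetical APIKEY variable from constants.py; the repository's actual script may differ.

```python
# index_sketch.py -- illustrative only; the repo's index.py may differ.
from langchain.document_loaders import DirectoryLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma

from constants import APIKEY  # hypothetical variable name from constants.py

# Load everything under docs/ (DirectoryLoader picks a loader per file type).
documents = DirectoryLoader("docs").load()

# Split into overlapping chunks so each embedding stays within context limits.
chunks = RecursiveCharacterTextSplitter(
    chunk_size=1000, chunk_overlap=100
).split_documents(documents)

# Embed the chunks and persist them in a local Chroma database.
db = Chroma.from_documents(
    chunks,
    OpenAIEmbeddings(openai_api_key=APIKEY),
    persist_directory="db",
)
db.persist()
```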
To start the ChatBot, run the command:
python main.py
Subsequent runs of the ChatBot use the same command (python main.py); data you have already indexed is reused, so re-run index.py only when you add new files.
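Under the hood, main.py connects the persisted vector store to a chat model through a retrieval chain. The sketch below shows one way that can look, using a RetrievalQA chain and the same assumptions as the indexing sketch; the repository's actual script may structure its conversation loop differently.

```python
# main_sketch.py -- illustrative only; the repo's main.py may differ.
from langchain.chains import RetrievalQA
from langchain.chat_models import ChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma

from constants import APIKEY  # hypothetical variable name from constants.py

# Reopen the vector database created by index.py.
embeddings = OpenAIEmbeddings(openai_api_key=APIKEY)
db = Chroma(persist_directory="db", embedding_function=embeddings)

# Answer questions by retrieving relevant chunks and passing them to the LLM.
qa = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(openai_api_key=APIKEY),
    retriever=db.as_retriever(),
)

while True:
    question = input("You: ")
    if question.lower() in {"exit", "quit"}:
        break
    print("Bot:", qa.run(question))
```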
Feel free to explore the code, open issues, and make pull requests. I appreciate your support and look forward to collaborating with you!