Making an artcrimebot
- `ingest.sh` points to the Trafficking Culture website.
- `ingest.py` uses the `DirectoryLoader` to process the results into a vector store. Scraping and processing everything takes about 10 minutes.
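A minimal sketch of the loading step in `ingest.py` (the `site/` directory name and glob pattern are assumptions, not the repo's actual values):

```python
from langchain.document_loaders import DirectoryLoader

# Load every scraped HTML file; "site/" and the glob are assumed names.
loader = DirectoryLoader("site/", glob="**/*.html")
raw_documents = loader.load()
```

The splitting and embedding that follow are sketched under the ingestion steps below.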
This repo is an adaptation of chat-langchain, a locally hosted chatbot originally built for question answering over the LangChain documentation, repurposed here for the Trafficking Culture material. Built with LangChain and FastAPI.
The app leverages LangChain's streaming support and async API to update the page in real time for multiple users.
- Install dependencies: `pip install -r requirements.txt`
- Run `ingest.sh` to ingest the scraped site data into the vectorstore (this only needs to be done once).
- You can use other Document Loaders to load your own data into the vectorstore; see the sketch after this list.
- Run the app: `make start`
- To enable tracing, make sure `langchain-server` is running locally, and pass `tracing=True` to `get_chain` in `main.py`. You can find more documentation here.
- Open localhost:9000 in your browser.
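As a hedged example of swapping in a different Document Loader, here is a sketch that ingests a plain-text file instead of the scraped site (`my_notes.txt` is a hypothetical file name):

```python
from langchain.document_loaders import TextLoader

# Hypothetical: load your own plain-text notes instead of the scraped HTML.
raw_documents = TextLoader("my_notes.txt").load()
```

The resulting documents can then be split and embedded exactly as in the ingestion sketch below.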
Deployed version (to be updated soon): chat.langchain.dev
Hugging Face Space (to be updated soon): huggingface.co/spaces/hwchase17/chat-langchain
There are two components: ingestion and question-answering.
Ingestion has the following steps:
- Pull HTML from the documentation site
- Load the HTML with LangChain's `ReadTheDocsLoader`
- Split the documents with LangChain's `TextSplitter`
- Create a vectorstore of embeddings using LangChain's vectorstore wrapper (with OpenAI's embeddings and the FAISS vectorstore).
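A minimal sketch of these steps, following the chat-langchain template (the `rtdocs/` path, chunk sizes, and `vectorstore.pkl` file name are assumptions):

```python
import pickle

from langchain.document_loaders import ReadTheDocsLoader
from langchain.embeddings import OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import FAISS

# Step 2: load the HTML already pulled from the site ("rtdocs/" is an assumed path).
raw_documents = ReadTheDocsLoader("rtdocs/").load()

# Step 3: split the documents into overlapping chunks.
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
documents = splitter.split_documents(raw_documents)

# Step 4: embed each chunk with OpenAI and index the vectors with FAISS.
vectorstore = FAISS.from_documents(documents, OpenAIEmbeddings())

# Persist the vectorstore so the app can load it at startup.
with open("vectorstore.pkl", "wb") as f:
    pickle.dump(vectorstore, f)
```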
Question-Answering has the following steps, all handled by `ChatVectorDBChain`:
- Given the chat history and new user input, determine what a standalone question would be (using GPT-3).
- Given that standalone question, look up relevant documents from the vectorstore.
- Pass the standalone question and relevant documents to GPT-3 to generate a final answer.
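A minimal sketch of that flow, assuming a `vectorstore.pkl` produced by the ingestion sketch above (the example question is hypothetical):

```python
import pickle

from langchain.chains import ChatVectorDBChain
from langchain.llms import OpenAI

# Load the vectorstore persisted at ingestion time.
with open("vectorstore.pkl", "rb") as f:
    vectorstore = pickle.load(f)

# ChatVectorDBChain condenses the chat history and new input into a
# standalone question, retrieves relevant documents, and asks the LLM
# for a final answer grounded in those documents.
chain = ChatVectorDBChain.from_llm(OpenAI(temperature=0), vectorstore)

chat_history = []
result = chain({"question": "What is antiquities trafficking?", "chat_history": chat_history})
print(result["answer"])
```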