
LangChain RAG Playground 🛝

Demo video: Unify_demos_RAG_Playground.mp4

RAG Playground is an application that allows you to interact with your PDF files using the Language Model of your choice.

Introduction

A Streamlit application that lets users upload a PDF file and chat with an LLM to perform document analysis in a playground environment. Compare the performance of LLMs across endpoint providers to find the best configuration for your speed, latency, and cost requirements using the dynamic routing feature. Experiment intuitively by tuning model hyperparameters such as temperature, chunk size, and chunk overlap, or try the model with and without conversational capabilities. You can find more model/provider information in the Unify benchmark interface.
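
Under the hood the playground follows a standard LangChain RAG pipeline. The sketch below is illustrative only, not the actual rag_script.py: it assumes the langchain-community PDF loader, HuggingFace embeddings, and a FAISS vector store, and shows where the chunk size and chunk overlap settings mentioned above come into play.

```python
# Illustrative sketch of the RAG flow the playground exposes (not the actual rag_script.py).
# Assumes langchain-community, langchain-text-splitters, faiss-cpu and
# sentence-transformers are installed.
from langchain_community.document_loaders import PyPDFLoader
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import FAISS
from langchain_text_splitters import RecursiveCharacterTextSplitter

# Load the uploaded PDF (hypothetical file name).
pages = PyPDFLoader("my_document.pdf").load()

# Split the document using the hyperparameters the playground lets you tune.
splitter = RecursiveCharacterTextSplitter(
    chunk_size=1000,    # "chunk size" setting
    chunk_overlap=100,  # "chunk overlap" setting
)
chunks = splitter.split_documents(pages)

# Index the chunks and retrieve the passages most relevant to a question.
vectorstore = FAISS.from_documents(chunks, HuggingFaceEmbeddings())
retriever = vectorstore.as_retriever()
relevant_docs = retriever.invoke("What is this document about?")
```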

Usage:

  1. Visit the application: LangChain RAG Playground
  2. Input your Unify API Key. If you don’t have one yet, log in to the Unify Console to get yours.
  3. Select the model and endpoint provider of your choice from the dropdown. You can find both model and provider information in the benchmark interface (the sketch after this list shows how an endpoint maps to an API call).
  4. Upload your document(s) and click the Submit button.
  5. Play!
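
For reference, an endpoint in the dropdown is a model@provider pair. The snippet below is a hedged sketch of how such an endpoint can be queried through LangChain's OpenAI-compatible chat client; the base URL and the model string are assumptions, so check the Unify docs and benchmark interface for the exact values.

```python
# Hedged sketch: querying a Unify endpoint ("model@provider") through LangChain's
# OpenAI-compatible chat client. The base URL and model string are assumptions;
# consult the Unify docs / benchmark interface for the exact values.
import os
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="llama-3-8b-chat@fireworks-ai",  # hypothetical model@provider endpoint
    api_key=os.environ["UNIFY_API_KEY"],   # your Unify API key
    base_url="https://api.unify.ai/v0",    # assumed OpenAI-compatible endpoint
    temperature=0.7,                       # "temperature" setting in the playground
)
print(llm.invoke("Summarise retrieval-augmented generation in one sentence.").content)
```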

Repository and Deployment

The repository is located at RAG Playground Repository. To run the application locally, follow these steps:

  1. Clone the repository to your local machine.
  2. Set up your virtual environment and install the dependencies from requirements.txt:
python -m venv .venv    # create virtual environment 
source .venv/bin/activate   # on Windows use .venv\Scripts\activate.bat
pip install -r requirements.txt
  3. Run rag_script.py with the Streamlit module:
python -m streamlit run rag_script.py

Contributors

| Name | GitHub Profile |
| --- | --- |
| Anthony Okonneh | AO |
| Oscar Arroyo Vega | OscarAV |
| Martin Oywa | Martin Oywa |