This Streamlit app lets users perform SAT Maths question-and-answer (Q&A) operations using various LLM models and vector embeddings. The app leverages Groq AI, Hugging Face models, and the FAISS vector database for processing and querying document contexts.
- Python 3.7+
- Streamlit
- LangChain (including `langchain_groq`, `langchain_openai`, `langchain_community`, etc.)
- FAISS vector database
- `python-dotenv` for environment variable management
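A `requirements.txt` covering the dependencies above might look like the following (the exact package set is an assumption — in particular `faiss-cpu`, `langchain-huggingface`, and `pypdf` are illustrative additions commonly needed for this kind of app):

```text
streamlit
langchain
langchain-groq
langchain-openai
langchain-community
langchain-huggingface
faiss-cpu
python-dotenv
pypdf
```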
- **Clone the Repository:**

  ```bash
  git clone https://github.com/your-repo-url.git
  cd your-repo-directory
  ```

- **Create a Virtual Environment (optional but recommended):**

  ```bash
  python -m venv venv
  source venv/bin/activate  # On Windows, use `venv\Scripts\activate`
  ```

- **Install Required Packages:**

  ```bash
  pip install -r requirements.txt
  ```
- **Set Up Environment Variables:**

  Create a `.env` file in the same directory as `app.py` with the following content:

  ```
  HF_TOKEN=your_hugging_face_token
  GROQ_API_KEY=your_groq_api_key
  ```
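At startup, the app is expected to read these variables via `python-dotenv`'s `load_dotenv()`. As a rough stdlib-only sketch of what that call does (the function name here is illustrative, not the library's internals):

```python
import os

def load_env_file(path=".env"):
    """Minimal sketch of what python-dotenv's load_dotenv() does:
    read KEY=VALUE lines from a file into os.environ.
    Variables already set in the environment are not overwritten."""
    with open(path) as f:
        for raw in f:
            line = raw.strip()
            # Skip blank lines, comments, and malformed lines
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())
```

In practice, prefer the real library: `from dotenv import load_dotenv; load_dotenv()`.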
- **Prepare Documents:**

  Create a `Documents` folder in the same directory as `app.py` and place in it all PDF files you want to use as context for Q&A.
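The "Document Embedding" step that consumes this folder can be sketched with standard LangChain components roughly as follows (the loader choice, splitter settings, and embedding model name are illustrative assumptions, not necessarily the exact code in `app.py`):

```python
# Sketch: load PDFs from the Documents folder, split them into chunks,
# embed the chunks with a Hugging Face model, and index them in FAISS.
from langchain_community.document_loaders import PyPDFDirectoryLoader
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import FAISS
from langchain_text_splitters import RecursiveCharacterTextSplitter

loader = PyPDFDirectoryLoader("Documents")   # folder of SAT Maths PDFs
docs = loader.load()

# Chunk size/overlap are typical defaults, not values from app.py
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
chunks = splitter.split_documents(docs)

embeddings = HuggingFaceEmbeddings(
    model_name="sentence-transformers/all-MiniLM-L6-v2"  # assumed model
)
vectorstore = FAISS.from_documents(chunks, embeddings)
```

Running this requires the embedding model to be downloadable, so it needs network access (or a local model cache) the first time.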
- **Run the Streamlit App:**

  ```bash
  streamlit run app.py
  ```
- **Interact with the App:**
  - Use the sidebar to input your Groq API key and Hugging Face API key.
  - Click "Document Embedding" to load and process the documents.
  - Select an LLM model from the dropdown menu and set the model temperature.
  - Enter your query in the text input box and view the response.
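Under the hood, answering a query over the embedded documents can be sketched like this (the model name, prompt text, and chain wiring are illustrative assumptions; `vectorstore` stands for a FAISS index already built from the `Documents` folder):

```python
# Sketch: answer a query with a Groq-hosted LLM over the FAISS retriever.
# Requires GROQ_API_KEY to be set in the environment.
from langchain_groq import ChatGroq
from langchain_core.prompts import ChatPromptTemplate
from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain.chains import create_retrieval_chain

# Model name and temperature mirror the sidebar choices; values are examples
llm = ChatGroq(model="llama-3.1-8b-instant", temperature=0.2)

prompt = ChatPromptTemplate.from_template(
    "Answer the SAT Maths question using only the context below.\n"
    "<context>\n{context}\n</context>\n"
    "Question: {input}"
)

document_chain = create_stuff_documents_chain(llm, prompt)
retrieval_chain = create_retrieval_chain(vectorstore.as_retriever(), document_chain)

response = retrieval_chain.invoke({"input": "How do I solve 2x + 3 = 11?"})
print(response["answer"])
```

This sketch calls the Groq API, so it only runs with a valid key and network access.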
- Ensure that the paths and API keys are correctly set up in the `.env` file.
- Adjust the paths and settings according to your local setup if needed.
Special thanks to Krish Naik for his guidance.
- If no documents are loaded, verify the file path and the contents of the `Documents` folder.
- Check your API keys and ensure they are correct.
- For any errors, refer to the error messages displayed in the app or consult the LangChain documentation for additional guidance.
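A small preflight check along these lines can confirm the setup before launching the app (the `preflight` helper is hypothetical, not part of `app.py`):

```python
import os
from pathlib import Path

def preflight(docs_dir="Documents", keys=("HF_TOKEN", "GROQ_API_KEY")):
    """Return a list of setup problems; an empty list means ready to run."""
    problems = []
    path = Path(docs_dir)
    if not path.is_dir():
        problems.append(f"missing folder: {docs_dir}")
    elif not list(path.glob("*.pdf")):
        problems.append(f"no PDF files in {docs_dir}/")
    for key in keys:
        # Keys may come from .env (via load_dotenv) or the shell environment
        if not os.getenv(key):
            problems.append(f"environment variable {key} is not set")
    return problems

for problem in preflight():
    print("WARNING:", problem)
```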
Feel free to contribute or open issues if you encounter any problems or have suggestions for improvements!