This is a simple, interactive chat application that uses Streamlit for the front-end interface and Replicate's LlamaV2 model to generate responses to user input.
- Python 3.6 or higher
- Streamlit
- Python-dotenv
- Replicate
- Clone the repository

  ```shell
  git clone <repo-url>
  cd <repo-dir>
  ```
- Install the dependencies

  ```shell
  pip install -r requirements.txt
  ```
- Set the environment variables

  Create a `.env` file in the root of your project and add the following environment variable:

  ```shell
  # .env
  REPLICATE_API_TOKEN=<your-replicate-api-token>
  ```
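In the app itself, python-dotenv's `load_dotenv()` reads this file into the process environment at startup. The stdlib-only helper below is an illustrative stand-in showing roughly what that loading amounts to (the function name `load_env_file` is hypothetical, not part of the repo):

```python
import os


def load_env_file(path=".env"):
    """Read KEY=VALUE pairs from a .env file into os.environ.

    Illustrative stdlib-only stand-in for python-dotenv's load_dotenv();
    skips blank lines and `#` comments, and does not override variables
    that are already set.
    """
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())


# In main.py the equivalent using the python-dotenv dependency would be:
#   from dotenv import load_dotenv
#   load_dotenv()
#   token = os.getenv("REPLICATE_API_TOKEN")
```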
- Run the Streamlit app

  ```shell
  streamlit run main.py
  ```
Just type your message in the text input box and press Enter. The AI model will generate a response that will be displayed on the screen.
The `generate_response` function takes the user's input, sends it to the Replicate LlamaV2 model, and receives the model's response, which is then displayed on the Streamlit interface.
Contributions are welcome! Please read the contributing guidelines before getting started.
This project is licensed under the terms of the MIT license. See the LICENSE file.