Streamlit Chat Application with Replicate LlamaV2 Model

This is a simple, interactive chat application that uses Streamlit for the front-end interface and Replicate's LlamaV2 model to generate responses to user input.

Prerequisites

- Python 3.6 or higher
- Streamlit
- python-dotenv
- Replicate

Quickstart

1. Clone the repository

   ```shell
   git clone <repo-url>
   cd <repo-dir>
   ```

2. Install the dependencies

   ```shell
   pip install -r requirements.txt
   ```

3. Set the environment variables

   Create a `.env` file in the root of your project and add the following environment variable:

   ```shell
   # .env
   REPLICATE_API_TOKEN=<Your Replicate LlamaV2 Key>
   ```

4. Run the Streamlit app

   ```shell
   streamlit run main.py
   ```
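For reference, here is a minimal sketch of how the app might pick up the token set in step 3, assuming python-dotenv from the prerequisites (the variable names here are illustrative, not the repo's exact code):

```python
import os

# Merge key=value pairs from .env into os.environ. If python-dotenv is
# not installed, fall back to variables already exported in the shell.
try:
    from dotenv import load_dotenv
    load_dotenv()  # looks for a .env file in the current working directory
except ImportError:
    pass

# The Replicate client reads this variable when making API calls.
token = os.environ.get("REPLICATE_API_TOKEN", "")
```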

Usage

Just type your message in the text input box and press Enter. The AI model will generate a response that will be displayed on the screen.

How it Works

The generate_response function takes the user's input, sends it to the Replicate LlamaV2 model, and then receives the model's response. The response is then displayed on the Streamlit interface.
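As a rough sketch of that flow (the model slug, the `build_prompt` helper, and the parameter names are assumptions for illustration, not the repo's exact code):

```python
def build_prompt(history, user_input):
    # Flatten (role, text) chat history plus the new message into one prompt.
    lines = [f"{role}: {text}" for role, text in history]
    lines.append(f"User: {user_input}")
    return "\n".join(lines)

def generate_response(user_input, history=()):
    import replicate  # installed via requirements.txt

    # replicate.run streams the reply in chunks; join them into one string.
    # "<llamav2-model-slug>" stands in for whichever LlamaV2 version the
    # app actually pins on Replicate.
    output = replicate.run(
        "<llamav2-model-slug>",
        input={"prompt": build_prompt(list(history), user_input)},
    )
    return "".join(output)
```

The Streamlit side then passes the text-input value into `generate_response` and writes the returned string back to the page.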

Contributing

Contributions are welcome! Please read the contributing guidelines before getting started.

License

This project is licensed under the terms of the MIT license. See the LICENSE file.

Sponsors

✨ Find profitable ideas faster: Exploding Insights