Credits go to Phidata for providing the local RAG scripts and repository.
Link to Llama-3-8B 🦙
Link to Llama-3-70B 🦙🔥
1. Install Ollama and run a model
Install Ollama
curl -fsSL https://ollama.com/install.sh | sh
Run your preferred Llama3 model (the first run pulls the model manifest):
ollama run llama3 'Hey!'
- And/or Llama3 70B
ollama run llama3:70b 'Hey!'
You can find more LLMs here; adjust app.py accordingly.
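Once a model is pulled, the running Ollama server also exposes a local REST API (on port 11434 by default), which is how an app like this one talks to the model programmatically. A minimal sketch of building a request for Ollama's `/api/generate` endpoint; the actual HTTP call is shown commented out so the snippet runs without a live server:

```python
import json

# Ollama serves a local REST API on port 11434 by default.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_payload(model: str, prompt: str) -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return {
        "model": model,    # e.g. "llama3" or "llama3:70b"
        "prompt": prompt,
        "stream": False,   # request one JSON response instead of a token stream
    }

payload = build_generate_payload("llama3", "Hey!")
print(json.dumps(payload))

# With the server running, the call itself would look like:
# import requests
# reply = requests.post(OLLAMA_URL, json=payload).json()["response"]
```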
2. Set up a Python virtual environment and install dependencies
python3 -m venv ~/.venvs/aienv
source ~/.venvs/aienv/bin/activate
pip install -r package.txt
3. Run pgvector in Docker
Install Docker Desktop
- Run the pgvector container using this script
docker run -d \
-e POSTGRES_DB=ai \
-e POSTGRES_USER=ai \
-e POSTGRES_PASSWORD=ai \
-e PGDATA=/var/lib/postgresql/data/pgdata \
-v pgvolume:/var/lib/postgresql/data \
-p 5532:5432 \
--name pgvector \
phidata/pgvector:16
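Note that the `docker run` command above maps host port 5532 to Postgres's default 5432 inside the container. A small sketch of building the connection URL the app would use, with values taken from the `docker run` flags above (the helper name `pgvector_db_url` is an assumption for illustration, not part of the repository):

```python
def pgvector_db_url(user: str = "ai", password: str = "ai",
                    host: str = "localhost", port: int = 5532,
                    db: str = "ai") -> str:
    """Build a Postgres URL matching the docker run flags above.

    The container maps host port 5532 to Postgres's default 5432,
    so the app connects to 5532 on localhost.
    """
    return f"postgresql://{user}:{password}@{host}:{port}/{db}"

print(pgvector_db_url())
```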
4. Run the app
streamlit run app.py
- Open localhost:8501 to view your local RAG app.
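Under the hood, a RAG app embeds your query and retrieves the stored document chunks most similar to it before prompting the model. A toy, dependency-free illustration of that retrieval step, using made-up 3-dimensional embeddings (in the real app, an embedding model produces the vectors and pgvector performs this similarity search in SQL):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy "document chunks" with made-up embeddings (real apps use a model + pgvector).
chunks = {
    "ollama setup":  [1.0, 0.1, 0.0],
    "docker basics": [0.0, 1.0, 0.2],
    "streamlit ui":  [0.1, 0.2, 1.0],
}

def retrieve(query_embedding, k=1):
    """Return the names of the k chunks most similar to the query embedding."""
    ranked = sorted(chunks,
                    key=lambda name: cosine_similarity(query_embedding, chunks[name]),
                    reverse=True)
    return ranked[:k]

print(retrieve([0.9, 0.2, 0.1]))  # closest to the "ollama setup" vector
```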