# Local LLM with Ollama & PgVector 🤖

Credits go to Phidata for providing local rigging scripts and repository.
Link to Llama-3-8B 🦙
Link to Llama-3-70B 🦙🔥

## 1. Install Ollama and run a model

Install Ollama:

```shell
curl -fsSL https://ollama.com/install.sh | sh
```

Run and pull the manifest of your preferred Llama3 model:

```shell
ollama run llama3 'Hey!'
```

And/or Llama3 70B:

```shell
ollama run llama3:70b 'Hey!'
```

You can find more LLMs here; adjust app.py accordingly.
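Besides the CLI, Ollama serves a local REST API on port 11434, which is what Python apps like this one talk to. A minimal sketch of calling the documented `/api/generate` endpoint directly, assuming the server from the step above is running (the helper names here are illustrative, not from this repo):

```python
import json
import urllib.request

# Ollama's default local endpoint for one-shot generation.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str) -> dict:
    """Build a non-streaming request body for Ollama's /api/generate."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask(model: str, prompt: str) -> str:
    """Send a prompt to the local Ollama server and return the generated text."""
    data = json.dumps(build_payload(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Usage (requires a running Ollama server):
# ask("llama3", "Hey!")
```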

## 2. Create a virtual environment

```shell
python3 -m venv ~/.venvs/aienv
source ~/.venvs/aienv/bin/activate
```

## 3. Install libraries

```shell
pip install -r package.txt
```

## 4. Run PgVector

Install Docker Desktop, then start the container with this script:

```shell
docker run -d \
  -e POSTGRES_DB=ai \
  -e POSTGRES_USER=ai \
  -e POSTGRES_PASSWORD=ai \
  -e PGDATA=/var/lib/postgresql/data/pgdata \
  -v pgvolume:/var/lib/postgresql/data \
  -p 5532:5432 \
  --name pgvector \
  phidata/pgvector:16
```
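The flags above map the container's port 5432 to host port 5532 and create the `ai` database, user, and password that the app connects with. A small sketch of composing the matching connection URL, assuming the SQLAlchemy-style `postgresql+psycopg` scheme that phidata's PgVector integration commonly uses (the helper is illustrative):

```python
def pg_db_url(user: str = "ai", password: str = "ai",
              host: str = "localhost", port: int = 5532, db: str = "ai") -> str:
    """Compose a database URL matching the docker run flags above.

    Note: the container listens on 5432 internally; 5532 is the host-side port.
    """
    return f"postgresql+psycopg://{user}:{password}@{host}:{port}/{db}"

# pg_db_url() -> "postgresql+psycopg://ai:ai@localhost:5532/ai"
```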

## 5. Run the RAG app

```shell
streamlit run app.py
```

![App preview](view.png)
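Under the hood, the RAG app embeds your question and asks PgVector for the stored document chunks nearest to that embedding, which are then passed to Llama3 as context. A pure-Python sketch of that nearest-neighbour ranking step, using cosine similarity to illustrate what the database does (not this repo's actual code):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_k(query_vec, docs, k=2):
    """Rank (text, embedding) pairs by similarity to the query; return top-k texts."""
    ranked = sorted(docs, key=lambda d: cosine_similarity(query_vec, d[1]), reverse=True)
    return [text for text, _ in ranked[:k]]
```

In the real app the embeddings come from the model, and the ranking runs inside PostgreSQL via pgvector's distance operators rather than in Python.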
