Local LLM with LangChain, Ollama and Docker

A proof-of-concept for running large language models (LLMs) locally using LangChain, Ollama and Docker.

Requirements

  • Docker
  • Docker Compose

Quickstart

  1. Create a .env file in the root of the project based on .env.example: cp .env.example .env
  2. (Optional) Change the chosen model in the .env file; see the sketch after this list. Refer to Ollama's model library for available models.
  3. Build and run the services with Docker Compose: docker compose up --build
  4. The services will then be available:
    1. As a SvelteKit frontend at http://localhost:8080
    2. As the LangChain playground UI at http://localhost:8000/chain/playground
    3. Via the REST API, e.g.: curl 'http://localhost:8000/chain/invoke' --data-raw '{"input":{"text":"hi"}}'
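
A minimal sketch of what the .env file might contain, assuming the model is selected via a single environment variable. The variable name MODEL and the value llama3 are illustrative guesses, not taken from this repository; check .env.example for the actual key and default:

```
# Hypothetical example; copy .env.example and adjust.
# MODEL is an assumed variable name; llama3 is one model from Ollama's library.
MODEL=llama3
```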

API Endpoints

  • /chain/playground: Provides an interactive UI to test the model.
  • /chain/invoke: A REST API endpoint for programmatic interaction with the model.
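
As a sketch of programmatic use, the following Python snippet posts to /chain/invoke with the same payload shape as the curl example in the Quickstart. It assumes only the requests library; the exact structure of the JSON response depends on the chain being served:

```python
import requests

# Same payload schema as the curl example in the Quickstart.
response = requests.post(
    "http://localhost:8000/chain/invoke",
    json={"input": {"text": "hi"}},
)
response.raise_for_status()

# The response is JSON; its exact shape depends on the served chain.
print(response.json())
```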
