feat(model): support ollama as an optional llm & embedding proxy #1475
Conversation
Signed-off-by: shanhaikang.shk <shanhaikang.shk@oceanbase.com>
Hi @GITHUBear, thanks for your contribution. Ollama is a good LLM management tool. @fangyinc, have a check please.
Test passed.

Install ollama (if your system is Linux):

curl -fsSL https://ollama.com/install.sh | sh

Pull models:

ollama pull qwen:0.5b
ollama pull nomic-embed-text

Install the Python client:

pip install ollama

Use the ollama proxy model in DB-GPT:

LLM_MODEL=ollama_proxyllm \
PROXY_SERVER_URL=http://127.0.0.1:11434 \
PROXYLLM_BACKEND="qwen:0.5b" \
PROXY_API_KEY=not_used \
EMBEDDING_MODEL=proxy_ollama \
proxy_ollama_proxy_server_url=http://127.0.0.1:11434 \
proxy_ollama_proxy_backend="nomic-embed-text:latest" \
dbgpt start webserver
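Before starting the webserver, it can help to confirm the local Ollama server is reachable on the default port 11434 (this check is an added suggestion, not part of the original test comment):

# List locally available models; qwen:0.5b and nomic-embed-text should appear
ollama list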
LGTM.
Use ollama with python code:

import asyncio

from dbgpt.core import ModelRequest
from dbgpt.model.proxy import OllamaLLMClient

# Build a request for qwen:0.5b ("你是谁?" means "Who are you?") and run it
client = OllamaLLMClient()
print(asyncio.run(client.generate(ModelRequest._build("qwen:0.5b", "你是谁?"))))
r+
LGTM.
…phoros-ai#1475)
Signed-off-by: shanhaikang.shk <shanhaikang.shk@oceanbase.com>
Co-authored-by: Fangyin Cheng <staneyffer@gmail.com>
Description
Support Ollama as an optional LLM and Embedding proxy via the ollama-python library.
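For context, a minimal sketch of how the underlying ollama-python client is typically called for chat and embeddings, assuming a local Ollama server on the default port; this illustrates the library the proxy wraps and is not code from this PR's diff:

from ollama import Client

# Connect to a local Ollama server (assumed default address)
client = Client(host="http://127.0.0.1:11434")

# Chat completion against the pulled qwen:0.5b model
reply = client.chat(
    model="qwen:0.5b",
    messages=[{"role": "user", "content": "Who are you?"}],
)
print(reply["message"]["content"])

# Text embedding with nomic-embed-text
emb = client.embeddings(model="nomic-embed-text", prompt="DB-GPT with Ollama")
print(len(emb["embedding"]))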
Use Ollama Proxy LLM by modifying the following .env configurations (a sketch follows):
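A minimal sketch of that LLM configuration, reconstructed from the test comment earlier in this thread (qwen:0.5b is simply the model used there, not a requirement):

LLM_MODEL=ollama_proxyllm
PROXY_SERVER_URL=http://127.0.0.1:11434
PROXYLLM_BACKEND="qwen:0.5b"
PROXY_API_KEY=not_used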
Use Ollama Proxy Embedding by modifying the following .env configurations (a sketch follows):
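A minimal sketch of the embedding configuration, again taken from the test comment above (nomic-embed-text:latest is the model pulled there):

EMBEDDING_MODEL=proxy_ollama
proxy_ollama_proxy_server_url=http://127.0.0.1:11434
proxy_ollama_proxy_backend="nomic-embed-text:latest"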
How Has This Been Tested?
Snapshots:
Checklist: