[Enhancement]: Integrate OLLAMA API Support in AI Handlers #836
Conversation
PR Description updated to latest commit (501b059)
PR Review (Review updated until commit 501b059)
✨ Review tool usage guide:

Overview: The tool can be triggered automatically every time a new PR is opened, or can be invoked manually by commenting on any PR. See the review usage page for a comprehensive guide on using this tool.
PR Code Suggestions
✨ Improve tool usage guide:

Overview: See the improve usage page for a comprehensive guide on using this tool.
@gregoryboue is the guide here still accurate:
Yes, the guide is still accurate.
@mrT23, I made a mistake in my previous answer. With my PR change, when I run a /review command, the output shows that the right configuration to use a locally hosted Ollama model is:

```toml
[config] # in configuration.toml
model = "ollama/llama2"
model_turbo = "ollama/llama2"
```

If I don't do that, the command uses the fallback model and never tries the Ollama one. Does that seem normal to you? Should the code be changed, or only the documentation?
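For context, a minimal sketch of calling a locally hosted Ollama model through litellm directly, assuming an Ollama server on its conventional default port; the endpoint URL is an assumption, not something stated in this thread:

```python
import litellm

# Assumed local Ollama endpoint (Ollama's default port); adjust to your setup.
litellm.api_base = "http://localhost:11434"

response = litellm.completion(
    model="ollama/llama2",  # the "ollama/" prefix routes the call to Ollama
    messages=[{"role": "user", "content": "Summarize this pull request."}],
)
print(response.choices[0].message.content)
```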
@gregoryboue
/review
Persistent review updated to latest commit 501b059
```diff
@@ -61,6 +61,9 @@ def __init__(self):
         if get_settings().get("HUGGINGFACE.API_BASE", None) and 'huggingface' in get_settings().config.model:
             litellm.api_base = get_settings().huggingface.api_base
             self.api_base = get_settings().huggingface.api_base
+        if get_settings().get("OLLAMA.API_BASE", None):
```
Consider adding error handling or a default value for `OLLAMA.API_BASE` to ensure robustness in cases where the configuration might be missing or incorrect. This could prevent runtime errors and improve the application's stability. [important]
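A minimal sketch of the kind of guard the reviewer is suggesting. The `get_settings()` accessor mirrors the pattern already used in `litellm_ai_handler.py`; the default URL and the helper name are hypothetical, not part of the PR:

```python
import litellm
from pr_agent.config_loader import get_settings

# Hypothetical default: Ollama's conventional local endpoint.
OLLAMA_DEFAULT_API_BASE = "http://localhost:11434"

def configure_ollama_api_base(handler):
    """Set the Ollama api_base, falling back to a default instead of failing."""
    api_base = get_settings().get("OLLAMA.API_BASE", None)
    if not api_base and "ollama" in get_settings().config.model:
        # The configured model is an Ollama one but no API base was set:
        # fall back to the default rather than leaving api_base undefined.
        api_base = OLLAMA_DEFAULT_API_BASE
    if api_base:
        litellm.api_base = api_base
        handler.api_base = api_base
```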
```diff
@@ -61,6 +61,9 @@ def __init__(self):
         if get_settings().get("HUGGINGFACE.API_BASE", None) and 'huggingface' in get_settings().config.model:
             litellm.api_base = get_settings().huggingface.api_base
             self.api_base = get_settings().huggingface.api_base
+        if get_settings().get("OLLAMA.API_BASE", None):
+            litellm.api_base = get_settings().ollama.api_base
+            self.api_base = get_settings().ollama.api_base
```
Ensure consistency in accessing settings for different APIs by using a similar pattern or method. This can improve code maintainability and readability. For instance, you might consider abstracting the settings access logic into a method if the pattern becomes more complex with additional APIs. [medium]
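A sketch of the abstraction the reviewer hints at, assuming the same settings accessor; the mapping table and helper are illustrative, not code from the PR:

```python
import litellm
from pr_agent.config_loader import get_settings

# Illustrative provider -> settings-key mapping; extend as new APIs are added.
API_BASE_KEYS = {
    "huggingface": "HUGGINGFACE.API_BASE",
    "ollama": "OLLAMA.API_BASE",
}

def apply_api_base(handler):
    """Apply the api_base of the first provider matching the configured model."""
    model = get_settings().config.model
    for provider, key in API_BASE_KEYS.items():
        api_base = get_settings().get(key, None)
        if api_base and provider in model:
            litellm.api_base = api_base
            handler.api_base = api_base
            break
```

One design consequence: unlike the PR's OLLAMA branch, this helper only applies an api_base when the configured model actually belongs to that provider, matching the existing Huggingface check.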
/describe
User description
Fix #657
Type
enhancement
Description

- Added OLLAMA API base configuration support in `litellm_ai_handler.py`, allowing for expanded model support.
- Ensured `api_base` is set correctly for both Huggingface and OLLAMA, enhancing the flexibility of AI model integrations.

Changes walkthrough
litellm_ai_handler.py — Integrate OLLAMA API Base Configuration Support
(`pr_agent/algo/ai_handlers/litellm_ai_handler.py`)

- Ensured `api_base` is set correctly for both Huggingface and OLLAMA configurations.
- Added the OLLAMA check alongside the existing handling for Huggingface and VertexAI.
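Putting the hunks above together, the relevant part of the handler's `__init__` after this change would read roughly as follows (surrounding setup elided; reconstructed from the diff, not copied from the final file):

```python
import litellm
from pr_agent.config_loader import get_settings

class LiteLLMAIHandler:
    def __init__(self):
        # ... other provider setup elided ...
        if get_settings().get("HUGGINGFACE.API_BASE", None) and 'huggingface' in get_settings().config.model:
            litellm.api_base = get_settings().huggingface.api_base
            self.api_base = get_settings().huggingface.api_base
        # New in this PR: honor OLLAMA.API_BASE when present.
        if get_settings().get("OLLAMA.API_BASE", None):
            litellm.api_base = get_settings().ollama.api_base
            self.api_base = get_settings().ollama.api_base
```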