Add and document ability to use LiteLLM Logging Observability tools #1145
Conversation
@MarkRx
1. Remove any modification whatsoever to the
2. The instructions should move to:
3. By default, everything should be turned off.
4. As a general guideline, aim for the smallest needed footprint. This, for example, is a no-go. It's the default.
I need a place to get contextual traceability information about the request without having to pass it around everywhere, and that also works for the CLI. If I don't put it in the middleware request context, where could I put it? It could be pulled off of the loguru context (the "extra" field), but that seems awkward. I could maintain some kind of command context with

I don't have a clean way of handling the different observability platforms. For example, LangFuse, Helicone, and LangSmith use different fields, and others are either not documented or not mature in LiteLLM. I figured I could just send them all. It's important to set these fields, though, as without them observability records don't provide enough information to be useful. I suppose I could build the metadata dictionary based on which callbacks are being used?

Also, do there need to be 3 separate AI handlers? The additional abstraction is making this tricky.
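The "contextual information without passing it around everywhere" idea can be sketched with the standard-library `contextvars` module, which works identically for a server request handler and a one-shot CLI run. This is an illustrative alternative, not the PR's actual code; all names here are hypothetical.

```python
# Hypothetical sketch: carry per-request traceability metadata in a ContextVar
# so downstream code (e.g. an AI handler) can read it without extra parameters.
from contextvars import ContextVar
from typing import Any, Dict

_request_metadata: ContextVar[Dict[str, Any]] = ContextVar(
    "request_metadata", default={}
)

def set_request_metadata(**fields: Any) -> None:
    """Record traceability fields (command, PR URL, ...) for the current context."""
    merged = dict(_request_metadata.get())  # copy, never mutate the default
    merged.update(fields)
    _request_metadata.set(merged)

def get_request_metadata() -> Dict[str, Any]:
    """Read the metadata anywhere downstream, no parameter threading needed."""
    return dict(_request_metadata.get())

# CLI-style usage: set once at the entry point, read deep inside a handler.
set_request_metadata(command="review", pr_url="https://example.com/pr/1")
print(get_request_metadata())
```

Compared with stashing the data in loguru's "extra" field, a dedicated context variable keeps logging and observability concerns separate.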
@MarkRx I also don't like the context changes that you made. They are fragile, and require hacks for the CLI. You could have passed parameters using

But all my concerns above need to be addressed. If you prefer your way, then it's suitable for a fork.
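The explicit-passing alternative the maintainer alludes to could look roughly like this minimal sketch (function names are hypothetical, not from the repository):

```python
# Hypothetical contrast to the context-variable approach: build the
# traceability metadata once at the entry point and pass it down explicitly.
from typing import Any, Dict, Optional

def call_ai_handler(prompt: str, metadata: Optional[Dict[str, Any]] = None) -> Dict[str, Any]:
    # Receives metadata as a plain argument; no hidden request context involved.
    return {"prompt": prompt, "metadata": metadata or {}}

def run_command(pr_url: str, command: str) -> Dict[str, Any]:
    # Entry point (server route or CLI) assembles the metadata...
    metadata = {"pr_url": pr_url, "command": command}
    # ...and threads it through the call chain explicitly.
    return call_ai_handler("summarize this PR", metadata=metadata)

result = run_command("https://example.com/pr/1", "review")
```

Explicit parameters are more verbose but avoid the CLI-specific hacks that a web-request context requires.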
(force-pushed from e2e463b to 9e1bf3a)
Updated with a different approach. Metadata is a dictionary, to allow for different metadata per command. Metadata won't be passed to LiteLLM unless a callback is defined.
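The "metadata is only passed when a callback is defined" rule could be sketched as a small helper like this (hypothetical names; not the PR's actual code):

```python
# Hypothetical sketch: attach per-command metadata to the completion kwargs
# only when at least one observability callback is configured, so the default
# (no-observability) call path is completely untouched.
from typing import Any, Dict, List

def build_completion_kwargs(
    base_kwargs: Dict[str, Any],
    metadata: Dict[str, Any],
    callbacks: List[str],
) -> Dict[str, Any]:
    kwargs = dict(base_kwargs)
    if callbacks and metadata:
        kwargs["metadata"] = metadata
    return kwargs

# No callbacks configured: the call is byte-for-byte the default one.
print(build_completion_kwargs({"model": "gpt-4"}, {"command": "review"}, []))
# A callback is configured: metadata rides along.
print(build_completion_kwargs({"model": "gpt-4"}, {"command": "review"}, ["langfuse"]))
```

Gating on the callback list keeps the footprint of the feature minimal, which is the maintainer's main concern above.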
To address all the issues, you can start from this template with your branch, and edit the

With the template above, the entire code impact on the default flow is:

That is how such changes should be made. Inside the
(force-pushed from 9e1bf3a to 260b22b)
(force-pushed from 260b22b to 8aa76a0)
Looks good. And from now on can easily edit
User description
Adds out-of-the-box support for setting up a logging and observability tool with LiteLLM by configuring callbacks. Should work with any of the documented callbacks.
The code uses the starlette context to track PR information, to avoid the complexity of passing it around everywhere. It falls back to a basic context when running with the CLI. Doing so allows for code simplification.
Sample LangSmith run:
#1105
See also BerriAI/litellm#5179
PR Type
Enhancement, Documentation
Description
Changes walkthrough 📝

- **pr_agent.py** (`pr_agent/agent/pr_agent.py`): Add context information to PR handling, using `context` from `starlette_context`
- **litellm_ai_handler.py** (`pr_agent/algo/ai_handlers/litellm_ai_handler.py`): Implement LiteLLM logging and observability platforms
- **cli.py** (`pr_agent/cli.py`): Enhance CLI with `request_cycle_context` to handle CLI mode
- **`__init__.py`** (`pr_agent/git_providers/__init__.py`): Streamline git provider context management
- **utils.py** (`pr_agent/git_providers/utils.py`): Simplify repo settings context management
- **index.md** (`docs/docs/installation/index.md`): Document logging observability integration platforms
- **configuration.toml** (`pr_agent/settings/configuration.toml`): Add LiteLLM callback configuration options
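Based on the walkthrough, the new `configuration.toml` options presumably expose LiteLLM's callback lists. A hypothetical sketch of what such a section could look like (the key names are illustrative, not confirmed from the diff; the real ones live in `pr_agent/settings/configuration.toml`):

```toml
# Hypothetical sketch of the kind of options this PR adds.
[litellm]
# Names of LiteLLM callbacks to enable; empty by default so that
# observability is fully off unless explicitly configured.
success_callback = []
failure_callback = []
# Example: enable LangSmith tracing for successful calls.
# success_callback = ["langsmith"]
```

Defaulting both lists to empty matches the maintainer's requirement that everything be turned off by default with zero impact on the default flow.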