
All model_group_alias should show up in /models, /model/info, /model_group/info #5539

Merged

Conversation


@krrishdholakia krrishdholakia commented Sep 5, 2024

Title

All model_group_alias should show up in /models, /model/info, /model_group/info (s/o @taralika)

Relevant issues

Closes #5524

Type

🆕 New Feature

Changes

  • Updates `router.get_model_list`, `router.get_model_names`, and `router.get_model_group_info` to also return `model_group_alias` entries (see the sketch below)
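
A minimal sketch of the intended behaviour, assuming `litellm.Router` accepts `model_list` and `model_group_alias` as configured in the test config further down (an illustration, not code from the diff):

from litellm import Router

# Router configured the same way as the proxy config used for testing below.
router = Router(
    model_list=[
        {
            "model_name": "gpt-3.5-turbo",
            "litellm_params": {"model": "gpt-3.5-turbo"},
        }
    ],
    model_group_alias={"gpt-4": "gpt-3.5-turbo"},
)

# After this change, the alias "gpt-4" should be listed alongside
# "gpt-3.5-turbo" by the updated helpers.
print(router.get_model_names())
print(router.get_model_list())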

[REQUIRED] Testing - Attach a screenshot of any new tests passing locally

If UI changes, send a screenshot/GIF of working UI fixes

[Screenshot: 2024-09-05 at 12:22:39 PM]

On this config:

model_list:
  - model_name: gpt-3.5-turbo
    litellm_params:
      model: gpt-3.5-turbo

router_settings:
  model_group_alias: {"gpt-4": "gpt-3.5-turbo"}


vercel bot commented Sep 5, 2024

Latest Vercel deployment for this branch:

Name      Status     Updated (UTC)
litellm   ✅ Ready   Sep 6, 2024 3:52pm

@krrishdholakia krrishdholakia changed the base branch from main to litellm_dev_06_08_2024 September 6, 2024 15:50
@krrishdholakia krrishdholakia merged commit ee48a59 into litellm_dev_06_08_2024 Sep 6, 2024
8 checks passed
@krrishdholakia krrishdholakia deleted the litellm_return_router_model_alias_2 branch September 6, 2024 16:39
krrishdholakia added a commit that referenced this pull request Sep 7, 2024
* fix(utils.py): return citations for perplexity streaming

Fixes #5535

* fix(anthropic/chat.py): support fallbacks for anthropic streaming (#5542)

* fix(anthropic/chat.py): support fallbacks for anthropic streaming

Fixes #5512

* fix(anthropic/chat.py): use module level http client if none given (prevents early client closure)

* fix: fix linting errors

* fix(http_handler.py): fix raise_for_status error handling

* test: retry flaky test

* fix otel type

* fix(bedrock/embed): fix error raising

* test(test_openai_batches_and_files.py): skip azure batches test (for now) quota exceeded

* fix(test_router.py): skip azure batch route test (for now) - hit batch quota limits

---------

Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>

* All `model_group_alias` should show up in `/models`, `/model/info` , `/model_group/info` (#5539)

* fix(router.py): support returning model_alias model names in `/v1/models`

* fix(proxy_server.py): support returning model alias'es on `/model/info`

* feat(router.py): support returning model group alias for `/model_group/info`

* fix(proxy_server.py): fix linting errors

* fix(proxy_server.py): fix linting errors

* build(model_prices_and_context_window.json): add amazon titan text premier pricing information

Closes #5560

* feat(litellm_logging.py): log standard logging response object for pass through endpoints. Allows bedrock /invoke agent calls to be correctly logged to langfuse + s3

* fix(success_handler.py): fix linting error

* fix(success_handler.py): fix linting errors

* fix(team_endpoints.py): Allows admin to update team member budgets

---------

Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Labels: None yet
Projects: None yet
Development

Successfully merging this pull request may close these issues.

[Feature]: All model_group_alias should show up in /models,/model/info,/model_group/info
1 participant