
Commit

fixed
biobootloader committed Mar 31, 2024
1 parent ae3d9e1 commit f766d80
Showing 1 changed file with 0 additions and 5 deletions.
5 changes: 0 additions & 5 deletions mentat/llm_api_handler.py
@@ -362,11 +362,6 @@ async def call_llm_api(
         with sentry_sdk.start_span(description="LLM Call") as span:
             span.set_tag("model", model)
 
-            # TODO: handle this for gpt-4-vision-preview in spice?
-            # OpenAI's API is bugged; when gpt-4-vision-preview is used, including the response format
-            # at all returns a 400 error. Additionally, gpt-4-vision-preview has a max response of 30 tokens by default.
-            # Until this is fixed, we have to use this workaround.
-
             response = await self.spice_client.call_llm(
                 model=model,
                 messages=messages,
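The deleted comment described a client-side workaround for two gpt-4-vision-preview quirks: sending `response_format` at all caused a 400 error, and the model defaulted to roughly a 30-token response. A minimal sketch of what such a workaround might look like — the function name, signature, and the `max_tokens` value are illustrative assumptions, not taken from the repository:

```python
def build_request_kwargs(model, messages, response_format=None):
    """Assemble chat-completion kwargs, applying the gpt-4-vision-preview
    workaround that the removed comment described. Hypothetical helper,
    not part of mentat's actual code."""
    kwargs = {"model": model, "messages": messages}
    if model == "gpt-4-vision-preview":
        # Sending response_format at all returned a 400 for this model,
        # so it is omitted entirely.
        # The model also defaulted to a ~30-token response, so an explicit
        # max_tokens is set (4096 is an illustrative value).
        kwargs["max_tokens"] = 4096
    elif response_format is not None:
        kwargs["response_format"] = response_format
    return kwargs
```

Per the TODO, this special-casing now belongs inside the spice client rather than at the call site, which is why the commit deletes the comment.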
