chore(wren-ai-service): minor updates #833
Conversation
cyyeh
commented
Oct 28, 2024
(edited)
- Add timezone support to the SQL expansion and ask APIs
- Add a timeout (30 seconds) to the engine's `execute_sql` method
- Fix a timeout issue for the OpenAI LLM
- Add language support for sql2answer
- Run SQL correction concurrently
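The 30-second timeout mentioned above can be sketched with `asyncio.wait_for`, which is the standard way to bound an awaitable in asyncio. The function names and the simulated engine call below are illustrative, not the actual wren-ai-service implementation.

```python
import asyncio

async def execute_sql(sql: str) -> list:
    # Stand-in for the real engine call; here we only simulate latency.
    await asyncio.sleep(0.01)
    return [{"ok": True}]

async def execute_sql_with_timeout(sql: str, timeout: float = 30.0) -> list:
    # asyncio.wait_for cancels the inner task and raises
    # asyncio.TimeoutError once `timeout` seconds have elapsed.
    return await asyncio.wait_for(execute_sql(sql), timeout=timeout)

result = asyncio.run(execute_sql_with_timeout("SELECT 1"))
print(result)
```

Callers then catch `asyncio.TimeoutError` to surface a clean "query timed out" error instead of hanging indefinitely.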
Force-pushed from 03c239e to 88cab64
Force-pushed from 88cab64 to a5c5ded
Overall LGTM. I have a few comments.
wren-ai-service/src/__main__.py (outdated)

@@ -100,6 +100,7 @@ async def exception_handler(request, exc: Exception):

 @app.exception_handler(RequestValidationError)
 async def request_exception_handler(request, exc: Exception):
+    print(str(exc))
I think we could remove printing the exception here; FastAPI handles the exception itself and also leaves a log in the console.
 replies: List[str] | List[List[str]],
 project_id: str | None = None,
 ) -> dict:
     try:
-        cleaned_generation_result = orjson.loads(
-            clean_generation_result(replies[0])
-        )["results"]
+        if isinstance(replies[0], dict):
+            cleaned_generation_result = [
+                orjson.loads(clean_generation_result(reply["replies"][0]))[
+                    "results"
+                ][0]
+                for reply in replies
+            ]
+        else:
+            cleaned_generation_result = orjson.loads(
+                clean_generation_result(replies[0])
+            )["results"]
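The branching in the diff above appears to accept two reply shapes: a flat list of JSON strings, and a list of dicts each carrying its own `"replies"` list. A minimal sketch of that logic, using stdlib `json` in place of `orjson` and a no-op stand-in for `clean_generation_result`:

```python
import json

def clean_generation_result(text: str) -> str:
    # Hypothetical stand-in for the real cleaner (which presumably
    # strips markdown fences etc.); here it is a no-op.
    return text

def parse_replies(replies):
    if isinstance(replies[0], dict):
        # Batched path: each element wraps its own "replies" list;
        # take the first parsed result from each.
        return [
            json.loads(clean_generation_result(reply["replies"][0]))["results"][0]
            for reply in replies
        ]
    # Single-generation path: replies is a flat list of JSON strings.
    return json.loads(clean_generation_result(replies[0]))["results"]

single = parse_replies(['{"results": [{"sql": "SELECT 1"}]}'])
batched = parse_replies([{"replies": ['{"results": [{"sql": "SELECT 2"}]}']}])
```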
I'm wondering which scenario we encountered that motivated this change.
@@ -57,7 +57,7 @@ def __init__(
 )

 @component.output_types(embedding=List[float], meta=Dict[str, Any])
-@backoff.on_exception(backoff.expo, openai.RateLimitError, max_time=60, max_tries=3)
+@backoff.on_exception(backoff.expo, openai.APIError, max_time=60.0, max_tries=3)
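For context on the diff above: in the openai SDK, `APIError` is an ancestor of `RateLimitError`, so retrying on `APIError` widens the retry to other API failures as well. A hand-rolled sketch of what `backoff.on_exception(backoff.expo, ..., max_tries=3)` does, using a stand-in exception class so the example is self-contained:

```python
import time

class TransientAPIError(Exception):
    """Stand-in for openai.APIError (which RateLimitError subclasses)."""

calls = {"n": 0}

def flaky():
    # Fails twice, then succeeds, to exercise the retry loop.
    calls["n"] += 1
    if calls["n"] < 3:
        raise TransientAPIError("try again")
    return "ok"

def retry_with_expo_backoff(fn, exc_type, max_tries=3, base=0.001):
    # Sketch of exponential backoff: retry on exc_type, doubling the
    # wait each attempt, re-raising after the final try.
    for attempt in range(max_tries):
        try:
            return fn()
        except exc_type:
            if attempt == max_tries - 1:
                raise
            time.sleep(base * (2 ** attempt))

result = retry_with_expo_backoff(flaky, TransientAPIError)
print(result)  # -> ok
```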
I'm curious about why we changed the backoff error class.
configurations: Optional[AskConfigurations] = AskConfigurations(
    language="English",
    timezone=AskConfigurations.Timezone(name="Asia/Taipei", utc_offset="+8:00"),
)
Nitpick: I think we should give the default value in the class, so it can be skipped here and the code reads more clearly.
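The suggestion above can be sketched with stdlib dataclasses (the real models may well be pydantic; the class and field names are taken from the snippet, the structure is an assumption). With defaults on the class, call sites can simply omit `configurations`:

```python
from dataclasses import dataclass, field

@dataclass
class Timezone:
    name: str = "Asia/Taipei"
    utc_offset: str = "+8:00"

@dataclass
class AskConfigurations:
    # Defaults live on the class, so AskConfigurations() with no
    # arguments yields the same value as the explicit call site.
    language: str = "English"
    timezone: Timezone = field(default_factory=Timezone)

config = AskConfigurations()
print(config.language, config.timezone.name)
```

A mutable default like `Timezone` must go through `default_factory` so each instance gets its own copy.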
LGTM