Enable Azure OpenAI engines #20
Conversation
This looks great! Just left one comment. Really appreciate the contribution 🙏
py/autoevals/llm.py
Outdated
for m in SUPPORTED_MODELS:
    # Prefixes are ok, because they are just time snapshots
    if model.startswith(m):
        found = True
        break
if not found:
    raise ValueError(f"Unsupported model: {model}. Currently only supports OpenAI chat models.")
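For context, the check under discussion can be sketched as a standalone helper (a minimal sketch with a hypothetical function name and an illustrative model list, not the library's actual code):

```python
# Illustrative subset only, not the library's full list.
SUPPORTED_MODELS = ["gpt-3.5-turbo", "gpt-4"]

def check_model(model: str) -> None:
    # Prefix matching accepts dated snapshots such as "gpt-4-0613",
    # since those are just time snapshots of a supported base model.
    if not any(model.startswith(m) for m in SUPPORTED_MODELS):
        raise ValueError(
            f"Unsupported model: {model}. Currently only supports OpenAI chat models."
        )

check_model("gpt-4-0613")  # passes: prefix match on "gpt-4"
```

A name that matches no prefix (e.g. an Azure deployment name) would raise, which is why the reviewer suggests dropping the check entirely.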
We may want to just remove this check altogether — we definitely support more than the listed models, and "non-chat" flavored models are all deprecated at this point.
Removed
One more thing -- can you make the equivalent change in the JS implementation?
JS isn't really my forte - I took a brief look - but there's a chance I would do more harm than good :)
No worries! I can follow up and do that.
Hopefully last thing -- I ran the linter on your change and it appears to be failing (missing a comma after …).
Done - but it looked like the …
Ah yep -- I think the way we cache parameters will be invalidated every time a new one is added. Let me update the cached test results and push to your branch.
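The invalidation the maintainer describes can be illustrated with a minimal sketch (this is an assumed caching scheme for illustration, not autoevals' actual caching code): if the cache key is derived from the full parameter set, introducing a new parameter changes the key even when all existing parameters are unchanged, so previously cached results no longer match.

```python
import hashlib
import json

def cache_key(params: dict) -> str:
    # Hash a canonical serialization of every parameter, so any new
    # parameter (even one set to None) produces a different key.
    payload = json.dumps(params, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

old = cache_key({"model": "gpt-4", "temperature": 0})
new = cache_key({"model": "gpt-4", "temperature": 0, "engine": None})
print(old == new)  # False: cached test results are invalidated
```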
Love the library!
I had to make modifications to get it to work with the Azure instances of OpenAI models that I use.
Primarily, I made it possible to pass an engine parameter to the LLMClassifier as well as, or instead of, the model parameter, which is what the Azure instance requires. I tried to do it in a way that minimized the number of edits to the existing codebase - if this is something you'd be interested in including, I'm happy to make any changes to fit your design principles.
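The shape of the change described above might look something like this (a hedged sketch: the class body and method names here are illustrative, not autoevals' actual signatures). Azure OpenAI addresses a deployment by an engine name, while the public OpenAI API uses a model name, so the classifier accepts either and forwards whichever was supplied:

```python
class LLMClassifier:
    """Illustrative sketch of accepting engine as well as/instead of model."""

    def __init__(self, model=None, engine=None, **kwargs):
        if model is None and engine is None:
            raise ValueError("Either model or engine must be provided.")
        self.model = model
        self.engine = engine

    def request_args(self):
        # Forward whichever identifier was supplied to the completion call.
        if self.engine is not None:
            return {"engine": self.engine}
        return {"model": self.model}

clf = LLMClassifier(engine="my-azure-gpt4-deployment")
print(clf.request_args())  # {'engine': 'my-azure-gpt4-deployment'}
```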