Add cohere LLM #46
Conversation
❌ Changes requested.
- Reviewed the entire pull request up to 4bd3e9f
- Looked at 167 lines of code in 3 files
- Took 1 minute and 26 seconds to review
- Skipped 2 files when reviewing
- Skipped posting 1 additional comment because it didn't meet the confidence threshold of 50%
1. continuous_eval/generators/simple.py:13
   - Assessed confidence: 80%
   - Grade: 0%
   - Comment: `logging.basicConfig` should be called only once, in the main function, not at module level: `basicConfig` does nothing if the root logger already has handlers configured for it. It is also not clear why the logging level is set to WARNING. Consider removing this line or moving it to the main function.
   - Reasoning: The PR adds support for the Cohere language model in the `LLMFactory` class and a progress bar to the `SimpleDatasetGenerator`'s `generate` method. The changes appear to be correctly implemented, apart from the minor logging-setup issue described above.
Workflow ID: wflow_5afwPDW8eG3UJ9f9

Want Ellipsis to fix these issues? Tag @ellipsis-dev in a comment. We'll respond in a few minutes.
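The placement advice above can be sketched as follows: library modules take a named logger and never configure logging at import time, while `basicConfig` runs exactly once at the entry point. The module and function names here are illustrative, not taken from the PR.

```python
import logging

# Library module code: obtain a named logger; do NOT call basicConfig here.
logger = logging.getLogger(__name__)

def generate():
    # Emitted only if the application has configured a handler.
    logger.warning("generation started")

def main():
    # Configure logging exactly once, at the application entry point.
    # basicConfig is a no-op when the root logger already has handlers,
    # which is why calling it from an imported module is unreliable.
    logging.basicConfig(level=logging.WARNING)
    generate()

if __name__ == "__main__":
    main()
```

Because the second and later `basicConfig` calls are silently ignored once a handler exists, a module-level call can either hijack the application's logging setup or do nothing at all, depending on import order.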
```python
response = self.client.generate(model="command", prompt=prompt, temperature=temperature, max_tokens=1024)  # type: ignore
try:
    content = response.generations[0].text
except:
```
Consider raising an exception instead of returning an empty string when an error occurs in generating content from Cohere. This would allow the calling code to decide how to handle the error. For example:
```diff
-except:
+except Exception as e:
+    raise RuntimeError(f'Failed to generate content from Cohere: {e}')
```
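The suggested change can be sketched end to end. The stub client below stands in for the real Cohere client so the example is self-contained; `FakeClient`, `FakeResponse`, and `generate_text` are hypothetical names introduced for illustration, and only the `response.generations[0].text` access mirrors the diff.

```python
class FakeResponse:
    """Stand-in for a Cohere generate response with no generations."""
    generations = []

class FakeClient:
    """Stand-in for the Cohere client; always returns an empty response."""
    def generate(self, model, prompt, temperature, max_tokens):
        return FakeResponse()

def generate_text(client, prompt, temperature=0.7):
    response = client.generate(model="command", prompt=prompt,
                               temperature=temperature, max_tokens=1024)
    try:
        return response.generations[0].text
    except Exception as e:
        # Surface the failure instead of silently returning "" so the
        # caller can decide how to handle it.
        raise RuntimeError(f"Failed to generate content from Cohere: {e}")

# Calling code decides how to handle the error:
try:
    generate_text(FakeClient(), "hello")
except RuntimeError as err:
    print(err)  # prints: Failed to generate content from Cohere: list index out of range
```

Raising a wrapped `RuntimeError` preserves the original cause in the message while letting callers choose between retrying, logging, or falling back, whereas returning an empty string hides the failure entirely.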
Looks great!
Summary:
This PR integrates the Cohere language model into the `LLMFactory` class, adds a progress bar to the `generate` function in `SimpleDatasetGenerator`, and includes a test for the Cohere model.

Key points:
- Added Cohere support to the `LLMFactory` class.
- Added a progress bar to the `generate` function in `SimpleDatasetGenerator`.
- Added a test for the Cohere model in `llm_factory_test.py`.

Generated with ❤️ by ellipsis.dev