
add max_tokens field for llm interface #66

Merged: 1 commit into main from feature/llm-max-token on May 27, 2024
Conversation

@yisz (Contributor) commented on May 25, 2024

🚀 This description was created by Ellipsis for commit a9e8583

Summary:

This PR introduces a max_tokens parameter to the LLM interface methods, allowing control over the maximum number of tokens generated, and adds Cohere API key support in the environment configuration.

Key points:

  • Added max_tokens parameter to LLMInterface.run and LLMFactory.run with a default of 1024.
  • Updated _llm_response in LLMFactory to pass max_tokens to LLM client calls.
  • Added COHERE_API_KEY to .env.example.

Generated with ❤️ by ellipsis.dev
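
To make the key points above concrete, here is a minimal sketch of the shape of the change. The class and method names follow the PR description (LLMInterface.run, LLMFactory.run, _llm_response), but the bodies, the prompt format, and the client wiring are illustrative assumptions rather than the repository's actual code. The .env.example change is separate and simply adds a COHERE_API_KEY entry alongside the existing provider keys.

```python
from abc import ABC, abstractmethod


class LLMInterface(ABC):
    @abstractmethod
    def run(self, prompt, temperature: float = 0, max_tokens: int = 1024) -> str:
        """Generate a completion; max_tokens caps the generated output (default 1024)."""
        ...


class LLMFactory(LLMInterface):
    def __init__(self, model: str):
        self.model = model  # client construction omitted in this sketch

    def _llm_response(self, prompt, temperature: float, max_tokens: int) -> str:
        # The PR forwards max_tokens to the underlying LLM client call, e.g. for an
        # OpenAI-style client (illustrative, not the repository's exact code):
        #   self.client.chat.completions.create(
        #       model=self.model, messages=prompt,
        #       temperature=temperature, max_tokens=max_tokens,
        #   )
        raise NotImplementedError

    def run(self, prompt, temperature: float = 0, max_tokens: int = 1024) -> str:
        # The public default of 1024 is passed straight through to the client call.
        return self._llm_response(prompt, temperature=temperature, max_tokens=max_tokens)
```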

@yisz requested a review from @pantonante on May 25, 2024 16:17
@ellipsis-dev (bot) left a comment


👍 Looks good to me! Reviewed everything up to a9e8583 in 33 seconds

More details
  • Looked at 112 lines of code in 2 files
  • Skipped 0 files when reviewing.
  • Skipped posting 1 drafted comment based on config settings.
1. continuous_eval/llm_factory.py:42
  • Draft comment:
    It's important to include a description in the PR to explain the reason for these changes and how they improve the functionality or fix an issue. This helps reviewers understand the context and the intent behind the changes.
  • Reason this comment was not posted:
    Confidence changes required: 33%
    The PR description is empty, which is not ideal because it doesn't provide context or reasoning behind the changes. However, the code changes suggest that the max_tokens parameter is being added to the run method of the LLMInterface and its implementations. This parameter controls the maximum number of tokens the model should generate in response to a prompt. The changes are consistent across the file, and the default value for max_tokens is set to 1024, which is a reasonable default for many language models.

Workflow ID: wflow_NABLkaixgi063xd4


You can customize Ellipsis with 👍 / 👎 feedback, review rules, user-specific overrides, quiet mode, and more.
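
As the reviewer's note observes, max_tokens caps the model's generated output and defaults to 1024, so the cap only changes when a caller overrides it explicitly. A hypothetical usage against the sketch above (the model name and prompt format are assumptions, not taken from the repository):

```python
# Hypothetical usage of the sketched LLMFactory (prompt format assumed):
messages = [{"role": "user", "content": "Summarize the retrieved context."}]
llm = LLMFactory(model="gpt-4o-mini")        # hypothetical model name
# llm.run(messages)                          # capped at the 1024-token default
# llm.run(messages, max_tokens=4096)         # explicit override for longer outputs
```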

@pantonante merged commit 9607fc0 into main on May 27, 2024
@pantonante deleted the feature/llm-max-token branch on May 27, 2024 07:30