Add a function for centrally handling the engine list, with caching
This commit introduces a few key changes to the `Coder` class in `coder.py`:

1. Removed an unnecessary import and added `functools` to the import list.
2. Added a new method `get_openai_supported_engines` that fetches the list of models supported by the OpenAI API key. This method uses `functools.lru_cache` for caching the result, improving efficiency.
3. Refactored `get_llm_model_name` to use the new `get_openai_supported_engines` method, reducing code duplication and improving readability.

These changes should make the `Coder` class more efficient and easier to understand. 🧠💡
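The caching pattern the commit describes can be sketched on its own, without the OpenAI client. This is a minimal stand-in (the `Client` class and its fake engine list are hypothetical, not project code) showing `functools.lru_cache` stacked under `@staticmethod`, the same decorator order the new method uses — `@staticmethod` must be outermost so the cache wraps the raw function:

```python
import functools

class Client:
    calls = 0  # counts how often the "real" fetch runs

    @staticmethod
    @functools.lru_cache
    def supported_engines():
        """Hypothetical stand-in for the OpenAI engine-list call."""
        Client.calls += 1
        return ("gpt-4", "gpt-3.5-turbo")

# The first call does the work; every later call reuses the cached tuple.
assert Client.supported_engines() == ("gpt-4", "gpt-3.5-turbo")
assert Client.supported_engines() is Client.supported_engines()
assert Client.calls == 1
```

Note that the bare `@functools.lru_cache` form (no parentheses) requires Python 3.8+, and the cache lives for the whole process, so a changed API key would not refresh the list without an explicit `cache_clear()`.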
TechNickAI committed Jul 13, 2023
1 parent fe7ddda commit ab49577
Showing 1 changed file with 15 additions and 7 deletions.
22 changes: 15 additions & 7 deletions aicodebot/coder.py
@@ -3,7 +3,7 @@
 from langchain.chat_models import ChatOpenAI
 from openai.api_resources import engine
 from pathlib import Path
-import fnmatch, openai, tiktoken
+import fnmatch, functools, openai, tiktoken
 
 DEFAULT_MAX_TOKENS = 512
 PRECISE_TEMPERATURE = 0.05
@@ -14,7 +14,6 @@ class Coder:
     """
     The Coder class encapsulates the functionality of interacting with LLMs,
     git, and the local file system.
     """

@classmethod
@@ -42,6 +41,17 @@ def generate_directory_structure(cls, path, ignore_patterns=None, use_gitignore=
 
         return structure
 
+    @staticmethod
+    @functools.lru_cache
+    def get_openai_supported_engines():
+        """Get a list of the models supported by the OpenAI API key."""
+        config = read_config()
+        openai.api_key = config["openai_api_key"]
+        engines = engine.Engine.list()
+        out = [engine.id for engine in engines.data]
+        logger.trace(f"OpenAI supported engines: {out}")
+        return out
+
     @staticmethod
     def get_llm(
         model_name,
@@ -73,18 +83,16 @@ def get_llm_model_name(token_size=0):
             "gpt-3.5-turbo-16k": 16384,
         }
 
-        config = read_config()
-        openai.api_key = config["openai_api_key"]
-        engines = engine.Engine.list()
+        engines = Coder.get_openai_supported_engines()
 
         # For some unknown reason, tiktoken often underestimates the token size by ~10%, so let's buffer
        token_size = int(token_size * 1.1)
 
         # Try to use GPT-4 if it is supported and the token size is small enough
-        if "gpt-4" in [engine.id for engine in engines.data] and token_size <= model_options["gpt-4"]:
+        if "gpt-4" in engines and token_size <= model_options["gpt-4"]:
             logger.info(f"Using GPT-4 for token size {token_size}")
             return "gpt-4"
-        elif "gpt-4-32k" in [engine.id for engine in engines.data] and token_size <= model_options["gpt-4-32k"]:
+        elif "gpt-4-32k" in engines and token_size <= model_options["gpt-4-32k"]:
             logger.info(f"Using GPT-4-32k for token size {token_size}")
             return "gpt-4-32k"
         elif token_size <= model_options["gpt-3.5-turbo"]:
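The selection logic in `get_llm_model_name` can be sketched as a standalone function. The engine list is passed in rather than fetched, the GPT-4 context-window values (8192 and 32768) are assumptions since the diff does not show those dictionary entries, and the diff is truncated after the `gpt-3.5-turbo` branch, so the 16k branch and the `None` fallback here are also assumptions:

```python
def pick_model(supported, token_size=0):
    """Pick a model whose context window fits the buffered token count.

    `supported` is the list of engine ids available to the API key.
    """
    model_options = {
        "gpt-4": 8192,        # assumed; not shown in the diff
        "gpt-4-32k": 32768,   # assumed; not shown in the diff
        "gpt-3.5-turbo": 4096,
        "gpt-3.5-turbo-16k": 16384,
    }
    # tiktoken tends to underestimate by ~10%, so buffer (as in the commit)
    token_size = int(token_size * 1.1)

    # Prefer GPT-4 when the key supports it and the prompt fits
    if "gpt-4" in supported and token_size <= model_options["gpt-4"]:
        return "gpt-4"
    if "gpt-4-32k" in supported and token_size <= model_options["gpt-4-32k"]:
        return "gpt-4-32k"
    if token_size <= model_options["gpt-3.5-turbo"]:
        return "gpt-3.5-turbo"
    if token_size <= model_options["gpt-3.5-turbo-16k"]:
        return "gpt-3.5-turbo-16k"
    return None  # nothing fits

assert pick_model(["gpt-4"], 1000) == "gpt-4"
assert pick_model([], 1000) == "gpt-3.5-turbo"
assert pick_model([], 10000) == "gpt-3.5-turbo-16k"
```

Because the chain checks `gpt-4` before `gpt-4-32k`, a key with both still gets the smaller (cheaper) model whenever the prompt fits in 8k tokens.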

1 comment on commit ab49577

@github-actions

🤖AICodeBot Review Comments:

  1. In the Coder class, the import statement for functools has been added, which is necessary for the @functools.lru_cache decorator to work properly.
  2. The new method get_openai_supported_engines has been added to the Coder class, which fetches the list of models supported by the OpenAI API key. This method uses functools.lru_cache for caching the result, which improves efficiency.
  3. The method get_llm_model_name has been refactored to use the new get_openai_supported_engines method, reducing code duplication and improving readability.

Overall, these changes should make the Coder class more efficient and easier to understand. However, please ensure that the code has been thoroughly tested and that the caching behavior is working as expected.
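On the review's point about verifying the caching behavior: `lru_cache` wrappers expose `cache_info()` and `cache_clear()`, which make this easy to check. A minimal sketch, independent of the project's code (the `fetch_engines` stand-in is hypothetical):

```python
import functools

@functools.lru_cache
def fetch_engines():
    """Hypothetical stand-in for the real API call."""
    fetch_engines.calls = getattr(fetch_engines, "calls", 0) + 1
    return ("gpt-4",)

fetch_engines()
fetch_engines()
assert fetch_engines.calls == 1               # only one real fetch happened
assert fetch_engines.cache_info().hits == 1   # the second call hit the cache

fetch_engines.cache_clear()                   # reset between tests if needed
fetch_engines()
assert fetch_engines.calls == 2               # cleared cache forces a re-fetch
```

Calling `cache_clear()` between tests avoids one test's cached engine list leaking into another.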

AICodeBot
