Unnecessary Openai Adapter? #3528

Open
1 of 2 tasks
xihuai18 opened this issue Dec 25, 2024 · 0 comments
Assignees
Labels
kind:enhancement Indicates a new feature request, improvement, or extension "needs-triage"

Comments

@xihuai18

Validations

  • I believe this is a way to improve. I'll try to join the Continue Discord for questions
  • I'm not able to find an open issue that requests the same enhancement

Problem

continue/core/llm/index.ts

Lines 693 to 785 in af396fa

```ts
async *streamChat(
  _messages: ChatMessage[],
  signal: AbortSignal,
  options: LLMFullCompletionOptions = {},
): AsyncGenerator<ChatMessage, PromptLog> {
  const { completionOptions, log, raw } =
    this._parseCompletionOptions(options);

  const messages = this._compileChatMessages(completionOptions, _messages);

  const prompt = this.templateMessages
    ? this.templateMessages(messages)
    : this._formatChatMessages(messages);

  if (log) {
    if (this.writeLog) {
      await this.writeLog(this._compileLogMessage(prompt, completionOptions));
    }
    if (this.llmRequestHook) {
      this.llmRequestHook(completionOptions.model, prompt);
    }
  }

  let completion = "";

  try {
    if (this.templateMessages) {
      for await (const chunk of this._streamComplete(
        prompt,
        signal,
        completionOptions,
      )) {
        completion += chunk;
        yield { role: "assistant", content: chunk };
      }
    } else {
      if (this.shouldUseOpenAIAdapter("streamChat") && this.openaiAdapter) {
        let body = toChatBody(messages, completionOptions);
        body = this.modifyChatBody(body);

        if (completionOptions.stream === false) {
          // Stream false
          const response = await this.openaiAdapter.chatCompletionNonStream(
            { ...body, stream: false },
            signal,
          );
          const msg = fromChatResponse(response);
          yield msg;
          completion = renderChatMessage(msg);
        } else {
          // Stream true
          const stream = this.openaiAdapter.chatCompletionStream(
            {
              ...body,
              stream: true,
            },
            signal,
          );
          for await (const chunk of stream) {
            const result = fromChatCompletionChunk(chunk);
            if (result) {
              yield result;
            }
          }
        }
      } else {
        for await (const chunk of this._streamChat(
          messages,
          signal,
          completionOptions,
        )) {
          completion += chunk.content;
          yield chunk;
        }
      }
    }
  } catch (error) {
    console.log(error);
    throw error;
  }

  this._logTokensGenerated(completionOptions.model, prompt, completion);

  if (log && this.writeLog) {
    await this.writeLog(`Completion:\n${completion}\n\n`);
  }

  return {
    modelTitle: this.title ?? completionOptions.model,
    prompt,
    completion,
    completionOptions,
  };
}
```

It seems the openaiAdapter branch is unnecessary, since _streamChat in OpenAI.ts should be able to handle everything itself.

Solution

Move the logic from lines 728 to 757 into OpenAI.ts.
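A minimal sketch of what this refactor could look like: the base class keeps a single delegation path, and the adapter-specific streaming logic lives in the OpenAI subclass's `_streamChat` override. All names here (`BaseLLM`, `OpenAIAdapter`, the simplified signatures) are illustrative stand-ins, not Continue's actual API; the real change would also need to carry over the non-stream branch, `signal`, and `completionOptions`.

```typescript
// Illustrative sketch only: simplified stand-ins for Continue's classes.
type ChatMessage = { role: string; content: string };

// Stand-in for the openaiAdapter's streaming call.
interface OpenAIAdapter {
  chatCompletionStream(body: object): AsyncGenerator<string>;
}

abstract class BaseLLM {
  // Base class no longer branches on shouldUseOpenAIAdapter; it always
  // delegates to the subclass's _streamChat.
  async *streamChat(messages: ChatMessage[]): AsyncGenerator<ChatMessage> {
    for await (const chunk of this._streamChat(messages)) {
      yield chunk;
    }
  }

  protected abstract _streamChat(
    messages: ChatMessage[],
  ): AsyncGenerator<ChatMessage>;
}

class OpenAI extends BaseLLM {
  constructor(private adapter: OpenAIAdapter) {
    super();
  }

  // The adapter-specific logic moves here from BaseLLM.streamChat.
  protected async *_streamChat(
    messages: ChatMessage[],
  ): AsyncGenerator<ChatMessage> {
    const body = { messages, stream: true };
    for await (const chunk of this.adapter.chatCompletionStream(body)) {
      yield { role: "assistant", content: chunk };
    }
  }
}
```

With this shape, providers that don't use the adapter just implement `_streamChat` directly, and the base class stays agnostic about OpenAI-compatible endpoints.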

@sestinj sestinj self-assigned this Dec 25, 2024
@dosubot dosubot bot added the kind:enhancement Indicates a new feature request, improvement, or extension label Dec 25, 2024