
Add a possibility to distinguish between chat and complete LM requests #14543

Open
Tracked by #14119
dhuebner opened this issue Nov 27, 2024 · 0 comments
@dhuebner
Member

Feature Description:

Sometimes it is important to know whether a LanguageModelRequest originates from the chat or from the editor. For example, completion requests would normally not use a streamed response, and in some cases a post-processing step is required.

Possible solutions:

  • an additional property requestKind or kind on the request
  • a new interface (it could carry additional completion-related properties and would not need the messages property); see the sketch below
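
A rough sketch of what the two options could look like. The shapes below are simplified assumptions for illustration, not the actual Theia definitions, and the property names in option 2 are hypothetical:

```ts
// Rough sketch only: LanguageModelRequest is simplified here to illustrate
// the two options; it is not the actual Theia definition.
interface LanguageModelRequestMessage {
    actor: 'user' | 'ai' | 'system'; // assumed message shape
    query: string;
}

// Option 1: a discriminating property on the existing request type.
interface LanguageModelRequest {
    messages: LanguageModelRequestMessage[];
    kind?: 'chat' | 'completion'; // the proposed requestKind/kind property
}

// Option 2: a dedicated request type that drops `messages` and carries
// completion-specific context instead (property names are hypothetical).
interface LanguageModelCompletionRequest {
    kind: 'completion';
    prefix: string;  // text before the cursor
    suffix?: string; // text after the cursor
}
```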

It would probably be even more beneficial to add a dedicated completionRequest(...) method to the LanguageModel interface, since the request and response often differ a lot between the two cases.
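
Building on the sketch above, such a method could look roughly like this; the response types and the existing request(...) signature are simplified assumptions as well, not the real API:

```ts
// Sketch of a dedicated completion entry point, building on the request
// types above; the response types and the existing request(...) signature
// are simplified assumptions, not the real API.
interface LanguageModelTextResponse {
    text: string;
}

interface LanguageModelStreamResponse {
    stream: AsyncIterable<string>;
}

interface LanguageModel {
    readonly id: string;
    // Existing chat-style entry point, typically answered with a stream.
    request(request: LanguageModelRequest): Promise<LanguageModelStreamResponse | LanguageModelTextResponse>;
    // Proposed completion entry point: a plain text response that callers can
    // post-process (e.g. trim to the first line) before inserting it.
    completionRequest(request: LanguageModelCompletionRequest): Promise<LanguageModelTextResponse>;
}
```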
