Custom inference endpoints #306
Comments
Thank you for opening your first issue in this project! Engagement like this is essential for open source projects! 🤗
Are you asking about locally hosted models? If so, this feature request is being discussed in #190.
Yes, enterprises have been building and deploying custom LLM models on-premises. We therefore want to use these custom endpoints in JupyterLab instead of ChatGPT, etc.
We have our own inference endpoint, with the model already deployed. Is it possible to configure the extension to point to that endpoint, or is there another way to use it directly? Does the model have to come from
@c3-viral-lakhani In #322, @dlqqq added support for OpenAI proxies. GPT4All is another way to use local models. If you have another locally deployed model, I recommend filing an issue and/or opening a pull request to add support for it to Jupyter AI, to ensure that the magic commands and chat UI work with your own endpoint. Thanks for your interest!
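For readers landing here, a minimal sketch of the OpenAI-proxy route mentioned above, assuming your self-hosted endpoint exposes an OpenAI-compatible API and that the OpenAI client in use honors the `OPENAI_API_BASE` environment variable; the URL and key below are placeholders, not real values:

```python
import os

# Point the OpenAI client at a self-hosted, OpenAI-compatible endpoint.
# Both values below are hypothetical placeholders.
os.environ["OPENAI_API_BASE"] = "https://llm.internal.example.com/v1"
os.environ["OPENAI_API_KEY"] = "sk-placeholder"  # many proxies ignore the key, but the client requires one

# In a notebook, the magics should then route through the proxy, e.g.:
#   %load_ext jupyter_ai_magics
#   %%ai chatgpt
#   Explain the difference between a list and a tuple.
```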
Problem
A lot of enterprises are building their own LLM models. Can we use them instead of ChatGPT, Hugging Face, etc.? SageMaker is one option, but I should be able to provide just an inference endpoint and use it for prompts in JupyterLab!
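To make the request concrete, here is a rough sketch of the kind of adapter such support could build on: a LangChain LLM wrapper (the interface Jupyter AI providers are built around) that POSTs prompts to an arbitrary HTTP inference endpoint. The URL, payload shape, and response field are assumptions; adjust them for your own service:

```python
from typing import List, Optional

import requests
from langchain.llms.base import LLM


class CustomEndpointLLM(LLM):
    """Calls a self-hosted inference endpoint over plain HTTP."""

    # Hypothetical endpoint URL; replace with your own service.
    endpoint_url: str = "https://llm.internal.example.com/generate"

    @property
    def _llm_type(self) -> str:
        return "custom-endpoint"

    def _call(self, prompt: str, stop: Optional[List[str]] = None) -> str:
        response = requests.post(
            self.endpoint_url,
            json={"prompt": prompt, "stop": stop},
            timeout=60,
        )
        response.raise_for_status()
        # Assumes the service replies with {"generated_text": "..."}.
        return response.json()["generated_text"]


llm = CustomEndpointLLM()
print(llm("What is JupyterLab?"))
```

A Jupyter AI provider wrapping a class like this is roughly what the maintainers suggest contributing in the comment above.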