We do want to go multi-model, but our experience has shown that the entire prompt engineering needs to be different depending on the model you are using; you would have to rewrite system prompts and probably more. Additionally, we use different models depending on the task, so it's not just replacing GPT-4 with Ollama, it's replacing multiple different OpenAI models with alternatives. That being said, we want to get to the point where at least one setup works really well before adding more models. Generally our code is already designed for multiple models and adding more is possible, but it's not plug-and-play yet: you have to dig into the codebase to do that. A rough sketch of that per-task wiring is below.
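This is not the project's actual code, just a hypothetical illustration of the kind of per-task setup described above: each task carries its own model, endpoint, and system prompt, which is why swapping providers means more than changing one setting. All names, endpoints, and prompts here are made up.

```python
# Hypothetical sketch only -- not this project's code. Illustrates why swapping
# providers is not plug-and-play: every task has its own model, endpoint, and
# prompt, and each entry would need to be rewritten for a new provider.
from dataclasses import dataclass

@dataclass
class TaskConfig:
    model: str          # model name at the provider
    base_url: str       # provider endpoint (OpenAI, Ollama, ...)
    system_prompt: str  # prompt engineering tends to be model-specific

TASKS = {
    "plan": TaskConfig("gpt-4", "https://api.openai.com/v1", "You are a planning assistant..."),
    "code": TaskConfig("gpt-4", "https://api.openai.com/v1", "You are a coding assistant..."),
    # Switching to a local model means a new endpoint *and* a reworked prompt:
    # "plan": TaskConfig("llama3", "http://localhost:11434/v1", "Rewritten planner prompt..."),
}
```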
I think it would be a real killer feature if you could use Ollama, for example, and choose any local LLM when running the dev builder. I see that there is only an OpenAI option.
Maybe I missed something and there is somewhere to insert an Ollama host instead of OpenAI, but at first glance there is nothing like that, only auth and org settings. A minimal sketch of what that could look like is below.
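For what it's worth, Ollama does expose an OpenAI-compatible endpoint, so here is a minimal sketch of what an "Ollama host instead of OpenAI" setting could look like. This assumes the OpenAI Python SDK is the client in use, which may not match the project; the host, API key, and model name are only examples.

```python
# Minimal sketch, assuming the OpenAI Python SDK: point the client at a local
# Ollama server via its OpenAI-compatible /v1 endpoint. Host and model are examples.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # local Ollama host instead of api.openai.com
    api_key="ollama",                      # Ollama ignores the key, but the SDK requires one
)

response = client.chat.completions.create(
    model="llama3",  # any model pulled locally, e.g. `ollama pull llama3`
    messages=[{"role": "user", "content": "Hello from a local model"}],
)
print(response.choices[0].message.content)
```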