Hello! I'm new to the project and it wasn't quite clear to me how to see which models I have installed locally after installing them via gpt4all. After some research I found that they are listed in `~/.cache/gpt4all`.

It would be nice to be able to run `llm models --installed`, and/or to have an asterisk, `[I]`, or some other indicator in the standard `llm models` listing that denotes installed models.

Maybe this functionality is better left to the plugin? I'm not sure how plugins interface with the main application, or whether they are able to implement subcommands.
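In the meantime, here is a minimal sketch of inspecting that cache directory directly. It assumes the default gpt4all download location mentioned above (`~/.cache/gpt4all`) and simply lists the files found there; the `installed_models` helper name and the default-path assumption are mine, not part of either project's API.

```python
from pathlib import Path

def installed_models(cache_dir: Path = Path.home() / ".cache" / "gpt4all"):
    """Return the names of model files found in the local gpt4all cache."""
    if not cache_dir.is_dir():
        return []
    # Downloaded models appear as individual files in the cache directory.
    return sorted(p.name for p in cache_dir.iterdir() if p.is_file())

if __name__ == "__main__":
    for name in installed_models():
        print(name)
```

Something like this could presumably back an `--installed` flag or annotate the `llm models` listing, if that turns out to be the right place for it.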