Chat with LLM models directly from the command line.
Screen recording: `llm-term.mov`
```shell
pipx install llm-term
```
You can install `llm-term` with extra dependencies for different providers:

```shell
pipx install "llm-term[anthropic]"
pipx install "llm-term[mistralai]"
```

Or, you can install all the extras:

```shell
pipx install "llm-term[all]"
```
Then, you can chat with the model directly from the command line:

```shell
llm-term
```

`llm-term` works with multiple LLM providers, but by default it uses OpenAI. Most providers require extra packages to be installed, so make sure you read the Providers section below. To use a different provider, you can set the `--provider` / `-p` flag:

```shell
llm-term --provider anthropic
```
If needed, make sure you have your LLM's API key set as an environment variable (this can also be set via the `--api-key` / `-k` flag in the CLI). If your LLM uses a particular environment variable for its API key, such as `OPENAI_API_KEY`, that will be detected automatically.

```shell
export LLM_API_KEY="xxxxxxxxxxxxxx"
```
Optionally, you can set a custom model. `llm-term` defaults to `gpt-4o` (this can also be set via the `--model` / `-m` flag in the CLI):

```shell
export LLM_MODEL="gpt-4o-mini"
```
Want to start the conversation directly from the command line? No problem, just pass your prompt to `llm-term`:

```shell
llm-term show me python code to detect a palindrome
```
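A response to that prompt might look something like the following sketch (illustrative only, not actual model output):

```python
def is_palindrome(text: str) -> bool:
    """Return True if text reads the same forwards and backwards,
    ignoring case and non-alphanumeric characters."""
    cleaned = "".join(ch.lower() for ch in text if ch.isalnum())
    return cleaned == cleaned[::-1]

print(is_palindrome("A man, a plan, a canal: Panama"))  # True
print(is_palindrome("hello"))  # False
```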
You can also set a custom system prompt. `llm-term` defaults to a reasonable prompt for chatting with the model, but you can set your own (this can also be set via the `--system` / `-s` flag in the CLI):

```shell
export LLM_SYSTEM_MESSAGE="You are a helpful assistant who talks like a pirate."
```
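If you only want the custom persona for a single session, the same text can be passed via the flag instead (the prompt string here is just an example):

```shell
llm-term --system "You are a helpful assistant who talks like a pirate."
```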
By default, `llm-term` uses OpenAI as your LLM provider. The default model is `gpt-4o`, and you can also use the `OPENAI_API_KEY` environment variable to set your API key.
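Put together, starting an OpenAI-backed session might look like this (the key value is a placeholder):

```shell
export OPENAI_API_KEY="xxxxxxxxxxxxxx"
llm-term --model gpt-4o
```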
You can request access to Anthropic here. The default model is `claude-3-5-sonnet-20240620`, and you can use the `ANTHROPIC_API_KEY` environment variable. To use `anthropic` as your provider you must install the `anthropic` extra.

```shell
pipx install "llm-term[anthropic]"
llm-term --provider anthropic
```
You can request access to MistralAI here. The default model is `mistral-small-latest`, and you can use the `MISTRAL_API_KEY` environment variable. To use `mistralai` as your provider you must install the `mistralai` extra.

```shell
pipx install "llm-term[mistralai]"
llm-term --provider mistralai
```
Ollama is an open source LLM provider. Its models run locally on your machine, so you don't need to worry about API keys or rate limits. The default model is `llama3`, and you can see which models are available on the Ollama website. Make sure to download Ollama first.

```shell
ollama pull llama3
llm-term --provider ollama --model llama3
```