gpt_cli

The most terminal-friendly ChatGPT application

gpt_cli is a flexible, easily scriptable ChatGPT interface for the UNIX/Linux terminal.

  • Type your question
  • Get an answer right away without ever having to leave your terminal
  • ✨ Magic ✨

Features

  • Configurable settings, including LLM model selection (gpt-3.5-turbo, gpt-4, etc.)
  • Export chat history as OpenAI API-compatible JSON
  • All data is stored either as plain text or as JSON

The overriding design goal for gpt_cli is to be as friendly to the UNIX/Linux shell environment as possible.

Tech

gpt_cli uses a couple of open source libraries to work properly:

  • libcurl
  • libjson-c

And of course, gpt_cli requires access to OpenAI's ChatGPT API to function.
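
Under the hood, each prompt becomes an HTTPS request to OpenAI's Chat Completions endpoint. As a rough sketch, the request gpt_cli issues through libcurl corresponds to a standard API call like the one below (the exact payload gpt_cli builds internally may differ):

# A sketch of the underlying OpenAI Chat Completions request;
# OPENAI_API_KEY is assumed to hold your API key.
curl https://api.openai.com/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": "Hello!"}],
        "temperature": 1
      }'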

Installation

  • First, install the dependencies (libcurl and libjson-c) using the appropriate package manager for your system:
      • Debian/Ubuntu: run sudo apt install libcurl4 libcurl4-gnutls-dev and sudo apt install libjson-c5 libjson-c-dev.
      • macOS: install Homebrew, then run brew install json-c.
      • Android: install the Termux app, then inside Termux run apt install make clang curl libcurl json-c.
  • Next, run the make command in this git repository's folder to build the application (a complete Debian/Ubuntu example follows below).
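
For example, a complete Debian/Ubuntu install and build looks like this:

# Install the dependencies (Debian/Ubuntu)
sudo apt install libcurl4 libcurl4-gnutls-dev
sudo apt install libjson-c5 libjson-c-dev

# Build gpt_cli from the cloned repository folder
make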

How to use

Send a basic prompt to ChatGPT with default settings, without saving the chat history:

gpt -u 'This is my first prompt'

To enter interactive mode, use the -r or --repl option:

gpt -r
> Hello!
Hello, how can I assist you today?
>

Send a prompt to ChatGPT with default settings and save the chat history to a JSON file:

gpt -u 'This is my first prompt where I am saving the chat history' -j 'my_chat.json'
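
The saved file uses OpenAI-style message objects, so it can be fed back to the API or processed with standard JSON tools. Assuming one user turn and one assistant reply, my_chat.json would look roughly like this (the exact layout gpt_cli writes may differ):

cat my_chat.json
# A sketch of the saved history; the contents shown are illustrative placeholders
[
  {"role": "user", "content": "This is my first prompt where I am saving the chat history"},
  {"role": "assistant", "content": "...the model's reply..."}
]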

Send a prompt to ChatGPT using the gpt-4 model, a temperature of 0, and a system prompt:

gpt -u 'Hello!' -s 'You are a dog' -m 'gpt-4' -t '0'

You can also pipe into the command, and use input and output redirection:

echo 'Hello!' | gpt
# Sends the output text of the echo command to the LLM

gpt < prompt.txt
# Sends the prompt from the text file to the LLM

gpt -u 'Hello!' > response.txt
# Outputs the response text to a file
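
These can be combined freely, which makes gpt_cli easy to drop into shell scripts. For instance, using only the options documented below:

cat prompt.txt | gpt -m 'gpt-4' -t '0' > response.txt
# Sends the prompt file to gpt-4 with temperature 0 and writes the reply to response.txt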

The command's options are listed below:

gpt [-m|--model] [-t|--temperature] [-s|--system_prompt] [-u|--user_prompt] [-j|--json_file] [-h|--help] [-r|--repl]