What would you like to do?
- Learn about the project
- Install the wrapper
- Learn more about configuration/features
- Learn how to use it
- Troubleshoot common issues
- Upgrade the wrapper
- Use GPT4
- Report a bug
- Get support
ChatGPT Wrapper is an open-source unofficial Power CLI, Python API and Flask API that lets you interact programmatically with ChatGPT/GPT4.
🤖 The ChatGPT Wrapper lets you use the powerful ChatGPT/GPT4 bot from the command line.
💬 Runs in Shell. You can call and interact with ChatGPT/GPT4 in the terminal.
💻 Supports official ChatGPT API. Make API calls directly to the OpenAI ChatGPT endpoint (all supported models accessible by your OpenAI account)
🔌 Simple plugin architecture. Extend the wrapper with custom functionality
🗣 Supports multiple LLM providers. Provider plugins allow interacting with other LLMs (GPT-3, Cohere, Hugging Face, etc.)
🐍 Python API. The ChatGPT Wrapper also has a Python library that lets you use ChatGPT/GPT4 in your Python scripts.
🐳 Docker image. The ChatGPT Wrapper is also available as a docker image. (experimental)
🧪 Flask API. You can use the ChatGPT Wrapper as an API. (experimental)
Run an interactive CLI in the terminal:
Or just get a quick response for one question:
See below for details on using ChatGPT as an API from Python.
To use this repository, you need setuptools installed. You can install it with `pip install setuptools`. Also make sure you have the latest version of pip: `pip install --upgrade pip`
To use the 'api' backend (the default), you need a database backend (SQLite by default; any database supported by SQLAlchemy can be used).
Install the latest version of this software directly from github with pip:
pip install git+https://github.com/mmabrouk/chatgpt-wrapper
- Install the latest version of this software directly from git:
git clone https://github.com/mmabrouk/chatgpt-wrapper.git
- Install the development package:
cd chatgpt-wrapper
pip install -e .
The wrapper works with several different backends to connect to the ChatGPT models, and installation steps differ for each backend.
- Pros:
- Fast (many operations run locally for speed)
- Simple API authentication
- Full model customizations
- You control your data
- Cons:
- Only paid version available (as of this writing)
- More complex setup, better suited to technical users
Grab an API key from https://platform.openai.com/account/api-keys
Export the key into your local environment:
export OPENAI_API_KEY=<API_KEY>
Windows users, see here for how to edit environment variables.
To tweak the configuration for the current profile, see Configuration
The API backend requires a database server to store conversation data. The wrapper leverages SQLAlchemy for this.
The simplest supported database is SQLite (which is already installed on most modern operating systems), but you can use any database that is supported by SQLAlchemy.
Check the `database` setting from the `config` command above, which will show you the currently configured connection string for a default SQLite database.
If you're happy with that setting, nothing else needs to be done -- the database will be created automatically in that location when you run the program.
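If you prefer to point the wrapper at a different database from Python, you can override that setting through the Config class. A minimal sketch, assuming the setting key is `database` (verify the exact key in your `/config` output) and that any SQLAlchemy connection string is accepted:
from chatgpt_wrapper import ApiBackend
from chatgpt_wrapper.core.config import Config

config = Config()
# Assumption: 'database' is the setting shown by /config; the file path below is
# hypothetical. Any SQLAlchemy URL should work, e.g. a PostgreSQL URL.
config.set('database', 'sqlite:////home/myuser/chatgpt.db')
bot = ApiBackend(config)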
Once the database is configured, run the program with no arguments:
chatgpt
It will recognize no users have been created, and prompt you to create the first user:
- Username: Required, no spaces or special characters
- Email: Optional
- Password: Optional, if not provided the user can log in without a password
Once the user is created, execute the `/login` command with the username:
/login [username]
Once you're logged in, you have full access to all commands.
IMPORTANT NOTE: The user authorization system from the command line is 'admin party' -- meaning every logged in user has admin privileges, including editing and deleting other users.
The API backend supports configuring a preset per user.
To do so, run `/user-edit`; selecting a default preset is one of the options.
See Presets for more information on configuring presets.
This backend is deprecated, and may be removed in a future release.
Support will not be provided for using the `ChatGPT` class of this backend directly.
- Pros:
- Free or paid version available (as of this writing)
- Fairly easy to set up for non-technical users
- Access to ChatGPT plugins (alpha, requires account with access)
- Cons:
- Slow (runs a full browser session)
- Clunky authentication method
- No model customizations
- Third party controls your data
To use the browser backend, make sure the backend is set to the following in your profile configuration file:
backend: 'browser'
To tweak the configuration for the current profile, see Configuration
Install a browser in Playwright (if you haven't already). The program will use Firefox by default.
playwright install firefox
Start up the program in `install` mode:
chatgpt install
This opens up a browser window. Log in to ChatGPT in the browser window, walk through all the intro screens, then exit the program.
1> /exit
Restart the program without the `install` parameter to begin using it.
chatgpt
Officially approved ChatGPT plugins can be configured for use with the browser backend.
NOTE: This requires your OpenAI login account to have access to ChatGPT plugins.
To use plugins:
- You must use a model that supports plugins:
/model model_name gpt-4-plugins
- Browse the plugins with `/plugins`, or filter the full list by a phrase: `/plugins youtube`
- To enable the plugin by default, add the plugin ID to the `browser.plugins` list in your configuration file:
  browser:
    plugins:
      - plugin-d1d6eb04-3375-40aa-940a-c2fc57ce0f51
- You can also dynamically enable/disable plugins; see the help for `/plugin-enable` and `/plugin-disable`
NOTE: This requires your OpenAI login account to have access to ChatGPT with browsing.
To use ChatGPT with browsing, you must use a model that supports browsing: /model model_name gpt-4-browsing
Most other operating systems come with SQLite (the default database choice) pre-installed; Windows may not.
If not, you can grab the 32-bit or 64-bit DLL file from https://www.sqlite.org/download.html, then place the DLL in the C:\Windows\System32 directory.
You may also need to install Python; if so, grab the latest stable package from https://www.python.org/downloads/windows/ -- make sure to select the install option to Add Python to PATH.
For the `/editor` command to work, you'll need a command line editor installed and in your path. You can control which editor is used by setting the EDITOR environment variable to the name of the editor executable, e.g. `nano` or `vim`.
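For example, to have `/editor` open vim:
export EDITOR=vim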
Run the program with the 'config' command:
chatgpt config
This will show all the current configuration settings, the most important ones for installation are:
- Config dir: Where configuration files are stored
- Current profile: (shown in the 'Profile configuration' section)
- Config file: The configuration file currently being used
- Data dir: The data storage directory
From a running `chatgpt` instance, execute `/config` to view the current configuration.
Configuration is optional; default values will be used if no configuration profile is provided. The default configuration settings can be seen in config.sample.yaml -- the file is commented with descriptions of the settings -- DON'T just copy this file as your configuration! Instead, use it as a reference to tweak the configuration to your liking.
NOTE: Not all settings are available on all backends. See the example config for more information.
Command line arguments override custom configuration settings, which override default configuration settings.
- Start the program:
chatgpt
- Open the profile's configuration file in an editor:
/config edit
- Edit file to taste and save
- Restart the program
To change the properties of a particular LLM model, use the `/model` command:
/model model_name gpt-3.5-turbo
/model temperature 1.0
The `/model` command works within the models of the currently loaded provider.
NOTE: The attributes that a particular model accepts are beyond the scope of this document. While some attributes can be displayed via command completion in the shell, you are advised to consult the API documentation for the specific provider for a full list of available attributes and their values.
Presets allow you to conveniently manage various provider/model configurations.
As you use the CLI, you can execute a combination of `/provider` and `/model` commands to set up a provider/model configuration to your liking.
Once you have the configuration set up, you can 'capture' it by saving it as a preset.
To save an existing configuration as a preset:
/preset-save mypresetname
Later, to load that configuration for use:
/preset-load mypresetname
See `/help` for the various other preset commands.
The wrapper comes with a full template management system.
Templates allow storing text in template files, and quickly leveraging the contents as your user input.
Features:
- Per-profile templates
- Create/edit templates
- `{{ variable }}` syntax substitution
- Five different workflows for collecting variable values, editing, and running
See the various `/help template` commands for more information.
The wrapper exposes some builtin variables that can be used in templates:
- `{{ clipboard }}`: Insert the contents of the clipboard
Templates may include front matter (see examples).
These front matter attributes have special functionality:
- title: Sets the title of new conversations to this value
- description: Displayed in the output of `/templates`
- request_overrides: A hash of model customizations to apply when the template is run:
- preset: An existing preset for the provider/model configuration to use when running the template (see Presets)
All other attributes will be passed to the template as variable substitutions.
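For illustration, here is what a template file with front matter might look like. This is only a sketch: it assumes the conventional `---`-delimited YAML front matter used by the bundled examples, and the preset name and the extra `tone` attribute are hypothetical:
---
title: Article summary
description: Summarize the clipboard contents
request_overrides:
  preset: gpt-4-chat
tone: concise
---
Summarize the following text in a {{ tone }} style:
{{ clipboard }}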
- Place the plugin file in either:
  - The main `plugins` directory of this module
  - A `plugins` directory in your profile
- Enable the plugin in your configuration:
  plugins:
    enabled:
      # This is a list of plugins to enable, each list item should be the name of a plugin file, without the extension.
      - test
Note that setting `plugins.enabled` will overwrite the default enabled plugins. See `/config` for a list of default enabled plugins.
- test: Test plugin, echoes back the command you give it
- awesome: Use a prompt from Awesome ChatGPT Prompts: https://github.com/f/awesome-chatgpt-prompts
- database: Send natural language commands to a database. WARNING: POTENTIALLY DANGEROUS -- DATA INTEGRITY CANNOT BE GUARANTEED.
- data_query: Send natural language commands to a loaded file of structured data
- shell: Transform natural language into a shell command, and optionally execute it. WARNING: POTENTIALLY DANGEROUS -- YOU ARE RESPONSIBLE FOR VALIDATING THE COMMAND RETURNED BY THE LLM, AND THE OUTCOME OF ITS EXECUTION.
- zap: Send natural language commands to Zapier actions: https://nla.zapier.com/get-started/
NOTE: Most provider plugins are not chat-based, and instead return a single response to any text input. These inputs and responses are still managed as 'conversations' for storage purposes, using the same storage mechanism the chat-based providers use.
NOTE: While these provider integrations are working, none have been well-tested yet.
- provider_ai21: Access to AI21 models
- provider_cohere: Access to Cohere models
- provider_huggingface_hub: Access to Hugging Face Hub models
- provider_openai: Access to non-chat OpenAI models (GPT-3, etc.)
To enable a supported provider, add it to the `plugins.enabled` list in your configuration.
plugins:
enabled:
- provider_openai
See `/help providers` for a list of currently enabled providers.
See `/help provider` for how to switch providers/models on the fly.
There is currently no developer documentation for writing plugins.
The `plugins` directory has some default plugins; examining those will give a good idea of how to design a new one.
Currently, plugins for the shell can only add new commands. An instantiated plugin has access to these resources:
- `self.config`: The current instantiated Config object
- `self.log`: The instantiated Logger object
- `self.backend`: The instantiated backend
- `self.shell`: The instantiated shell
To write new provider plugins, investigate the existing provider plugins as examples.
- Newest YouTube video: ChatGPT intro, walkthrough of features
- YouTube tutorial: How To Use ChatGPT With Unity: Python And API Setup #2 includes a step-by-step guide to installing this repository on a Windows machine
- This blog post provides a visual step-by-step guide for installing this library.
Run chatgpt --help
To run the CLI in one-shot mode, simply follow the command with the prompt you want to send to ChatGPT:
chatgpt Hello World!
To run the CLI in interactive mode, execute it with no additional arguments:
chatgpt
Once the interactive shell is running, you can see a list of all commands with:
/help
...or get help for a specific command with:
/help <command>
IMPORTANT: Use of the browser backend's `ChatGPT` class has been deprecated; no support will be provided for this usage.
You can use the API backend's `ApiBackend` class to interact directly with the chat LLM.
Create an instance of the class and use the `ask` method to send a message to OpenAI and receive the response. For example:
from chatgpt_wrapper import ApiBackend
bot = ApiBackend()
success, response, message = bot.ask("Hello, world!")
if success:
    print(response)
else:
    raise RuntimeError(message)
The `ask` method takes a string argument representing the message to send to the API, and returns a success flag, the response string, and a user message.
You may also stream the response as it comes in from the API in chunks, using the `ask_stream` generator.
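A minimal streaming sketch, under the assumption that `ask_stream` accepts the same message string as `ask` and yields text chunks as they arrive:
from chatgpt_wrapper import ApiBackend

bot = ApiBackend()
# Assumption: ask_stream() takes the message string and yields response chunks.
for chunk in bot.ask_stream("Name three uses for a paperclip"):
    print(chunk, end="", flush=True)
print()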
To pass custom configuration to ChatGPT, use the Config class:
from chatgpt_wrapper import ApiBackend
from chatgpt_wrapper.core.config import Config
config = Config()
config.set('browser.debug', True)
bot = ApiBackend(config)
success, response, message = bot.ask("Hello, world!")
if success:
    print(response)
else:
    raise RuntimeError(message)
- Run `python chatgpt_wrapper/gpt_api.py --port 5000` (default port is 5000) to start the server
- Install pytest: `pip install pytest`
- Test whether it is working using `pytest tests/integration/api_test.py`
- See an example of interacting with the API in `tests/integration/example_api_call.py`
Build a docker image for testing `chatgpt-wrapper`:
Make sure your OpenAI key has been exported into your host environment as OPENAI_API_KEY
Run the following commands:
docker-compose build && docker-compose up -d
docker exec -it chatgpt-wrapper-container /bin/bash -c "chatgpt"
Follow the instructions to create the first user.
Enjoy the chat!
The project uses Pytest.
pip install pytest
To run all tests:
pytest
Oftentimes issues are related to upstream service problems with OpenAI, so please check https://status.openai.com before concluding there's an issue with this codebase!
It's possible that:
- Your session has gone stale: Try issuing a `/session` command to refresh it
- Your browser session information is corrupted: Try `chatgpt reinstall` and go through the login process again
- You're running an outdated version of this project, or one of its dependencies: Completely reinstall the project and its dependencies
- You're running into geolocation restrictions in OpenAI's security systems: Try proxying your requests through a VPN server in the US.
- This is a pre-release project
- Breaking changes are happening regularly
- Before you upgrade and before you file any issues related to upgrading, refer to the Breaking Changes section for all releases since your last upgrade.
- Back up your database
- Some releases include changes to the database schema
- If you care about any data stored by this project, back it up before upgrading
- Common upgrade scenarios are tested with the default database (SQLite), but data integrity is not guaranteed
- If any database errors occur during an upgrade, roll back to an earlier release and file an issue
Until an official release exists, you'll need to uninstall and reinstall:
pip uninstall -y chatGPT
pip install chatGPT
If the package was installed via `pip install -e`, simply pull in the latest changes from the repository:
git pull
To use GPT-4 with this backend, you must have been granted access to the model in your OpenAI account.
NOTE: If you have not been granted access, you'll probably see an error like this:
InvalidRequestError(message='The model: `gpt-4` does not exist', param=None, code='model_not_found', http_status=404, request_id=None)
There is nothing this project can do to fix the error for you -- contact OpenAI and request GPT-4 access.
Follow one of the methods below to utilize GPT-4 in this backend:
See Presets above to configure a preset using GPT-4
Add the preset to the config file as the default preset on startup:
# This assumes you created a preset named 'gpt-4'
model:
default_preset: gpt-4
From within the shell, execute this command:
/model model_name gpt-4
...or... if you're not currently using the 'chat_openai' provider:
/provider chat_openai gpt-4
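If you'd rather set this up from Python, here is a sketch that assumes you have already saved a preset named 'gpt-4' (see Presets) and that `model.default_preset` is the dotted config key matching the config file setting above:
from chatgpt_wrapper import ApiBackend
from chatgpt_wrapper.core.config import Config

config = Config()
# Assumption: a preset named 'gpt-4' exists; this mirrors the
# model.default_preset setting from the config file.
config.set('model.default_preset', 'gpt-4')
bot = ApiBackend(config)
success, response, message = bot.ask("Hello, world!")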
To use GPT-4 with this backend, you must have a ChatGPT-Plus subscription.
Follow one of the methods below to utilize GPT-4 in this backend:
Enter the following command in your shell:
chatgpt --model=gpt4
Update your config.yaml
file to include the following line:
chat:
model: gpt4
Then start the program normally:
chatgpt
From within the shell, execute this command:
/model gpt4
To use GPT-4 within your Python code, follow the template below:
from chatgpt_wrapper import ApiBackend
from chatgpt_wrapper.core.config import Config
config = Config()
config.set('chat.model', 'gpt4')
bot = ApiBackend(config)
success, response, message = bot.ask("Hello, world!")
- bookast: ChatGPT Podcast Generator For Books
- ChatGPT.el: ChatGPT in Emacs
- ChatGPT Reddit Bot
- Smarty GPT
- ChatGPTify
- selection-to-chatgpt
We welcome contributions to ChatGPT Wrapper! If you have an idea for a new feature or have found a bug, please open an issue on the GitHub repository.
This project is licensed under the MIT License - see the LICENSE file for details.
- The original 'browser' backend is a modification of Taranjeet's code, which is a modification of Daniel Gross's code.