FAQ
Benny van der Lans edited this page May 28, 2023
This page contains solutions to many common issues and errors. Try to find your problem here before posting on Discord or filing an issue on GitHub.
Where does Auto-GPT save the files it creates?
If you have not changed the workspace variables, Auto-GPT saves its files to:

- Linux: `../Auto-GPT/autogpt/auto_gpt_workspace`
- Windows: `..\Auto-GPT\autogpt\auto_gpt_workspace`
- macOS: `../Auto-GPT/autogpt/auto_gpt_workspace`
I have a paid ChatGPT account, why does my Auto-GPT not work?
A paid OpenAI ChatGPT account is not the same as an OpenAI API account. Go to the OpenAI Platform and make sure you have a valid billing method set. You will likely also want to join the GPT-4 API waitlist, which can be done here.

I changed my .env file and saved it, but why does Auto-GPT still not work?
Double check your .env file and make sure that the lines you are using do not begin with a `#` and a space. It should look like this:

```
#########
### LLM PROVIDER
#########

OPENAI_API_KEY=your-key-here-no-quotes
TEMPERATURE=.2
# USE_AZURE=False

### AZURE
# moved to azure.yaml.template
```
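If you want to double check your file programmatically, a small sketch like the following can flag settings that are accidentally commented out. This is a hypothetical helper for illustration, not part of Auto-GPT:

```python
def find_commented_settings(env_path: str) -> list[str]:
    """Return lines that look like KEY=value settings but are
    commented out with a leading '# ' (and therefore ignored)."""
    flagged = []
    with open(env_path) as f:
        for line in f:
            stripped = line.strip()
            # Section headers like '### LLM PROVIDER' start with '##',
            # so they are not matched; '# KEY=value' lines are.
            if stripped.startswith("# ") and "=" in stripped:
                flagged.append(stripped)
    return flagged
```

Note that some lines (such as `# USE_AZURE=False`) are commented out on purpose in the template; the helper only lists candidates for you to review.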
Auto-GPT does not see the file it created:
Enable the "view file extensions" option in your OS.

Why does Auto-GPT say that I don't have my OpenAI key set?
Please double check that you have set your OpenAI API key correctly in the .env file, and not in the .env.template file. It should look similar to this:

```
####
### LLM PROVIDER
####

OPENAI_API_KEY=your-key-here-no-quotes
TEMPERATURE=.2
# USE_AZURE=False
```
openai.error.InvalidRequestError: The model: gpt-4 does not exist:
You do not have API access to GPT-4. Set your SMART_LLM_MODEL to gpt-3.5-turbo and your token limits to 4000. You will also need to join the GPT-4 API waitlist here: https://openai.com/waitlist/gpt-4-api. Your .env should look like:

```
####
### LLM MODELS
####

## SMART_LLM_MODEL - Smart language model (Default: gpt-4)
## FAST_LLM_MODEL - Fast language model (Default: gpt-3.5-turbo)
SMART_LLM_MODEL=gpt-3.5-turbo
FAST_LLM_MODEL=gpt-3.5-turbo

### LLM MODEL SETTINGS
## FAST_TOKEN_LIMIT - Fast token limit for OpenAI (Default: 4000)
## SMART_TOKEN_LIMIT - Smart token limit for OpenAI (Default: 8000)
## When using --gpt3only this needs to be set to 4000.
FAST_TOKEN_LIMIT=4000
SMART_TOKEN_LIMIT=4000
```
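The effect of these overrides can be sketched as follows. This is an illustration of how environment variables with the documented defaults typically resolve, not Auto-GPT's actual loading code:

```python
def resolve_llm_settings(env: dict) -> dict:
    """Resolve model settings from an environment mapping, falling back
    to the defaults documented in the .env template comments above."""
    return {
        "smart_model": env.get("SMART_LLM_MODEL", "gpt-4"),
        "fast_model": env.get("FAST_LLM_MODEL", "gpt-3.5-turbo"),
        "smart_limit": int(env.get("SMART_TOKEN_LIMIT", "8000")),
        "fast_limit": int(env.get("FAST_TOKEN_LIMIT", "4000")),
    }

# With the gpt-3.5-only overrides from the snippet above, both models
# resolve to gpt-3.5-turbo and both token limits to 4000.
settings = resolve_llm_settings({
    "SMART_LLM_MODEL": "gpt-3.5-turbo",
    "FAST_LLM_MODEL": "gpt-3.5-turbo",
    "SMART_TOKEN_LIMIT": "4000",
    "FAST_TOKEN_LIMIT": "4000",
})
```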
ImportError DLL load failed while importing:
Make sure you have the latest Microsoft Visual C++ Redistributable installed.

Command write_to_file returned: Error: 'PosixPath' object has no attribute 'is_relative_to':
Your Python version is too old: `Path.is_relative_to` was added in Python 3.9. Update to Python 3.10. You may also need to take the old Python out of your PATH; how this is done depends on the OS you're using and can vary by preference, so look for instructions for your specific OS and version.

This model's maximum context length is "X" tokens, however you requested "larger_than_X" tokens:
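If you cannot upgrade immediately, the missing method is easy to emulate. A minimal backport sketch, assuming only the standard library:

```python
from pathlib import Path

def is_relative_to(path: Path, base: Path) -> bool:
    """Backport of Path.is_relative_to, which was added in Python 3.9.
    relative_to() raises ValueError when path is not inside base."""
    try:
        path.relative_to(base)
        return True
    except ValueError:
        return False

# Example: a file inside the workspace vs. one outside it.
inside = is_relative_to(Path("/home/user/workspace/out.txt"),
                        Path("/home/user/workspace"))   # True
outside = is_relative_to(Path("/etc/passwd"),
                         Path("/home/user/workspace"))  # False
```

Upgrading Python remains the proper fix; this just shows what the missing attribute does.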
Check that BROWSE_CHUNK_MAX_LENGTH is set correctly in the .env file. The default is 3000; set it lower if the error persists. This is an example of how that section of the .env should look:
```
####
### WEB BROWSING
####

### BROWSER
## HEADLESS_BROWSER - Whether to run the browser in headless mode (default: True)
## USE_WEB_BROWSER - Sets the web-browser driver to use with selenium (default: chrome).
## Note: set this to either 'chrome', 'firefox', 'safari' or 'edge' depending on your current browser
HEADLESS_BROWSER=False
USE_WEB_BROWSER=firefox

## BROWSE_CHUNK_MAX_LENGTH - When browsing websites, define the length of chunks to summarize (in number of tokens, excluding the response; 75% of FAST_TOKEN_LIMIT is usually wise)
BROWSE_CHUNK_MAX_LENGTH=2000

## BROWSE_SPACY_LANGUAGE_MODEL is used to split sentences. Install additional languages via pip, and set the model name here.
## Example Chinese: python -m spacy download zh_core_web_sm
BROWSE_SPACY_LANGUAGE_MODEL=en_core_web_sm
```
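To see why lowering BROWSE_CHUNK_MAX_LENGTH helps, consider how browsed text gets split before summarization: each chunk must fit under the token budget. A rough word-based sketch (Auto-GPT itself counts tokens with a real tokenizer; the 1.3 tokens-per-word ratio here is an illustrative assumption):

```python
def chunk_text(text: str, max_tokens: int = 2000,
               tokens_per_word: float = 1.3) -> list[str]:
    """Split text into word-boundary chunks that stay under an
    approximate token budget. Illustration only, not Auto-GPT's code."""
    words = text.split()
    words_per_chunk = max(1, int(max_tokens / tokens_per_word))
    return [" ".join(words[i:i + words_per_chunk])
            for i in range(0, len(words), words_per_chunk)]
```

A smaller BROWSE_CHUNK_MAX_LENGTH simply produces more, smaller chunks, leaving headroom in the model's context for the prompt and response.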
We are working back towards a more accessible and inclusive developer experience. As long as this notice is here, beware that there may be things on this wiki that still need updating.
~ Pwuts, 2024-06-13