Merge branch 'master' into fix_agent_key_error
jacobtohahn committed Apr 15, 2023
2 parents 870c594 + 793ea4d commit 26b3126
Showing 75 changed files with 2,611 additions and 1,921 deletions.
23 changes: 12 additions & 11 deletions .env.template
@@ -11,6 +11,9 @@ BROWSE_SUMMARY_MAX_TOKEN=300
# USER_AGENT="Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.97 Safari/537.36"
# AI_SETTINGS_FILE - Specifies which AI Settings file to use (defaults to ai_settings.yaml)
AI_SETTINGS_FILE=ai_settings.yaml
# USE_WEB_BROWSER - Sets the web browser driver to use with Selenium (defaults to chrome).
# Note: set this to either 'chrome', 'firefox', or 'safari' depending on the browser you have installed
# USE_WEB_BROWSER=chrome

################################################################################
### LLM PROVIDER
@@ -21,20 +24,11 @@ AI_SETTINGS_FILE=ai_settings.yaml
# TEMPERATURE - Sets temperature in OpenAI (Default: 1)
# USE_AZURE - Use Azure OpenAI or not (Default: False)
OPENAI_API_KEY=your-openai-api-key
TEMPERATURE=1
TEMPERATURE=0
USE_AZURE=False

### AZURE
# OPENAI_AZURE_API_BASE - OpenAI API base URL for Azure (Example: https://my-azure-openai-url.com)
# OPENAI_AZURE_API_VERSION - OpenAI API version for Azure (Example: v1)
# OPENAI_AZURE_DEPLOYMENT_ID - OpenAI deployment ID for Azure (Example: my-deployment-id)
# OPENAI_AZURE_CHAT_DEPLOYMENT_ID - OpenAI deployment ID for Azure Chat (Example: my-deployment-id-for-azure-chat)
# OPENAI_AZURE_EMBEDDINGS_DEPLOYMENT_ID - OpenAI deployment ID for Embedding (Example: my-deployment-id-for-azure-embeddigs)
OPENAI_AZURE_API_BASE=your-base-url-for-azure
OPENAI_AZURE_API_VERSION=api-version-for-azure
OPENAI_AZURE_DEPLOYMENT_ID=deployment-id-for-azure
OPENAI_AZURE_CHAT_DEPLOYMENT_ID=deployment-id-for-azure-chat
OPENAI_AZURE_EMBEDDINGS_DEPLOYMENT_ID=deployment-id-for-azure-embeddigs
# cleanup azure env as already moved to `azure.yaml.template`

################################################################################
### LLM MODELS
@@ -77,6 +71,13 @@ REDIS_PASSWORD=
WIPE_REDIS_ON_START=False
MEMORY_INDEX=auto-gpt

### MILVUS
# MILVUS_ADDR - Milvus remote address (e.g. localhost:19530)
# MILVUS_COLLECTION - Milvus collection name;
# change it if you want to start a new memory while retaining the old one.
MILVUS_ADDR=your-milvus-cluster-host-port
MILVUS_COLLECTION=autogpt

################################################################################
### IMAGE GENERATION PROVIDER
################################################################################
7 changes: 6 additions & 1 deletion .gitignore
@@ -19,6 +19,8 @@ auto-gpt.json
log.txt
log-ingestion.txt
logs
*.log
*.mp3

# Byte-compiled / optimized / DLL files
__pycache__/
@@ -151,4 +153,7 @@ dmypy.json
# Pyre type checker
.pyre/
llama-*
vicuna-*
vicuna-*

# mac
.DS_Store
6 changes: 3 additions & 3 deletions Dockerfile
@@ -13,11 +13,11 @@ RUN chown appuser:appuser /home/appuser
USER appuser

# Copy the requirements.txt file and install the requirements
COPY --chown=appuser:appuser requirements.txt .
RUN pip install --no-cache-dir --user -r requirements.txt
COPY --chown=appuser:appuser requirements-docker.txt .
RUN pip install --no-cache-dir --user -r requirements-docker.txt

# Copy the application files
COPY --chown=appuser:appuser autogpt/ .
COPY --chown=appuser:appuser autogpt/ ./autogpt

# Set the entrypoint
ENTRYPOINT ["python", "-m", "autogpt"]
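
As a quick check of the updated Dockerfile, a build-and-run sketch might look like this (the `autogpt` image tag and the `--env-file` path are assumptions; adjust as needed):

```
# Build the image from the repository root
docker build -t autogpt .
# Run it, passing the environment file that holds your API keys
docker run -it --env-file=./.env autogpt
```
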
57 changes: 47 additions & 10 deletions README.md
@@ -35,22 +35,27 @@ Your support is greatly appreciated
## Table of Contents

- [Auto-GPT: An Autonomous GPT-4 Experiment](#auto-gpt-an-autonomous-gpt-4-experiment)
- [Demo (30/03/2023):](#demo-30032023)
- [🔴 🔴 🔴 Urgent: USE `stable` not `master` 🔴 🔴 🔴](#----urgent-use-stable-not-master----)
- [Demo (30/03/2023):](#demo-30032023)
- [Table of Contents](#table-of-contents)
- [🚀 Features](#-features)
- [📋 Requirements](#-requirements)
- [💾 Installation](#-installation)
- [🔧 Usage](#-usage)
- [Logs](#logs)
- [Docker](#docker)
- [Command Line Arguments](#command-line-arguments)
- [🗣️ Speech Mode](#️-speech-mode)
- [🔍 Google API Keys Configuration](#-google-api-keys-configuration)
- [Setting up environment variables](#setting-up-environment-variables)
- [Redis Setup](#redis-setup)
- [🌲 Pinecone API Key Setup](#-pinecone-api-key-setup)
- [Memory Backend Setup](#memory-backend-setup)
- [Redis Setup](#redis-setup)
- [🌲 Pinecone API Key Setup](#-pinecone-api-key-setup)
- [Milvus Setup](#milvus-setup)
- [Setting up environment variables](#setting-up-environment-variables-1)
- [Setting Your Cache Type](#setting-your-cache-type)
- [View Memory Usage](#view-memory-usage)
- [🧠 Memory pre-seeding](#memory-pre-seeding)
- [🧠 Memory pre-seeding](#-memory-pre-seeding)
- [💀 Continuous Mode ⚠️](#-continuous-mode-️)
- [GPT3.5 ONLY Mode](#gpt35-only-mode)
- [🖼 Image Generation](#-image-generation)
@@ -75,10 +80,11 @@ Your support is greatly appreciated
- [Python 3.8 or later](https://www.tutorialspoint.com/how-to-install-python-in-windows)
- [OpenAI API key](https://platform.openai.com/account/api-keys)


Optional:

- [PINECONE API key](https://www.pinecone.io/) (If you want Pinecone backed memory)
- Memory backend
- [PINECONE API key](https://www.pinecone.io/) (If you want Pinecone backed memory)
- [Milvus](https://milvus.io/) (If you want Milvus as memory backend)
- ElevenLabs Key (If you want the AI to speak)

## 💾 Installation
@@ -111,7 +117,7 @@ pip install -r requirements.txt
```

5. Rename `.env.template` to `.env` and fill in your `OPENAI_API_KEY`. If you plan to use Speech Mode, fill in your `ELEVEN_LABS_API_KEY` as well.
- Obtain your OpenAI API key from: https://platform.openai.com/account/api-keys.
- See [OpenAI API Keys Configuration](#openai-api-keys-configuration) to obtain your OpenAI API key.
- Obtain your ElevenLabs API key from: https://elevenlabs.io. You can view your xi-api-key using the "Profile" tab on the website.
- If you want to use GPT on an Azure instance, set `USE_AZURE` to `True` and then:
- Rename `azure.yaml.template` to `azure.yaml` and provide the relevant `azure_api_base`, `azure_api_version` and all of the deployment ids for the relevant models in the `azure_model_map` section:
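
  For reference, a minimal sketch of what `azure.yaml` might contain (the deployment-id key names below are assumptions; `azure.yaml.template` in the repository is authoritative):

  ```
  azure_api_base: https://my-azure-openai-url.openai.azure.com
  azure_api_version: 2023-03-15-preview
  azure_model_map:
      fast_llm_model_deployment_id: my-gpt35-deployment-id
      smart_llm_model_deployment_id: my-gpt4-deployment-id
      embedding_model_deployment_id: my-embedding-deployment-id
  ```
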
@@ -173,6 +179,17 @@ Use this to use TTS for Auto-GPT
python -m autogpt --speak
```

## OpenAI API Keys Configuration

Obtain your OpenAI API key from: https://platform.openai.com/account/api-keys.

To use an OpenAI API key with Auto-GPT, you NEED to have billing set up (i.e., a paid account).

You can set up a paid account at https://platform.openai.com/account/billing/overview.

![For OpenAI API key to work, set up paid account at OpenAI API > Billing](./docs/imgs/openai-api-key-billing-paid-account.png)


## 🔍 Google API Keys Configuration

This section is optional. Use the official Google API if you are running into error 429 when performing Google searches.
@@ -209,7 +226,11 @@ export CUSTOM_SEARCH_ENGINE_ID="YOUR_CUSTOM_SEARCH_ENGINE_ID"
```
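
Instead of exporting the variables, the same values can also live in `.env` (the variable names here mirror the export commands in this section):

```
GOOGLE_API_KEY=YOUR_GOOGLE_API_KEY
CUSTOM_SEARCH_ENGINE_ID=YOUR_CUSTOM_SEARCH_ENGINE_ID
```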

## Redis Setup
## Memory Backend Setup

Set up any one of the following backends to persist memory.

### Redis Setup

Install Docker Desktop.
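
A local Redis with the RediSearch module can then be started with something like the following (the image and container name are just one common choice):

```
docker run -d --name redis-stack-server -p 6379:6379 redis/redis-stack-server:latest
```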

@@ -246,14 +267,26 @@ You can specify the memory index for redis using the following:
MEMORY_INDEX=whatever
```
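
Pulling the Redis settings together, a hedged `.env` sketch (the `REDIS_HOST` and `REDIS_PORT` names are assumed to match `.env.template`) looks like:

```
MEMORY_BACKEND=redis
REDIS_HOST=localhost
REDIS_PORT=6379
REDIS_PASSWORD=
WIPE_REDIS_ON_START=False
MEMORY_INDEX=auto-gpt
```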

## 🌲 Pinecone API Key Setup
### 🌲 Pinecone API Key Setup

Pinecone enables the storage of vast amounts of vector-based memory, allowing for only relevant memories to be loaded for the agent at any given time.

1. Go to [pinecone](https://app.pinecone.io/) and make an account if you don't already have one.
2. Choose the `Starter` plan to avoid being charged.
3. Find your API key and region under the default project in the left sidebar.
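
The corresponding `.env` entries would then look roughly like this (variable names assumed from the Pinecone integration; verify against `.env.template`):

```
MEMORY_BACKEND=pinecone
PINECONE_API_KEY=your-pinecone-api-key
PINECONE_ENV=your-pinecone-region
```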

### Milvus Setup

[Milvus](https://milvus.io/) is an open-source, highly scalable vector database that stores large amounts of vector-based memory and provides fast, relevant search.

- Set up a Milvus database. Keep your pymilvus version and Milvus version the same to avoid compatibility issues.
  - set it up with the open-source guide: [Install Milvus](https://milvus.io/docs/install_standalone-operator.md)
  - or use [Zilliz Cloud](https://zilliz.com/cloud)
- Set `MILVUS_ADDR` in `.env` to your Milvus address `host:port`.
- Set `MEMORY_BACKEND` in `.env` to `milvus` to enable Milvus as the backend.
- Optional:
  - Set `MILVUS_COLLECTION` in `.env` to change the Milvus collection name; `autogpt` is the default.
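
Based on the `.env.template` additions in this commit, a minimal Milvus configuration in `.env` looks like:

```
MEMORY_BACKEND=milvus
MILVUS_ADDR=localhost:19530
MILVUS_COLLECTION=autogpt
```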

### Setting up environment variables

In the `.env` file set:
@@ -333,7 +366,7 @@ Memories will be available to the AI immediately as they are ingested, even if i
In the example above, the script initializes the memory and ingests all files within the seed_data directory into memory, with an overlap of 200 between chunks and a maximum chunk length of 4000.
Note that you can also use the --file argument to ingest a single file into memory and that the script will only ingest files within the auto_gpt_workspace directory.
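
For reference, the ingestion run described above likely resembles the following invocation (the `--dir` and `--init` flag names are assumptions; only `--file`, the overlap, and the maximum length are named in this section):

```
python data_ingestion.py --dir seed_data --init --overlap 200 --max_length 4000
```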

You can adjust the max_length and overlap parameters to fine-tune the way the docuents are presented to the AI when it "recall" that memory:
You can adjust the max_length and overlap parameters to fine-tune the way the documents are presented to the AI when it "recalls" that memory:

- Adjusting the overlap value allows the AI to access more contextual information from each chunk when recalling information, but will result in more chunks being created and therefore increase memory backend usage and OpenAI API requests.
- Reducing the max_length value will create more chunks, which can save prompt tokens by allowing for more message history in the context, but will also increase the number of chunks.
@@ -376,6 +409,10 @@ IMAGE_PROVIDER=sd
HUGGINGFACE_API_TOKEN="YOUR_HUGGINGFACE_API_TOKEN"
```

## Selenium

To run the Selenium-based browsing commands on a headless Linux machine, start a virtual display with Xvfb and point your client at it:

```
sudo Xvfb :10 -ac -screen 0 1024x768x24 &
DISPLAY=:10 your-client
```

## ⚠️ Limitations

This experiment aims to showcase the potential of GPT-4 but comes with some limitations: