From 51d005b2308fd2471aa6898d007cb32afe6b6acc Mon Sep 17 00:00:00 2001
From: Qingyun Wu
@@ -119,13 +113,13 @@ The easiest way to start playing is
## [Installation](https://ag2labs.github.io/autogen/docs/Installation)
-### Option 1. Install and Run AutoGen in Docker
+### Option 1. Install and Run AG2 in Docker
Find detailed instructions for users [here](https://ag2labs.github.io/autogen/docs/installation/Docker#step-1-install-docker), and for developers [here](https://ag2labs.github.io/autogen/docs/Contribute#docker-for-development).
-### Option 2. Install AutoGen Locally
+### Option 2. Install AG2 Locally
-AutoGen requires **Python version >= 3.8, < 3.13**. It can be installed from pip:
+AG2 requires **Python version >= 3.8, < 3.13**. It can be installed from pip:
```bash
pip install autogen
```
@@ -142,7 +136,7 @@ Find more options in [Installation](https://ag2labs.github.io/autogen/docs/Insta
-Even if you are installing and running AutoGen locally outside of docker, the recommendation and default behavior of agents is to perform [code execution](https://ag2labs.github.io/autogen/docs/FAQ/#code-execution) in docker. Find more instructions and how to change the default behaviour [here](https://ag2labs.github.io/autogen/docs/Installation#code-execution-with-docker-(default)).
+Even if you are installing and running AG2 locally outside of docker, the recommendation and default behavior of agents is to perform [code execution](https://ag2labs.github.io/autogen/docs/FAQ/#code-execution) in docker. Find more instructions and how to change the default behaviour [here](https://ag2labs.github.io/autogen/docs/Installation#code-execution-with-docker-(default)).
For LLM inference configurations, check the [FAQs](https://ag2labs.github.io/autogen/docs/FAQ#set-your-api-endpoints).
@@ -154,14 +148,14 @@ For LLM inference configurations, check the [FAQs](https://ag2labs.github.io/aut
## Multi-Agent Conversation Framework
-Autogen enables the next-gen LLM applications with a generic [multi-agent conversation](https://ag2labs.github.io/autogen/docs/Use-Cases/agent_chat) framework. It offers customizable and conversable agents that integrate LLMs, tools, and humans.
+AG2 enables the next-gen LLM applications with a generic [multi-agent conversation](https://ag2labs.github.io/autogen/docs/Use-Cases/agent_chat) framework. It offers customizable and conversable agents that integrate LLMs, tools, and humans.
By automating chat among multiple capable agents, one can easily make them collectively perform tasks autonomously or with human feedback, including tasks that require using tools via code.
Features of this use case include:
-- **Multi-agent conversations**: AutoGen agents can communicate with each other to solve tasks. This allows for more complex and sophisticated applications than would be possible with a single LLM.
-- **Customization**: AutoGen agents can be customized to meet the specific needs of an application. This includes the ability to choose the LLMs to use, the types of human input to allow, and the tools to employ.
-- **Human participation**: AutoGen seamlessly allows human participation. This means that humans can provide input and feedback to the agents as needed.
+- **Multi-agent conversations**: AG2 agents can communicate with each other to solve tasks. This allows for more complex and sophisticated applications than would be possible with a single LLM.
+- **Customization**: AG2 agents can be customized to meet the specific needs of an application. This includes the ability to choose the LLMs to use, the types of human input to allow, and the tools to employ.
+- **Human participation**: AG2 seamlessly allows human participation. This means that humans can provide input and feedback to the agents as needed.
For [example](https://github.com/ag2labs/ag2/blob/main/test/twoagent.py),
@@ -185,10 +179,10 @@ python test/twoagent.py
```
After the repo is cloned.
-The figure below shows an example conversation flow with AutoGen.
+The figure below shows an example conversation flow with AG2.
![Agent Chat Example](https://github.com/ag2labs/ag2/blob/main/website/static/img/chat_example.png)
-Alternatively, the [sample code](https://github.com/ag2labs/build-with-autogen/blob/main/samples/simple_chat.py) here allows a user to chat with an AutoGen agent in ChatGPT style.
+Alternatively, the [sample code](https://github.com/ag2labs/build-with-autogen/blob/main/samples/simple_chat.py) here allows a user to chat with an AG2 agent in ChatGPT style.
Please find more [code examples](https://ag2labs.github.io/autogen/docs/Examples#automated-multi-agent-chat) for this feature.
@@ -199,7 +193,7 @@ Please find more [code examples](https://ag2labs.github.io/autogen/docs/Examples
## Enhanced LLM Inferences
-Autogen also helps maximize the utility out of the expensive LLMs such as ChatGPT and GPT-4. It offers [enhanced LLM inference](https://ag2labs.github.io/autogen/docs/Use-Cases/enhanced_inference#api-unification) with powerful functionalities like caching, error handling, multi-config inference and templating.
+AG2 also helps maximize the utility out of the expensive LLMs such as ChatGPT and GPT-4. It offers [enhanced LLM inference](https://ag2labs.github.io/autogen/docs/Use-Cases/enhanced_inference#api-unification) with powerful functionalities like caching, error handling, multi-config inference and templating.
@@ -15,14 +15,23 @@
@@ -156,7 +158,7 @@ For LLM inference configurations, check the [FAQs](https://ag2labs.github.io/aut
## Multi-Agent Conversation Framework
-AG2 enables the next-gen LLM applications with a generic [multi-agent conversation](https://ag2labs.github.io/autogen/docs/Use-Cases/agent_chat) framework. It offers customizable and conversable agents that integrate LLMs, tools, and humans.
+AG2 enables the next-gen LLM applications with a generic [multi-agent conversation](https://ag2labs.github.io/ag2/docs/Use-Cases/agent_chat) framework. It offers customizable and conversable agents that integrate LLMs, tools, and humans.
By automating chat among multiple capable agents, one can easily make them collectively perform tasks autonomously or with human feedback, including tasks that require using tools via code.
Features of this use case include:
@@ -170,7 +172,7 @@ For [example](https://github.com/ag2labs/ag2/blob/main/test/twoagent.py),
```python
from autogen import AssistantAgent, UserProxyAgent, config_list_from_json
# Load LLM inference endpoints from an env variable or a file
-# See https://ag2labs.github.io/autogen/docs/FAQ#set-your-api-endpoints
+# See https://ag2labs.github.io/ag2/docs/FAQ#set-your-api-endpoints
# and OAI_CONFIG_LIST_sample
config_list = config_list_from_json(env_or_file="OAI_CONFIG_LIST")
# You can also set config_list directly as a list, for example, config_list = [{'model': 'gpt-4', 'api_key': '
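The snippet in this hunk is cut off at the API key. A minimal sketch of how the two-agent example typically continues, assuming the standard `AssistantAgent`/`UserProxyAgent` API exported by the `autogen` package (the task message and the `use_docker=False` setting are illustrative, not part of this patch):

```python
from autogen import AssistantAgent, UserProxyAgent, config_list_from_json

# Load LLM endpoints from the OAI_CONFIG_LIST file (see OAI_CONFIG_LIST_sample).
config_list = config_list_from_json(env_or_file="OAI_CONFIG_LIST")

# An LLM-backed assistant plus a user proxy that can execute the code it writes.
assistant = AssistantAgent("assistant", llm_config={"config_list": config_list})
user_proxy = UserProxyAgent(
    "user_proxy",
    code_execution_config={"work_dir": "coding", "use_docker": False},  # set True to run generated code in Docker
)

# Kick off the conversation; the agents chat until the task is resolved.
user_proxy.initiate_chat(assistant, message="Plot a chart of NVDA and TESLA stock price change YTD.")
```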
@@ -201,7 +203,7 @@ Please find more [code examples](https://ag2labs.github.io/autogen/docs/Examples
## Enhanced LLM Inferences
-AG2 also helps maximize the utility out of the expensive LLMs such as ChatGPT and GPT-4. It offers [enhanced LLM inference](https://ag2labs.github.io/autogen/docs/Use-Cases/enhanced_inference#api-unification) with powerful functionalities like caching, error handling, multi-config inference and templating.
+AG2 also helps maximize the utility out of the expensive LLMs such as ChatGPT and GPT-4. It offers [enhanced LLM inference](https://ag2labs.github.io/ag2/docs/Use-Cases/enhanced_inference#api-unification) with powerful functionalities like caching, error handling, multi-config inference and templating.
+Please find more [code examples](https://ag2labs.github.io/ag2/docs/Examples#tune-gpt-models) for this feature. -->
@@ -230,15 +232,15 @@ Please find more [code examples](https://ag2labs.github.io/autogen/docs/Examples
## Documentation
-You can find detailed documentation about AG2 [here](https://ag2labs.github.io/autogen/).
+You can find detailed documentation about AG2 [here](https://ag2labs.github.io/ag2/).
In addition, you can find:
-- [Research](https://ag2labs.github.io/autogen/docs/Research), [blogposts](https://ag2labs.github.io/autogen/blog) around AG2, and [Transparency FAQs](https://github.com/ag2labs/ag2/blob/main/TRANSPARENCY_FAQS.md)
+- [Research](https://ag2labs.github.io/ag2/docs/Research), [blogposts](https://ag2labs.github.io/ag2/blog) around AG2, and [Transparency FAQs](https://github.com/ag2labs/ag2/blob/main/TRANSPARENCY_FAQS.md)
- [Discord](https://discord.gg/pAbnFJrkgZ)
-- [Contributing guide](https://ag2labs.github.io/autogen/docs/Contribute)
+- [Contributing guide](https://ag2labs.github.io/ag2/docs/Contribute)
From bd4db5e74ee46638aa73e4b4de519b131b61804c Mon Sep 17 00:00:00 2001
From: Chi Wang <4250911+sonichi@users.noreply.github.com>
Date: Tue, 12 Nov 2024 00:30:05 +0000
Subject: [PATCH 4/6] update oai config list
---
OAI_CONFIG_LIST_sample | 13 +++----------
1 file changed, 3 insertions(+), 10 deletions(-)
diff --git a/OAI_CONFIG_LIST_sample b/OAI_CONFIG_LIST_sample
index c1711acd7c..8f2ed6397b 100644
--- a/OAI_CONFIG_LIST_sample
+++ b/OAI_CONFIG_LIST_sample
@@ -1,19 +1,12 @@
// Please modify the content, remove these four lines of comment and rename this file to OAI_CONFIG_LIST to run the sample code.
// If using pyautogen v0.1.x with Azure OpenAI, please replace "base_url" with "api_base" (line 14 and line 21 below). Use "pip list" to check version of pyautogen installed.
//
-// NOTE: This configuration lists GPT-4 as the default model, as this represents our current recommendation, and is known to work well with AutoGen. If you use a model other than GPT-4, you may need to revise various system prompts (especially if using weaker models like GPT-3.5-turbo). Moreover, if you use models other than those hosted by OpenAI or Azure, you may incur additional risks related to alignment and safety. Proceed with caution if updating this default.
+// NOTE: This configuration lists gpt-4o as the default model. If you use a different model, you may need to revise various system prompts (especially if using weaker models like gpt-4o-mini). Proceed with caution when updating this default and be aware of additional risks related to alignment and safety.
[
{
- "model": "gpt-4",
+ "model": "gpt-4o",
"api_key": "
From 89d787cd51382445ccab5b595af271440509717c Mon Sep 17 00:00:00 2001
From: Chi Wang <4250911+sonichi@users.noreply.github.com>
Date: Tue, 12 Nov 2024 00:50:20 +0000
Subject: [PATCH 6/6] update name
---
.devcontainer/Dockerfile | 4 +-
.devcontainer/README.md | 8 +--
.devcontainer/dev/Dockerfile | 13 +++--
LICENSE | 2 +-
MAINTAINERS.md | 2 +-
NOTICE.md | 4 +-
README.md | 65 ++++++++++++------------
setup.py | 4 +-
website/docs/contributor-guide/docker.md | 20 ++++----
9 files changed, 61 insertions(+), 61 deletions(-)
diff --git a/.devcontainer/Dockerfile b/.devcontainer/Dockerfile
index b7e43c8208..56b2e2f4db 100644
--- a/.devcontainer/Dockerfile
+++ b/.devcontainer/Dockerfile
@@ -1,7 +1,7 @@
#-------------------------------------------------------------------------------------------------------------
-# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2labs
+# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2ai
# SPDX-License-Identifier: Apache-2.0
-# Contributions to this project, i.e., https://github.com/ag2labs/ag2, are licensed under the Apache License, Version 2.0 (Apache-2.0).
+# Contributions to this project, i.e., https://github.com/ag2ai/ag2, are licensed under the Apache License, Version 2.0 (Apache-2.0).
# Portions derived from https://github.com/microsoft/autogen under the MIT License.
# SPDX-License-Identifier: MIT
diff --git a/.devcontainer/README.md b/.devcontainer/README.md
index cd17a52597..9e8b918582 100644
--- a/.devcontainer/README.md
+++ b/.devcontainer/README.md
@@ -10,23 +10,23 @@ These configurations can be used with Codespaces and locally.
- **Purpose**: This Dockerfile, i.e., `./Dockerfile`, is designed for basic setups. It includes common Python libraries and essential dependencies required for general usage of AutoGen.
- **Usage**: Ideal for those just starting with AutoGen or for general-purpose applications.
-- **Building the Image**: Run `docker build -f ./Dockerfile -t ag2labs_base_img .` in this directory.
+- **Building the Image**: Run `docker build -f ./Dockerfile -t ag2_base_img .` in this directory.
- **Using with Codespaces**: `Code > Codespaces > Click on +` By default + creates a Codespace on the current branch.
### full
- **Purpose**: This Dockerfile, i.e., `./full/Dockerfile` is for advanced features. It includes additional dependencies and is configured for more complex or feature-rich AutoGen applications.
- **Usage**: Suited for advanced users who need the full range of AutoGen's capabilities.
-- **Building the Image**: Execute `docker build -f full/Dockerfile -t ag2labs_full_img .`.
+- **Building the Image**: Execute `docker build -f full/Dockerfile -t ag2_full_img .`.
- **Using with Codespaces**: `Code > Codespaces > Click on ...> New with options > Choose "full" as devcontainer configuration`. This image may require a Codespace with at least 64GB of disk space.
### dev
- **Purpose**: Tailored for AutoGen project developers, this Dockerfile, i.e., `./dev/Dockerfile` includes tools and configurations aiding in development and contribution.
- **Usage**: Recommended for developers who are contributing to the AutoGen project.
-- **Building the Image**: Run `docker build -f dev/Dockerfile -t ag2labs_dev_img .`.
+- **Building the Image**: Run `docker build -f dev/Dockerfile -t ag2_dev_img .`.
- **Using with Codespaces**: `Code > Codespaces > Click on ...> New with options > Choose "dev" as devcontainer configuration`. This image may require a Codespace with at least 64GB of disk space.
-- **Before using**: We highly encourage all potential contributors to read the [AutoGen Contributing](https://ag2labs.github.io/autogen/docs/Contribute) page prior to submitting any pull requests.
+- **Before using**: We highly encourage all potential contributors to read the [AutoGen Contributing](https://ag2ai.github.io/autogen/docs/Contribute) page prior to submitting any pull requests.
## Customizing Dockerfiles
diff --git a/.devcontainer/dev/Dockerfile b/.devcontainer/dev/Dockerfile
index 9774307068..7c3891023b 100644
--- a/.devcontainer/dev/Dockerfile
+++ b/.devcontainer/dev/Dockerfile
@@ -10,18 +10,17 @@ RUN apt-get update && apt-get -y update
RUN apt-get install -y sudo git npm vim nano curl wget git-lfs
# Setup a non-root user 'autogen' with sudo access
-RUN adduser --home /home/ag2labs --disabled-password --gecos '' autogen
+RUN adduser --home /home/autogen --disabled-password --gecos '' autogen
RUN adduser autogen sudo
RUN echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers
USER autogen
-WORKDIR /home/ag2labs
# Set environment variable
# ENV OPENAI_API_KEY="{OpenAI-API-Key}"
# Clone the AutoGen repository
-RUN git clone https://github.com/ag2labs/ag2.git /home/ag2labs/ag2
-WORKDIR /home/ag2labs/ag2
+RUN git clone https://github.com/ag2ai/ag2.git /home/autogen/ag2
+WORKDIR /home/autogen/ag2
# Install AutoGen in editable mode with extra components
RUN sudo pip install --upgrade pip && \
@@ -39,11 +38,11 @@ RUN yarn install --frozen-lockfile --ignore-engines
RUN arch=$(arch | sed s/aarch64/arm64/ | sed s/x86_64/amd64/) && \
wget -q https://github.com/quarto-dev/quarto-cli/releases/download/v1.5.23/quarto-1.5.23-linux-${arch}.tar.gz && \
- mkdir -p /home/ag2labs/quarto/ && \
- tar -xzf quarto-1.5.23-linux-${arch}.tar.gz --directory /home/ag2labs/quarto/ && \
+ mkdir -p /home/autogen/quarto/ && \
+ tar -xzf quarto-1.5.23-linux-${arch}.tar.gz --directory /home/autogen/quarto/ && \
rm quarto-1.5.23-linux-${arch}.tar.gz
-ENV PATH="${PATH}:/home/ag2labs/quarto/quarto-1.5.23/bin/"
+ENV PATH="${PATH}:/home/autogen/quarto/quarto-1.5.23/bin/"
# Exposes the Yarn port for Docusaurus
EXPOSE 3000
diff --git a/LICENSE b/LICENSE
index 748136c23f..a40e7d03d7 100644
--- a/LICENSE
+++ b/LICENSE
@@ -186,7 +186,7 @@
same "printed page" as the copyright notice for easier
identification within third-party archives.
- Copyright ag2labs organization, i.e., https://github.com/ag2labs, owners.
+ Copyright ag2ai organization, i.e., https://github.com/ag2ai, owners.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
diff --git a/MAINTAINERS.md b/MAINTAINERS.md
index 31b5ab7340..60a6a017b5 100644
--- a/MAINTAINERS.md
+++ b/MAINTAINERS.md
@@ -27,7 +27,7 @@
| Evan David * | [evandavid1](https://github.com/evandavid1) | - | gpt assistant, group chat, rag, autobuild |
## I would like to join this list. How can I help the project?
-> We're always looking for new contributors to join our team and help improve the project. For more information, please refer to our [CONTRIBUTING](https://ag2labs.github.io/autogen/docs/contributor-guide/contributing) guide.
+> We're always looking for new contributors to join our team and help improve the project. For more information, please refer to our [CONTRIBUTING](https://ag2ai.github.io/autogen/docs/contributor-guide/contributing) guide.
## Are you missing from this list?
diff --git a/NOTICE.md b/NOTICE.md
index 7a21b7c87b..0065be8674 100644
--- a/NOTICE.md
+++ b/NOTICE.md
@@ -1,13 +1,13 @@
## NOTICE
-Copyright (c) 2023-2024, Owners of https://github.com/ag2labs
+Copyright (c) 2023-2024, Owners of https://github.com/ag2ai
This project is a fork of https://github.com/microsoft/autogen.
The [original project](https://github.com/microsoft/autogen) is licensed under the MIT License as detailed in [LICENSE_original_MIT](./license_original/LICENSE_original_MIT). The fork was created from version v0.2.35 of the original project.
-This project, i.e., https://github.com/ag2labs/ag2, is licensed under the Apache License, Version 2.0 as detailed in [LICENSE](./LICENSE)
+This project, i.e., https://github.com/ag2ai/ag2, is licensed under the Apache License, Version 2.0 as detailed in [LICENSE](./LICENSE)
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
diff --git a/README.md b/README.md
index 569753ed73..1bf60695af 100644
--- a/README.md
+++ b/README.md
@@ -1,24 +1,24 @@
[![PyPI version](https://badge.fury.io/py/autogen.svg)](https://badge.fury.io/py/autogen)
-[![Build](https://github.com/ag2labs/ag2/actions/workflows/python-package.yml/badge.svg)](https://github.com/ag2labs/ag2/actions/workflows/python-package.yml)
+[![Build](https://github.com/ag2ai/ag2/actions/workflows/python-package.yml/badge.svg)](https://github.com/ag2ai/ag2/actions/workflows/python-package.yml)
![Python Version](https://img.shields.io/badge/3.8%20%7C%203.9%20%7C%203.10%20%7C%203.11%20%7C%203.12-blue)
[![Discord](https://img.shields.io/discord/1153072414184452236?logo=discord&style=flat)](https://discord.gg/pAbnFJrkgZ)
-[![Twitter](https://img.shields.io/twitter/url/https/twitter.com/cloudposse.svg?style=social&label=Follow%20%40ag2labs)](https://x.com/ag2labs)
+[![Twitter](https://img.shields.io/twitter/url/https/twitter.com/cloudposse.svg?style=social&label=Follow%20%40ag2ai)](https://x.com/ag2ai)
-# [AG2](https://github.com/ag2labs/ag2)
+# [AG2](https://github.com/ag2ai/ag2)
[📚 Cite paper](#related-papers).
> [!IMPORTANT]
>
-> :fire: :tada: Nov 11, 2024: We are evolving AutoGen into AG2! A new organization [ag2labs](https://github.com/ag2labs) is created to host the development of AG2 and related projects with open governance. We invite collaborators from all organizations and individuals to join the development.
+> :fire: :tada: Nov 11, 2024: We are evolving AutoGen into AG2! A new organization [ag2ai](https://github.com/ag2ai) is created to host the development of AG2 and related projects with open governance. We invite collaborators from all organizations and individuals to join the development.
:fire: :tada: Sep 06, 2024: AG2 is available via `ag2` (or its alias `autogen` or `pyautogen`) on PyPI! Starting with version 0.3.3, you can now install AG2 using:
@@ -47,11 +47,11 @@ We adopt the Apache 2.0 license from v0.3. This enhances our commitment to open-
:tada: May 11, 2024: [AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation](https://openreview.net/pdf?id=uAjxFFing2) received the best paper award at the [ICLR 2024 LLM Agents Workshop](https://llmagents.github.io/).
-
+
:tada: Apr 17, 2024: Andrew Ng cited AutoGen in [The Batch newsletter](https://www.deeplearning.ai/the-batch/issue-245/) and [What's next for AI agentic workflows](https://youtu.be/sal78ACtGTc?si=JduUzN_1kDnMq0vF) at Sequoia Capital's AI Ascent (Mar 26).
-:tada: Mar 3, 2024: What's new in AutoGen? 📰[Blog](https://ag2labs.github.io/ag2/blog/2024/03/03/AutoGen-Update); 📺[Youtube](https://www.youtube.com/watch?v=j_mtwQiaLGU).
+:tada: Mar 3, 2024: What's new in AutoGen? 📰[Blog](https://ag2ai.github.io/ag2/blog/2024/03/03/AutoGen-Update); 📺[Youtube](https://www.youtube.com/watch?v=j_mtwQiaLGU).
@@ -59,9 +59,9 @@ We adopt the Apache 2.0 license from v0.3. This enhances our commitment to open-
:tada: Dec 31, 2023: [AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation Framework](https://arxiv.org/abs/2308.08155) is selected by [TheSequence: My Five Favorite AI Papers of 2023](https://thesequence.substack.com/p/my-five-favorite-ai-papers-of-2023).
-
+
-
+
:tada: Nov 8, 2023: AutoGen is selected into [Open100: Top 100 Open Source achievements](https://www.benchcouncil.org/evaluation/opencs/annual.html) 35 days after spinoff from [FLAML](https://github.com/microsoft/FLAML).
@@ -78,7 +78,7 @@ We adopt the Apache 2.0 license from v0.3. This enhances our commitment to open-
@@ -91,7 +91,7 @@ AG2 (formally AutoGen) is an open-source programming framework for building AI a
@@ -91,7 +91,7 @@ AG2 (formerly AutoGen) is an open-source programming framework for building AI a
The project is currently maintained by a [dynamic group of volunteers](MAINTAINERS.md) from several organizations. Contact project administrators Chi Wang and Qingyun Wu via auto-gen@outlook.com if you are interested in becoming a maintainer.
-![AutoGen Overview](https://github.com/ag2labs/ag2/blob/main/website/static/img/autogen_agentchat.png)
+![AutoGen Overview](https://github.com/ag2ai/ag2/blob/main/website/static/img/autogen_agentchat.png)
@@ -109,7 +109,7 @@ The project is currently maintained by a [dynamic group of volunteers](MAINTAINE
The easiest way to start playing is
1. Click below to use the GitHub Codespace
- [![Open in GitHub Codespaces](https://github.com/codespaces/badge.svg)](https://codespaces.new/ag2labs/ag2?quickstart=1)
+ [![Open in GitHub Codespaces](https://github.com/codespaces/badge.svg)](https://codespaces.new/ag2ai/ag2?quickstart=1)
2. Copy OAI_CONFIG_LIST_sample to ./notebook folder, name to OAI_CONFIG_LIST, and set the correct configuration.
3. Start playing with the notebooks!
@@ -122,10 +122,11 @@ The easiest way to start playing is
@@ -158,7 +159,7 @@ For LLM inference configurations, check the [FAQs](https://ag2labs.github.io/ag2
## Multi-Agent Conversation Framework
-AG2 enables the next-gen LLM applications with a generic [multi-agent conversation](https://ag2labs.github.io/ag2/docs/Use-Cases/agent_chat) framework. It offers customizable and conversable agents that integrate LLMs, tools, and humans.
+AG2 enables the next-gen LLM applications with a generic [multi-agent conversation](https://ag2ai.github.io/ag2/docs/Use-Cases/agent_chat) framework. It offers customizable and conversable agents that integrate LLMs, tools, and humans.
By automating chat among multiple capable agents, one can easily make them collectively perform tasks autonomously or with human feedback, including tasks that require using tools via code.
Features of this use case include:
@@ -167,12 +168,12 @@ Features of this use case include:
- **Customization**: AG2 agents can be customized to meet the specific needs of an application. This includes the ability to choose the LLMs to use, the types of human input to allow, and the tools to employ.
- **Human participation**: AG2 seamlessly allows human participation. This means that humans can provide input and feedback to the agents as needed.
-For [example](https://github.com/ag2labs/ag2/blob/main/test/twoagent.py),
+For [example](https://github.com/ag2ai/ag2/blob/main/test/twoagent.py),
```python
from autogen import AssistantAgent, UserProxyAgent, config_list_from_json
# Load LLM inference endpoints from an env variable or a file
-# See https://ag2labs.github.io/ag2/docs/FAQ#set-your-api-endpoints
+# See https://ag2ai.github.io/ag2/docs/FAQ#set-your-api-endpoints
# and OAI_CONFIG_LIST_sample
config_list = config_list_from_json(env_or_file="OAI_CONFIG_LIST")
# You can also set config_list directly as a list, for example, config_list = [{'model': 'gpt-4', 'api_key': '
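The inline alternative mentioned in the comment above is truncated at the key. A hypothetical sketch of setting `config_list` directly as a list, reading the key from an environment variable rather than hard-coding it (the model name and variable name are illustrative):

```python
import os

from autogen import AssistantAgent

# Hypothetical inline equivalent of the OAI_CONFIG_LIST file.
config_list = [{"model": "gpt-4o", "api_key": os.environ["OPENAI_API_KEY"]}]

assistant = AssistantAgent("assistant", llm_config={"config_list": config_list})
```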
@@ -203,7 +204,7 @@ Please find more [code examples](https://ag2labs.github.io/ag2/docs/Examples#aut
## Enhanced LLM Inferences
-AG2 also helps maximize the utility out of the expensive LLMs such as ChatGPT and GPT-4. It offers [enhanced LLM inference](https://ag2labs.github.io/ag2/docs/Use-Cases/enhanced_inference#api-unification) with powerful functionalities like caching, error handling, multi-config inference and templating.
+AG2 also helps maximize the utility out of the expensive LLMs such as ChatGPT and GPT-4. It offers [enhanced LLM inference](https://ag2ai.github.io/ag2/docs/Use-Cases/enhanced_inference#api-unification) with powerful functionalities like caching, error handling, multi-config inference and templating.
+Please find more [code examples](https://ag2ai.github.io/ag2/docs/Examples#tune-gpt-models) for this feature. -->
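A rough sketch of the enhanced-inference features named above (caching and multi-config fallback), assuming the `OpenAIWrapper` client shipped with the package; the prompt is illustrative only:

```python
from autogen import OpenAIWrapper, config_list_from_json

config_list = config_list_from_json(env_or_file="OAI_CONFIG_LIST")

# The wrapper tries each endpoint in config_list in order (multi-config
# inference) and caches responses, so repeated identical calls are cheap.
client = OpenAIWrapper(config_list=config_list)
response = client.create(messages=[{"role": "user", "content": "What is 2 + 2?"}])
print(client.extract_text_or_completion_object(response))
```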
@@ -232,15 +233,15 @@ Please find more [code examples](https://ag2labs.github.io/ag2/docs/Examples#tun
## Documentation
-You can find detailed documentation about AG2 [here](https://ag2labs.github.io/ag2/).
+You can find detailed documentation about AG2 [here](https://ag2ai.github.io/ag2/).
In addition, you can find:
-- [Research](https://ag2labs.github.io/ag2/docs/Research), [blogposts](https://ag2labs.github.io/ag2/blog) around AG2, and [Transparency FAQs](https://github.com/ag2labs/ag2/blob/main/TRANSPARENCY_FAQS.md)
+- [Research](https://ag2ai.github.io/ag2/docs/Research), [blogposts](https://ag2ai.github.io/ag2/blog) around AG2, and [Transparency FAQs](https://github.com/ag2ai/ag2/blob/main/TRANSPARENCY_FAQS.md)
- [Discord](https://discord.gg/pAbnFJrkgZ)
-- [Contributing guide](https://ag2labs.github.io/ag2/docs/Contribute)
+- [Contributing guide](https://ag2ai.github.io/ag2/docs/Contribute)
## Contributors Wall
-
-
+
+
diff --git a/setup.py b/setup.py
index 41b96747c0..c53adfc77e 100644
--- a/setup.py
+++ b/setup.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2labs
+# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2ai
#
# SPDX-License-Identifier: Apache-2.0
#
@@ -125,7 +125,7 @@
description="A programming framework for agentic AI",
long_description=long_description,
long_description_content_type="text/markdown",
- url="https://github.com/ag2labs/ag2",
+ url="https://github.com/ag2ai/ag2",
packages=setuptools.find_packages(include=["autogen*"], exclude=["test"]),
install_requires=install_requires,
extras_require=extra_require,
diff --git a/website/docs/contributor-guide/docker.md b/website/docs/contributor-guide/docker.md
index 3c041bec22..9e895ef53e 100644
--- a/website/docs/contributor-guide/docker.md
+++ b/website/docs/contributor-guide/docker.md
@@ -2,27 +2,27 @@
For developers contributing to the AutoGen project, we offer a specialized Docker environment. This setup is designed to streamline the development process, ensuring that all contributors work within a consistent and well-equipped environment.
-## Autogen Developer Image (ag2labs_dev_img)
+## Autogen Developer Image (ag2_dev_img)
-- **Purpose**: The `ag2labs_dev_img` is tailored for contributors to the AutoGen project. It includes a suite of tools and configurations that aid in the development and testing of new features or fixes.
+- **Purpose**: The `ag2_dev_img` is tailored for contributors to the AutoGen project. It includes a suite of tools and configurations that aid in the development and testing of new features or fixes.
- **Usage**: This image is recommended for developers who intend to contribute code or documentation to AutoGen.
- **Forking the Project**: It's advisable to fork the AutoGen GitHub project to your own repository. This allows you to make changes in a separate environment without affecting the main project.
- **Updating Dockerfile**: Modify your copy of `Dockerfile` in the `dev` folder as needed for your development work.
-- **Submitting Pull Requests**: Once your changes are ready, submit a pull request from your branch to the upstream AutoGen GitHub project for review and integration. For more details on contributing, see the [AutoGen Contributing](https://ag2labs.github.io/autogen/docs/Contribute) page.
+- **Submitting Pull Requests**: Once your changes are ready, submit a pull request from your branch to the upstream AutoGen GitHub project for review and integration. For more details on contributing, see the [AutoGen Contributing](https://ag2ai.github.io/autogen/docs/Contribute) page.
## Building the Developer Docker Image
-- To build the developer Docker image (`ag2labs_dev_img`), use the following commands:
+- To build the developer Docker image (`ag2_dev_img`), use the following commands:
```bash
- docker build -f .devcontainer/dev/Dockerfile -t ag2labs_dev_img https://github.com/ag2labs/ag2.git#main
+ docker build -f .devcontainer/dev/Dockerfile -t ag2_dev_img https://github.com/ag2ai/ag2.git#main
```
- For building the developer image built from a specific Dockerfile in a branch other than main/master
```bash
# clone the branch you want to work out of
- git clone --branch {branch-name} https://github.com/ag2labs/ag2.git
+ git clone --branch {branch-name} https://github.com/ag2ai/ag2.git
# cd to your new directory
cd autogen
@@ -33,19 +33,19 @@ For developers contributing to the AutoGen project, we offer a specialized Docke
## Using the Developer Docker Image
-Once you have built the `ag2labs_dev_img`, you can run it using the standard Docker commands. This will place you inside the containerized development environment where you can run tests, develop code, and ensure everything is functioning as expected before submitting your contributions.
+Once you have built the `ag2_dev_img`, you can run it using the standard Docker commands. This will place you inside the containerized development environment where you can run tests, develop code, and ensure everything is functioning as expected before submitting your contributions.
```bash
-docker run -it -p 8081:3000 -v `pwd`/autogen-newcode:newstuff/ ag2labs_dev_img bash
+docker run -it -p 8081:3000 -v `pwd`/autogen-newcode:newstuff/ ag2_dev_img bash
```
- Note that the `pwd` is shorthand for present working directory. Thus, any path after the pwd is relative to that. If you want a more verbose method you could remove the "`pwd`/autogen-newcode" and replace it with the full path to your directory
```bash
-docker run -it -p 8081:3000 -v /home/AutoGenDeveloper/autogen-newcode:newstuff/ ag2labs_dev_img bash
+docker run -it -p 8081:3000 -v /home/AutoGenDeveloper/autogen-newcode:newstuff/ ag2_dev_img bash
```
## Develop in Remote Container
If you use vscode, you can open the autogen folder in a [Container](https://code.visualstudio.com/docs/remote/containers).
-We have provided the configuration in [devcontainer](https://github.com/ag2labs/ag2/blob/main/.devcontainer). They can be used in GitHub codespace too. Developing AutoGen in dev containers is recommended.
+We have provided the configuration in [devcontainer](https://github.com/ag2ai/ag2/blob/main/.devcontainer). They can be used in GitHub codespace too. Developing AutoGen in dev containers is recommended.