
Enhance vertexai integration (safety settings, authentication...) #3067

Closed
luxzoli wants to merge 16 commits from the enhance-vertexai-integration branch

Conversation

@luxzoli (Contributor) commented Jul 3, 2024

Why are these changes needed?

  • Enhance VertexAI authentication by supporting credentials objects in the llm_config for more flexibility:

import google.auth
from google.auth import impersonated_credentials

from autogen import AssistantAgent

credentials, project = google.auth.default()

target_scopes = ["https://www.googleapis.com/auth/cloud-platform"]

target_credentials = impersonated_credentials.Credentials(
    source_credentials=credentials,
    target_principal="autogen@autogen-with-gemini.iam.gserviceaccount.com",
    target_scopes=target_scopes,
    lifetime=500,
)

llm_config = {
    "config_list": [
        {
            "model": "gemini-1.5-pro",
            "api_type": "google",
            "credentials": target_credentials,
            "project": "autogen-with-gemini",
        }
    ],
}

assistant = AssistantAgent(
    "assistant", llm_config=llm_config, max_consecutive_auto_reply=5
)
  • Add conversion of safety settings from the OAI_CONFIG_LIST format to the VertexAI format (see the sketch after this list)

  • Support system_message, which Gemini calls system_instruction (also sketched below)

  • Consolidate message handling in send_message to use the officially supported PartsType for the content argument

  • Use a project argument instead of project_id for the GeminiClient, following the naming convention of the vertexai.init() method
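
For illustration, here is a minimal sketch of the kind of safety-settings conversion this enables. The helper name to_vertexai_safety_settings is hypothetical rather than the PR's actual implementation, but HarmCategory and HarmBlockThreshold are the real VertexAI enums:

from vertexai.generative_models import HarmBlockThreshold, HarmCategory


def to_vertexai_safety_settings(settings):
    # Map string-based category/threshold pairs, as they would appear in an
    # OAI_CONFIG_LIST JSON entry, to the enum-keyed dict VertexAI expects.
    return {
        HarmCategory[s["category"]]: HarmBlockThreshold[s["threshold"]]
        for s in settings
    }


safety_settings = to_vertexai_safety_settings(
    [{"category": "HARM_CATEGORY_HARASSMENT", "threshold": "BLOCK_ONLY_HIGH"}]
)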
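
Similarly, a hedged sketch of how system_message and PartsType map onto the VertexAI SDK; the model name, prompt text, and location are illustrative placeholders:

import vertexai
from vertexai.generative_models import GenerativeModel, Part

# vertexai.init() takes a `project` argument, which is the naming
# convention the renamed GeminiClient parameter follows.
vertexai.init(project="autogen-with-gemini", location="us-central1")

# An agent's system_message maps onto VertexAI's system_instruction parameter.
model = GenerativeModel(
    "gemini-1.5-pro",
    system_instruction=["You are a helpful assistant."],
)

# Content is sent as the officially supported PartsType, i.e. a list of
# Part objects rather than ad hoc strings or dicts.
response = model.generate_content([Part.from_text("Hello, Gemini!")])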

Related issue number

Relates to #2387

Checks

@luxzoli (Contributor, Author) commented Jul 3, 2024

The changes in the commit "fix errors with gemini message format in chats" address an issue I encountered when running the following code:

import autogen
import google.auth
from google.auth import impersonated_credentials

credentials, project = google.auth.default()

target_scopes = ["https://www.googleapis.com/auth/cloud-platform"]

target_credentials = impersonated_credentials.Credentials(
    source_credentials=credentials,
    target_principal="autogen@autogen-with-gemini.iam.gserviceaccount.com",
    target_scopes=target_scopes,
    lifetime=500,
)

llm_config = {
    "config_list": [
        {
            "model": "gemini-1.5-pro",
            "api_type": "google",
            "credentials": target_credentials,
            "project": "autogen-with-gemini",
        }
    ],
}

financial_tasks = [
    """What are the current stock prices of NVDA and TESLA, and how is the performance over the past month in terms of percentage change?""",
    """Investigate possible reasons of the stock performance.""",
    """Plot a graph comparing the stock prices over the past month.""",
]

writing_tasks = ["""Develop an engaging blog post using any information provided."""]

financial_assistant = autogen.AssistantAgent(
    name="Financial_assistant",
    llm_config=llm_config,
)
research_assistant = autogen.AssistantAgent(
    name="Researcher",
    llm_config=llm_config,
)
writer = autogen.AssistantAgent(
    name="writer",
    llm_config=llm_config,
    system_message="""
        You are a professional writer, known for
        your insightful and engaging articles.
        You transform complex concepts into compelling narratives.
        Reply "TERMINATE" in the end when everything is done.
        """,
)

user = autogen.UserProxyAgent(
    name="User",
    human_input_mode="NEVER",
    is_termination_msg=lambda x: x.get("content", "") and x.get("content", "").rstrip().endswith("TERMINATE"),
    code_execution_config={
        "last_n_messages": 1,
        "work_dir": "tasks",
        "use_docker": False,
    },  # Please set use_docker=True if docker is available to run the generated code. Using docker is safer than running the generated code directly.
)

chat_results = user.initiate_chats(
    [
        {
            "recipient": financial_assistant,
            "message": financial_tasks[0],
            "clear_history": True,
            "silent": False,
            "summary_method": "last_msg",
        },
        {
            "recipient": research_assistant,
            "message": financial_tasks[1],
            "summary_method": "reflection_with_llm",
        },
        {
            "recipient": writer,
            "message": writing_tasks[0],
            "carryover": "I want to include a figure or a table of data in the blogpost.",
        },
    ]
)

Details about the issue that the commit fixes:

Traceback (most recent call last):
  File "/home/lux_zoltan_andras/autogen_testing/autogen_google_auth/chat_sequence_gemini.py", line 80, in <module>
    chat_results = user.initiate_chats(
  File "/home/lux_zoltan_andras/.local/lib/python3.10/site-packages/autogen/agentchat/conversable_agent.py", line 1250, in initiate_chats
    self._finished_chats = initiate_chats(_chat_queue)
  File "/home/lux_zoltan_andras/.local/lib/python3.10/site-packages/autogen/agentchat/chat.py", line 199, in initiate_chats
    __post_carryover_processing(chat_info)
  File "/home/lux_zoltan_andras/.local/lib/python3.10/site-packages/autogen/agentchat/chat.py", line 119, in __post_carryover_processing
    ("\n").join([t for t in chat_info["carryover"]])
TypeError: sequence item 2: expected str instance, dict found
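
The traceback shows initiate_chats joining carryover entries with str.join and hitting a dict entry. A minimal sketch of one way such entries could be normalized (the helper name is hypothetical, not the exact code in autogen/agentchat/chat.py):

def _join_carryover(carryover):
    # Reduce each carryover entry to a string; dict entries are assumed to be
    # message objects whose text lives under the "content" key.
    return "\n".join(
        t["content"] if isinstance(t, dict) else str(t) for t in carryover
    )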

@luxzoli force-pushed the enhance-vertexai-integration branch from 7e83c59 to ee6890f on July 3, 2024 at 23:24
@sonichi added the integration and models (pertains to using alternate, non-GPT models, e.g., local models, llama, etc.) labels on Jul 4, 2024
@luxzoli force-pushed the enhance-vertexai-integration branch 2 times, most recently from a741515 to 8c30ad3 on July 5, 2024 at 00:19
@sonichi (Contributor) commented Jul 5, 2024

Could you make the PR from a branch in the upstream repo so that the openai CI could be run? Thanks.

@luxzoli (Contributor, Author) commented Jul 5, 2024

> Could you make the PR from a branch in the upstream repo so that the openai CI could be run? Thanks.

Sure, I will do it :)

gitguardian (bot) commented Jul 6, 2024

✅ There are no secrets present in this pull request anymore.

If these secrets were true positives and are still valid, we highly recommend that you revoke them. Once a secret has been leaked into a git repository, you should consider it compromised, even if it was deleted immediately. Find more information about risks here.


🦉 GitGuardian detects secrets in your source code to help developers and security teams secure the modern development process. You are seeing this because you or someone else with access to this repository has authorized GitGuardian to scan your pull request.

@luxzoli (Contributor, Author) commented Jul 6, 2024

> Could you make the PR from a branch in the upstream repo so that the openai CI could be run? Thanks.

The new PR from the upstream repo is #3086
