fix: update assistants components and add integrations tests #3887
```diff
@@ -10,6 +10,7 @@ class AssistantsCreateAssistant(Component):
     icon = "bot"
     display_name = "Create Assistant"
     description = "Creates an Assistant and returns it's id"
+    client = patch(OpenAI())
```
Review comment: Are there potential issues that may arise from using a class variable for the `client`?

Reply: Previously we were doing one per invocation, so I actually think a client per class is less problematic. If they were async clients we could have races, but these are synchronous. One potential future improvement would be to do a shared client for all the assistants; not sure what a good pattern for this would be, just a singleton?

(A sketch of the shared-client idea follows this file's diff below.)
```diff

     inputs = [
         StrInput(
```
```diff
@@ -45,8 +46,7 @@ class AssistantsCreateAssistant(Component):

     def process_inputs(self) -> Message:
-        print(f"env_set is {self.env_set}")
-        client = patch(OpenAI())
-        assistant = client.beta.assistants.create(
+        assistant = self.client.beta.assistants.create(
             name=self.assistant_name,
             instructions=self.instructions,
             model=self.model,
```
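For illustration, here is a minimal sketch of the shared-client idea raised in the review thread above, assuming a module-level cached factory; the `get_shared_client` helper is hypothetical and not part of this PR:

```python
# Hypothetical sketch, not part of this PR: one lazily created client shared by
# all assistants components instead of one client per component class.
from functools import lru_cache

from astra_assistants import patch  # same patch() the components already use
from openai import OpenAI


@lru_cache(maxsize=1)
def get_shared_client() -> OpenAI:
    """Build the patched client once; later calls return the cached instance."""
    return patch(OpenAI())


# A component's process_inputs() would then call, for example:
#     assistant = get_shared_client().beta.assistants.create(...)
```

Since the current clients are synchronous, as the reply notes, the benefit of sharing one client would mainly be avoiding repeated client construction rather than fixing a correctness issue.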
New test file (`@@ -0,0 +1,72 @@`, all lines added):

```python
import pytest

from tests.integration.utils import run_single_component


async def test_list_assistants():
    from langflow.components.astra_assistants import AssistantsListAssistants

    results = await run_single_component(
        AssistantsListAssistants,
        inputs={},
    )
    assert results["assistants"].text is not None


@pytest.mark.api_key_required
@pytest.mark.asyncio
async def test_create_assistants():
    from langflow.components.astra_assistants import AssistantsCreateAssistant

    results = await run_single_component(
        AssistantsCreateAssistant,
        inputs={
            "assistant_name": "artist-bot",
            "instructions": "reply only with ascii art",
            "model": "gpt-4o-mini",
        },
    )
    assistant_id = results["assistant_id"].text
    assert assistant_id is not None
    await test_list_assistants()
    await get_assistant_name(assistant_id)
    thread_id = await test_create_thread()
    await run_assistant(assistant_id, thread_id)


async def test_create_thread():
    from langflow.components.astra_assistants import AssistantsCreateThread

    results = await run_single_component(
        AssistantsCreateThread,
        inputs={},
    )
    thread_id = results["thread_id"].text
    assert thread_id is not None
    return thread_id


async def get_assistant_name(assistant_id):
    from langflow.components.astra_assistants import AssistantsGetAssistantName

    results = await run_single_component(
        AssistantsGetAssistantName,
        inputs={
            "assistant_id": assistant_id,
        },
    )
    assert results["assistant_name"].text is not None


async def run_assistant(assistant_id, thread_id):
    from langflow.components.astra_assistants import AssistantsRun

    results = await run_single_component(
        AssistantsRun,
        inputs={
            "assistant_id": assistant_id,
            "user_message": "hello",
            "thread_id": thread_id,
        },
    )
    assert results["assistant_response"].text is not None
```
Review comment: I looked over at the Dotenv component and am curious how it works.
Q) Why does it need an output method?
Q) If I have a `.env` file I've loaded during my `langflow run --env .env`, and I use this component with a separate env file, will it replace that entire set, or union the two?
I see why this is needed, though: if I can't do `langflow run` myself (in Astra), we need an easy way to pass env vars.

Reply: Yeah, assistants supports loads of LLM providers, so it's very convenient to allow the user to add them all in one go. If there are existing env vars that are set, they do not get unset, but they could get overwritten.
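For illustration, a minimal sketch of the union-with-possible-overwrite behaviour described in the reply, assuming python-dotenv-style loading; the Dotenv component's actual implementation is not shown in this PR:

```python
# Hedged sketch, not the component's actual code: shows how loading a second
# env source unions with, and can overwrite, variables that are already set.
import os
from io import StringIO

from dotenv import load_dotenv

os.environ["OPENAI_API_KEY"] = "set-by-langflow-run"
os.environ["ANTHROPIC_API_KEY"] = "keep-me"

# override=True lets keys present in the new source replace existing values;
# keys absent from the new source are left untouched (nothing gets unset).
load_dotenv(stream=StringIO("OPENAI_API_KEY=set-by-component\n"), override=True)

print(os.environ["OPENAI_API_KEY"])     # set-by-component  (overwritten)
print(os.environ["ANTHROPIC_API_KEY"])  # keep-me           (still set)
```

With python-dotenv's default `override=False`, already-set variables would instead be left untouched, so whether values are overwritten depends on how the component loads the file.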