Problem with intent hitting multiple entities #35

Closed

richardsorensson opened this issue Nov 27, 2023 · 6 comments

Comments

richardsorensson commented Nov 27, 2023

I'm getting the response "Unexpected error during intent recognition" for broader requests such as "turn off all lights", while a narrower request like "turn off all office lights" works fine.

I haven't done any wider troubleshooting yet, but I've started by removing all groups (light / cover) to make sure no custom groupings are causing the issue.

Has anyone run into the same issue and found a solution?

Here are the raw log details.

Logger: homeassistant.components.assist_pipeline.pipeline
Source: components/assist_pipeline/pipeline.py:938
Integration: Assist pipeline (documentation, issues)
First occurred: 16:50:11 (2 occurrences)
Last logged: 16:53:23

Unexpected error during intent recognition
Traceback (most recent call last):
File "/usr/src/homeassistant/homeassistant/components/assist_pipeline/pipeline.py", line 938, in recognize_intent
conversation_result = await conversation.async_converse(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/src/homeassistant/homeassistant/components/conversation/init.py", line 467, in async_converse
result = await agent.async_process(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/config/custom_components/extended_openai_conversation/init.py", line 157, in async_process
response = await self.query(user_input, messages, exposed_entities, 0)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/config/custom_components/extended_openai_conversation/init.py", line 280, in query
message = await self.execute_function_call(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/config/custom_components/extended_openai_conversation/init.py", line 319, in execute_function
arguments = json.loads(message["function_call"]["arguments"])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/json/init.py", line 346, in loads
return _default_decoder.decode(s)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/json/decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/json/decoder.py", line 353, in raw_decode
obj, end = self.scan_once(s, idx)
^^^^^^^^^^^^^^^^^^^^^^
json.decoder.JSONDecodeError: Expecting property name enclosed in double quotes: line 26 column 6 (char 482)
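
For reference, the failing step is the json.loads of the model's function_call arguments. Below is a minimal, self-contained reproduction of the same failure mode with a made-up payload; it is only an illustration, not the integration's code.

```python
# Reproduce the JSONDecodeError above: the payload is made up, but shows what
# happens when the function_call arguments are cut off mid-object.
import json

complete = '{"list": [{"domain": "light", "service": "turn_off", "service_data": {"entity_id": "light.office"}}]}'
print(json.loads(complete))   # parses fine while the JSON is complete

truncated = complete[:40]     # simulate a reply cut off at the token limit
try:
    json.loads(truncated)
except json.JSONDecodeError as err:
    print(f"JSONDecodeError: {err}")   # same class of error as in the traceback
```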

jekalmin (Owner) commented Nov 27, 2023

You can try increasing the response token limit.
This happens because the response JSON is truncated by the default limit of 150 tokens.

Keep in mind that more tokens cost more.
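
A rough sketch of what the token limit looks like on the API side, using the openai Python client directly (illustrative only, not the integration's actual code; the model name, function schema, and values are assumptions). When the completion is cut off by max_tokens, finish_reason is "length" and the function-call arguments JSON comes back incomplete, which is what then fails in json.loads.

```python
# Illustrative sketch: a small max_tokens value can truncate the function-call
# arguments when the model has to enumerate many entities ("all lights").
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo-1106",
    messages=[{"role": "user", "content": "turn off all lights"}],
    functions=[{
        "name": "execute_services",  # hypothetical function schema, for illustration
        "description": "Call Home Assistant services",
        "parameters": {
            "type": "object",
            "properties": {"list": {"type": "array", "items": {"type": "object"}}},
        },
    }],
    max_tokens=150,  # the default limit mentioned above
)

choice = response.choices[0]
if choice.finish_reason == "length" and choice.message.function_call:
    # The reply was cut off by max_tokens, so the arguments JSON is likely
    # incomplete and json.loads on it will raise JSONDecodeError.
    print("truncated arguments:", choice.message.function_call.arguments)
```

Raising the token limit in the integration's options (the equivalent of max_tokens here) gives the model room to finish the JSON, at the cost of a more expensive completion.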

richardsorensson (Author)

That did the trick, thanks a lot!

richardsorensson (Author)

I did some further testing with the token limit maxed out at 4096.

It still fails for some requests: while waiting on the chat response "...", the reply after roughly 20-30 seconds is "Unexpected error during intent recognition", even though the entity states are actually changed first (after just a second or two).

Error below.

Logger: homeassistant.components.assist_pipeline.pipeline
Source: components/assist_pipeline/pipeline.py:938
Integration: Assist pipeline (documentation, issues)
First occurred: 22:38:15 (1 occurrences)
Last logged: 22:38:15

Unexpected error during intent recognition
Traceback (most recent call last):
File "/usr/src/homeassistant/homeassistant/components/assist_pipeline/pipeline.py", line 938, in recognize_intent
conversation_result = await conversation.async_converse(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/src/homeassistant/homeassistant/components/conversation/init.py", line 467, in async_converse
result = await agent.async_process(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/config/custom_components/extended_openai_conversation/init.py", line 157, in async_process
response = await self.query(user_input, messages, exposed_entities, 0)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/config/custom_components/extended_openai_conversation/init.py", line 280, in query
message = await self.execute_function_call(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/config/custom_components/extended_openai_conversation/init.py", line 332, in execute_function
return await self.query(user_input, messages, exposed_entities, n_requests)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/config/custom_components/extended_openai_conversation/init.py", line 280, in query
message = await self.execute_function_call(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/config/custom_components/extended_openai_conversation/init.py", line 332, in execute_function
return await self.query(user_input, messages, exposed_entities, n_requests)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/config/custom_components/extended_openai_conversation/init.py", line 280, in query
message = await self.execute_function_call(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/config/custom_components/extended_openai_conversation/init.py", line 332, in execute_function
return await self.query(user_input, messages, exposed_entities, n_requests)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/config/custom_components/extended_openai_conversation/init.py", line 280, in query
message = await self.execute_function_call(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/config/custom_components/extended_openai_conversation/init.py", line 319, in execute_function
arguments = json.loads(message["function_call"]["arguments"])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/json/init.py", line 346, in loads
return _default_decoder.decode(s)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/json/decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/json/decoder.py", line 353, in raw_decode
obj, end = self.scan_once(s, idx)
^^^^^^^^^^^^^^^^^^^^^^
json.decoder.JSONDecodeError: Expecting property name enclosed in double quotes: line 2 column 3 (char 4)

richardsorensson (Author)

I should mention I was running gpt-4-1106-preview and have now changed to gpt-3.5-turbo-1106.

Now it works without errors again, with a proper response.

jekalmin (Owner)

It seems like the same error (token limit) as above, but functions are being called multiple times.

  • Add logging to see which functions are called and with what arguments (a rough sketch follows below).
  • If you want to limit the number of function calls, set "Maximum function calls" to a small number such as 1.
  • Why multiple functions get called depends on how you phrase the request and how the model chooses to behave. If you want to change how the model behaves on a particular question, tell it to do so in the prompt.
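A rough sketch of both suggestions above, simplified and not the integration's actual code (the function name, logger setup, and limit value are assumptions):

```python
# Illustrative only: log each function call before parsing it, and stop
# chaining function calls once a maximum number of requests is reached.
import json
import logging

_LOGGER = logging.getLogger(__name__)
MAX_FUNCTION_CALLS = 1  # e.g. "Maximum function calls" set to a small number


def parse_function_call(message: dict, n_requests: int) -> dict:
    """Parse the model's function_call arguments, with logging and a call cap."""
    name = message["function_call"]["name"]
    raw_arguments = message["function_call"]["arguments"]

    # 1. Log which function is called and with what arguments.
    _LOGGER.debug("function call %s(%s), request #%d", name, raw_arguments, n_requests)

    # 2. Refuse to keep chaining function calls past the configured limit.
    if n_requests >= MAX_FUNCTION_CALLS:
        raise RuntimeError(f"exceeded maximum function calls ({MAX_FUNCTION_CALLS})")

    # This is the json.loads step that fails when the arguments are truncated.
    return json.loads(raw_arguments)
```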

jekalmin pushed a commit that referenced this issue Dec 3, 2023
jekalmin pushed a commit that referenced this issue Dec 6, 2023
jekalmin (Owner)

Closing the issue.
Feel free to reopen it.
