Invalid JSON #21
Is it a GPT-4-only model? |
Interesting... Would you mind providing me with some more information? The current version does use GPT-4, but if you didn't have access I'd expect you to get a different error than that, unless you modified the code? If not, how many times have you seen this error? Was it just a freak event, or does it happen every time? |
I am getting the exact same error every time. Does not work. |
I installed it today. The requirements.txt had problems and wouldn't install all the dependencies; there is a conflict between docker and requests. It also requested a "six" module, but maybe that's because I played with it. After getting it installed, I changed the model to GPT-3.5 and kept getting the error message. The only files I changed are requirements.txt and the references to the gpt-4 model.
|
@jaumebalust You may be experiencing this error because you wrote "For Example: " in your AI Role input. This is directly injected into the prompt so may cause the AI to act up and not respond with valid JSON. |
OK, I will try to "harden" the prompt so that the model remembers to reply with JSON.
|
Oh! This is definitely the cause. I haven't implemented GPT-3.5 support yet; Auto-GPT currently only works with GPT-4. |
I fixed this issue. Just change the starting prompt to "Begin. note only respond in JSON. Come up with your own campaign that follows your goals." instead of "NEXT COMMAND", or something to that effect. Hardening the prompt a bit should get the results you need. |
There are some problems with escaping in string values, apparently due to the prompts (initial or generated ones). We should add checks and escaping for strings. |
Can you elaborate on what exactly you did? |
The load_prompt function reads the prompt.txt file from the data subdirectory. If the script is not able to access the file, the most likely reason is that the working directory of the script execution is not set correctly. To resolve this issue, you can modify the load_prompt function to use an absolute path instead of a relative path. You can achieve this by constructing the absolute path using the os module:
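For reference, the modified function might look like this (a sketch; it assumes `load_prompt` simply reads and returns the file contents, which may differ from the actual implementation):

```python
import os

def load_prompt():
    # Resolve the path relative to this module's own location rather than
    # the current working directory, so the file is found regardless of
    # where the script is launched from.
    script_dir = os.path.dirname(os.path.realpath(__file__))
    prompt_path = os.path.join(script_dir, "data", "prompt.txt")
    with open(prompt_path, "r") as f:
        return f.read()
```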
This modification should fix the issue with accessing the prompt.txt file. The os.path.dirname(os.path.realpath(__file__)) line retrieves the directory of the data.py script, and the os.path.join(script_dir, "data", "prompt.txt") line constructs the absolute path to the prompt.txt file. |
which file is that in? |
It's the data.py module. |
This JSON parsing is being fixed in #45 |
I also got an
|
Got an apparently unrelated error this time:
|
I'm experiencing the JSON error as well, though it doesn't seem to stop it from continuing to interpret properly, and it still works as expected. I'm trying to improve how I communicate to avoid the error. |
I have this problem as well. The first answer is OK, and then I get the JSON problem every time. |
@Louvivien for the AI's main prompt write
|
No, when I add this at the beginning of my input it does not solve the problem.
I think it is related to this: openai/openai-python#332 I get this error only with some AIs. If I change the role and goals, I do not get the error, but for some AI and role combinations it does not work. |
Perhaps "only respond in JSON" could be added to the system message? |
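As a sketch of that suggestion (illustrative only; the helper name and message list below are hypothetical, not Auto-GPT's actual code):

```python
def with_json_reminder(system_prompt: str) -> str:
    # Append a JSON-only reminder to whatever system prompt is in use.
    return system_prompt + "\n\nYou must respond only with valid JSON."

# Hypothetical chat-completion message list using the hardened system prompt.
messages = [
    {"role": "system", "content": with_json_reminder("You are an autonomous agent.")},
    {"role": "user", "content": "Determine which next command to use."},
]
```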
Another instance of this error: |
Got the same error |
Got the same error |
I am getting the same error as #2229, but that has been linked to this one and closed, so I will add it here. Error log:
It was working OK and chugging away nicely, then this just appeared and it keeps getting stuck in a rut about it. |
Not sure exactly where this issue stands, but wanted to pop in here to show how langchain is handling this in their pydantic output parser: https://github.com/hwchase17/langchain/blob/master/langchain/output_parsers/pydantic.py Overall, that class provides a way to parse text output into Pydantic objects, which can then be used for data validation and settings management. This is useful in the larger project because it allows for consistent handling of output data and ensures that the data conforms to a specific schema. |
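For illustration, a stripped-down version of that idea might look like this (a sketch assuming pydantic is installed; the Command model and parse_output helper are hypothetical, not langchain's actual API):

```python
import json
from pydantic import BaseModel, ValidationError

class Command(BaseModel):
    name: str
    args: dict

def parse_output(text: str) -> Command:
    # Parse the raw model output as JSON, then validate it against the
    # Command schema so malformed or incomplete output fails loudly
    # instead of propagating bad data.
    try:
        data = json.loads(text)
        return Command(**data)
    except (json.JSONDecodeError, ValidationError) as exc:
        raise ValueError(f"Output did not match the Command schema: {exc}")
```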
As of v0.2.2 the issue still persists, with a twist. JSON is fixed, or maybe not!
|
I still get the issue with the latest version. |
Error: The following AI output couldn't be converted to JSON:
Please use the 'google' command to search for more information on the
So yes, the problem is still there. |
The problem here is not just that it fails; it's that when it fails in this way, it usually gets stuck in a loop and there's no way to rescue it. When it gets an error at this level it doesn't appear to go back to the AI endpoint for advice; it just keeps trying over and over again without any new input. |
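One way to break that loop, sketched here under the assumption that the retry site has access to the chat call (the function names below are hypothetical, not Auto-GPT's actual code), is to feed the parse error back to the model as new input instead of resending the identical request:

```python
import json

def parse_with_feedback(chat, messages, max_retries=3):
    # On a parse failure, append the bad reply and the error message to
    # the conversation so the model gets new input, rather than blindly
    # retrying the same request.
    for _ in range(max_retries):
        reply = chat(messages)
        try:
            return json.loads(reply)
        except json.JSONDecodeError as exc:
            messages = messages + [
                {"role": "assistant", "content": reply},
                {"role": "user", "content": f"That was not valid JSON ({exc}). "
                                            "Reply again with only valid JSON."},
            ]
    raise ValueError("Model never produced valid JSON")
```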
Any news about |
Confirming: saw this today when letting it generate C++ code. At some point, the engine bailed out complaining that the JSON wasn't valid. |
If you are using GPT-3 only, the invalid JSON error will mostly happen when the response doesn't use standard JSON syntax (like adding irregular commas). I tried modifying the prompt at the last line (prompts.generator.py |
Maybe it makes sense to have a more lenient JSON parser that allows for things like dangling commas, comments, or other such oddities. I think for now there's always going to be a chance that the model responds with something that isn't JSON, and as such it should be possible to account for that (to an extent). |
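A minimal sketch of such a lenient parser (an illustration, not a proposal for the exact rules; it tolerates surrounding prose and trailing commas before handing off to the strict parser):

```python
import json
import re

def lenient_loads(text: str):
    # Extract the first {...} block in case the model wrapped the JSON
    # in prose.
    match = re.search(r"\{.*\}", text, re.DOTALL)
    if match:
        text = match.group(0)
    # Strip trailing commas before closing braces/brackets (naive: this
    # ignores the possibility of commas inside string values).
    text = re.sub(r",\s*([}\]])", r"\1", text)
    return json.loads(text)
```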
Could try parsing it with a JS parser instead of JSON. Actual JS objects have more flexibility than JSON, or maybe use a JSON5/JSON6 parser? An example solution written in PowerShell (with Node.js as the JS parser):

```powershell
# Read the raw (possibly invalid) JSON text and hand it to Node's JS parser
$tmpJSONcontents = Get-Content $jsonFile -Raw
@"
let obj = $tmpJSONcontents
console.log( obj )
"@ | Out-File "temp.js"
node temp.js
```
|
Hi fellow contributors, I've been following the discussion on the "Error: Invalid JSON" error when using ChatGPT in the Auto-GPT project for chat completions. I'd like to suggest using LMQL (Large Model Query Language) to address this issue. LMQL is designed specifically for interacting with large language models (LLMs) like ChatGPT and combines the benefits of natural language prompting with the expressiveness of Python. By utilizing LMQL, we can create advanced, multi-part, and tool-augmented queries with just a few lines of code. This would allow us to construct more precise instructions for ChatGPT, helping it generate correct JSON responses and, consequently, resolving the "Error: Invalid JSON" issue we're currently experiencing. |
By implementing LMQL, the runtime optimizes the LLM decoding loop, which could lead to better performance and more accurate responses from ChatGPT. This could significantly improve the user experience for the chat completion feature in our Auto-GPT project. Let me know your thoughts on this approach, and if you have any questions or concerns, I'd be happy to help. |
What helped me was commenting out the following lines in .env:
|
Closing as duplicate of #1407 |
It doesn't work?