String deallocated before use #248
Maybe the JSON merge patch support in cJSON_Utils already does what you need. Otherwise, use `cJSON_DetachItemViaPointer` to detach the item from its source object before moving it. Also you can use `cJSON_AddItemToArray` for the array elements, since it does not take a key at all. I still think that it makes sense though to rearrange the order in which the values are accessed. It just never occurred to me before that the `string` parameter to `add_item_to_object` could be an alias of the item's own `string`.
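For cJSON versions that still contain the bug, one way to sidestep it is to copy the key yourself before handing the item over. A minimal sketch, with hypothetical names (`move_item`, `to`, `from`):

```c
#include <stdlib.h>
#include <string.h>
#include "cJSON.h"

/* Workaround sketch for affected versions: copy the key before
 * cJSON_AddItemToObject() gets the chance to free the item's own string. */
static cJSON_bool move_item(cJSON *to, cJSON *from, cJSON *item)
{
    char *key = strdup(item->string); /* copy while the original is still valid */
    if (key == NULL)
    {
        return 0;
    }
    cJSON_DetachItemViaPointer(from, item);
    cJSON_AddItemToObject(to, key, item); /* cJSON duplicates the key internally */
    free(key);
    return 1;
}
```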
Since it is a mistake that could easily happen and there is no warning about it, I consider this a bug. And because of the use-after-free it can potentially have security implications. I will fix this in a new bugfix release as soon as possible.
If the `string` property of the item that is added is an alias to the `string` parameter of `add_item_to_object`, and `constant` is false, `cJSON_strdup` would access the string after it has been freed. Thanks @hhallen for reporting this in DaveGamble#248.
The use-after-free is now fixed in 1.7.4. This can be closed as soon as your use case of merging multiple JSON objects is figured out.
Thanks for the tips in your first reply, and for fixing the issue so quickly. Unfortunately the 'merge patch' method doesn't seem to quite do what I need (the actual use case is a bit more complex than the example I gave). But the tips about cJSON_DetachItemViaPointer and cJSON_AddItemToArray were useful.
Maybe I'm doing things in a dumb way, but given the method I'm using, there's a problem with the function add_item_to_object(). What I want to do is to parse a number of separate .json files and merge their contents into a single cJSON object. The files may partially duplicate each other. Objects and array items that are already in the result that is being constructed should be skipped during the processing of a .json file, i.e. no duplicate objects or array elements are permitted in the result output.
So, at the top level each file contains one object (always with the same name, "stuff" in the simplified examples below), which in turn contains other objects, which contain arrays of objects. A simple example will illustrate what I want to achieve.
.json file 1:
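(The contents below are placeholders that follow the structure just described; "BAZ" is the only name that also appears later in this report.)

```json
{
  "stuff": {
    "FOO": [ { "name": "a" }, { "name": "b" } ],
    "BAZ": [ { "name": "x" } ]
  }
}
```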
.json file 2:
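(Again placeholder content, partially overlapping with file 1:)

```json
{
  "stuff": {
    "FOO": [ { "name": "b" }, { "name": "c" } ]
  }
}
```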
I want the result of the merge operation to be a cJSON object that represents the following .json:
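(Continuing the placeholder example, with the overlapping "FOO" element "b" appearing only once:)

```json
{
  "stuff": {
    "FOO": [ { "name": "a" }, { "name": "b" }, { "name": "c" } ],
    "BAZ": [ { "name": "x" } ]
  }
}
```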
First, I use cJSON_CreateObject() to start my result object. Then I process each .json file in turn, using cJSON_Parse() to create a cJSON object that represents the file. Then this cJSON object is examined, and if any "stuff" is found, it is moved to the result object, except such objects/array elements as are already present there.
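A rough sketch of that outer loop might look as follows; `merge_files()`, `merge_stuff()` and the variable names are illustrative, not taken from the original code:

```c
#include <stddef.h>
#include "cJSON.h"

/* Hypothetical helper discussed below: moves groups and array elements that
 * are not yet present in resultStuff out of newStuff. */
static void merge_stuff(cJSON *resultStuff, cJSON *newStuff);

/* Sketch of the outer loop: build one result object and fold each parsed
 * file's "stuff" into it. file_contents holds the text of each .json file. */
static cJSON *merge_files(const char *const *file_contents, size_t file_count)
{
    cJSON *result = cJSON_CreateObject();
    cJSON *resultStuff = cJSON_CreateObject();
    cJSON_AddItemToObject(result, "stuff", resultStuff);

    for (size_t i = 0; i < file_count; i++)
    {
        cJSON *parsed = cJSON_Parse(file_contents[i]);
        if (parsed == NULL)
        {
            continue; /* skip files that fail to parse */
        }
        cJSON *newStuff = cJSON_GetObjectItemCaseSensitive(parsed, "stuff");
        if (newStuff != NULL)
        {
            merge_stuff(resultStuff, newStuff);
        }
        cJSON_Delete(parsed); /* anything that was not detached is freed here */
    }
    return result;
}
```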
Here is a code snippet that shows how I handle the merge operation for an object at the level directly under "stuff" that is not yet present at all in the result:
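In essence it does the following. This is a reconstruction: `move_group()` would be called from something like the `merge_stuff()` helper sketched above for each group that is not yet present in the result, and the detach call is only one way of removing the item from `newStuff`. The decisive detail is that `newGroup->string` is passed as the key.

```c
#include "cJSON.h"

/* Reconstructed sketch: move one group (e.g. "BAZ") from the parsed file's
 * "stuff" object (newStuff) into the result's "stuff" object (resultStuff). */
static void move_group(cJSON *resultStuff, cJSON *newStuff, cJSON *newGroup)
{
    /* Take the group out of its source object so that ownership can move. */
    cJSON_DetachItemViaPointer(newStuff, newGroup);

    /* Re-add it to the result under its original key. This call triggers the
     * problem described below: add_item_to_object() frees newGroup->string
     * before copying the key argument, and the key argument *is*
     * newGroup->string. */
    cJSON_AddItemToObject(resultStuff, newGroup->string, newGroup);
}
```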
newGroup is a cJSON object from the current .json file, representing for example "BAZ" from the .json files above. I want to move that object from the cJSON object representing the .json file (newStuff) to the cJSON result object under construction (resultStuff).
The problem is that the internal function add_item_to_object() in cJSON assumes that its string argument remains valid throughout the function. But when I pass newGroup->string, that assumption no longer holds, because that string is deallocated before a copy of it is made and stored in the item that is being added to the target object. The result (at best) is that the item's string will contain garbage.
Perhaps the order of actions in add_item_to_object() could be rearranged, so that a copy of the string argument is made before any deallocations?
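To make the suggested rearrangement concrete, here is a small stand-alone illustration; `item_t` and the two `set_key_*` functions are simplified stand-ins for cJSON's internals, not the library's actual code:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Minimal stand-in for a cJSON item; just enough to show the ordering issue. */
typedef struct { char *string; } item_t;

/* Problematic ordering: the old key is freed before the new one is copied.
 * If `key` aliases item->string, strdup() then reads freed memory. */
static void set_key_buggy(item_t *item, const char *key)
{
    free(item->string);          /* may free the very buffer `key` points at */
    item->string = strdup(key);  /* use-after-free when `key` aliased the old string */
}

/* Rearranged ordering (what the proposal amounts to): copy while the argument
 * is still valid, free the old key afterwards, then install the copy. */
static void set_key_fixed(item_t *item, const char *key)
{
    char *copy = strdup(key);
    free(item->string);
    item->string = copy;
}

int main(void)
{
    item_t item = { strdup("FOO") };
    set_key_buggy(&item, "BAZ");        /* only safe here because "BAZ" does not alias item.string */
    set_key_fixed(&item, item.string);  /* safe even though the key is an alias */
    printf("%s\n", item.string);        /* prints "BAZ" */
    free(item.string);
    return 0;
}
```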
Of course, I could allocate memory and make a copy of the string in my own code, but that seems a bit silly. Maybe my method is naive and there is a better way of doing what I want that does not run into this kind of problem? If so, please let me know. I'm relatively new to cJSON.