
BUG: Inconsistent results using pd.json_normalize() on a generator object versus list (off by one) #35923

Closed
ldacey opened this issue Aug 27, 2020 · 6 comments · Fixed by #38698
Labels: Bug, IO JSON (read_json, to_json, json_normalize)

Comments

ldacey commented Aug 27, 2020

  • [x] I have checked that this issue has not already been reported.

  • [x] I have confirmed this bug exists on the latest version of pandas.


Code Sample, a copy-pastable example

Only one value is returned with this:

import pandas as pd

def gen():
    test = [{'created_at': '2020-08-24T09:30:05Z',
             '_id': '5f43889de6a98fd57afce7be'},
            {'created_at': '2020-08-23T11:16:09Z',
             '_id': '5f44b03799944352493d9317'},
           ]
    for val in test:
        yield val

results = gen()
pd.json_normalize(results)

(screenshot: resulting DataFrame with a single row)

This returns all values though:

results = gen()
list_ = [x for x in results]
pd.json_normalize(list_)

And so does this:

def list_():
    final = []
    test = [{'created_at': '2020-08-24T09:30:05Z',
             '_id': '5f43889de6a98fd57afce7be'},
            {'created_at': '2020-08-23T11:16:09Z',
             '_id': '5f44b03799944352493d9317'},
           ]
    for val in test:
        final.append(val)
    return final

results = list_()
pd.json_normalize(results)

(screenshot: resulting DataFrame with both rows)

Problem description

Using pd.json_normalize() on a generator always seems to reduce the expected results by 1. I first noticed this on a REST API where a column informed me that I should expect 901 results but I kept getting 900 results each time. When I tried to append the results to a list and normalize that, I got the expected 901 results.

Expected Output

Perhaps this is the expected output, but it caused me some headaches and it was not immediately obvious that I was missing one record. I would expect all of the examples above to produce the same 2-row DataFrame.

Output of pd.show_versions()

INSTALLED VERSIONS

commit : d9fff27
python : 3.8.5.final.0
python-bits : 64
OS : Linux
OS-release : 5.3.0-1028-azure
Version : #29~18.04.1-Ubuntu SMP Fri Jun 5 14:32:34 UTC 2020
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : en_US.UTF-8
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8

pandas : 1.1.0
numpy : 1.19.1
pytz : 2020.1
dateutil : 2.8.1
pip : 20.2.2
setuptools : 49.6.0.post20200814
Cython : 0.29.21
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : 2.8.5 (dt dec pq3 ext lo64)
jinja2 : 2.10.3
IPython : 7.17.0
pandas_datareader: None
bs4 : 4.9.1
bottleneck : 1.3.2
fsspec : 0.7.4
fastparquet : None
gcsfs : None
matplotlib : 3.3.0
numexpr : 2.7.1
odfpy : None
openpyxl : 3.0.4
pandas_gbq : None
pyarrow : 1.0.0
pytables : None
pyxlsb : None
s3fs : None
scipy : 1.5.2
sqlalchemy : 1.3.19
tables : 3.6.1
tabulate : 0.8.7
xarray : None
xlrd : 1.2.0
xlwt : None
numba : 0.48.0

@ldacey added labels: Bug, Needs Triage (issue that has not been reviewed by a pandas team member) on Aug 27, 2020
asishm (Contributor) commented Aug 27, 2020

I don't believe the documentation states that a generator is an accepted value for json_normalize:
https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.json_normalize.html

data : dict or list of dicts
    Unserialized JSON objects

That said, the issue is here:

https://github.com/pandas-dev/pandas/blob/v1.1.1/pandas/io/json/_normalize.py#L269-L279, specifically

if any([isinstance(x, dict) for x in y.values()] for y in data):

This consumes the first yielded result from the generator.
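The consumption is easy to reproduce outside pandas. A minimal sketch using the same check as the line quoted above:

```python
def gen():
    yield {"created_at": "2020-08-24T09:30:05Z", "_id": "a"}
    yield {"created_at": "2020-08-23T11:16:09Z", "_id": "b"}

data = gen()

# The check from _normalize.py: for the first record it builds
# [False, False] (the values are strings, not dicts), which is a
# non-empty -- and therefore truthy -- list, so any() returns True
# after pulling exactly one record off the generator.
consumed_check = any([isinstance(x, dict) for x in y.values()] for y in data)

remaining = list(data)
print(consumed_check)   # True
print(len(remaining))   # 1 -- the first record is gone
```

The record pulled by any() is never put back, which matches the off-by-one the issue describes.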

ldacey (Author) commented Aug 27, 2020

Ahh, I see. This was my first time passing a generator to json_normalize and it seemed like it worked since I had many records. Perhaps a warning or error could be raised if a generator is passed to this method. Shall I close this now?
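One way such a guard could look (check_not_generator is a hypothetical helper for illustration, not pandas API; pandas could equally raise instead of materializing):

```python
import inspect
import warnings

def check_not_generator(data):
    # Hypothetical guard along the lines suggested above: warn when a
    # generator is passed, then materialize it so no records are
    # silently consumed by later membership checks.
    if inspect.isgenerator(data):
        warnings.warn(
            "json_normalize received a generator; materializing it "
            "to avoid silently dropping records.",
            UserWarning,
        )
        return list(data)
    return data

records = check_not_generator(x for x in [{"a": 1}, {"a": 2}])
print(len(records))  # 2
```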

@jbrockmendel added the IO JSON (read_json, to_json, json_normalize) label and removed the Needs Triage label on Sep 3, 2020
jbrockmendel (Member) commented

@WillAyd is this intended to be supported?

WillAyd (Member) commented Sep 3, 2020

Losing the first record is a nasty surprise - would take a patch here for sure

@simonjayhawkins simonjayhawkins added this to the Contributions Welcome milestone Sep 9, 2020
avinashpancham (Contributor) commented Dec 20, 2020

@WillAyd did some debugging and found that the issue is caused by this line: https://github.com/pandas-dev/pandas/blob/master/pandas/io/json/_normalize.py#L270

The loop inside any is run on data. After the first iteration the condition is already satisfied and the loop stops, but by then it has already consumed the first element of data, which is why that element disappears. I can solve this issue, but I am torn between two approaches; what do you think?

  • Additional memory usage: consume the generator before this line, e.g. list(data), making data a list and solving the issue.

  • Additional runtime: skip the any check for generators and instead perform the check on each element as we iterate over it. This results in slower execution, since the check runs N times instead of once.

I am favouring the additional-runtime option, since I think a user will only provide a generator if there are memory constraints. But LMK if you see this differently.
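For reference, a minimal sketch of the first (additional-memory) approach; normalize_input is a hypothetical name for illustration, not the actual patch (that landed via #38698):

```python
from collections.abc import Iterable

def normalize_input(data):
    # Hypothetical pre-processing step: materialize any non-list
    # iterable of records up front, so the later any() check cannot
    # partially consume it.
    if isinstance(data, dict):
        return [data]
    if isinstance(data, Iterable) and not isinstance(data, list):
        return list(data)
    return data

def gen():
    yield {"_id": "a"}
    yield {"_id": "b"}

records = normalize_input(gen())
print(len(records))  # 2 -- both records survive
```

The trade-off is exactly as described above: the whole input is held in memory, which defeats part of the point of passing a generator.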

avinashpancham (Contributor) commented

take
