Handling "Chunk too big" Error in Python readuntil Function #8394
talhaanwarch asked this question in Q&A (unanswered)
I'm encountering a "Chunk too big" error in my Python code while using the readuntil function. This function reads data from the stream until a given separator is found. However, when processing large data streams, the cumulative size of the buffered chunks exceeds the _high_water threshold (131072 bytes), which triggers the error.
aiohttp/aiohttp/streams.py, line 340 at commit 0ba6cf2
Upon investigating the code, I understand that the error arises because the data accumulates in an internal buffer until it exceeds that threshold. Simply reading the data in smaller chunks and summing their lengths doesn't seem to solve the issue, since the cumulative size stays the same. Am I correct?
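For context, here is roughly how I am calling it. The URL and the newline separator are just placeholders for my actual stream:

```python
import asyncio

import aiohttp


async def main() -> None:
    async with aiohttp.ClientSession() as session:
        # Placeholder URL for an endpoint that streams large, delimited records.
        async with session.get("https://example.com/large-stream") as resp:
            while not resp.content.at_eof():
                # Raises ValueError("Chunk too big") when no b"\n" appears
                # before the internal buffer grows past _high_water.
                record = await resp.content.readuntil(b"\n")
                print(len(record))


asyncio.run(main())
```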
I'd like to know the recommended approach for handling this issue. How can I modify the readuntil function to process data in a streaming fashion without accumulating it all at once? Are there any best practices or alternative solutions to consider?
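One workaround I was considering is to do the buffering myself: read fixed-size chunks with iter_chunked and split on the separator in my own code, so the response's internal buffer never has to hold a whole record. I'm not sure whether this is the recommended pattern, though; iter_records and the chunk size below are my own placeholders:

```python
import asyncio
from typing import AsyncIterator

import aiohttp


async def iter_records(
    resp: aiohttp.ClientResponse, sep: bytes = b"\n"
) -> AsyncIterator[bytes]:
    """Yield separator-delimited records, buffering in user code instead of
    relying on readuntil and its _high_water limit."""
    buf = b""
    async for chunk in resp.content.iter_chunked(64 * 1024):
        buf += chunk
        # Split off every complete record; keep the trailing partial record.
        *records, buf = buf.split(sep)
        for record in records:
            yield record
    if buf:
        yield buf  # trailing data without a final separator


async def main() -> None:
    async with aiohttp.ClientSession() as session:
        async with session.get("https://example.com/large-stream") as resp:
            async for record in iter_records(resp):
                print(len(record))


asyncio.run(main())
```

This still keeps a whole record in memory in buf if a single record is very large, but at least the buffering is under my control rather than hitting aiohttp's internal limit. I also noticed that ClientSession accepts a read_bufsize argument, but I'm not sure whether raising it is the intended fix here.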
Any insights or guidance would be greatly appreciated.