[CH1. Content Transfer-Encoding] No end-of-stream while collecting chunks #512
-
In chapter 1 we are asked to implement content and transfer encoding. I'm handling gzip for content-encoding and chunked for transfer-encoding. My implementation seems to work fine for the pages I've tested against. For one page, though, decompression fails with an end-of-stream error while collecting the chunks. From my understanding, that means that somehow I'm not getting, or not adding, the end-of-stream indication to the byte array. I'm trying to understand why.
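For reference, a minimal sketch of the kind of dechunking loop involved; `collect_chunks` and `reader` are made-up names, and this is not the actual code from the repository:

```python
import gzip

def collect_chunks(reader):
    # reader: a file-like object over the response socket,
    # e.g. socket.makefile("rb"), positioned right after the headers.
    body = b""
    while True:
        size_line = reader.readline()            # e.g. b"1a2b\r\n"
        chunk_size = int(size_line.split(b";")[0], 16)
        if chunk_size == 0:                      # last chunk has size 0
            reader.readline()                    # final CRLF (assuming no trailers)
            break
        body += reader.read(chunk_size)          # the chunk payload itself
        reader.read(2)                           # CRLF terminating each chunk
    return body

# Once the chunks are collected, the gzip stream should end with its
# end-of-stream marker; if it doesn't, gzip.decompress raises an EOFError
# ("Compressed file ended before the end-of-stream marker was reached").
# text = gzip.decompress(collect_chunks(response)).decode("utf8")
```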
-
I couldn't find anything suspicious in your code. I also checked your repository and found that the error still happens even without the chunked encoding (tried with HTTP/1.0). I guess it's something to do with the website's gzip encoding and how it's decoded by `gzip.decompress`, but not sure...
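For reference, a minimal way to reproduce the gzip step on its own, without any chunked transfer-encoding in the picture; this is only a sketch, and `example.org` is a placeholder rather than the page in question:

```python
import gzip
import socket
import ssl

# Hypothetical repro: HTTP/1.0 avoids chunked transfer-encoding, so the
# body can be read raw until the server closes the connection.
host = "example.org"  # placeholder host
sock = ssl.create_default_context().wrap_socket(
    socket.create_connection((host, 443)), server_hostname=host)
request = ("GET / HTTP/1.0\r\n"
           "Host: {}\r\n"
           "Accept-Encoding: gzip\r\n"
           "\r\n").format(host)
sock.sendall(request.encode("ascii"))

raw = b""
while True:
    data = sock.recv(4096)
    if not data:
        break
    raw += data
sock.close()

headers, _, body = raw.partition(b"\r\n\r\n")
# Real code should check the Content-Encoding header in `headers` first.
# If the server sent gzip but the stream is truncated, this raises the
# EOFError about the missing end-of-stream marker.
print(len(gzip.decompress(body)))
```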
-
I don't know why, but `zlib.decompressobj(16 + zlib.MAX_WBITS).decompress(data)` seems to work while `zlib.decompress(data, 16 + zlib.MAX_WBITS)` doesn't 😅. (Found at urllib3)
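A small self-contained sketch of the difference between the two calls; the `decode_gzip` helper and the truncated-stream demo are purely illustrative:

```python
import gzip
import zlib

def decode_gzip(data):
    # 16 + zlib.MAX_WBITS tells zlib to expect a gzip header and trailer.
    #
    # One-shot API: zlib.decompress(data, 16 + zlib.MAX_WBITS) needs the
    # complete stream and raises zlib.error ("incomplete or truncated
    # stream") when the end-of-stream marker is missing.
    #
    # Streaming API: a decompress object returns whatever it could decode
    # even if the stream was cut short; .eof says whether the marker was seen.
    decomp = zlib.decompressobj(16 + zlib.MAX_WBITS)
    out = decomp.decompress(data)
    if not decomp.eof:
        print("warning: gzip stream did not terminate properly")
    return out

# Demo with a deliberately truncated gzip stream:
payload = gzip.compress(b"hello " * 100)
print(len(decode_gzip(payload)))       # complete stream: decodes, no warning
print(len(decode_gzip(payload[:-8])))  # trailer cut off: still decodes, warns
```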
-
Thank you for your help! I found an interesting thread and the reason that …