TextDecoderStream leaks a native decoder resource if its stream errors #13142
I played around with the Streams API a bit and came up with a fairly straightforward way to implement a leak-free `TextDecoderStream`.

Would you be interested in a PR for this? Or alternatively, it seems reasonable to me to suggest/request a change to the WHATWG Streams spec to give the controller of a `TransformStream` a way to know when its stream is cancelled or aborted. What do you think?
As I understand it, flush is only called when the stream closes normally; it's not called when the stream aborts: https://streams.spec.whatwg.org/#transform-stream-error
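For reference, a small illustrative snippet (not from the thread) showing that behaviour: `flush()` runs when the pipeline closes normally, but not when the readable side is cancelled.

```ts
// Illustrative only: flush() fires on a normal close, but cancelling the readable
// end of the pipeline never invokes it.
const upper = new TransformStream<string, string>({
  transform(chunk, controller) {
    controller.enqueue(chunk.toUpperCase());
  },
  flush() {
    console.log("flush: stream closed normally");
  },
});

const source = new ReadableStream<string>({
  start(controller) {
    controller.enqueue("hi");
    // intentionally never closed; the consumer cancels instead
  },
});

const reader = source.pipeThrough(upper).getReader();
console.log(await reader.read()); // { value: "HI", done: false }
await reader.cancel("stopping early"); // flush() is never called
```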
I'll bring it up with the Streams spec team.
A TextDecoderStream that works around the Deno bug: denoland/deno#13142
In case this helps anyone hitting this issue when breaking out of async iteration, I was able to work around it with a custom decoder transform stream:
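The commenter's code is not preserved in this extract. As a sketch of one way such a workaround can look (names are illustrative): a transform stream that only ever makes non-streaming `decode()` calls, buffering incomplete trailing UTF-8 bytes itself, so no native decoder resource is held open between chunks and nothing leaks when the consumer cancels.

```ts
// Illustrative sketch, not the commenter's original code. Incomplete trailing UTF-8
// bytes are carried over to the next chunk so that every decode() call can be
// non-streaming, which never leaves a native decoder resource open.
function incompleteTailLength(bytes: Uint8Array): number {
  // Look back over at most 3 bytes for the lead byte of a multi-byte sequence.
  for (let i = 1; i <= 3 && i <= bytes.length; i++) {
    const b = bytes[bytes.length - i];
    if (b >= 0xc0) {
      // Lead byte found; how long should this sequence be?
      const expected = b >= 0xf0 ? 4 : b >= 0xe0 ? 3 : 2;
      return i < expected ? i : 0; // incomplete if not all bytes are present yet
    }
    if (b < 0x80) return 0; // ASCII byte: nothing incomplete at the end
  }
  return 0;
}

class BufferedUtf8DecoderStream extends TransformStream<Uint8Array, string> {
  constructor() {
    const decoder = new TextDecoder();
    let tail = new Uint8Array(0);
    super({
      transform(chunk, controller) {
        const bytes = new Uint8Array(tail.length + chunk.length);
        bytes.set(tail);
        bytes.set(chunk, tail.length);
        const keep = incompleteTailLength(bytes);
        tail = bytes.slice(bytes.length - keep);
        const text = decoder.decode(bytes.subarray(0, bytes.length - keep));
        if (text) controller.enqueue(text);
      },
      flush(controller) {
        // Normal close: decode whatever is left (invalid trailing bytes become
        // replacement characters, matching TextDecoder's default behaviour).
        if (tail.length > 0) controller.enqueue(decoder.decode(tail));
      },
    });
  }
}
```

Used as, e.g., `res.body!.pipeThrough(new BufferedUtf8DecoderStream())`, breaking out of a `for await` loop over the result does not trip the resource sanitizer, because the transformer never held a streaming native decoder in the first place.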
Alternative solution with manual (non-streaming) decoding, from duplicate #19074:

```ts
Deno.test("working alternative", async () => {
  const res = await fetch(
    "https://deno.land/std@0.186.0/json/testdata/test.jsonl",
  );
  const textDecoder = new TextDecoder();
  const reader = res.body!.getReader();
  const b = await reader.read();
  // A non-streaming decode() does not hold a native decoder resource open.
  const t = textDecoder.decode(b.value!);
  await reader.cancel();
});
```
…llation (denoland#21074)

This PR uses the new `cancel` method of `TransformStream` to properly clean up the internal `TextDecoder` used in `TextDecoderStream` if the stream is cancelled.

Fixes denoland#13142

Co-authored-by: Bartek Iwańczuk <biwanczuk@gmail.com>
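Roughly the shape this fix relies on, as a sketch rather than Deno's actual internal code: the Streams spec's newer transformer `cancel` hook lets the decoder be released when the stream is cancelled or aborted, not only via `flush()` on a normal close.

```ts
// Sketch only (not Deno's internals): a TextDecoderStream-like transform whose
// transformer also defines cancel(), so the native decoder is released even when
// the stream never closes normally.
function decoderStream(label = "utf-8"): TransformStream<Uint8Array, string> {
  const decoder = new TextDecoder(label);
  return new TransformStream<Uint8Array, string>({
    transform(chunk, controller) {
      controller.enqueue(decoder.decode(chunk, { stream: true }));
    },
    flush(controller) {
      // Normal close: the final non-streaming decode() releases the resource.
      controller.enqueue(decoder.decode());
    },
    cancel(_reason) {
      // Cancelled or aborted: still make the final non-streaming call.
      decoder.decode();
    },
  });
}
```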
I've come across what seems to be a bug in `TextDecoderStream` which allows it to leak the native decoder used by its `TextDecoder`.

I've made a test module to demonstrate the issue (repro code is at the bottom of that page, under the output): https://gist.github.com/h4l/0199ab7cc24dd13536e01c5ea98b3ae7

The 3 tests trigger the test runner's resource leak detection (a really nice feature!).
What seems to be happening is:

- `TextDecoderStream` uses a `TextDecoder` to decode its chunks.
- `TextDecoder.decode()` creates a native decoder resource, which it holds open when used in streaming mode. It closes the resource when a non-streaming `decode()` call is made.
- `TextDecoderStream` makes streaming `decode()` calls in its `transform()` method, and makes a final non-streaming `decode()` call in its `flush()` method.
- When the stream errors or is cancelled rather than closing normally, the `flush()` method of any `Transformer` in a `TransformStream` is not called, so in the case of `TextDecoderStream` it has no way to know it's no longer in use, and keeps its decoder open.
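A minimal repro along these lines (illustrative; the linked gist has the original test cases) fails under Deno's default resource sanitizer, because cancelling the stream never triggers `flush()`:

```ts
Deno.test("TextDecoderStream leaks its decoder when the stream is cancelled", async () => {
  const bytes = new ReadableStream<Uint8Array>({
    start(controller) {
      controller.enqueue(new TextEncoder().encode("hello"));
      // intentionally left open; the consumer gives up below
    },
  });
  const text = bytes.pipeThrough(new TextDecoderStream());
  for await (const _chunk of text) {
    break; // breaking out cancels the stream instead of closing it normally
  }
});
```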
I was looking through the streams spec when I encountered the leak (before I worked out the cause) to try to work out if I was misusing the streams in some way. It seems to me like an oversight in the spec that `Transformer`s have no way to be told to close/clean up when a stream doesn't close cleanly. `flush()` is only called when the stream closes normally, if I've not missed something, and there are no other lifecycle methods available for `Transformer`s.
I can't see any idiomatic way to tell the `Transformer` to close, but one approach could be to wrap the readable and writable streams of the `TransformStream` to watch for `close`/`cancel`/`abort` calls.
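A rough sketch of that wrapping idea (illustrative, not a proposed implementation; only the readable/`cancel` path is shown, and a complete version would also watch the writable's `abort()`):

```ts
// Compose a TransformStream but expose a wrapped readable whose cancel() also
// releases the decoder, since the transformer itself gets no cancellation callback.
class CleanupTextDecoderStream {
  readonly readable: ReadableStream<string>;
  readonly writable: WritableStream<Uint8Array>;

  constructor(label = "utf-8") {
    const decoder = new TextDecoder(label);
    const inner = new TransformStream<Uint8Array, string>({
      transform(chunk, controller) {
        controller.enqueue(decoder.decode(chunk, { stream: true }));
      },
      flush(controller) {
        controller.enqueue(decoder.decode()); // normal close releases the decoder
      },
    });
    this.writable = inner.writable;

    const reader = inner.readable.getReader();
    this.readable = new ReadableStream<string>({
      async pull(controller) {
        const { done, value } = await reader.read();
        if (done) controller.close();
        else controller.enqueue(value);
      },
      async cancel(reason) {
        decoder.decode(); // cancelled: release the native decoder resource
        await reader.cancel(reason);
      },
    });
  }
}
```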
I've not made any previous contributions, but I'd be happy to help with a PR to fix this if it'd be useful.