# stream.finished behaviour change #29699
/cc @mcollina
I'm unsure what the appropriate course of action here is. I'd consider emitting `'close'` before `'end'`/`'finish'` a bug in the stream. That sounds wrong in general though, and it is probably already broken in other, more subtle ways. If our strategy is to try to fix the ecosystem, I'm more than happy to try and help e.g. feross/multistream sort out these kinds of bugs/issues (e.g. with fix PRs).
The problem is that they call `destroy()` immediately after `push()`: https://github.com/feross/multistream/blob/master/index.js#L98. Also they are using a custom `destroy` implementation.
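To illustrate, here is a sketch of the pattern being described (not multistream's actual code): a custom `destroy` that emits `'close'` synchronously can beat the still-pending `'end'` event.

```js
const { Readable } = require('stream')

class EagerStream extends Readable {
  _read () {}

  destroy () {
    // A custom destroy in the style described above: 'close' is emitted
    // immediately, even though the 'end' queued by push(null) has not
    // been emitted yet.
    this.emit('close')
    return this
  }
}

const s = new EagerStream()
s.push('data')
s.push(null) // EOF queued; 'end' only fires once the data is consumed
s.destroy()  // 'close' now precedes 'end'; this is the ordering at issue
```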
@mcollina This goes back a bit to the recurring problem of code going around the streams framework, which is what is happening in this particular case.
I would suggest we update the documentation to more explicitly discourage this, as it very easily breaks stream invariants and has consequences for the ecosystem.
Custom destroy methods are everywhere in the ecosystem though, including ones using this pattern. If we are going down the path of only supporting the latest streams semantics, we should remove all the legacy handling from the function, i.e. the request stream stuff: https://github.com/nodejs/node/blob/master/lib/internal/streams/end-of-stream.js#L95
I do think we should support (but discourage) them within reason. We can't support every possible scenario, especially not when it comes to what we otherwise consider invalid. Emitting `'close'` before `'end'`/`'finish'` is one of those cases.
I think we need a more balanced approach than that though. That legacy stuff does not in any way hinder otherwise "correct" behaviour.
My first thought is that breaking the stream ecosystem is not good, and we shouldn't do it. But on the specific thing above... I don't know, I might agree with @ronag. I ran into similar issues in the TLS test cases when updating to TLS 1.3, because it changed the timing of the underlying stream activity. As I understand what is happening in https://github.com/feross/multistream/blob/d63508c55e87d55a2ed6445ef79ba1c68881caad/index.js#L98-L99, push is async, so it's not at all guaranteed that the push will complete if destroy() is called. The essential quality of destroy() is that it does not wait; it doesn't participate in a deterministic event ordering, it's just "as soon as possible". Even though its ordering isn't defined, with specific stream implementations its behaviour will be predictable enough that some code ends up relying on it, which is problematic.
I think we should revert that change.
Reverting the change will make other bugs possible such as:

```js
const stream = require('stream')
const assert = require('assert')

const src = new stream.Readable()
src.push('a')
src.push(null)

let data = ''
const dst = new stream.Writable({
  write (chunk, encoding, cb) {
    setTimeout(() => {
      data += chunk
      cb()
    }, 1000)
  }
})

stream.pipeline(
  src,
  dst,
  err => {
    if (!err) {
      assert.strictEqual(data, 'a') // fails
    }
  }
)

process.nextTick(() => {
  dst.destroy()
})
```

We might already have such bugs in the ecosystem... but nobody notices it.
The behaviour of the example code above does not surprise me; I regard whether that data was written or not as undefined in the presence of a `destroy()` call. The behaviour can be made defined if `destroy()` is passed an error.
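For example (my sketch of that suggestion, reusing `dst` from the snippet above), passing an error to `destroy()` makes the outcome deterministic, since the `pipeline` callback then always receives that error:

```js
process.nextTick(() => {
  // With an explicit error, err is non-null on every run, so the
  // ambiguous "no error, but was the data written?" branch disappears.
  dst.destroy(new Error('aborted by caller'))
})
```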
Another example:

```js
const http = require('http')
const fs = require('fs')
const stream = require('stream')

// getPath() is a placeholder for mapping the request to a file path.
http.createServer((req, res) => {
  stream.pipeline(
    fs.createReadStream(getPath(req)),
    res, // res does not emit 'error' if prematurely closed, instead it emits 'aborted'.
    err => {
      if (!err) {
        // Everything ok? Probably... but maybe not... depending on timing
      }
    }
  )
}).listen(0)
```

Or the corresponding example for the client side (which is probably worse).
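A plausible shape for that client-side variant (my sketch; the thread doesn't spell it out) would be an HTTP response piped to disk, where an aborted response historically emits `'aborted'` rather than `'error'`:

```js
const http = require('http')
const fs = require('fs')
const stream = require('stream')

http.get('http://localhost:8000/some-file', res => {
  stream.pipeline(
    res, // like the server side, may emit 'aborted' instead of 'error'
    fs.createWriteStream('/tmp/some-file'),
    err => {
      if (!err) {
        // Did the whole body really arrive? Depending on timing, a
        // prematurely closed response could still land in this branch.
      }
    }
  )
})
```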
I can definitely see the argument @ronag is making here, but I'm also super sensitive to breakage in the streams code. That is to say, I prefer the new behavior but don't want to break stuff. It's a bit wonky, but perhaps reverting the change and then adding it back behind a configuration option on the stream object would be a good approach. New implementations can make use of the new behavior by switching the option on; existing implementations can continue as they are. Later, we can choose to deprecate the existing behavior and eventually switch the default. It takes longer, but avoids breakage.
@jasnell are you suggesting per-stream-instance config, or config that is global to the Node.js process?
Per stream instance.
Alternatively, just point people to `pump`. I.e. @ronag's example above works with pump, but it's also doing some funky things to keep backwards compatibility.
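For reference (a sketch, with `src` and `dst` as in the earlier example), the `pump` equivalent looks like this:

```js
const pump = require('pump')

// pump is the userland module that stream.pipeline was frozen from; it
// carries extra compatibility shims for streams that emit 'close' early.
pump(src, dst, err => {
  // per the comment above, the earlier example behaves as expected here
})
```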
I think an option, e.g. `strict`, would be the way to go here.
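Something like the following, where the option name `strict` and its placement are purely illustrative and not a real API:

```js
const stream = require('stream')

// Hypothetical opt-in: only streams constructed with the flag would get
// the new premature-close detection; everything else keeps old behaviour.
const r = new stream.Readable({
  read () {},
  strict: true // not an actual option; a sketch of the proposal only
})
```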
I prefer this option. Also, as I said, I'm more than happy to help (as time allows) ecosystem modules resolve any related issues, e.g. multistream: feross/multistream#47.
We've consistently not pushed people off the Node.js API and onto ecosystem modules with versioned APIs. Arguably, we've done the exact opposite. For example, https://www.npmjs.com/package/readable-stream is not mentioned anywhere in the Node.js docs, even though many heavy-duty streams users suggest that everyone should be using it instead of Node's streams API, and we froze https://www.npmjs.com/package/pump into Node core as https://nodejs.org/api/all.html#stream_stream_pipeline_streams_callback, so we can't change it anymore without breaking our users. So, breaking the Node core API at this late point and telling people they should have been using stable APIs from npm packages like pump probably won't go over so well. The "strict" option just means that we now have to support and test yet another streams mode in addition to 2 and 3, with all the burden that implies. Would strict be streams 3a, or 4? The odd corner cases above are pretty compelling examples of issues with the current behaviour, but we've backed ourselves into a corner here by not developing a Node.js-maintained, npm-published set of APIs that can be versioned independently from core. The streams WG members will weigh in with more experience than me, but I don't think breaking the ecosystem and helping them adjust is much of a plan. There is a long tail of ecosystem modules, many not so actively maintained. We haven't had much success even with Buffer.from/alloc, which is a trivial change to detect the need for and to fix; this change is more subtle. Also, it's not clear how packages can adapt to working across multiple Node.js versions if they want to support all the LTS release lines, as I hope they would.
Good points. I really don't know what we can do here then... Leaving the ecosystem subtly but permanently broken doesn't sound like a good path either... Fix it in `readable-stream`?
I feel there is a fundamental problem here in terms of general strategy, not just this specific issue. We have bugs and incorrect, unfortunate behaviour that we would like to address, but we cannot, due to incorrect, unfortunate, or outdated assumptions, and even bugs, in the ecosystem. Often these issues cause very subtle and intermittent bugs that are difficult to spot and whose cause is hard to find. Just because these bugs don't throw directly in the user's face doesn't mean they don't exist. I would personally very much like to find some form of long-term sustainable solution. Otherwise we're stuck in a situation where it will never be possible for users to write "correct" code, only code that in different ways tries to compromise between different broken behaviours. If the only way forward is e.g. streams4 with proper semver that lives on npm, I would personally much prefer that over the status quo.
I'm not familiar with the discussion around this. Maybe something to reconsider?
Solution proposal for the original issue: #29724
The suggested fix is to stop treating `'close'` as a premature close when the stream has actually already ended or finished.
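In rough terms (a sketch of the idea, not the actual patch): when `'close'` is observed, only report a premature close if the stream had not already reached `'end'`/`'finish'`.

```js
// Hypothetical helper mirroring the described check. It uses the public
// readableEnded/writableFinished accessors, which may differ from what
// the real patch inspects internally.
function isPrematureClose (s) {
  const readableDone = !s.readable || s.readableEnded     // 'end' was reached
  const writableDone = !s.writable || s.writableFinished  // 'finish' was reached
  return !(readableDone && writableDone)
}
```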
@Trott: I believe this can be closed for now, since the referenced commits have been reverted.
Some broken streams might emit 'close' before 'end' or 'finish'. However, if they have actually been ended or finished, this should not result in a premature close error. Fixes: nodejs#29699
I believe the topics of this issue have been resolved.
Some recent changes to `stream.finished` landed in Node master that change the behaviour of the function. Before (in Node 12), if a stream was about to emit end/finish but hadn't yet and then emitted close, it would not be treated as an error. In master this triggers a premature close.
It's due to the changes here: https://github.com/nodejs/node/blob/master/lib/internal/streams/end-of-stream.js#L86
This breaks modules that emit `'close'` immediately after calling `stream.end()` / `push(null)`, for example the popular multistream module: https://github.com/feross/multistream. A simple test case looks like this:
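A minimal sketch along those lines (assumed, not necessarily the original snippet):

```js
const stream = require('stream')

const r = new stream.Readable({ read () {} })

stream.finished(r, err => {
  // Node 12: no error; master: premature close error, per the report above.
  console.log(err)
})

r.push(null)    // EOF queued; 'end' will be emitted asynchronously
r.emit('close') // 'close' arrives before 'end' has fired
r.resume()      // consume so that 'end' can actually be emitted
```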
We might want to revert those changes or at least document them.