'continue' event for Readable streams #111
Currently Writables have a `'drain'` event that fires when more writes can be done. It would be nice to have something like this for Readables when `push()` returns `false`. It's not just a matter of emitting on every call to `_read()`, since `_read()` can be called when the `highWaterMark` hasn't been reached yet. It's especially tricky for Transforms because you'd have to manually override `_read()` in your subclass, which is not fun.

I propose a `'continue'` event for Readables.
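To make the gap concrete, here is a minimal sketch (all names hypothetical, not from the issue) of a push-based Readable: once `push()` returns `false` there is no standard event telling the producer when to start pushing again, the way `'drain'` does for Writables.

```js
const { Readable } = require('stream');

// A hypothetical push-based source: data arrives from some external
// producer (a socket, a parser, etc.) rather than being pulled in _read().
class MySource extends Readable {
  _read() {} // no-op: data is pushed in from outside

  onData(chunk) {
    if (!this.push(chunk)) {
      // Buffer is full -- the external producer should be paused here.
      // But unlike Writable's 'drain', there is no core event that says
      // "the buffer has emptied below highWaterMark, push again".
    }
  }
}
```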
FWIW here's what I'm currently having to do (to avoid having to touch everywhere I do `push()`):

```js
var Transform = require('stream').Transform;

function MyStream() {
  // ...
  this._needContinue = false;
  // ...
}
require('util').inherits(MyStream, Transform);
// ...
MyStream.prototype.__read = Transform.prototype._read;
MyStream.prototype._read = function(n) {
  if (this._needContinue) {
    this._needContinue = false;
    this.emit('continue');
  }
  return this.__read(n);
};
MyStream.prototype.__push = Transform.prototype.push;
MyStream.prototype.push = function(chunk, encoding) {
  var ret = this.__push(chunk, encoding);
  this._needContinue = (ret === false);
  return ret;
};
```
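For context, a minimal sketch of how a producer might consume this workaround's `'continue'` event (the queue-draining helper is hypothetical, not from the issue):

```js
const stream = new MyStream();
const queue = ['a', 'b', 'c']; // pending chunks from some producer

function pushNext() {
  while (queue.length) {
    if (!stream.push(queue.shift())) {
      // Buffer is full: resume only when the workaround signals 'continue'.
      stream.once('continue', pushNext);
      return;
    }
  }
}
pushNext();
```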
Hmm, so the problem is that you want to know when a stream has started to read, whether this is the initial read or once it starts reading again after some backpressure?
This event could be internalised without needing an addition to Transform streams. What is the exact use-case that requires you to know these internal details? Normally if a stream is being consumed, the consumer doesn't concern itself with anything other than receiving a chunk of data or knowing that the source has ended.
My use-case is this:

This is solved for Writable streams via the `'drain'` event.
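For comparison, a quick sketch of the documented `'drain'` pattern for Writables that the comment is alluding to (the `writeAll` helper is hypothetical):

```js
function writeAll(writable, chunks) {
  for (let i = 0; i < chunks.length; i++) {
    if (!writable.write(chunks[i])) {
      // Backpressure: stop writing and resume once the buffer drains.
      writable.once('drain', () => writeAll(writable, chunks.slice(i + 1)));
      return;
    }
  }
}
```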
I'm just trying to think if there are any other use-cases where this would be useful. The actual addition would be simple, but I'm not sure how keen we are to add new events like this to streams. @iojs/streams any opinions?
Well, the workaround I posted does work, but I'm not sure what (if any measurable) performance overhead it may have. Also, I'd rather use a standard event name if possible.
Nice 👍 we were talking about `drain` for readables. We ended up with something like:

```js
function drainReadable(readable, cb) {
  readable.on('readable', onReadable);
  function onReadable() {
    // note: the actual property name is `highWaterMark`, not `hwm`
    if (this._readableState.length < this._readableState.highWaterMark) {
      readable.removeListener('readable', onReadable);
      cb();
    }
  }
}
```
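A hypothetical usage of that helper, pausing a producer until the readable's buffer empties below its `highWaterMark` (`resumeProducer` is a stand-in, not from the issue):

```js
if (!readable.push(chunk)) {
  drainReadable(readable, () => {
    // Buffer has dropped below highWaterMark: safe to push again.
    resumeProducer(); // hypothetical producer-side resume
  });
}
```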
@mscdex Because you don't check the hwm, your logic is incorrect. You could `push(2mb)` then `read(2)` and you would emit `'continue'` while the buffer is still nearly full.
@Raynos You mean that `_read()` would get called after the `read(2)`? AFAIK `_read()` is only called once the buffered length would drop below the `highWaterMark`.
@mscdex Oh interesting, you make a good point! I think your trick is correct.
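To make the exchange concrete, a small script along these lines (assuming stock `Readable` semantics; not from the thread) shows that `_read()` is not re-invoked until the buffered length would fall below the `highWaterMark`:

```js
const { Readable } = require('stream');

const r = new Readable({
  highWaterMark: 4,
  read() { console.log('_read called'); }
});

console.log(r.push(Buffer.alloc(16))); // false -- 16 bytes buffered, hwm is 4
console.log(r.read(2));                // returns 2 bytes; 14 remain buffered
// No '_read called' is logged here: 14 >= 4, so the machinery waits,
// which is why emitting 'continue' from _read() works as a drain signal.
```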
Are there modules that already use the `continue` event? And will there be performance issues (we'd need a benchmark)?
Probably can't call it `continue`.
@calvinmetcalf That's what I've been using for my modules.
`http` might use that event.
@calvinmetcalf Well, it was only a suggestion and I had to pick something at the time. I can change my module(s). This is one reason why I wanted the streams WG to discuss it :-)
👍 for having this in. Probably not with the `continue` name, though. @mscdex can you please define the state machine of this event a little bit better? As far as I understand the use case, this is when you are wrapping something else, and you need to know when to call `push()` again.
@mcollina I'm not really wrapping anything. I have a custom Readable stream implementation. When I call `push()` and it returns `false`, I need to know when it's ok to start pushing again.

As I previously described, my use case is for so-called protocol streams, where I have one stream for parsing and writing for a particular protocol. Writing to the writable side is for incoming protocol data to be parsed, and writing to the readable side is for outgoing protocol data (e.g. a user sends a request -- it gets encoded as per the protocol and the raw bytes are pushed to the readable side).
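A rough sketch of the kind of protocol stream being described, under the assumption that it is a Duplex whose readable side carries encoded outgoing bytes (all names here are hypothetical, not from the issue):

```js
const { Duplex } = require('stream');

class ProtocolStream extends Duplex {
  _write(chunk, encoding, callback) {
    this.parse(chunk); // decode incoming wire data
    callback();
  }

  _read() {} // outgoing bytes are pushed from sendRequest(), not pulled

  parse(chunk) { /* decode incoming protocol frames (stub) */ }

  encode(request) { return Buffer.from(JSON.stringify(request)); }

  // Called by application code; encodes a request and pushes the raw bytes.
  sendRequest(request) {
    if (!this.push(this.encode(request))) {
      // Readable buffer is full. This is exactly the spot where a
      // 'continue' event would tell us when to resume pushing.
    }
  }
}
```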
I usually achieved the same by wrapping a Writable stream, see https://github.com/mqttjs/mqtt-packet/blob/master/writeToStream.js. But we should definitely support your approach as well, as it enables more APIs. Any other name for the event? BTW, I'm 👍 on adding it.
@mcollina I used to do things like that in the past, but the reason I am more inclined to write protocol streams these days is that they bring simplicity and flexibility (you can do stuff like `socket.pipe(protocol).pipe(socket)`). I don't have any (other) event name suggestions at the moment.
@mscdex Because of the missing event, I moved from your style to the "wrap" style, not exposing a Readable but relying only on the Writable. I think we can just do a PR and test it with the new magic flag @calvinmetcalf put in.
@mscdex this has been floating around here for some time; should we get a PR into core? I'm happy to add it.
@mcollina I'm still all for it, but we need to come up with a suitable name, I guess.
@mscdex how about
I'm not particularly picky about the name, just as long as it's clear and unique enough, so that would work for me.
Hi @mscdex and @mcollina, any progress on this? I have a custom Transform stream and I'd also like to know the proper way to handle when `push()` returns `false`.

Context: this Transform stream is parsing a text file formatted as ndjson:

```js
const { Transform } = require('stream')

class MyStream extends Transform {
  constructor (opts) {
    super(opts)
    // initialize an empty buffer
    this._buffer = Buffer.alloc(0)
  }

  _transform (data, encoding, callback) {
    // append all incoming data to the internal buffer
    this._buffer = Buffer.concat([this._buffer, data])
    let finished = false
    while (!finished) {
      // upstream data is ndjson with a flat structure
      // here's an easy way to parse each { json } chunk
      const start = this._buffer.indexOf(123) // left brace {
      const end = this._buffer.indexOf(125) // right brace }
      // check if there are any complete { json } chunks in the internal buffer
      if (end < 0) {
        // no more complete { json } chunks in the internal buffer
        finished = true
      } else {
        // isolate the { json } chunk in the buffer
        const chunk = this._buffer.slice(start, end + 1)
        // push the chunk downstream while capturing the backpressure flag
        const backpressure = !this.push(chunk, encoding)
        // remove the isolated { json } chunk from the internal buffer
        this._buffer = this._buffer.slice(end + 1)
        // handle downstream backpressure
        if (backpressure) {
          // now what??? <----------------------------------------------------
        }
      }
    }
    // let upstream know we are ready for more data
    callback()
  }
}
```

How should I handle downstream backpressure? There still might be more complete `{ json }` chunks sitting in the internal buffer when `push()` returns `false`.
Basically, any advice on this situation would be helpful. Thank you for providing your time and expertise to this project.
Transform should handle all of this for you automatically; you should not need to worry at all. Why isn't it doing that? Can you open a new issue with a clear example of why Transform is not handling backpressure correctly?
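To illustrate the point, a minimal sketch (not from the thread, all names hypothetical): inside `_transform` it is generally safe to ignore `push()`'s return value, because the machinery won't request the next chunk until the readable side is being consumed. With small `highWaterMark`s, the throttling is visible:

```js
const { Readable, Transform, Writable } = require('stream');

let i = 0;
const source = new Readable({
  read() { this.push(i < 100 ? `line ${i++}\n` : null); }
});

const upper = new Transform({
  highWaterMark: 10, // small buffers so backpressure kicks in quickly
  transform(chunk, encoding, callback) {
    console.log('_transform called');
    this.push(chunk.toString().toUpperCase()); // return value safely ignored
    callback();
  }
});

const slow = new Writable({
  highWaterMark: 1,
  write(chunk, encoding, callback) {
    setTimeout(callback, 50); // a slow consumer: ack each chunk after 50 ms
  }
});

// After an initial burst fills the buffers, the '_transform called' logs
// are paced by the slow consumer, not by the source.
source.pipe(upper).pipe(slow);
```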
Thank you @mcollina for the quick reply, and also thank you for making me dig deeper. Here are the questions I was originally asking.

Questions (with answers)

1. With a custom Transform stream, what happens to the data when `push()` returns `false`?
I think this is great feedback to improve our docs. Would you like to integrate https://github.com/nodejs/nodejs.org/blob/master/locale/en/docs/guides/backpressuring-in-streams.md with your findings, so it is clearer? Tag me in the PR.
@alextaujenis Sorry for the late reply, but your TL;DR description of the 'correct implementation' of the Transform stream does not seem to be a 'well-behaving' one, since that kind of Transform stream will hold 1000x chunks in memory, creating big overhead for a Node.js process, while Node.js streams are designed precisely to avoid such a situation. It effectively nullifies the stream's purpose.
@mcollina what was the final resolution for this issue? I'm still trying to understand how I should handle it when `push()` returns `false`.