doc: update writable.write return value
stream.md is updated to explain the return value of
writable.write(chunk) precisely.

PR-URL: #9468
Fixes: #9247
Reviewed-By: James M Snell <jasnell@gmail.com>
Reviewed-By: Ron Korving <ron@ronkorving.nl>
Reviewed-By: Roman Reiss <me@silverwind.io>
Tanuja-Sawant authored and silverwind committed Dec 21, 2016
1 parent 489970c commit f347dad
10 changes: 6 additions & 4 deletions doc/api/stream.md
@@ -440,10 +440,12 @@ occurs, the `callback` *may or may not* be called with the error as its
first argument. To reliably detect write errors, add a listener for the
`'error'` event.

The return value indicates whether the written `chunk` was buffered internally
and the buffer has exceeded the `highWaterMark` configured when the stream was
created. If `false` is returned, further attempts to write data to the stream
should be paused until the [`'drain'`][] event is emitted.
The return value is `true` if, after admitting `chunk`, the internal buffer
does not exceed the `highWaterMark` configured when the stream was created.
If `false` is returned, further attempts to write data to the stream should
stop until the [`'drain'`][] event is emitted. However, the `false` return
value is only advisory: the writable stream will unconditionally accept and
buffer `chunk` even if it has not been allowed to drain.

@mcollina

mcollina Jan 3, 2017

Member

Defining it as advisory is not a good choice of words. IMHO "advisory" is a positive term, as in "you can avoid doing this; it has no issues". However, if you keep writing when it returns `false`, you are essentially creating a memory leak in the worst case, and in the nicer case you are creating unneeded pressure on the GC.

@binki

binki Jan 3, 2017

Contributor

@mcollina It seems to me that a memory leak would only be created if the stream being written to had permanently stopped processing input. However, this would cause a memory leak even if the input buffer had not exceeded highWaterMark.

E.g., you have a stream with a highWaterMark of 8192. In the first scenario, you write 4096 bytes to it and call end(), but the stream never empties out its input buffer. In the second scenario, you write 8192 bytes to it and call end() and it never empties out its input buffer. You have the same memory leak in both scenarios even though write() returned true in the first scenario and false in the second scenario.

Writing past the end of the highWaterMark and ignoring the advice will cause unnecessary GC churning and even (for sufficiently large amounts of data) crash the process by exhausting memory. It is thus a bad idea in general. So adding some warning like that is probably a good idea. But the original text never had such a warning.

The point of this documentation change was to clarify what actually happens when you write past highWaterMark: the receiving stream will still buffer the data—it won’t discard it or truncate it. The questions I ask myself when reading the original documentation are like “But what if half of my chunk would go past the highWaterMark? How can I figure out what was and wasn’t buffered? Why don’t people re-call writable.write() with remaining data when it returns false?”. With the new text, I think the behavior is clearer.

Would you be willing to open a separate issue for your concern?

@mcollina

mcollina Jan 4, 2017

Member

Writing past the end of the highWaterMark and ignoring the advice will cause unnecessary GC churning and even (for sufficiently large amounts of data) crash the process by exhausting memory. It is thus a bad idea in general. So adding some warning like that is probably a good idea. But the original text never had such a warning.

I disagree. The original text just said that a user should handle that condition, while in the new text a user should handle that condition, but it's advisory only.

Most of the problems with this sentence come with a Transform stream, which inherits from Writable. In a Transform, you can easily buffer up things indefinitely unless you pipe() it or add a 'data' or 'readable' event handler.
As most users u

The questions I ask myself when reading the original documentation are like “But what if half of my chunk would go past the highWaterMark? How can I figure out what was and wasn’t buffered? Why don’t people re-call writable.write() with remaining data when it returns false?”. With the new text, I think the behavior is clearer.

I agree with you that the behavior is now clearer, but it is pointing users in the wrong direction. The data is buffered to avoid losing it, but a user should not ignore `write()` returning `false` whenever possible. In some cases it's not possible to avoid, and those need to be watched with care.

I'll send a PR.


A Writable stream in object mode will always ignore the `encoding` argument.


2 comments on commit f347dad

Contributor

@binki binki commented on f347dad Dec 22, 2016


"internal buffer does not exceed" should be "internal buffer is less than". See #9247

@silverwind
Copy link
Contributor


@binki care to file a pull request for that?
