doc: handle backpressure when write() returns false
The doc specified that the return value of writable.write() was advisory
only. However, ignoring that value might lead to memory leaks. This PR
specifies that behavior. Moreover, it adds an example of how to listen for
the 'drain' event correctly.

See: nodejs@f347dad
PR-URL: nodejs#10631
Reviewed-By: Colin Ihrig <cjihrig@gmail.com>
Reviewed-By: Sam Roberts <vieuxtech@gmail.com>
Reviewed-By: Evan Lucas <evanlucas@me.com>
Reviewed-By: James M Snell <jasnell@gmail.com>
Reviewed-By: Joyee Cheung <joyeec9h3@gmail.com>
mcollina authored and italoacasas committed Jan 23, 2017
1 parent 5159f6c commit 0700468
43 changes: 40 additions & 3 deletions doc/api/stream.md
@@ -443,9 +443,46 @@ first argument. To reliably detect write errors, add a listener for the

```diff
 The return value is `true` if the internal buffer does not exceed
 `highWaterMark` configured when the stream was created after admitting `chunk`.
 If `false` is returned, further attempts to write data to the stream should
-stop until the [`'drain'`][] event is emitted. However, the `false` return
-value is only advisory and the writable stream will unconditionally accept and
-buffer `chunk` even if it has not not been allowed to drain.
+stop until the [`'drain'`][] event is emitted.
```
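
As an illustration of that contract, a small sketch (the slow `sink` stream is hypothetical and not part of this commit):

```js
const { Writable } = require('stream');

// A slow sink with a tiny highWaterMark so the threshold is easy to hit.
const sink = new Writable({
  highWaterMark: 4,
  write(chunk, encoding, callback) {
    setImmediate(callback);
  }
});

console.log(sink.write('ab'));   // true: 2 bytes buffered, below the mark
console.log(sink.write('cdef')); // false: 6 bytes buffered, mark exceeded
sink.once('drain', () => console.log('drained'));
```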

While a stream is not draining, calls to `write()` will buffer `chunk`, and
return `false`. Once all currently buffered chunks are drained (accepted for
delivery by the operating system), the `'drain'` event will be emitted.
It is recommended that once `write()` returns `false`, no more chunks be
written until the `'drain'` event is emitted. While calling `write()` on a
stream that is not draining is allowed, Node.js will buffer all written
chunks until maximum memory usage occurs, at which point it will abort
unconditionally. Even before it aborts, high memory usage will cause poor
garbage collector performance and high RSS (which is not typically released
back to the system, even after the memory is no longer required). Since TCP
sockets may never drain if the remote peer does not read the data, writing
to a socket that is not draining may lead to a remotely exploitable
vulnerability.
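
A rough sketch of that failure mode (not part of this commit; the stalled `Writable` stands in for a TCP peer that never reads):

```js
const { Writable } = require('stream');

// A destination whose writes never complete, like a socket whose remote
// peer never reads: the stream can never drain.
const stalled = new Writable({
  write(chunk, encoding, callback) {
    // callback() is intentionally never called.
  }
});

let accepted = 0;
for (let i = 0; i < 10000; i++) {
  // Ignoring the `false` return value keeps piling chunks into memory.
  if (stalled.write(Buffer.alloc(1024))) accepted++;
}

// Only the writes below highWaterMark (16 KB by default) returned `true`;
// the remaining ~10 MB now sits in the internal buffer indefinitely.
console.log(`${accepted} of 10000 writes returned true`);
```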

Writing data while the stream is not draining is particularly problematic
for a [Transform][], because `Transform` streams are paused by default until
they are piped or a `'data'` or `'readable'` event handler is added.
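
A minimal sketch of that pitfall, assuming a `PassThrough` (which is a `Transform`) with a small `highWaterMark`; this example is not part of the commit:

```js
const { PassThrough } = require('stream');

const pass = new PassThrough({ highWaterMark: 1024 });

// Nothing is piped and no 'data'/'readable' handler is attached, so the
// stream is paused and both of its internal buffers fill up quickly.
let writes = 0;
while (pass.write('x'.repeat(256))) {
  writes++;
}
console.log(`backpressure signalled after ${writes + 1} writes`);

pass.once('drain', () => console.log('drained'));

// Only attaching a consumer lets the stream drain again.
pass.on('data', () => {});
```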

If the data to be written can be generated or fetched on demand, it is
recommended to encapsulate the logic into a [Readable][] and use
[`stream.pipe()`][]. However, if calling `write()` is preferred, it is
possible to respect backpressure and avoid memory issues using the
[`'drain'`][] event:

```js
// `stream` is any Writable (e.g. a socket) already in scope.
function write (data, cb) {
  if (!stream.write(data)) {
    stream.once('drain', cb)
  } else {
    process.nextTick(cb)
  }
}

// Wait for cb to be called before doing any other write.
write('hello', () => {
  console.log('write completed, do more writes now')
})
```
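
For comparison, a minimal sketch of the pull-based `Readable` approach recommended above (the `source` stream is hypothetical, not part of this commit):

```js
const { Readable } = require('stream');

let remaining = 1000;

const source = new Readable({
  read() {
    // read() is only invoked when the consumer has room for more data,
    // so pipe() handles all 'drain' bookkeeping automatically.
    this.push(remaining-- > 0 ? `line ${remaining}\n` : null);
  }
});

source.pipe(process.stdout);
```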

A Writable stream in object mode will always ignore the `encoding` argument.
