
stream, net, http, http2: destroy while active race #30864

Closed
ronag opened this issue Dec 9, 2019 · 10 comments

ronag commented Dec 9, 2019

Given the context of #30837, where we basically have the problem that destroying a handle while it is actively being used for a read or write causes "unexpected" errors:

  1. I believe we might have similar problems in net, http, http2 and also quic, where we can destroy the handle while it's in active use. Is this something we need to address?

  2. Can we find a generic solution in streams to avoid this "race"? E.g. in Writable we could defer calling _destroy until the active write has completed or failed (see the sketch below). Readable would be a bit trickier without changing or adding to the API.

I'm unsure how big of a problem this actually is.
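
As a rough sketch of the idea in point 2 (not the actual stream internals; the class and field names here are made up for illustration, and the asynchronous write is simulated with setImmediate), a Writable could defer its _destroy work until the write currently in flight has settled:

```js
'use strict';
const { Writable } = require('stream');

class DeferredDestroyWritable extends Writable {
  constructor(opts) {
    super(opts);
    this._writing = false;       // true while a _write is in flight
    this._pendingDestroy = null; // deferred [err, callback] from _destroy
  }

  _write(chunk, encoding, callback) {
    this._writing = true;
    // Simulated async write; a real implementation would touch the handle here.
    setImmediate(() => {
      this._writing = false;
      callback();
      if (this._pendingDestroy) {
        const [err, cb] = this._pendingDestroy;
        this._pendingDestroy = null;
        cb(err); // run the deferred teardown now that the write has settled
      }
    });
  }

  _destroy(err, callback) {
    if (this._writing) {
      // Don't tear the handle down underneath the active write;
      // finish (or fail) the write first, then destroy.
      this._pendingDestroy = [err, callback];
      return;
    }
    callback(err);
  }
}
```

With something like this, calling destroy() while a write is pending only runs the teardown after the write callback has fired, instead of racing with it.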

ronag changed the title from "stream, net, http, http2: destroy while active" to "stream, net, http, http2: destroy while active race" on Dec 9, 2019

ronag commented Dec 9, 2019

ping @mcollina @addaleax @lpinca @jasnell


mcollina commented Dec 9, 2019

I would say this is expected. destroy() signifies abnormal termination, and it may well result in an error.


ronag commented Dec 9, 2019

@mcollina: Hm, I think that also makes sense. Given that, what is your take on #30837?


ronag commented Dec 11, 2019

Given @addaleax's comment and after reading a bit about file descriptors, I found the following paragraph in the close(2) man page:

> It is probably unwise to close file descriptors while they may be in use by system calls in other threads in the same process. Since a file descriptor may be reused, there are some obscure race conditions that may cause unintended side effects.

So basically, we should (must?) wait for any pending operations before closing (_destroy:ing) a file descriptor. The fix we applied in #30837 for fs should "probably" also apply in other places where file descriptor reuse is possible?
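
As a rough illustration of that race (the file name and buffer size below are made up), the read runs on the libuv threadpool, so calling close concurrently is exactly the case the man page warns about; the #30837-style fix amounts to closing only after the pending operation has settled:

```js
'use strict';
const fs = require('fs');

const fd = fs.openSync('example.txt', 'r'); // hypothetical file

// This read is serviced on a libuv threadpool thread.
fs.read(fd, Buffer.alloc(64), 0, 64, 0, (err, bytesRead) => {
  // Safe: close only once the pending read has completed or failed,
  // which is roughly what the fix in #30837 does for fs streams.
  fs.close(fd, () => {});
});

// Unsafe: closing here races with the read above. If the fd number gets
// reused by another open() in the meantime, the pending read may end up
// reading from the wrong file.
// fs.close(fd, () => {});
```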

@addaleax

@ronag Yeah, I feel like it would be very tricky to get this right for anything that uses the threadpool. Network sockets luckily don’t (at least on Unix).


ronag commented Dec 11, 2019

> Yeah, I feel like it would be very tricky to get this right for anything that uses the threadpool.

Not sure I quite understand the implications here.

> Network sockets luckily don’t (at least on Unix).

So I guess net would be a good next candidate to look into further applying your fix on?

@addaleax

> Yeah, I feel like it would be very tricky to get this right for anything that uses the threadpool.

> Not sure I quite understand the implications here.

Basically what the close(2) man page says – if it’s possible that the fd is being used for a syscall in one thread of the threadpool, we should not be calling close() concurrently.

> Network sockets luckily don’t (at least on Unix).

> So I guess net would be a good next candidate to look into further applying your fix on?

I’m not sure that they need it, because they don’t use the threadpool, unlike fs operations. Everything is single-threaded here, and once the handle is closed by uv_close(), we won’t access it anymore.


ronag commented Dec 11, 2019

> Basically what the close(2) man page says – if it’s possible that the fd is being used for a syscall in one thread of the threadpool, we should not be calling close() concurrently.

Ah, thank you! That makes things much more understandable for me.


ronag commented Dec 11, 2019

@addaleax If fs is the only thing that uses the threadpool and can encounter this race, I think we can close this issue?

@addaleax

@ronag It’s the only thing I can tell you right now that is affected by this issue, yes.

Trott pushed a commit that referenced this issue Dec 17, 2019
Add notes to fs.close() and fs.closeSync() about undefined behavior.

PR-URL: #30966
Refs: #30864
Reviewed-By: Anna Henningsen <anna@addaleax.net>
Reviewed-By: Rich Trott <rtrott@gmail.com>
Reviewed-By: Ruben Bridgewater <ruben@bridgewater.de>
Reviewed-By: Luigi Pinca <luigipinca@gmail.com>
Reviewed-By: James M Snell <jasnell@gmail.com>
MylesBorins pushed a commit that referenced this issue Dec 17, 2019
targos pushed a commit that referenced this issue Jan 14, 2020
BethGriggs pushed a commit that referenced this issue Feb 6, 2020