fixes #1827: Windows deadlock on nng_close() #1828
Conversation
Codecov Report

Attention: Patch coverage is ...

```
@@            Coverage Diff             @@
##           master    #1828      +/-   ##
==========================================
+ Coverage   79.41%   79.48%   +0.07%
==========================================
  Files          95       95
  Lines       21487    21484       -3
==========================================
+ Hits        17063    17076      +13
+ Misses       4424     4408      -16
```
Please have another go with this branch -- another commit has been made which I hope will help.
Well, that didn't work as well as hoped. It seems that the read/write callbacks are also implicated here.
Ah, reaping is needed because we are in the callback when we fail. And it's interesting that this happens consistently for IPC, so that suggests I'm on the right path.
Force-pushed 993a237 to 1ab81ce.
(Another go, restoring the reaping.)
I think it does not do what is expected, as I see in the debugger that ...
There are two types of crashes here: one in ...
From https://en.wikipedia.org/wiki/Magic_number_(programming)#Debug_values, it seems the ...
From my observations, the problem occurs when ...
@alzix thanks for the analysis. I will try to get to the bottom of this soon ... I've just been completely swamped with $dayjob.
Definitely a use-after-free.
This is very definitely Windows-specific. It may impact TCP as well, but the callback structure here is used with overlapped I/O (a Windows thing).
When closing pipes, we defer them to be reaped, but also leave them in the match list, where they might be picked up by ep_match, or leak. It's best to reap these proactively and ensure that they are not allowed to live longer once they have errored during the negotiation phase.
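A hedged sketch of the shape of that fix, with illustrative names (`ep_t`, `pipe_t`, `reap_schedule`, and `pipe_destroy` are not nng's actual internals): on a negotiation error the pipe is unlinked from the match list under the endpoint lock, so ep_match can never see it again, and its destruction is deferred to a reaper because we are running inside the pipe's own callback.

```c
#include <pthread.h>
#include <stddef.h>

typedef struct pipe_s {
	struct pipe_s *next;
	struct pipe_s *prev;
} pipe_t;

typedef struct {
	pthread_mutex_t lk;
	pipe_t         *match_list; /* pipes awaiting ep_match */
} ep_t;

/* Hypothetical reaper: runs fn(arg) later, on a dedicated thread. */
extern void reap_schedule(void (*fn)(void *), void *arg);
extern void pipe_destroy(void *arg);

static void
pipe_nego_error(ep_t *ep, pipe_t *p)
{
	pthread_mutex_lock(&ep->lk);
	/* Unlink so that ep_match can no longer pick this pipe up. */
	if (p->prev != NULL) {
		p->prev->next = p->next;
	} else {
		ep->match_list = p->next;
	}
	if (p->next != NULL) {
		p->next->prev = p->prev;
	}
	pthread_mutex_unlock(&ep->lk);

	/* We are inside the pipe's own completion callback, so destroying
	 * it synchronously would be unsafe; defer to the reaper instead. */
	reap_schedule(pipe_destroy, p);
}
```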
So I guess the send_cb is somehow still running. I'm still trying to get to the bottom of this, because I would not expect that there are any posted I/Os at that point.
Added some info in PR #1831 (comment).
According to https://learn.microsoft.com/en-us/windows/win32/fileio/canceling-pending-i-o-operations ... perhaps this is the case?
Then the driver should continue to completion, which would be fine. But Windows named pipes and TCP both support cancellation. The problem is a defect in my logic, not missing Windows functionality. I'm still working to get to the bottom of it -- I thought I had understood it, but clearly I was missing something.
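For reference, a minimal sketch of how cancelling a pending overlapped operation looks in Win32 (illustrative names; this is the event-based variant, whereas in an IOCP-based design the aborted completion arrives through GetQueuedCompletionStatus instead):

```c
#include <windows.h>

void
cancel_pending_io(HANDLE h, OVERLAPPED *ov)
{
	DWORD n;

	if (!CancelIoEx(h, ov)) {
		if (GetLastError() != ERROR_NOT_FOUND) {
			return; /* hard failure */
		}
		/* ERROR_NOT_FOUND: the I/O already completed; fall through
		 * and reap its completion normally. */
	}

	/* Cancellation is asynchronous: the OVERLAPPED and its buffer must
	 * not be freed until the (possibly aborted) completion arrives. */
	if (!GetOverlappedResult(h, ov, &n, TRUE) &&
	    GetLastError() == ERROR_OPERATION_ABORTED) {
		/* the operation was canceled as requested */
	}
}
```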
We use overlapped I/O, so we don't need a separate hEvent.
Force-pushed 1ab81ce to 4ca2c61.
The logic with overlapped structures was fragile, as it used per-connection overlapped I/O structures rather than a single common one for the listener. This changes it to be more like the POSIX code, and robust against this error.
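A sketch of the single-OVERLAPPED listener shape described here (illustrative, not the actual nng code): the listener owns one OVERLAPPED and posts one ConnectNamedPipe accept at a time, handing the connected handle off only after the accept completes, much like a POSIX accept() loop.

```c
#include <windows.h>

typedef struct {
	HANDLE     pipe; /* named pipe instance currently being accepted */
	OVERLAPPED olpd; /* the single overlapped, owned by the listener */
} listener_t;

/* Post the next accept; returns FALSE on hard failure. */
static BOOL
listener_post_accept(listener_t *l)
{
	ZeroMemory(&l->olpd, sizeof(l->olpd));
	if (ConnectNamedPipe(l->pipe, &l->olpd)) {
		return (TRUE); /* connected immediately */
	}
	switch (GetLastError()) {
	case ERROR_IO_PENDING:     /* completion will arrive asynchronously */
	case ERROR_PIPE_CONNECTED: /* client connected before we posted */
		return (TRUE);
	default:
		return (FALSE);
	}
}
```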
Force-pushed 4ca2c61 to de9f39a.
I've pushed another change... this fixes a bunch of problems. The IPC pipe still has a use-after-free... I will fix it tomorrow. (I'm out of steam tonight.) The TCP code seems rock solid now (it had use-after-free bugs in it), and the listener code was brittle. It also includes a fix for the statistics crash.
@gdamore thank you for looking into this, and I hear you on the debugging challenge here :)
Well, I think I've made some progress. It appears to be a very subtle data race in the aio framework. Essentially we can wind up modifying the linked-list pointers while not under the same lock that we used to test them, and that leads to a problem. I think a barrier is needed, because we cannot really share the lock that was used, as the aio can move around. It might be safer to add an atomic variable to the aio, but I'm loath to do so for fear of impacting performance.
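To make the trade-off concrete, here is roughly what the atomic-variable alternative weighed above could look like (illustrative names, not nng's actual aio code): a release store after the unlink, paired with an acquire load on the path that tests it, so the two sides need not share a lock.

```c
#include <stdatomic.h>
#include <stdbool.h>

typedef struct aio_s {
	struct aio_s *next;
	struct aio_s *prev;
	atomic_bool   active; /* was a plain flag: that is the race */
} aio_t;

/* Completion path: unlink first, then publish "no longer active". */
static void
aio_finish(aio_t *a)
{
	a->prev->next = a->next;
	a->next->prev = a->prev;
	/* release: the unlink above happens-before any acquire load
	 * that observes active == false */
	atomic_store_explicit(&a->active, false, memory_order_release);
}

/* Test path, possibly under a different lock than the finish path used. */
static bool
aio_is_done(aio_t *a)
{
	/* acquire: seeing false here guarantees we also see the unlink */
	return (!atomic_load_explicit(&a->active, memory_order_acquire));
}
```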
I've added a bunch more asserts, and I can confirm that this problem only affects Windows. It affects both Windows x86 and ARM. I think that the problem is that my logic around removing the object from the I/O completion port isn't adequate. It seems that we are getting completions for I/Os that we should not, and I can't seem to understand why this is happening. Windows does not give an elegant way to just "detach" from the completion port, which means that there isn't a simple way to check whether an operation is pending or not. Supposedly closing the handle should do it. But I'm still seeing some surprises.
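A sketch of the lifecycle constraint being described (illustrative names): once a handle is associated with a completion port there is no API to detach it, so closing the handle is the only way out, and completions already queued for that handle can still be delivered afterwards and must be drained before the state they reference is freed.

```c
#include <windows.h>

static void
iocp_close_and_drain(HANDLE iocp, HANDLE h, ULONG_PTR key)
{
	/* The association lasts for the life of the handle; it cannot
	 * be undone short of closing the handle itself. */
	CreateIoCompletionPort(h, iocp, key, 0);

	/* ... overlapped I/O is posted against h ... */

	CloseHandle(h); /* pending operations complete with an error status */

	/* Completions for h may still be queued on the port; drain them
	 * before freeing whatever the completion key points at. */
	for (;;) {
		DWORD       n;
		ULONG_PTR   k;
		OVERLAPPED *olpd;
		BOOL ok = GetQueuedCompletionStatus(iocp, &n, &k, &olpd, 0);
		if (!ok && olpd == NULL) {
			break; /* timeout: nothing left in the queue */
		}
		/* olpd != NULL: a completion (possibly failed) dequeued */
	}
}
```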
Force-pushed de9f39a to 57b4650.
Well, I might have to eat my words. After half an hour of running tests in a loop, a similar crash has now happened on macOS.
This seems to alleviate the use-after-free crashes, although it does not seem like it should. The current theory is that this closes the handle, ensuring that it is unregistered from the I/O subsystem, thus preventing callbacks from firing and referring to objects that have been freed.
I have made some changes to try to simplify and unify the code. This seems to have greatly reduced the crash incidence, but I have not completely solved the problem. It seems like there may still be some race somewhere, and it does seem that the I/O completion ports are giving completions for objects that I believe to have been removed. It almost makes me believe that there are duplicate completion packets being submitted, but that seems nonsensical.

What's frustrating is that these problems seem to have only recently started happening -- older versions of NNG didn't suffer any of these problems. I'm going to run some tests -- because the other thing that has changed is ... well, Windows. So I wonder if some regression in Windows is in play here. (That's not where I go first, but I'm really having a difficult time reasoning about the behavior that I'm observing.)

Adding complexity is that I'm running Windows in Parallels on a Mac M1. It seems to work well, mostly, but I could be suffering from being on the bleeding edge. If anyone watching here can try an older version of Windows 10, or even Windows 8, that would be great. Also on real hardware.
There is a distinct possibility that my local tests were impaired by ... "interesting" emulation. I'm not sure.
My current theory is that, for some reason I don't yet fully understand, we have code waiting on the condition variable without having seen the closing flag set. (Possibly the failure is one of synchronization, since s_closing is changed while not protected by the global lock.)
At any rate, the attempt to avoid the cost of a wake-up here is silly, as pthread_cond_broadcast (and one assumes other variants, like the Windows implementation, to which I don't have source) is nearly free when there are no waiters. (Pthreads uses a relaxed-order memory read to look for waiters, so no barrier is involved.)
So we can just do the wake unconditionally.
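A minimal sketch of that unconditional wake, using pthreads and illustrative names (s_closing here stands in for the flag discussed above):

```c
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t s_lk = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  s_cv = PTHREAD_COND_INITIALIZER;
static bool            s_closing;

void
sock_close(void)
{
	pthread_mutex_lock(&s_lk);
	s_closing = true;
	/* Always wake: no attempt to guess whether anyone is waiting,
	 * since broadcast with no waiters is nearly free. */
	pthread_cond_broadcast(&s_cv);
	pthread_mutex_unlock(&s_lk);
}

void
sock_wait_closed(void)
{
	pthread_mutex_lock(&s_lk);
	while (!s_closing) {
		pthread_cond_wait(&s_cv, &s_lk);
	}
	pthread_mutex_unlock(&s_lk);
}
```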
I'd appreciate it if folks who are encountering the problem could tell me whether this change resolves it for them.