fixes #1827: a Windows deadlock on nng_close() #1828

Merged: 12 commits into master from gdamore/missed-wakeup, May 30, 2024

Conversation

@gdamore (Contributor) commented Apr 25, 2024

My current theory is that, for some reason I don't yet fully understand, we have code waiting on the condition variable that never observed the closing flag being set. (Possibly the failure is a synchronization issue, since s_closing is changed while not protected by the global lock.)

At any rate, the attempt to avoid the cost of a wake-up here is silly, as pthread_cond_broadcast (and, one assumes, other variants like the Windows implementation, to which I don't have source) is nearly free when there are no waiters. (Pthreads uses a relaxed-order memory read to look for waiters, so no barrier is involved.)

So we can just do the wake unconditionally.
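
For illustration, a minimal sketch of the change in pthreads terms (the struct and field names are invented for this sketch, not the actual nng code):

#include <pthread.h>
#include <stdbool.h>

struct sock {
	pthread_mutex_t mtx;
	pthread_cond_t  cv;
	bool            closing; // analogous to s_closing
	int             waiters; // hypothetical waiter count
};

// Racy variant (sketch of the failure mode described above): the
// flag is set without the lock the waiter holds, and the broadcast
// is skipped when we believe nobody is waiting; a stale read of the
// waiter count loses the wake-up entirely.
static void
sock_close_racy(struct sock *s)
{
	s->closing = true;
	if (s->waiters > 0) {
		pthread_cond_broadcast(&s->cv);
	}
}

// Fixed variant (sketch): set the flag under the lock and broadcast
// unconditionally; with no waiters the broadcast is nearly free.
static void
sock_close_fixed(struct sock *s)
{
	pthread_mutex_lock(&s->mtx);
	s->closing = true;
	pthread_cond_broadcast(&s->cv);
	pthread_mutex_unlock(&s->mtx);
}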

I'd appreciate it if folks who are encountering the problem can tell me if this change resolves for them.

codecov bot commented Apr 25, 2024

Codecov Report

Attention: Patch coverage is 88.23529%, with 2 lines in your changes missing coverage. Please review.

Project coverage is 79.48%. Comparing base (e46b41a) to head (4c08f96).

Files                        Patch %   Lines
src/sp/transport/tcp/tcp.c   50.00%    1 Missing ⚠️
src/sp/transport/tls/tls.c   50.00%    1 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##           master    #1828      +/-   ##
==========================================
+ Coverage   79.41%   79.48%   +0.07%     
==========================================
  Files          95       95              
  Lines       21487    21484       -3     
==========================================
+ Hits        17063    17076      +13     
+ Misses       4424     4408      -16     


@alzix (Contributor) commented Apr 25, 2024

The issue is still reproducible on this branch.
[screenshot omitted]

@gdamore (Contributor, Author) commented Apr 27, 2024

Please have another go with this branch -- I've pushed another commit which I hope will help.

@gdamore (Contributor, Author) commented Apr 27, 2024

Well, that didn't work as well as hoped. It seems that the read/write callbacks are also implicated here.

@gdamore (Contributor, Author) commented Apr 27, 2024

Ah, reaping is needed because we are in the callback when we fail. And it's interesting that this happens consistently for IPC; that suggests I'm on the right path.

@gdamore (Contributor, Author) commented Apr 27, 2024

(Another go, restoring the reaping.)

@alzix (Contributor) commented Apr 28, 2024

I was not able to reproduce the original issue anymore, but I cannot get a decent number of iterations, as the server is crashing in nni_list_node_remove, as was previously reported.
[screenshots omitted]

@alzix (Contributor) commented Apr 28, 2024

In win_ipcconn.c:229, in ipc_send_cb, there is a check:

if ((aio = nni_list_first(&c->send_aios)) == NULL) {
	// Should indicate that it was closed.
	nni_mtx_unlock(&c->mtx);
	return;
}

I think it does not do what is expected, as I can see in the debugger that c->closed == true.
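
For illustration only, a defensive variant that tests the closed flag explicitly (hypothetical; not necessarily the fix that eventually landed):

nni_mtx_lock(&c->mtx);
if (c->closed || ((aio = nni_list_first(&c->send_aios)) == NULL)) {
	// Either the connection was already closed, or no send is
	// posted; in both cases there is nothing to complete here.
	nni_mtx_unlock(&c->mtx);
	return;
}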

@alzix (Contributor) commented Apr 28, 2024

There are two types of crashes here: one in ipc_send_cb and the other in ipc_recv_cb. Both occur on close.
Based on my observations, in these cases the aio object is malformed, which later leads to a crash.
Either the memory was not properly initialized or some other thread overwrote it.

from: https://en.wikipedia.org/wiki/Magic_number_(programming)#Debug_values

0xDDDDDDDD pattern is used by Microsoft's C/C++ debug free() function to mark freed heap memory

So it seems the aio contains dangling pointers...
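
As an aside, the MSVC debug heap uses a few such fill patterns: 0xCD for freshly allocated (uninitialized) memory and 0xFD for guard bytes, in addition to 0xDD for freed blocks. A tiny hypothetical helper for spotting the freed-memory pattern:

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

// Returns true if the block is filled with the 0xDD pattern that
// the MSVC debug CRT writes into freed heap memory; seeing this in
// a supposedly live object is a strong hint of use-after-free.
static bool
looks_freed(const void *p, size_t len)
{
	const uint8_t *b = p;
	for (size_t i = 0; i < len; i++) {
		if (b[i] != 0xDD) {
			return false;
		}
	}
	return (len > 0);
}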

@alzix (Contributor) commented Apr 29, 2024

From my observations, the problem occurs when ipc_recv_cb and/or ipc_send_cb are executed after nni_sock_shutdown.

@gdamore (Contributor, Author) commented May 3, 2024

@alzix thanks for the analysis. I will try to get to the bottom of this soon ... I've just been completely swamped with $dayjob.

@gdamore (Contributor, Author) commented May 3, 2024

Definitely a use-after-free.

@gdamore (Contributor, Author) commented May 5, 2024

This is very definitely Windows-specific. It may impact TCP as well, since the callback structure here is used with overlapped I/O (a Windows mechanism).
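
For readers unfamiliar with the pattern, a minimal sketch of posting an overlapped read (invented names, not nng's code). The key lifetime rule is that the OVERLAPPED and its buffer must survive until the completion is delivered, which is exactly the rule a use-after-free violates:

#include <string.h>
#include <windows.h>

typedef struct {
	OVERLAPPED ov;        // must stay alive until the completion fires
	char       buf[4096]; // likewise
} read_op;

// Post an asynchronous read; the completion is delivered later
// (e.g. via an I/O completion port), referencing op->ov.
static BOOL
post_read(HANDLE h, read_op *op)
{
	memset(&op->ov, 0, sizeof(op->ov));
	if (!ReadFile(h, op->buf, sizeof(op->buf), NULL, &op->ov)) {
		return (GetLastError() == ERROR_IO_PENDING);
	}
	return TRUE;
}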

When closing pipes, we defer them to be reaped, but also leave
them in the match list, where they might be picked up by ep_match,
or leak. It's best to reap these proactively and ensure that they
are not allowed to live longer once they have errored during the
negotiation phase.
@gdamore (Contributor, Author) commented May 22, 2024

So I guess the send_cb is somehow still running. I'm still trying to get to the bottom of this, because I would not expect that there are any posted I/Os at that point.

@itayzafrir commented
Added some info in PR #1831 (comment).

@alzix (Contributor) commented May 22, 2024

> So I guess the send_cb is somehow still running. I'm still trying to get to the bottom of this, because I would not expect that there are any posted I/Os at that point.

According to https://learn.microsoft.com/en-us/windows/win32/fileio/canceling-pending-i-o-operations

> There is no guarantee that underlying drivers correctly support cancellation.

Perhaps this is the case?

@gdamore (Contributor, Author) commented May 25, 2024

> So I guess the send_cb is somehow still running. I'm still trying to get to the bottom of this, because I would not expect that there are any posted I/Os at that point.
>
> According to https://learn.microsoft.com/en-us/windows/win32/fileio/canceling-pending-i-o-operations
>
> There is no guarantee that underlying drivers correctly support cancellation.
>
> Perhaps this is the case?

Then the driver should continue to completion, which would be fine. But Windows named pipes and TCP both support cancellation. The problem is a defect in my logic, not missing Windows functionality. I'm still working to get to the bottom of it -- I thought I had understood it, but clearly I was missing something.

We use overlapped I/O, so we don't need a separate hEvent.
The logic with overlapped structures was fragile, as it used
overlapped I/Os for the connections rather than a single common
one for the listener. This changes it to be more like POSIX, and
robust against this error.
@gdamore (Contributor, Author) commented May 27, 2024

I've pushed another change... this fixes a bunch of problems.

The IPC pipe still has a use-after-free... I will fix it tomorrow. (I'm out of steam tonight.)
Debugging this has been... challenging.

The TCP code seems rock solid now (it had use-after-free bugs in it, and the listener code was brittle). The change also addresses the statistics crash.

@itayzafrir commented
@gdamore, thank you for looking into this, and I hear you on the debugging challenge here :)
Looking forward to your updates.

@gdamore (Contributor, Author) commented May 27, 2024

Well, I think I've made some progress. It appears to be a very subtle data race in the aio framework. Essentially, we can wind up modifying the linked-list pointers while not holding the same lock that was used to test them, and that leads to a problem. I think a barrier is needed, because we cannot really share the lock that was used for the test, as the aio can move between lists.

It might be safer to add an atomic variable to the aio, but I'm loath to do so for fear of impacting performance.
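
A minimal sketch of the ordering an atomic flag would provide (the field names are illustrative, not the actual nng aio layout):

#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

struct my_aio {
	struct my_aio *next;   // list links, owned by whichever list
	struct my_aio *prev;   // currently holds the aio
	atomic_bool    active; // hypothetical "operation in flight" flag
};

// Writer: unlink first, then publish with release ordering, so a
// reader that sees active == false also sees the cleared links.
static void
my_aio_finish(struct my_aio *a)
{
	a->next = NULL;
	a->prev = NULL;
	atomic_store_explicit(&a->active, false, memory_order_release);
}

// Reader: acquire ordering pairs with the release store above.
static bool
my_aio_busy(struct my_aio *a)
{
	return atomic_load_explicit(&a->active, memory_order_acquire);
}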

@gdamore (Contributor, Author) commented May 27, 2024

I've added a bunch more asserts, and I can confirm that this problem only affects Windows. It affects both Windows x86 and ARM. I think the problem is that my logic for removing the object from the I/O completion port isn't adequate. It seems that we are getting completions for I/Os that we should not, and I can't understand why this is happening.

Windows does not give an elegant way to just "detach" from the completion port, which means there isn't a simple way to check whether an operation is still pending. Supposedly closing the handle should do it, but I'm still seeing some surprises.
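
For context, the usual shutdown discipline with completion ports is: cancel what you posted, close the handle (which forces remaining operations to complete, typically with ERROR_OPERATION_ABORTED), and free state only once every posted operation has been accounted for. A rough sketch under those assumptions (not nng's actual code):

#include <windows.h>

// 'pending' counts operations posted on this handle; the completion
// thread decrements it as each completion (success or abort) is
// dequeued. Freeing the connection state is safe only at zero.
static void
conn_shutdown(HANDLE h, volatile LONG *pending)
{
	CancelIoEx(h, NULL); // request cancellation of all our I/O
	CloseHandle(h);      // also forces pending I/O to complete
	while (InterlockedCompareExchange(pending, 0, 0) != 0) {
		Sleep(1); // crude poll; real code would wait on an event
	}
}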

@gdamore (Contributor, Author) commented May 28, 2024

Well, I might have to eat my words. After half an hour of running tests in a loop, a similar crash has now happened on macOS.

This seems to alleviate the use-after-free crashes, although it
does not seem like it should. The current theory is that this closes
the handle, ensuring that it is unregistered from the I/O subsystem,
thus preventing callbacks from firing and referring to objects that
have been freed.
@gdamore (Contributor, Author) commented May 28, 2024

I have made some changes to try to simplify and unify the code. This seems to have greatly reduced the crash incidence, but I have not completely solved the problem. There may still be some race somewhere, and it does seem that the I/O completion ports are giving completions for objects that I believe to have been removed. It almost makes me believe that duplicate completion packets are being submitted, but that seems nonsensical.

What's frustrating is that these problems seem to have only recently started happening -- older versions of NNG didn't suffer from any of them. I'm going to run some tests, because the other thing that has changed is ... well, Windows. So I wonder if some regression in Windows is in play here. (That's not where I'd look first, but I'm really having a difficult time reasoning about the behavior I'm observing.)

Adding complexity: I'm running Windows under Parallels on a Mac M1. It seems to work well, mostly, but I could be suffering from being on the bleeding edge.

If anyone watching here can try an older version of Windows 10, or even Windows 8, that would be great. Also on real hardware.

@gdamore (Contributor, Author) commented May 28, 2024

There is a distinct possibility that my local tests were impaired by ... "interesting" emulation. I'm not sure.

@gdamore merged commit 8420a9d into master on May 30, 2024
18 checks passed
@gdamore deleted the gdamore/missed-wakeup branch on May 30, 2024, 14:29
shikokuchuo added a commit to shikokuchuo/nng that referenced this pull request May 30, 2024
shikokuchuo added a commit to shikokuchuo/nng that referenced this pull request May 31, 2024
shikokuchuo added a commit to shikokuchuo/nng that referenced this pull request Oct 1, 2024