[Net] Fix IP address resolution incorrectly locking the main thread. #51199
Conversation
This seems to be a pretty old bug, older than originally reported (at least under certain circumstances). The IP singleton uses a resolve queue so developers can queue hostnames for resolution in a separate thread while keeping the main thread unlocked (address-resolution OS functions are blocking, and could block for a long time in case of network disruption). In most places though, the address resolution function was called with the mutex locked, causing other functions (querying status, queueing another hostname, etc.) to block until that resolution ended. This commit ensures that all calls to OS address resolution are done with the mutex unlocked.
Wow so fast 🎉 |
Thanks! |
Cherry-picked for 3.4. |
Isn't this a regression from #49026? If so, that would explain why it's present in 3.x. |
https://github.com/godotengine/godot/blob/3.2/core/io/ip.cpp#L99-L106 |
Fixes #51181. Note: while the issue describes this as a regression, according to my tests on Linux this is a very old bug, and can be found as far back as 2.1. I suspect reproducing it is just tricky due to OS/router DNS caches.