Glibc lock elision allows a value to be locked twice #33770
This is... quite surprising! Why is this not a bug in pthread mutexes? Is it unspecified if a lock is reentrant by default?
According to the documentation, locking a PTHREAD_MUTEX_DEFAULT mutex recursively results in undefined behavior.
@Amanieu at least on my system with glibc 2.23, PTHREAD_MUTEX_DEFAULT and PTHREAD_MUTEX_NORMAL are the same constant.
@birkenfeld Those constants are the same for me as well; however, any call to pthread_mutexattr_settype seems to disable lock elision, even when it sets the same type the mutex would default to.
Discussed during libs triage yesterday, definitely something we should fix!
Digging into this, we may get lucky and not have to worry about this for rwlocks. There's an interesting article on merging glibc lock elision in 2013 which touches on exactly this point. And indeed, older standards indicate that wrlock and rdlock are undefined if a write lock is previously held. A more recent revision, however, tones down the wording of the wrlock and rdlock descriptions. Also note that the most recent publication for mutex lock acquisition does indicate that recursive locking of a default mutex is undefined behavior. A strict interpretation of all this would indicate that we could fix this by:

- creating our mutexes with an explicitly non-recursive type, so a recursive lock attempt has defined (deadlocking) behavior (as sketched below), and
- for rwlocks, performing the lock operation and then checking for recursive acquisition after the fact.

The change to mutexes should avoid undefined behavior entirely, and it seems like the rwlock behavior isn't undefined, so we can go ahead and do the operation and then check after the fact. How's that sound to others?
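For the mutex half, a minimal sketch of what that could look like with the libc crate's pthread bindings (the helper name is mine, and this is illustrative rather than the actual libstd patch):

```rust
use std::mem;

// Illustrative sketch: create the mutex as explicitly
// PTHREAD_MUTEX_NORMAL, so a recursive lock attempt has the defined
// behavior of deadlocking instead of being undefined, and so glibc
// does not apply lock elision to it. Requires the libc crate.
unsafe fn init_normal_mutex(lock: *mut libc::pthread_mutex_t) {
    let mut attr: libc::pthread_mutexattr_t = mem::zeroed();
    let r = libc::pthread_mutexattr_init(&mut attr);
    debug_assert_eq!(r, 0);
    let r = libc::pthread_mutexattr_settype(&mut attr, libc::PTHREAD_MUTEX_NORMAL);
    debug_assert_eq!(r, 0);
    let r = libc::pthread_mutex_init(lock, &attr);
    debug_assert_eq!(r, 0);
    let r = libc::pthread_mutexattr_destroy(&mut attr);
    debug_assert_eq!(r, 0);
}
```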
Apparently it's more complicated; this test is failing (not deadlocking):

```rust
use std::sync::RwLock;

fn test_rwlock_rw() {
    let m = RwLock::new(0);
    let _g = m.read().unwrap();
    let _g2 = m.write().unwrap();
}
```

It seems that we need to hold a counter of all reader threads... this is going to be a pain to implement...
We basically need to embed a reader counter (and a write-locked flag) next to the pthread rwlock and check it ourselves.
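A sketch of that counter approach (names and structure are mine, not std's exact source; error handling is trimmed):

```rust
use std::cell::UnsafeCell;
use std::sync::atomic::{AtomicUsize, Ordering};

// Pair the pthread rwlock with our own bookkeeping so an elided
// recursive acquisition is detected after the call returns.
struct RawRwLock {
    inner: UnsafeCell<libc::pthread_rwlock_t>,
    write_locked: UnsafeCell<bool>, // only mutated while write-locked
    num_readers: AtomicUsize,       // threads currently holding a read lock
}

impl RawRwLock {
    unsafe fn read(&self) {
        let r = libc::pthread_rwlock_rdlock(self.inner.get());
        // An elided rdlock can return 0 even though this thread already
        // holds the write lock; detect that and back out.
        if r == libc::EDEADLK || (r == 0 && *self.write_locked.get()) {
            if r == 0 {
                libc::pthread_rwlock_unlock(self.inner.get());
            }
            panic!("rwlock read lock would result in deadlock");
        }
        debug_assert_eq!(r, 0);
        self.num_readers.fetch_add(1, Ordering::Relaxed);
    }

    unsafe fn write(&self) {
        let r = libc::pthread_rwlock_wrlock(self.inner.get());
        // If wrlock "succeeded" while we are already the writer or while
        // read locks are outstanding, elision let a recursive lock through.
        if r == libc::EDEADLK
            || (r == 0
                && (*self.write_locked.get()
                    || self.num_readers.load(Ordering::Relaxed) != 0))
        {
            if r == 0 {
                libc::pthread_rwlock_unlock(self.inner.get());
            }
            panic!("rwlock write lock would result in deadlock");
        }
        debug_assert_eq!(r, 0);
        *self.write_locked.get() = true;
    }
}
```

The key point is that the check happens after the pthread call returns: a genuine (non-elided) acquisition guarantees the bookkeeping fields are quiescent, so any discrepancy means an elided recursive lock slipped through.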
Make sure Mutex and RwLock can't be re-locked on the same thread

Fixes #33770

r? @alexcrichton
I'm unable to build:

```
src/libstd/sys/unix/mutex.rs:58:60: 58:86 error: unresolved name `libc::PTHREAD_MUTEX_NORMAL` [E0425]
src/libstd/sys/unix/mutex.rs:58     let r = libc::pthread_mutexattr_settype(&mut attr, libc::PTHREAD_MUTEX_NORMAL);
                                                                                        ^~~~~~~~~~~~~~~~~~~~~~~~~~
```

This is what glibc's pthread.h declares:

```c
/* Mutex types.  */
enum
{
  PTHREAD_MUTEX_TIMED_NP,
  PTHREAD_MUTEX_RECURSIVE_NP,
  PTHREAD_MUTEX_ERRORCHECK_NP,
  PTHREAD_MUTEX_ADAPTIVE_NP
#if defined __USE_UNIX98 || defined __USE_XOPEN2K8
  ,
  PTHREAD_MUTEX_NORMAL = PTHREAD_MUTEX_TIMED_NP,
  PTHREAD_MUTEX_RECURSIVE = PTHREAD_MUTEX_RECURSIVE_NP,
  PTHREAD_MUTEX_ERRORCHECK = PTHREAD_MUTEX_ERRORCHECK_NP,
  PTHREAD_MUTEX_DEFAULT = PTHREAD_MUTEX_NORMAL
#endif
#ifdef __USE_GNU
  /* For compatibility.  */
  , PTHREAD_MUTEX_FAST_NP = PTHREAD_MUTEX_TIMED_NP
#endif
};
```
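On the Rust side the constant comes from the libc crate, which mirrors this enum as plain integer constants. Following the enum above, the Linux values work out as below (a sketch of the mapping, not a quote of the crate's source):

```rust
// Sketch: the enum starts at PTHREAD_MUTEX_TIMED_NP == 0, so the
// portable names alias the NP values like this on Linux.
pub const PTHREAD_MUTEX_NORMAL: i32 = 0; // == PTHREAD_MUTEX_TIMED_NP
pub const PTHREAD_MUTEX_RECURSIVE: i32 = 1; // == PTHREAD_MUTEX_RECURSIVE_NP
pub const PTHREAD_MUTEX_ERRORCHECK: i32 = 2; // == PTHREAD_MUTEX_ERRORCHECK_NP
pub const PTHREAD_MUTEX_DEFAULT: i32 = PTHREAD_MUTEX_NORMAL;
```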
I think a new version of the libc crate needs to be published to crates.io.
@petevine how are you hitting that error? This passed our CI, which means the constant should be defined for ARM Linux (that's something we gate on). If this is using the libc crate from crates.io, how is that coming into play? The libstd in this repo should be using a pinned rev of libc; if you're using something else, that's not guaranteed to work.
I did a …
Did you run `git submodule update`?
No, because that's done automatically by the script, right? At least it has worked flawlessly up to now.
@alexcrichton I've just done another …
When running on a processor which supports Intel's Restricted Transactional Memory, glibc will not write to a lock (making it appear to be unlocked) and will instead begin a transaction. The transaction catches conflicts from other threads and aborts, but this doesn't happen when re-locking from the same thread. Instead the lock gets acquired twice, which results in two mutable references to the same value.
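The original example code and output were not preserved here; a hypothetical minimal reproduction of the described behavior (it only manifests on an RTM-capable CPU with an elision-enabled glibc) would look roughly like this:

```rust
use std::sync::Mutex;

fn main() {
    let m = Mutex::new(0);
    // Under lock elision neither call writes to the lock word, so the
    // second lock() on the same thread returns Ok instead of deadlocking.
    let mut g1 = m.lock().unwrap();
    let mut g2 = m.lock().unwrap(); // should deadlock, but succeeds
    *g1 += 1; // two live mutable references to the same value
    *g2 += 1;
    println!("{} {}", *g1, *g2); // both observe the doubly-incremented value
}
```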
For `Mutex`, this can be solved by creating the mutex with `pthread_mutexattr_settype(PTHREAD_MUTEX_NORMAL)`. However, there is no way to disable this behavior for pthread rwlocks.