From 0211bb7fada4825e7ef5b5d55aa58bc58807f569 Mon Sep 17 00:00:00 2001
From: Matthew Parkinson
Date: Wed, 2 Oct 2024 17:15:04 +0100
Subject: [PATCH 1/4] Add documentation for the combining lock

This adds some documentation to make the combining lock easier to understand.
This is working towards documenting the changes for the 0.7 release.
---
 docs/combininglock.md           | 342 ++++++++++++++++++++++++++++++++
 docs/combininglockperf.svg      |   1 +
 src/snmalloc/ds/combininglock.h | 111 +++++------
 3 files changed, 396 insertions(+), 58 deletions(-)
 create mode 100644 docs/combininglock.md
 create mode 100644 docs/combininglockperf.svg

diff --git a/docs/combininglock.md b/docs/combininglock.md
new file mode 100644
index 000000000..ad7fe0a4b
--- /dev/null
+++ b/docs/combininglock.md
@@ -0,0 +1,342 @@
+# The Combining Lock
+
+`snmalloc` tries very hard to avoid using locks.
+However, there are a few core places where locks are required to manage the global state of the allocator.
+These global operations are rare, so the overhead of a lock is not a significant issue.
+However, when a thread starts, it must acquire this lock to perform its first allocation.
+If multiple threads are starting at the same time, then this lock can become contended.
+
+As part of `snmalloc` 0.7, we optimised the lock acquisition for the first allocation.
+This involved a round of profiling and a series of simple optimisations.
+To further improve the startup time, we introduced a new [_combining lock_](../src/snmalloc/ds/combininglock.h).
+
+The core idea of the combining lock is that it holds a queue of operations that need to be executed while holding the lock,
+and then the thread holding the lock can execute multiple threads' operations in a single lock acquisition.
+This can reduce the number of cache misses as a single thread performs multiple updates,
+and thus overall the system will have fewer cache misses.
+
+The rest of this document will describe the combining lock in more detail.
+
+## Background: Flat Combining
+
+The core idea behind the combining lock is the flat combining technique of [Hendler et al.](https://dl.acm.org/doi/10.1145/1810479.1810540).
+Their paper observes that cache misses in a contended scenario can be extremely costly.
+Consider a case where you need to modify a shared data structure, and say that involves writing to several cache lines.
+If there are 5 threads contending for the lock, then each thread must acquire the lock, and then acquire those cache lines in exclusive mode.
+This means each thread must invalidate the copies of the cache lines held exclusively by the previous lock holder.
+These cache misses can result in a significant cost to the system.
+
+Flat combining reduces the number of cache misses in a contended scenario by having threads initially write their operations to a shared queue.
+Then a single thread will acquire the lock and execute all the operations in the queue.
+This means that the thread holding the lock keeps the data structure's cache lines in exclusive mode; the waiting threads only cause misses on the queue entries into which they write their operations.
+
+The flat combining paper goes on to show how to amalgamate the operations in the queue to reduce the amount of work required.
+For instance, taking the smallest element 5 times from a balanced binary search tree is 5 logarithmic walks of the tree, but it is possible to remove the five smallest elements in a single logarithmic walk.
+This can also have significant benefits in throughput.
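+
+Here is a small sketch of our own (not code from the paper) of the basic batching idea, executing several threads' operations under one lock acquisition, which is the part of flat combining that our lock uses; `shared_tree` and the two functions are illustrative names:
+```C++
+#include <mutex>
+#include <set>
+#include <vector>
+
+std::mutex m;
+std::set<int> shared_tree;
+
+// One lock acquisition per operation: the tree's cache lines bounce
+// between every thread that takes the lock in turn.
+void insert_one(int v)
+{
+  std::lock_guard<std::mutex> guard(m);
+  shared_tree.insert(v);
+}
+
+// Combining style: one thread applies a whole batch of published
+// operations, so the tree's cache lines stay in a single core's cache.
+void insert_batch(const std::vector<int>& pending)
+{
+  std::lock_guard<std::mutex> guard(m);
+  for (int v : pending)
+    shared_tree.insert(v);
+}
+```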
+
+The combining lock we have built does not amalgamate the operations; it just executes them on a single thread.
+
+[Gupta et al.](https://www.usenix.org/conference/osdi23/presentation/gupta) provide OS level support to enable flat combining locks that are transparent to the user.
+For `snmalloc`, we have a much simpler use case as we control the code, and so we can make the combining lock explicit in the API.
+
+[Dice et al.](https://dl.acm.org/doi/pdf/10.1145/1989493.1989502) built a flat combining lock that is NUMA aware.
+This is a more complex version of the combining lock than the one we have built, and it provides additional performance benefits on NUMA systems.
+
+## The API
+
+The API for our combining lock is super simple. We have one type, `CombiningLock`, and one function, `with`. The `with` function takes a lambda and executes it while the lock is held.
+Once the lambda has completed, `with` will return.
+We have mutual exclusion: there is at most one lambda executing at a time on `my_lock`.
+
+```C++
+CombiningLock my_lock;
+...
+
+with(my_lock, [&] (){
+  // Code that needs to be protected by the lock
+});
+```
+
+The lambda passed to `with` can capture any state that the protected code needs.
+As the lambda may be executed on a different thread, the use of `thread_local` state can be a little tricky.
+
+Let us assume we have a thread local variable `my_state` that we want to update while holding the lock.
+```C++
+// a global thread local variable.
+thread_local int my_state = 0;
+```
+We can update this variable while holding the lock as follows:
+```C++
+with(my_lock, [&] (){
+  // Code that needs to be protected by the lock
+  my_state = 8;
+});
+// One thread's my_state is 8, but we aren't guaranteed which thread's.
+// Hence, the following assert may fail.
+assert(my_state == 8);
+```
+The problem is that the lambda may be executed on a different thread, so the `thread_local` variable `my_state` may not have been updated on the thread that requested it.
+
+We can address this by taking a reference to the thread local variable, and capturing that reference in the lambda.
+(The `thread_local` variable itself cannot appear in a capture list, as only variables with automatic storage duration can be captured.)
+```C++
+auto& my_state_ref = my_state;
+with(my_lock, [&my_state_ref] (){
+  // Code that needs to be protected by the lock
+  my_state_ref = 8;
+});
+```
+
+
+## Background: MCS Queue Lock
+
+Our implementation of the combining lock is based on the classic [MCS queue lock by Mellor-Crummey and Scott, 1991](https://dl.acm.org/doi/10.1145/103727.103729).
+This lock is a simple queue lock that allows each thread to spin on its own cache line.
+
+Let us provide an MCS queue lock for our API given above:
+```C++
+struct LockNode
+{
+  std::atomic<LockNode*> next{nullptr};
+  std::atomic<bool> available{false};
+};
+
+struct MCSLock
+{
+  // End of queue
+  std::atomic<LockNode*> last{nullptr};
+};
+
+template<typename F>
+inline void with(MCSLock& lock, F&& f)
+{
+  // Stack allocate our node to use in the queue.
+  LockNode node;
+
+  // **************ACQUIRE**************
+  // Add ourselves to the end of the queue.
+  LockNode* prev = lock.last.exchange(&node, std::memory_order_acq_rel);
+  if (prev != nullptr)
+  {
+    // Add link to previous element in the queue
+    prev->next.store(&node, std::memory_order_release);
+    // Wait for our turn.
+    while (!node.available.load(std::memory_order_acquire))
+      ;
+  }
+
+  // **************PERFORM**************
+  // Run lambda
+  f();
+
+  // **************RELEASE**************
+  // Check if there is a next thread.
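+  // Even if our `next` is still nullptr, a successor may already exist:
+  // it may have exchanged itself into `lock.last` but not yet stored
+  // into our `next` field. The compare-and-exchange below distinguishes
+  // these two cases.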
+  if (node.next.load(std::memory_order_acquire) == nullptr)
+  {
+    auto node_address = &node;
+    // No next thread so remove ourselves from the end of the queue
+    if (lock.last.compare_exchange_strong(node_address, nullptr, std::memory_order_acq_rel))
+      return;
+    // Wait for next thread to be set as we failed to remove ourselves from the end of the queue.
+    while (node.next.load(std::memory_order_acquire) == nullptr)
+      ;
+  }
+
+  // Wake next thread.
+  node.next.load(std::memory_order_acquire)->available.store(true, std::memory_order_release);
+}
+```
+The code can be broken into three parts:
+* `ACQUIRE`: Add ourselves to the end of the queue.
+* `PERFORM`: Execute the lambda.
+* `RELEASE`: Remove ourselves from the queue and wake the next thread.
+
+The `ACQUIRE` part is simple.
+We add ourselves to the end of the queue and then wait for the previous thread to set the `available` flag.
+If there is no previous thread, then we can immediately execute the lambda.
+
+The `PERFORM` part simply executes the lambda.
+
+The most challenging part of the MCS lock is the release of the lock.
+When a thread has finished executing its lambda, it must check if there is another thread waiting to execute.
+If there is, it must wake that thread.
+If there is no other thread waiting, then it must remove itself from the queue, because a subsequent thread may write to the `next` field of the node that is on the stack.
+To unlink this stack-allocated node from the queue, we need to use a compare and exchange operation.
+This checks if `last` still points at this node, and if it does, sets `last` to `nullptr`, which signifies that there is no thread waiting to execute.
+The awkward case is if the compare and exchange fails.
+This means that another thread has joined the queue, but had not yet set the `next` field when we checked.
+In this case, we must wait for the `next` field to be set, and then we can simply notify the next thread to execute.
+
+
+## The Combining Lock
+
+The combining lock follows the same pattern as the MCS queue lock, but adds a few extra fields to the queue.
+Firstly, instead of the status being a boolean indicating whether the lock is available to this thread, we have a status that represents three cases:
+```C++
+enum class LockStatus
+{
+  WAITING,
+  DONE,
+  READY
+};
+```
+The `WAITING` state is when the thread is waiting for the lock to become available to it. This roughly corresponds to the `available` field in the MCS queue lock being `false`.
+The `READY` state is when this thread is responsible for completing more work from the queue. This roughly corresponds to the `available` field in the MCS queue lock being `true`.
+The `DONE` state is when another thread has completed the operation (lambda) for this thread. This is a new state that is not present in the MCS queue lock and represents the case where another thread has completed the work for this thread.
+
+The lock node has an extra field that is a function pointer.
+This represents the operation that must be executed for this thread.
+```C++
+struct CombiningLockNode
+{
+  std::atomic<CombiningLockNode*> next{nullptr};
+  std::atomic<LockStatus> status{LockStatus::WAITING};
+  void (*f_raw)(CombiningLockNode*);
+
+  void run()
+  {
+    f_raw(this);
+  }
+};
+
+// The lock itself just holds the end of the queue. (Added here so the
+// example is self-contained; the real type also carries a fast-path flag.)
+struct CombiningLock
+{
+  std::atomic<CombiningLockNode*> last{nullptr};
+};
+```
+
+The `f_raw` field is a function pointer that will be used to run our lambda; the subclass below supplies it.
+
+```C++
+template<typename F>
+struct CombiningLockNodeTempl : CombiningLockNode
+{
+  F f;
+
+  CombiningLockNodeTempl(F&& f_)
+  : CombiningLockNode([](CombiningLockNode* self) {
+      auto self_templ = reinterpret_cast<CombiningLockNodeTempl*>(self);
+      self_templ->f();
+    }),
+    f(f_) {}
+};
+```
+Here we need to use `reinterpret_cast` to convert the `CombiningLockNode*` to the specific instantiation `CombiningLockNodeTempl<F>*`.
+This is really virtual dispatch, but we are doing it manually so as not to incur the overheads of vtables.
+
+We can now provide our new version of the `with` function that uses the combining lock.
+```C++
+template<typename F>
+inline void with(CombiningLock& lock, F&& f)
+{
+  CombiningLockNodeTempl<F> node(std::forward<F>(f));
+
+  // **************ACQUIRE**************
+  // Add ourselves to the end of the queue.
+  CombiningLockNode* prev = lock.last.exchange(&node, std::memory_order_acq_rel);
+
+  if (prev != nullptr)
+  {
+    // Add link to previous element in the queue
+    prev->next.store(&node, std::memory_order_release);
+
+    // Wait for our turn.
+    while (node.status.load(std::memory_order_relaxed) == LockStatus::WAITING)
+      ;
+
+    // Check if another thread completed our work.
+    if (node.status.load(std::memory_order_acquire) == LockStatus::DONE)
+      return;
+  }
+
+  // **************PERFORM**************
+  // We are now the head of the queue.
+  // Perform work until we hit a nullptr.
+  CombiningLockNode* curr = &node;
+  while (true)
+  {
+    curr->run();
+
+    // Check if there is another operation to execute
+    auto next = curr->next.load(std::memory_order_acquire);
+    if (next == nullptr)
+      break;
+
+    // Notify thread that we completed its work.
+    curr->status.store(LockStatus::DONE, std::memory_order_release);
+    curr = next;
+  }
+
+  // **************RELEASE**************
+  // Attempt to close the queue
+  auto curr_address = curr;
+  if (lock.last.compare_exchange_strong(curr_address, nullptr, std::memory_order_acq_rel))
+  {
+    curr->status.store(LockStatus::DONE, std::memory_order_release);
+    return;
+  }
+
+  // Wait for next thread to be set as we failed to remove
+  // ourselves from the end of the queue.
+  while (curr->next.load(std::memory_order_relaxed) == nullptr)
+    ;
+
+  // Notify next thread to execute the remaining work.
+  auto next = curr->next.load(std::memory_order_acquire);
+  next->status.store(LockStatus::READY, std::memory_order_release);
+
+  // Notify the current thread that its work has been completed.
+  curr->status.store(LockStatus::DONE, std::memory_order_release);
+  return;
+}
+```
+Again, the code can be broken into three parts.
+The `ACQUIRE` part is almost the same as the MCS queue lock.
+However, it additionally checks if another thread completed the lambda for this thread.
+If this happens, then the thread can return immediately.
+
+The `PERFORM` part is where the executing thread can run the lambdas for itself and many other threads.
+It walks the queue executing the lambdas until it reaches a queue node whose `next` pointer contains `nullptr`.
+
+The `RELEASE` part attempts to remove the last node from the `PERFORM` phase from the end of the queue.
+If this succeeds, then it can simply notify that queue node that its work is `DONE`,
+and return.
+Otherwise, as with the MCS lock, it must wait for a new thread that is joining the queue to set the `next` pointer.
+Then it can notify (`READY`) the newly joined thread that it can execute the remaining work.
+Finally, it must notify the last node whose lambda it ran that its work is `DONE`.
+The `DONE` notification must happen after the read of `next`, as signalling `DONE` allows the waiting thread to return and free the stack memory containing the node.
+
+The [actual implementation](../src/snmalloc/ds/combininglock.h) in the `snmalloc` codebase is a little more complex as it uses a fast path lock for the uncontended case.
+This uses fewer atomic operations in the common case where the lock is not contended,
+and is a standard technique for queue locks.
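+
+The shape of that fast path can be seen in the refactoring later in this patch series; in outline it is as follows, where `SNMALLOC_LIKELY` is a branch hint, `lock.flag` is a simple test-and-set lock the implementation adds alongside `last`, and constructing the node runs the queue-based slow path from above:
+```C++
+template<typename F>
+inline void with(CombiningLock& lock, F&& f)
+{
+  // Fast path: no one is queued, so try to grab the flag lock directly.
+  if (SNMALLOC_LIKELY(lock.last.load(std::memory_order_relaxed) == nullptr))
+  {
+    if (SNMALLOC_LIKELY(!lock.flag.exchange(true, std::memory_order_acquire)))
+    {
+      // We grabbed the lock; execute the lambda and release.
+      f();
+      lock.flag.store(false, std::memory_order_release);
+      return;
+    }
+  }
+
+  // Contended: take the slow path, joining the queue of pending work.
+  CombiningLockNodeTempl node(lock, std::forward<F>(f));
+}
+```
+Only when the fast path fails does a thread pay for the queue node and its extra atomic operations.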
+
+## Performance Results
+
+We have a particularly tough benchmark for testing [lock contention at startup](../src/test/perf/startup/startup.cc).
+We used a machine with 72 hardware threads.
+The benchmark causes all the threads to synchronise on starting their first allocation.
+This means all 72 threads are contending on the lock at the same time to get their allocator initialised.
+
+We ran this benchmark with a standard spin-lock (0.7.0-spin) and with the new combining lock (0.7.0):
+
+![Combining lock performance](combininglockperf.svg)
+
+As we can see from the benchmark, the combining lock is significantly faster than the spin lock in this highly contended scenario, taking only 60% of the time to complete the initialisation.
+
+This benchmark is not at all representative of most scenarios; it stresses a worst case for the system.
+
+## Future Work
+
+There are two things that would make the combining lock even better:
+
+* Back-off strategies
+* Integration with Futex/WaitOnAddress
+
+The current implementation does not have any back-off strategies for its spinning loops.
+This can be bad for performance; however, because the threads of an MCS-style queue lock spin on individual cache lines, some of the worst effects of [spinning are mitigated](https://pdos.csail.mit.edu/papers/linux:lock.pdf).
+For more general-purpose use, though, the combining lock should have a back-off strategy that increases the time between checks the longer a thread waits.
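+
+As a sketch of the shape such a strategy could take (illustrative only, not code from the implementation; `wait_with_backoff` is a name invented for this example, and `Aal::pause()` is snmalloc's processor-pause wrapper):
+```C++
+// Spin with exponential back-off: double the number of pauses between
+// checks of the status, up to a cap, so that long waits generate less
+// cache-coherence and execution-unit pressure.
+inline void wait_with_backoff(std::atomic<LockStatus>& status)
+{
+  size_t pauses = 1;
+  while (status.load(std::memory_order_acquire) == LockStatus::WAITING)
+  {
+    for (size_t i = 0; i < pauses; i++)
+      Aal::pause();
+    if (pauses < 256)
+      pauses *= 2;
+  }
+}
+```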
+
+The second improvement would be to integrate the combining lock with the OS-level support for waiting.
+The spinning should ultimately back off to an OS primitive for waiting, such as `futex` or `WaitOnAddress`.
+A naive integration we tried was slower, so we need to combine this with a more sensible back-off strategy to make it work.
+
+Neither of these improvements is particularly difficult, but we have not found them necessary for the current use case in `snmalloc`.
+
+## Conclusion
+
+The combining lock can be surfaced in C++ with a really simple API that just takes a lambda to execute while holding the lock.
+It was easy to integrate into the `snmalloc` codebase and has provided a significant performance improvement in a highly contended micro-benchmark.
\ No newline at end of file
diff --git a/docs/combininglockperf.svg b/docs/combininglockperf.svg
new file mode 100644
index 000000000..3efc38845
--- /dev/null
+++ b/docs/combininglockperf.svg
@@ -0,0 +1 @@
+[SVG chart (markup lost in extraction): bar chart of "Startup cycles (72 hardware threads)", comparing 0.7.0 and 0.7.0-spin; axis from 0 to 2,500,000]
\ No newline at end of file
diff --git a/src/snmalloc/ds/combininglock.h b/src/snmalloc/ds/combininglock.h
index e5b79eaad..1f3ebf52a 100644
--- a/src/snmalloc/ds/combininglock.h
+++ b/src/snmalloc/ds/combininglock.h
@@ -8,7 +8,7 @@
 namespace snmalloc
 {
-  class CombineLockNode;
+  class CombiningLockNode;
 
   struct CombiningLock
   {
@@ -16,7 +16,7 @@ namespace snmalloc
     std::atomic<bool> flag{false};
 
     // MCS queue of work items
-    std::atomic<CombineLockNode*> head{nullptr};
+    std::atomic<CombiningLockNode*> last{nullptr};
   };
 
   /**
@@ -33,10 +33,10 @@ namespace snmalloc
    * Note that, we should perhaps add a Futex/WakeOnAddress mode to improve
    * performance in the contended case, rather than spinning.
    */
-  class CombineLockNode
+  class CombiningLockNode
  {
     template<typename F>
-    friend class CombineLockNodeTempl;
+    friend class CombiningLockNodeTempl;
 
     enum class LockStatus
     {
@@ -58,10 +58,12 @@ namespace snmalloc
     std::atomic<LockStatus> status{LockStatus::WAITING};
 
     // Used to store the queue
-    std::atomic<CombineLockNode*> next{nullptr};
+    std::atomic<CombiningLockNode*> next{nullptr};
 
     // Stores the C++ lambda associated with this node in the queue.
-    void (*f_raw)(CombineLockNode*);
+    void (*f_raw)(CombiningLockNode*);
+
+    constexpr CombiningLockNode(void (*f)(CombiningLockNode*)) : f_raw(f) {}
 
     void release(CombiningLock& lock)
     {
@@ -73,12 +75,10 @@ namespace snmalloc
       status.store(s, std::memory_order_release);
     }
 
-    constexpr CombineLockNode(void (*f)(CombineLockNode*)) : f_raw(f) {}
-
     SNMALLOC_FAST_PATH void attach(CombiningLock& lock)
     {
       // Test if no one is waiting
-      if (lock.head.load(std::memory_order_relaxed) == nullptr)
+      if (lock.last.load(std::memory_order_relaxed) == nullptr)
       {
         // No one was waiting so low contention. Attempt to acquire the flag
         // lock.
@@ -99,7 +99,7 @@
     {
       // There is contention for the lock, we need to add our work to the
       // queue of pending work
-      auto prev = lock.head.exchange(this, std::memory_order_acq_rel);
+      auto prev = lock.last.exchange(this, std::memory_order_acq_rel);
 
       if (prev != nullptr)
       {
@@ -144,52 +144,50 @@
         // Determine if there are more elements.
         auto n = curr->next.load(std::memory_order_acquire);
-        if (n != nullptr)
-        {
-          // Signal this work was completed and move on to
-          // next item.
-          curr->set_status(LockStatus::DONE);
-          curr = n;
-          continue;
-        }
-
-        // This could be the end of the queue, attempt to close the
-        // queue.
-        auto curr_c = curr;
-        if (lock.head.compare_exchange_strong(
-              curr_c,
-              nullptr,
-              std::memory_order_release,
-              std::memory_order_relaxed))
-        {
-          // Queue was successfully closed.
-          // Notify last element the work was completed.
-          curr->set_status(LockStatus::DONE);
-          release(lock);
-          return;
-        }
-
-        // Failed to close the queue wait for next thread to be
-        // added.
-        while (curr->next.load(std::memory_order_relaxed) == nullptr)
-          Aal::pause();
-
-        // As we had to wait, give the job to the next thread
-        // to carry on performing the work.
-        n = curr->next.load(std::memory_order_acquire);
-        n->set_status(LockStatus::READY);
+        if (n == nullptr)
+          break;
+        // Signal this work was completed and move on to
+        // next item.
+        curr->set_status(LockStatus::DONE);
+        curr = n;
       }
 
-      // Notify the thread that we completed its work.
-      // Note that this needs to be done last, as we can't read
-      // curr->next after setting curr->status
+      // This could be the end of the queue, attempt to close the
+      // queue.
+      auto curr_c = curr;
+      if (lock.last.compare_exchange_strong(
+            curr_c,
+            nullptr,
+            std::memory_order_release,
+            std::memory_order_relaxed))
+      {
+        // Queue was successfully closed.
+        // Notify last element the work was completed.
         curr->set_status(LockStatus::DONE);
+        release(lock);
         return;
       }
+
+      // Failed to close the queue wait for next thread to be
+      // added.
+      while (curr->next.load(std::memory_order_relaxed) == nullptr)
+        Aal::pause();
+
+      // As we had to wait, give the job to the next thread
+      // to carry on performing the work.
+      auto n = curr->next.load(std::memory_order_acquire);
+      n->set_status(LockStatus::READY);
+
+      // Notify the thread that we completed its work.
+      // Note that this needs to be done last, as we can't read
+      // curr->next after setting curr->status
+      curr->set_status(LockStatus::DONE);
+      return;
     }
   };
 
   template<typename F>
-  class CombineLockNodeTempl : CombineLockNode
+  class CombiningLockNodeTempl : CombiningLockNode
   {
     template<typename FF>
     friend void with(CombiningLock&, FF&&);
@@ -197,15 +195,12 @@ namespace snmalloc
     // This holds the closure for the lambda
     F f;
 
-    // Untyped version of calling f to store in the node.
-    static void invoke(CombineLockNode* self)
-    {
-      auto self_templ = reinterpret_cast<CombineLockNodeTempl*>(self);
-      self_templ->f();
-    }
-
-    CombineLockNodeTempl(CombiningLock& lock, F&& f_)
-    : CombineLockNode(invoke), f(f_)
+    CombiningLockNodeTempl(CombiningLock& lock, F&& f_)
+    : CombiningLockNode([](CombiningLockNode* self) {
+        CombiningLockNodeTempl* self_templ =
+          reinterpret_cast<CombiningLockNodeTempl*>(self);
+        self_templ->f();
+      }), f(std::forward<F>(f_))
     {
       attach(lock);
     }
@@ -219,6 +214,6 @@ namespace snmalloc
   template<typename F>
   inline void with(CombiningLock& lock, F&& f)
   {
-    CombineLockNodeTempl node{lock, std::forward<F>(f)};
+    CombiningLockNodeTempl node{lock, std::forward<F>(f)};
   }
 } // namespace snmalloc

From f8604b0c8107f6e088c3282b3699ba56fc5b8b21 Mon Sep 17 00:00:00 2001
From: Matthew Parkinson
Date: Wed, 2 Oct 2024 22:01:18 +0100
Subject: [PATCH 2/4] Small refactoring

---
 docs/combininglock.md           |  8 ++---
 src/snmalloc/ds/combininglock.h | 61 ++++++++++++++++-----------------
 2 files changed, 34 insertions(+), 35 deletions(-)

diff --git a/docs/combininglock.md b/docs/combininglock.md
index ad7fe0a4b..c15aa991f 100644
--- a/docs/combininglock.md
+++ b/docs/combininglock.md
@@ -175,11 +175,11 @@ enum class LockStatus
 {
   WAITING,
   DONE,
-  READY
+  HEAD
 };
 ```
 The `WAITING` state is when the thread is waiting for the lock to become available to it. This roughly corresponds to the `available` field in the MCS queue lock being `false`.
-The `READY` state is when this thread is responsible for completing more work from the queue. This roughly corresponds to the `available` field in the MCS queue lock being `true`.
+The `HEAD` state is when this thread is responsible for completing more work from the queue. This roughly corresponds to the `available` field in the MCS queue lock being `true`.
 The `DONE` state is when another thread has completed the operation (lambda) for this thread. This is a new state that is not present in the MCS queue lock and represents the case where another thread has completed the work for this thread.
 
@@ -276,7 +276,7 @@ inline void with(CombiningLock& lock, F&& f)
 
   // Notify next thread to execute the remaining work.
   auto next = curr->next.load(std::memory_order_acquire);
-  next->status.store(LockStatus::READY, std::memory_order_release);
+  next->status.store(LockStatus::HEAD, std::memory_order_release);
 
   // Notify the current thread that its work has been completed.
   curr->status.store(LockStatus::DONE, std::memory_order_release);
@@ -295,7 +295,7 @@ The `RELEASE` part attempts to remove the last node from the `PERFORM` phase fro
 If this succeeds, then it can simply notify that queue node that its work is `DONE`,
 and return.
 Otherwise, as with the MCS lock, it must wait for a new thread that is joining the queue to set the `next` pointer.
-Then it can notify (`READY`) the newly joined thread that it can execute the remaining work.
+Then it can notify (`HEAD`) the newly joined thread that it can execute the remaining work.
 Finally, it must notify the last node whose lambda it ran that its work is `DONE`.
 The `DONE` notification must happen after the read of `next`, as signalling `DONE` allows the waiting thread to return and free the stack memory containing the node.

diff --git a/src/snmalloc/ds/combininglock.h b/src/snmalloc/ds/combininglock.h
index 1f3ebf52a..214765501 100644
--- a/src/snmalloc/ds/combininglock.h
+++ b/src/snmalloc/ds/combininglock.h
@@ -17,6 +17,11 @@ namespace snmalloc
 
     // MCS queue of work items
     std::atomic<CombiningLockNode*> last{nullptr};
+
+    void release()
+    {
+      flag.store(false, std::memory_order_release);
+    }
   };
 
   /**
@@ -49,7 +54,7 @@ namespace snmalloc
 
       // The work for this thread has not been completed, and it is the
       // head of the queue.
-      READY
+      HEAD
     };
 
     // Status of the queue, set by the thread at the head of the queue,
@@ -65,36 +70,11 @@ namespace snmalloc
 
     constexpr CombiningLockNode(void (*f)(CombiningLockNode*)) : f_raw(f) {}
 
-    void release(CombiningLock& lock)
-    {
-      lock.flag.store(false, std::memory_order_release);
-    }
-
     void set_status(LockStatus s)
     {
       status.store(s, std::memory_order_release);
     }
 
-    SNMALLOC_FAST_PATH void attach(CombiningLock& lock)
-    {
-      // Test if no one is waiting
-      if (lock.last.load(std::memory_order_relaxed) == nullptr)
-      {
-        // No one was waiting so low contention. Attempt to acquire the flag
-        // lock.
-        if (lock.flag.exchange(true, std::memory_order_acquire) == false)
-        {
-          // We grabbed the lock.
-          f_raw(this);
-
-          // Release the lock
-          release(lock);
-          return;
-        }
-      }
-      attach_slow(lock);
-    }
-
     SNMALLOC_SLOW_PATH void attach_slow(CombiningLock& lock)
     {
       // There is contention for the lock, we need to add our work to the
@@ -129,7 +109,7 @@ namespace snmalloc
       }
 
       // We could set
-      // status = LockStatus::Ready
+      // status = LockStatus::HEAD
       // However, the subsequent state assumes it is Ready, and
       // nothing would read it.
     }
@@ -164,7 +144,7 @@ namespace snmalloc
        // Queue was successfully closed.
        // Notify last element the work was completed.
        curr->set_status(LockStatus::DONE);
-        release(lock);
+        lock.release();
        return;
      }
@@ -176,7 +156,7 @@ namespace snmalloc
 
      // As we had to wait, give the job to the next thread
      // to carry on performing the work.
      auto n = curr->next.load(std::memory_order_acquire);
-      n->set_status(LockStatus::READY);
+      n->set_status(LockStatus::HEAD);
 
      // Notify the thread that we completed its work.
      // Note that this needs to be done last, as we can't read
      // curr->next after setting curr->status
@@ -202,7 +182,7 @@ namespace snmalloc
          self_templ->f();
        }), f(std::forward<F>(f_))
      {
-      attach(lock);
+      attach_slow(lock);
      }
    };
@@ -214,6 +194,6 @@ namespace snmalloc
   template<typename F>
   inline void with(CombiningLock& lock, F&& f)
   {
-    CombiningLockNodeTempl node{lock, std::forward<F>(f)};
+    // Test if no one is waiting
+    if (SNMALLOC_LIKELY(lock.last.load(std::memory_order_relaxed) == nullptr))
+    {
+      // No one was waiting so low contention. Attempt to acquire the flag
+      // lock.
+      if (SNMALLOC_LIKELY(lock.flag.exchange(true, std::memory_order_acquire) == false))
+      {
+        // We grabbed the lock.
+        // Execute the thunk.
+        f();
+
+        // Release the lock
+        lock.release();
+        return;
+      }
+    }
+
+    // There is contention for the lock, we need to take the slow path
+    // with the queue.
+    CombiningLockNodeTempl node(lock, std::forward<F>(f));
   }
 } // namespace snmalloc

From d536de4994105c7e0fb05a39d97c9fa1f5f4fc53 Mon Sep 17 00:00:00 2001
From: Matthew Parkinson
Date: Wed, 2 Oct 2024 22:04:07 +0100
Subject: [PATCH 3/4] Clangformat

---
 src/snmalloc/ds/combininglock.h | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/src/snmalloc/ds/combininglock.h b/src/snmalloc/ds/combininglock.h
index 214765501..fa91cdbc2 100644
--- a/src/snmalloc/ds/combininglock.h
+++ b/src/snmalloc/ds/combininglock.h
@@ -180,7 +180,8 @@ namespace snmalloc
        CombiningLockNodeTempl* self_templ =
          reinterpret_cast<CombiningLockNodeTempl*>(self);
        self_templ->f();
-      }), f(std::forward<F>(f_))
+      }),
+      f(std::forward<F>(f_))
     {
       attach_slow(lock);
     }
@@ -199,7 +200,8 @@ namespace snmalloc
     {
       // No one was waiting so low contention. Attempt to acquire the flag
       // lock.
-      if (SNMALLOC_LIKELY(lock.flag.exchange(true, std::memory_order_acquire) == false))
+      if (SNMALLOC_LIKELY(
+            lock.flag.exchange(true, std::memory_order_acquire) == false))
       {
         // We grabbed the lock.
         // Execute the thunk.

From 81e0434448ce574967cd5ad7087005bc61e37d53 Mon Sep 17 00:00:00 2001
From: Matthew Parkinson
Date: Fri, 4 Oct 2024 20:15:46 +0100
Subject: [PATCH 4/4] CR

---
 docs/combininglock.md           |  8 +++++---
 src/snmalloc/ds/combininglock.h | 16 +++++++++++-----
 2 files changed, 16 insertions(+), 8 deletions(-)

diff --git a/docs/combininglock.md b/docs/combininglock.md
index c15aa991f..a609d5d11 100644
--- a/docs/combininglock.md
+++ b/docs/combininglock.md
@@ -13,7 +13,7 @@ To further improve the startup time, we introduced a new [_combining lock_](../s
 The core idea of the combining lock is that it holds a queue of operations that need to be executed while holding the lock,
 and then the thread holding the lock can execute multiple threads' operations in a single lock acquisition.
 This can reduce the number of cache misses as a single thread performs multiple updates,
-and thus overall the system will have fewer cache misses.
+and thus overall the system may have fewer cache misses.
 
 The rest of this document will describe the combining lock in more detail.
@@ -274,12 +274,14 @@ inline void with(CombiningLock& lock, F&& f)
   while (curr->next.load(std::memory_order_relaxed) == nullptr)
     ;
 
-  // Notify next thread to execute the remaining work.
+  // Read the next thread
   auto next = curr->next.load(std::memory_order_acquire);
-  next->status.store(LockStatus::HEAD, std::memory_order_release);
 
   // Notify the current thread that its work has been completed.
   curr->status.store(LockStatus::DONE, std::memory_order_release);
+  // Notify the next thread it is the head of the queue, and should
+  // perform operations from the queue.
+  next->status.store(LockStatus::HEAD, std::memory_order_release);
   return;
 }
 ```

diff --git a/src/snmalloc/ds/combininglock.h b/src/snmalloc/ds/combininglock.h
index fa91cdbc2..338c94b47 100644
--- a/src/snmalloc/ds/combininglock.h
+++ b/src/snmalloc/ds/combininglock.h
@@ -110,7 +110,7 @@ namespace snmalloc
       }
 
       // We could set
       // status = LockStatus::HEAD
-      // However, the subsequent state assumes it is Ready, and
+      // However, the subsequent state assumes it is HEAD, and
       // nothing would read it.
     }
@@ -119,11 +119,15 @@ namespace snmalloc
       auto curr = this;
       while (true)
       {
+        // Start pulling in the next element of the queue
+        auto n = curr->next.load(std::memory_order_acquire);
+        Aal::prefetch(n);
+
        // Perform work for head of the queue
        curr->f_raw(curr);
 
        // Determine if there are more elements.
-        auto n = curr->next.load(std::memory_order_acquire);
+        n = curr->next.load(std::memory_order_acquire);
        if (n == nullptr)
          break;
        // Signal this work was completed and move on to
        // next item.
        curr->set_status(LockStatus::DONE);
        curr = n;
      }
@@ -153,14 +157,16 @@ namespace snmalloc
      while (curr->next.load(std::memory_order_relaxed) == nullptr)
        Aal::pause();
 
+      auto n = curr->next.load(std::memory_order_acquire);
+
      // As we had to wait, give the job to the next thread
      // to carry on performing the work.
-      auto n = curr->next.load(std::memory_order_acquire);
      n->set_status(LockStatus::HEAD);
 
      // Notify the thread that we completed its work.
-      // Note that this needs to be done last, as we can't read
-      // curr->next after setting curr->status
+      // Note that this needs to be before setting curr->status,
+      // as after the status is set the thread may deallocate the
+      // queue node.
      curr->set_status(LockStatus::DONE);
      return;
    }