Alternative remote queue initialisation #586

Closed
mjp41 opened this issue Feb 2, 2023 · 2 comments

mjp41 commented Feb 2, 2023

Problem

The current implementation of the remote queue is designed to always contain at least one element.
This is essential for efficient operation, as it removes branches from fast paths, but it leads to complexity.
At initialisation time, this means we need to allocate a single 16-byte object to prime the queue:

auto dummy = capptr::Alloc<void>(small_alloc_one(MIN_ALLOC_SIZE))
               .template as_static<freelist::Object::T<>>();
if (dummy == nullptr)
{
error("Critical error: Out-of-memory during initialisation.");
}
message_queue().init(dummy);

This forces a 16-byte size class into existence, and that size class is unlikely ever to go away.

This issue is about altering the way the queue works, so that it can still always appear to contain one element, but without
the need to bring a 16-byte size class into being.

Current implementation

We could take an alternative approach to initialising the remote allocator. Currently, there are two fields that represent it:

// Store the message queue on a separate cacheline. It is mutable data that
// is read by other threads.
alignas(CACHELINE_SIZE) freelist::AtomicQueuePtr back{nullptr};
// Store the two ends on different cache lines as access by different
// threads.
alignas(CACHELINE_SIZE) freelist::QueuePtr front{nullptr};

Once we initialise our queue, it looks like:

              back
                |
                v
front ------> dummy -next-> nullptr

and then, once it receives the first deallocation, it looks like:

                            back
                              |
                              v
front ------> dummy -next-> first -next-> nullptr

It is only when there are at least two elements in the queue that we can remove one. Hence, after the first deallocation, we can dequeue the dummy element:

                            back
                              |
                              v
front  ------------------>  first -next-> nullptr

This illustrates the current implementation.
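To make the priming requirement concrete, here is a hypothetical, single-threaded sketch of the current scheme. It uses plain pointers instead of snmalloc's atomic `capptr` and freelist types; `Node`, `DummyPrimedQueue`, and their members are illustrative names, not snmalloc's API.

```cpp
#include <cassert>
#include <cstddef>

// Single-threaded sketch only: the real queue is an MPSC queue using
// atomics, and front/back live on separate cache lines.
struct Node
{
  Node* next = nullptr;
};

struct DummyPrimedQueue
{
  Node* front = nullptr;
  Node* back = nullptr;

  // Priming: the queue must always hold at least one element, so a
  // dummy object has to be allocated and supplied up front.
  void init(Node* dummy)
  {
    dummy->next = nullptr;
    front = dummy;
    back = dummy;
  }

  bool is_empty()
  {
    return back == front;
  }

  void enqueue(Node* n)
  {
    n->next = nullptr;
    back->next = n;
    back = n;
  }

  // An element is only removable once at least two are present, so the
  // queue never drops below one element.
  Node* dequeue()
  {
    if (front->next == nullptr)
      return nullptr;
    Node* old = front;
    front = front->next;
    return old;
  }
};
```

Walking this through the diagrams above: after `init`, `front == back == dummy` and the queue reports empty; after the first enqueue it is non-empty; the first dequeue returns the dummy, after which `first` remains as the single lingering element.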

To detect if a queue is empty, we currently perform:

inline bool is_empty()
{
  freelist::QueuePtr bk = back.load(std::memory_order_relaxed);
  return bk == front;
}

However, it should work equally well to perform:

front->next == nullptr

Proposal

I propose we implement this using two stub objects to build the initial free queue:

  alignas(CACHELINE_SIZE) freelist::AtomicQueuePtr back{nullptr};
  alignas(CACHELINE_SIZE) freelist::Object::T stub1{};
  alignas(CACHELINE_SIZE) freelist::Object::T stub2{};

The state is initialised by:

  stub1.set_next(stub2);
  stub2.set_next(nullptr);
  back = &stub1;

From the point of view of the next fields, this effectively looks like a single-element queue; however, back is pointing to
stub1.

back
  |
  v
stub1 -next-> stub2 -next-> nullptr

This means that once we push one element onto the queue after initialisation, it still looks like a single-element queue:

              back
                |
                v
stub1 -next-> first -next-> nullptr

Now, we can treat the queue as before.

The is_empty check is essentially the same:

(stub1.next)->next == nullptr
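The same single-threaded sketch style as above, adapted to the proposal (again, `Node`, `TwoStubQueue`, and their members are illustrative names; the real implementation would use atomics and the freelist object types):

```cpp
#include <cassert>
#include <cstddef>

// Single-threaded sketch of the proposed two-stub scheme.
struct Node
{
  Node* next = nullptr;
};

struct TwoStubQueue
{
  Node* back = nullptr;
  Node stub1;
  Node stub2;

  // No allocation needed: the queue is primed with the inline stubs.
  void init()
  {
    stub1.next = &stub2;
    stub2.next = nullptr;
    back = &stub1;
  }

  bool is_empty()
  {
    return stub1.next->next == nullptr;
  }

  void enqueue(Node* n)
  {
    n->next = nullptr;
    back->next = n;  // the first enqueue overwrites stub1.next,
    back = n;        // unlinking stub2, as in the diagram above
  }

  // As before, an element is only removable once a second one follows it.
  Node* dequeue()
  {
    Node* f = stub1.next;
    if (f->next == nullptr)
      return nullptr;
    stub1.next = f->next;
    return f;
  }
};
```

Note that after the first enqueue the queue still reports empty, matching the text: the first element lingers as the unremovable last element, exactly as the dummy-role element does in the current scheme.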

Challenge

The main challenge in implementing this is the plumbing for handling domestication. Otherwise, it is a fairly straightforward change.


mjp41 commented Feb 2, 2023

@nwf-msr what do you think?

mjp41 mentioned this issue Mar 23, 2023

mjp41 commented Aug 9, 2023

This is sufficiently handled by #604.

mjp41 closed this as completed Aug 9, 2023