RFC: Allocator trait #39
Conversation
This looks good to me.
The old size passed to `realloc` and `dealloc` is an optional performance enhancement. There is some debate about whether this is worth having. I have included it because there is no drawback within the current language and standard libraries, so it's an obvious performance enhancement.
Do you happen to have some hard numbers for this improvement? I'm sure there were benchmarks run to convince people to add the parameters to the C++ allocators, but a cursory search didn't find anything about sized deallocation vs. non-sized deallocation.
Do you happen to have some hard numbers for this improvement?
I can tell you that it removes the need for metadata headers from simple allocators like those based solely on free lists for various size classes. It will reduce memory usage by up to 50%, and that comes with a large performance benefit.
For general purpose allocators where this data may already be around, it's still faster. In jemalloc and TCMalloc, an optional size parameter will save at least one cache miss. Taking advantage of the guarantee would require a complete redesign.
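To make the free-list case concrete, here is a minimal sketch (my own illustration, not part of the RFC) of a size-class allocator whose dealloc relies on the caller-supplied size; the name `FreeListAllocator` and its internals are assumptions, written in the RFC-era syntax used elsewhere in this thread:

struct FreeListAllocator {
    // one intrusive free list per power-of-two size class (16 bytes up to 512 KiB)
    free_lists: [*mut u8, ..16],
}

impl FreeListAllocator {
    fn size_class(size: uint) -> uint {
        // index of the smallest class whose block size (16 << class) fits `size`;
        // oversized requests would take a separate large-allocation path (elided)
        let mut class = 0u;
        while (16u << class) < size { class += 1; }
        class
    }

    unsafe fn dealloc(&mut self, ptr: *mut u8, size: uint, _align: u32) {
        let class = FreeListAllocator::size_class(size);
        // the freed block itself becomes the free-list node: its first word
        // stores the previous head of the list, so no per-allocation header
        // is ever needed, which is where the memory savings come from
        *(ptr as *mut *mut u8) = self.free_lists[class];
        self.free_lists[class] = ptr;
    }
}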
When an object is deallocated, we compute its page number and look it up in the central array to find the corresponding span object. The span tells us whether or not the object is small, and its size-class if it is small.
After an
I want to stick to the trait itself first rather than talking about containers. I don't think trait objects are a good solution because they would add space overhead to all containers, even though most will be using an allocator instance without any state.
/// Return the usable size of an allocation created with the specified `size` and `align`.
#[inline(always)]
#[allow(unused_variable)]
unsafe fn usable_size(&self, size: uint, align: u32) -> uint { size }
Are you sure it's a good idea to have this separate as opposed to either of these options:
- Having malloc and realloc return a tuple of (ptr, usable_size)
- Keeping this as a function, but forcing the user to call it before malloc and realloc and pass the return value to them instead of the normal size

The problem with this design is that usable_size will usually have to be computed twice, both in malloc and when the allocator user calls usable_size.
To decide between the alternatives: is it useful to know the usable size given the size without actually allocating memory? Is it useful to be able to write an allocator where the usable size is not constant given the size?
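For reference, a rough sketch of the first alternative (the tuple return); the trait name and exact signatures here are my assumption, not something from the RFC:

pub trait AllocatorAlt {
    /// Returns the allocation together with its usable size, so the usable
    /// size is computed exactly once, inside the allocator.
    unsafe fn alloc(&self, size: uint, align: u32) -> (*mut u8, uint);
    unsafe fn dealloc(&self, ptr: *mut u8, size: uint, align: u32);
}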
I don't think it's useful to write an allocator where the usable size is not constant given the size. The excess space comes from statically defined size classes or from page granularity. It seems only a very naive allocator would end up in a situation where it would have dynamic trailing capacity unusable by other allocations.
I can see why the current API is not ideal for very cheap allocators, and the size calculation is likely not a huge cost with jemalloc. I'll need to think about this more.
I edited the above comment a bit based on my opinion changing.
I don't think it's useful to write an allocator where the usable size is not constant given the size.
I'm not certain about this assumption. For example, suppose you have an allocator tuned for large allocations. If it receives a request for a 4 kB region and has a 4.5 kB free block, it might want to return the whole block because it knows it will never get a request that fits in 0.5 kB.
Additionally, you may be right, but I see opening up the possibility for more allocators as more important than making the API (which probably shouldn't be used directly) prettier.
It would be quite odd to have space after an allocation that's not usable for future small allocations though. If you're packing stuff together via bitmaps, you'll be able to reuse that space for something else. If you're not, you won't end up with unusable gaps anyway because size classes prevent it.
I'm tempted to change how it works for another reason, which is that some allocators can avoid an extra calculation that won't always be constant folded by LLVM.
I admit that my example is highly unlikely, but I think it would arise if the following were true:
- The allocator is only used for large arrays, meaning that keeping track of small chunks of memory is not very useful.
- The allocated objects remain allocated for a long time, meaning that coalescing adjacent free regions is not very useful, making small chunks even less useful.
- The user of the allocator can use any extra memory provided.
- The sizes of allocations are neither constant nor rounded to nice (e.g. page size) values, meaning that size classes would waste lots of memory.
I can only think of one use case right now that satisfies these requirements (an unrolled linked list with a gradually increasing unrolling amount), but I think it could cause the situation I described. Because of the weirdly sized allocations, there might be a 4.5 kB free block. Because only long arrays are allocated, no request will fit in a 0.5 kB block. Because of the long object lifetime, there is no reason to suspect that a 0.5 kB block can be merged into another block in any reasonable timeframe. Taking all of that into account, returning all 4.5 kB is the logical thing to do.
I completely agree that for the vast majority of allocators, especially general purpose allocators, this sort of behavior is not going to show up. However, it might be useful in very specialized allocators, and I don't see a convincing argument that you would never need the ability to dynamically choose usable sizes in general purpose allocators - the lack of need in current systems and classes of systems just tells me that no one has thought of a use yet.
Moreover, I don't see the downside of returning the usable size. On the implementation side, it only makes things marginally more complicated, as the usable size is presumably already calculated, and so can simply be returned. On the user side, the complication should be contained to the standard library, as no one should be directly calling allocators - they should use predefined `new` functions.
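As a concrete (hypothetical) sketch of the unrolled-linked-list case: a node could simply absorb whatever usable size the allocator reports, using the tuple-returning alloc sketched earlier. Everything here, including the `Node` layout and `new_node` helper, is an illustration rather than proposed API:

struct Node<T> {
    next: *mut Node<T>,
    len: uint,
    capacity: uint, // filled in from the usable size the allocator reported
    // elements are stored inline after this header
}

unsafe fn new_node<T, A: AllocatorAlt>(alloc: &A, requested_elems: uint) -> *mut Node<T> {
    let header = std::mem::size_of::<Node<T>>();
    let elem = std::mem::size_of::<T>();
    // null check on the returned pointer omitted for brevity
    let (ptr, usable) = alloc.alloc(header + requested_elems * elem, 8);
    let node = ptr as *mut Node<T>;
    // if the allocator hands back a 4.5 kB block for a 4 kB request, the
    // extra 0.5 kB simply becomes additional capacity for this node
    (*node).next = std::ptr::mut_null();
    (*node).len = 0;
    (*node).capacity = (usable - header) / elem;
    node
}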
Passing the size to the deallocator: I really hope this goes through. Without it, there are messy workarounds to get a similar result. It seems to me the option in the interface to exploit the fact that collections can compute how much memory they point at should be there; you have more choice in how to implement allocators, more control. Headers are a big deal for situations with alignment: the wrong bytes in the wrong place can pad things out or overstep cache lines. Some of the workarounds rely on an MMU, which you might not always have, or you might need a larger page size.
I think it might be a good idea to have alloc return the actual size of an allocation instead of having a specific function for it, for the sake of certain allocators.
@Tobba that's being discussed in more detail above: #39 (comment)
How will I be able to allocate a closure over arena-allocated variables inside an arena itself? Or, more generally, how will closures interact with this proposal?
There's no interaction between closures/collections and this proposal. It's just an API for low-level allocators to conform to. Adding type parameters to collections will come later. The hard-wired unique pointers and proc types will likely never support an allocator parameter, but there can be library replacements (possibly obsoleting them) with allocator support. Anyway, it's way out of the scope of this RFC.
Why take a
It might be nice to have some trait that says that a certain type can be safely null-initialized; this way, optimizations based on something similar like All
One can allocate huge vectors for "free": a newly allocated vector (i.e. length 0) doesn't need to be zeroed. Are you considering the case when allocating something like
@huonw Yes, I'm talking about something like this. However, if this null-initialization is added to the language, then it'll probably be more like something along the lines of
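A minimal sketch of the kind of marker trait being floated here; the name `ZeroInitializable` and these impls are assumptions for illustration, not anything proposed in the RFC:

/// Hypothetical marker trait: an all-zero bit pattern is a valid value of
/// the type, so zeroed memory from the allocator counts as initialized.
trait ZeroInitializable {}

impl ZeroInitializable for u8 {}
impl ZeroInitializable for uint {}
// types containing references or enums would need case-by-case justification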
Don't you think allocators should be cloneable? Other than that, the design looks fine as long as low-level allocators aren't used directly. Allocators that clear the memory belong to a special kind, I suppose. A separate trait

@eduardoleon Containers should store the allocator by value of type

I believe in most cases, the

In addition to the pluggable default allocator (with lang item?), some allocators could implement

#[deriving(Clone, Default)]
pub struct DefaultAllocator;

impl<T, A: Allocator + Default> Vec<T, A> {
    #[inline(always)]
    pub fn new() -> Vec<T, A> {
        let alloc: A = Default::default();
        Vec::with_alloc(alloc)
    }

    #[inline(always)]
    pub fn with_capacity(capacity: uint) -> Vec<T, A> {
        let alloc: A = Default::default();
        Vec::with_alloc_capacity(alloc, capacity)
    }
}
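A usage sketch for the snippet above (assuming `Vec` gains the allocator type parameter shown):

// The allocator type is part of the container's type; a zero-sized allocator
// like DefaultAllocator adds no per-container storage.
let v: Vec<int, DefaultAllocator> = Vec::with_capacity(16);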
@tbu-: Zero initialization of the memory is covered in the RFC. It can be added in a backwards-compatible way so I think it's out of the scope of this discussion. It's useful because
@pczarn: Inheriting from
I'm not really sure what cloning an allocator with state would be expected to do though.
@thestinger: It would just clone its immutable state. However, containers that require an allocator to be cloneable could use trait bounds, for example when implementing
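A sketch of what that bound might look like; the `alloc` field and the `with_alloc_capacity` constructor follow the earlier hypothetical snippet, and the impl as a whole is an illustration rather than proposed standard-library code:

impl<T: Clone, A: Allocator + Clone> Clone for Vec<T, A> {
    fn clone(&self) -> Vec<T, A> {
        // only containers whose allocator is Clone get a Clone impl;
        // other containers can still use a non-cloneable allocator
        let mut new_vec = Vec::with_alloc_capacity(self.alloc.clone(), self.len());
        for x in self.iter() {
            new_vec.push(x.clone());
        }
        new_vec
    }
}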
What would
@thestinger: Still, allocators may or may not need to implement * or
I found an interesting possibility regarding how to handle allocators. At the present moment, Rust has two fundamental notions: types and terms, where every term has a type. I think that safely talking about allocation phenomena requires a third notion: regions. In some ways, regions are like types:
In some other ways, regions are like terms:
A proof of concept of how a region system could be implemented in a language runtime is here: https://gist.github.com/eduardoleon/10785631. My C is really bad, so there could be bugs in that code.
@eduardoleon: This RFC just intends to create a standard interface for low-level allocators, rather than work out the high-level API exposed from the standard library to users. It needs to map well to the API provided by allocators like jemalloc, tcmalloc and hoard while also working well for more naive special cases like a free list per size class or arenas based on bitmaps. Rust already has a concept of lifetimes based on region typing, and I don't really think it would benefit from more of this. It already has a working system for handling destruction and lightweight references efficiently. An arena allocator will already bestow a lifetime on the containers it's used in and the references created from those simply by storing references to the arena in the container. Alternatively, it can be done dynamically by using
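As a sketch of that point about arenas (my own illustration: `Arena` and its `alloc_bytes` method are assumed, and the trait shape follows the excerpt quoted just below), borrowing the arena is enough to give every container built with this allocator the arena's lifetime:

struct ArenaAllocator<'a> {
    arena: &'a Arena, // borrowing the arena ties users to its lifetime 'a
}

impl<'a> Allocator for ArenaAllocator<'a> {
    unsafe fn alloc(&self, size: uint, align: u32) -> *mut u8 {
        self.arena.alloc_bytes(size, align) // assumed arena method
    }

    unsafe fn dealloc(&self, _ptr: *mut u8, _size: uint, _align: u32) {
        // individual deallocation is a no-op: the arena frees everything
        // at once when it is dropped
    }
}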
pub trait Allocator {
    /// Return a pointer to `size` bytes of memory.
    ///
    /// A null pointer may be returned if the allocation fails.
What's the alternative behavior, calling `fail!`? Just making sure I understand what the expectation is here.
(Or maybe this is just an English ambiguity; are you saying "A null pointer may be returned. In particular, a null pointer is returned if and only if the allocation fails"? Or are you saying "If the allocation fails, then a null pointer may be returned, or something else may happen.")
I'll rephrase this to say that returning a null pointer indicates a failed allocation. The alternative would be calling `abort` or `fail!()`, preventing this trait from being used where handling out-of-memory is necessary.
I need to update this based on the feedback and on some of the API changes that landed for the transitional jemalloc-based heap, so take the current proposal with a grain of salt (although more feedback is welcome). It's a bit tricky to create a good API that maps well to the needs of both simpler user-defined allocators and jemalloc.
Closing. Allocators are affected by GC, and this design is out of date. Will revisit soon.