
RFC: Allocator trait #39

Closed
wants to merge 1 commit into from

Conversation

thestinger

No description provided.

@emberian
Member

emberian commented Apr 8, 2014

This looks good to me.


The old size passed to `realloc` and `dealloc` is an optional performance enhancement. There is some
debate about whether this is worth having. I have included it because there is no drawback within
the current language and standard libraries, so it's an obvious performance enhancement.
Member

Do you happen to have some hard numbers for this improvement? I'm sure there were benchmarks run to convince people to add the parameters to the C++ allocators, but a cursory search didn't find anything about sized deallocation vs. non-sized deallocation.

Author

Do you happen to have some hard numbers for this improvement?

I can tell you that it removes the need for metadata headers from simple allocators like those based solely on free lists for various size classes. It will reduce memory usage by up to 50%, and that comes with a large performance benefit.

For general purpose allocators where this data may already be around, it's still faster. In jemalloc and TCMalloc, an optional size parameter will save at least one cache miss. Taking advantage of the guarantee would require a complete redesign.

When an object is deallocated, we compute its page number and look it up in the central array to find the corresponding span object. The span tells us whether or not the object is small, and its size-class if it is small.
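To make the header-saving point concrete, here is a hedged sketch (not part of the RFC; all names and the class table are illustrative) of a free-list allocator with one list per size class, where the size passed to `dealloc` replaces a per-block metadata header:

```rust
// Illustrative sketch: a free-list allocator with one list per size
// class. Because `dealloc` receives the size, the allocator can find the
// right free list directly; without it, the size class would have to be
// stored in a metadata header next to each block.

const NUM_CLASSES: usize = 8;

/// Map a request to its size-class index; classes are powers of two
/// from 8 bytes (class 0) to 1024 bytes (class 7).
fn size_class(size: usize) -> usize {
    let mut class = 0;
    while (8usize << class) < size {
        class += 1;
    }
    class
}

#[derive(Default)]
struct FreeListAllocator {
    // One stack of recycled block addresses per size class.
    free_lists: [Vec<usize>; NUM_CLASSES],
}

impl FreeListAllocator {
    fn alloc(&mut self, size: usize) -> Option<usize> {
        self.free_lists[size_class(size)].pop()
    }

    fn dealloc(&mut self, ptr: usize, size: usize) {
        // The caller-supplied size identifies the class: no header read,
        // and no per-allocation header memory overhead.
        self.free_lists[size_class(size)].push(ptr);
    }
}
```

Without the size parameter, each block would need a header recording its class, which is exactly the up-to-50% overhead mentioned above for small size classes.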

@eduardoleon

After an Allocator trait is added, I think collections (and other types whose contents are primarily dynamically allocated) should...

  1. ... be parametrized over allocators:

    pub struct Queue<T> {
         priv alloc: ~Allocator,  // unique ownership is justified in that an Allocator object
                                  // merely represents a handle for using an allocator, not the
                                  // allocator itself
         // queue-specific stuff...
    }
    
  2. ... implement traits that support moving and cloning objects into a memory region managed by a different allocator:

    pub trait MoveWhere {
        fn moveWhere<A: Allocator>(self, alloc: A) -> Self;
    }
    
    pub trait CloneWhere {
        fn cloneWhere<A: Allocator>(&self, alloc: A) -> Self;
    }
    

@thestinger
Author

I want to stick to the trait itself first rather than talking about containers. I don't think trait objects are a good solution because it will add space overhead to all containers, even though most will be using an allocator instance without any state.

/// Return the usable size of an allocation created with the specified `size` and `align`.
#[inline(always)]
#[allow(unused_variable)]
unsafe fn usable_size(&self, size: uint, align: u32) -> uint { size }


Are you sure it's a good idea to have this separate as opposed to either of those options:

  1. Having malloc and realloc return a tuple of (ptr, usable_size)
  2. Keeping this as a function, but forcing the user to call it before malloc and realloc and pass the return value to them instead of the normal size

The problem with this design is that usable_size will usually have to be computed twice, both in malloc and when the allocator user calls usable_size.

To decide between these alternatives: is it useful to know the usable size given the size without actually allocating memory? Is it useful to be able to write an allocator where the usable size is not constant given the size?
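As an illustration of alternative 1, here is a hypothetical sketch (the trait shape and the power-of-two rounding are assumptions, not from the RFC) where alloc returns the usable size alongside the pointer, so it is computed only once:

```rust
// Hypothetical sketch of alternative 1: `alloc` returns the usable size
// together with the pointer, so the size-class computation happens once.
// The power-of-two rounding stands in for a real size-class table.

use std::alloc::{alloc, Layout};

trait TupleAllocator {
    /// Returns the pointer and the usable size, which may exceed `size`.
    unsafe fn alloc(&mut self, size: usize, align: usize) -> (*mut u8, usize);
}

struct ToyAllocator;

impl TupleAllocator for ToyAllocator {
    unsafe fn alloc(&mut self, size: usize, align: usize) -> (*mut u8, usize) {
        // Compute the usable size once, and hand it back to the caller
        // instead of requiring a second `usable_size` query.
        let usable = size.max(align).next_power_of_two();
        let layout = Layout::from_size_align(usable, align).unwrap();
        (alloc(layout), usable)
    }
}
```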

Author

I don't think it's useful to write an allocator where the usable size is not constant given the size. The excess space comes from statically defined size classes or excess space from the page granularity. It seems only a very naive allocator would end up in a situation where it would have dynamic trailing capacity unusable by other allocations.

I can see why the current API is not ideal for very cheap allocators, and the size calculation is likely not a huge cost with jemalloc. I'll need to think about this more.

Author

I edited the above comment a bit based on my opinion changing.


I don't think it's useful to write an allocator where the usable size is not constant given the size.

I'm not certain about this assumption. For example, suppose you have an allocator tuned for large allocations. If it receives a request for a 4 kB region and has a 4.5 kB free block, it might want to return the whole block because it knows it will never get a request that fits in 0.5 kB.

Additionally, you may be right, but I see opening up the possibility for more allocators as more important than making the API (which probably shouldn't be used directly) prettier.

Author

It would be quite odd to have space after an allocation that's not usable for future small allocations though. If you're packing stuff together via bitmaps, you'll be able to reuse that space for something else. If you're not, you won't end up with unusable gaps anyway because size classes prevent it.
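One way to see why size classes prevent unusable gaps is a slab sketch (illustrative, not from the RFC): every slot in a slab has the same class size, so a freed slot always fits a later request of that class.

```rust
// Illustrative sketch: a slab holding one size class. Every slot has the
// same class size, so a freed slot is always reusable by a later request
// of that class, and no unusable gap can form between blocks.

struct Slab {
    base: usize,      // pretend base address of the slab
    slot_size: usize,
    free: Vec<usize>, // offsets of free slots, lowest offset on top
}

impl Slab {
    fn new(base: usize, slot_size: usize, nslots: usize) -> Slab {
        Slab {
            base,
            slot_size,
            free: (0..nslots).rev().map(|i| i * slot_size).collect(),
        }
    }

    fn alloc(&mut self) -> Option<usize> {
        self.free.pop().map(|off| self.base + off)
    }

    fn dealloc(&mut self, addr: usize) {
        // A freed slot goes straight back on the free list; it is always
        // exactly one class-sized unit, never an odd-sized remainder.
        debug_assert_eq!((addr - self.base) % self.slot_size, 0);
        self.free.push(addr - self.base);
    }
}
```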

Author

I'm tempted to change how it works for another reason, which is that some allocators can avoid an extra calculation that won't always be constant folded by LLVM.


I admit that my example is highly unlikely, but I think it would arise if the following were true:

  • The allocator is only used for large arrays, meaning that keeping track of small chunks of memory is not very useful.
  • The allocated objects remain allocated for a long time, meaning that coalescing adjacent free regions is not very useful, making small chunks even less useful.
  • The user of the allocator can use any extra memory provided.
  • The sizes of allocations are neither constant nor rounded to nice (e.g. page size) values, meaning that size classes would waste lots of memory.

I can only think of one use case right now that satisfies these requirements (an unrolled linked list with a gradually increasing unrolling amount), but I think it could cause the situation I described. Because of the weirdly sized allocations, there might be a 4.5 kB free block. Because only long arrays are allocated, no request will fit in a 0.5 kB block. Because of the long object lifetime, there is no reason to suspect that a 0.5 kB block can be merged into another block in any reasonable timeframe. Taking all of that into account, returning all 4.5 kB is the logical thing to do.

I completely agree that for the vast majority of allocators, especially general purpose allocators, this sort of behavior is not going to show up. However, it might be useful in very specialized allocators, and I don't see a convincing argument that you would never need the ability to dynamically choose usable sizes in general purpose allocators - the lack of need in current systems and classes of systems just tells me that no one has thought of a use yet.

Moreover, I don't see the downside of returning the usable size. On the implementation side, it only makes things marginally more complicated, as the usable size is presumably already calculated, and so can simply be returned. On the user side, the complication should be contained to the standard library, as no one should be directly calling allocators - they should use predefined new functions.

@dobkeratops

Passing the size to the deallocator: I really hope this goes through. Without it, there are messy workarounds to get a similar result. It seems to me the interface should have the option to exploit the fact that collections can compute how much memory they point at; it gives you more choice in how to implement allocators, and more control. Headers are a big deal in situations with alignment: the wrong bytes in the wrong place can pad things out or overstep cache lines. Some of the workarounds rely on an MMU, which you might not always have, or you might need a larger page size.
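The header/alignment interaction can be sketched with assumed numbers (not from the thread): with a 64-byte-aligned payload, even an 8-byte inline header costs a whole extra alignment unit, because the payload itself must start on the boundary.

```rust
// Back-of-the-envelope illustration (numbers are assumptions): if each
// block carries an inline size header immediately before an aligned
// payload, the header space must be rounded up to a full alignment unit
// so that the payload still starts on the boundary.

fn bytes_reserved(payload: usize, align: usize, header: usize) -> usize {
    // Round the header up to the alignment; zero header means zero
    // extra space (the sized-dealloc case).
    let header_space = if header == 0 {
        0
    } else {
        (header + align - 1) / align * align
    };
    header_space + payload
}
```

For a cache-line-sized, cache-line-aligned object, an 8-byte header doubles the reserved memory, which is the kind of padding cost described above.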

@Tobba

Tobba commented Apr 10, 2014

I think it might be a good idea to have alloc return the actual size of an allocation instead of having a specific function for it, which would benefit certain allocators.

@huonw
Member

huonw commented Apr 10, 2014

@Tobba that's being discussed in more detail above: #39 (comment)

@cgaebel

cgaebel commented Apr 12, 2014

How will I be able to allocate a closure over arena-allocated variables inside an arena itself? Or, more generally, how will closures interact with this proposal?

@thestinger
Author

There's no interaction between closures/collections and this proposal. It's just an API for low-level allocators to conform to. Adding type parameters to collections will come later. The hard-wired unique pointers and proc types will likely never support an allocator parameter, but there can be library replacements (possibly obsoleting them) with allocator support. Anyway, it's way out of the scope of this RFC.

@anasazi

anasazi commented Apr 14, 2014

Why take a u32 for alignment and require it to be some 2^n when it opens up so much potential for undefined behavior? You could just take the n instead and eliminate a whole class of usage bugs.
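A minimal sketch of this suggestion (illustrative names): taking the exponent makes every argument a valid alignment by construction, whereas a raw alignment argument forces the callee to validate.

```rust
// Sketch of the suggestion above: accept the exponent `n` rather than
// the alignment itself, so every argument value encodes a valid
// power-of-two alignment and the invalid-input class disappears.

fn align_from_log2(n: u32) -> usize {
    1usize << n // always a power of two; no validation needed
}

// With a raw alignment argument, the callee must validate instead:
fn is_valid_align(align: usize) -> bool {
    align.is_power_of_two()
}
```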

@tbu-
Contributor

tbu- commented Apr 14, 2014

It might be nice to have some trait that says that a certain type can be safely null-initialized; this way, optimizations similar to calloc's could be done.

All ints, maybe bool, float and Option<PtrType> would benefit from this, and it would make allocating huge Vec<>s of them relatively cheap.
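A hedged sketch of what such a trait could look like (ZeroInit and zeroed_vec are hypothetical names, not proposed API), with alloc_zeroed standing in for the calloc-style path:

```rust
// Hedged sketch of the idea: for types whose all-zero bit pattern is a
// valid value, the allocator can hand back pre-zeroed memory (as calloc
// does) and a large vector needs no per-element initialization.
// `ZeroInit` is a hypothetical marker trait, implemented here for `u32`.

use std::alloc::{alloc_zeroed, Layout};

/// Marker: the all-zero bit pattern is a valid value of `Self`.
unsafe trait ZeroInit: Sized {}
unsafe impl ZeroInit for u32 {}

fn zeroed_vec<T: ZeroInit>(len: usize) -> Vec<T> {
    assert!(len > 0, "zero-sized allocations need a separate path");
    let layout = Layout::array::<T>(len).unwrap();
    unsafe {
        let ptr = alloc_zeroed(layout) as *mut T;
        assert!(!ptr.is_null(), "allocation failed");
        // Sound because all-zero bytes are a valid T for ZeroInit types,
        // and the layout matches what Vec expects of the global allocator.
        Vec::from_raw_parts(ptr, len, len)
    }
}
```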

@huonw
Member

huonw commented Apr 14, 2014

One can allocate huge vectors for "free": a newly allocated vector (i.e. length 0) doesn't need to be zeroed. Are you considering the case when allocating something like Vec::from_elem(1_000_000, 0) which has length > 0?

@tbu-
Contributor

tbu- commented Apr 14, 2014

@huonw Yes, I'm talking about something like this. However, if this null-initialization will be added to the language, then it'll probably be more like something along the lines of Vec::from_null(1_000_000) where this function is only defined for Ts that satisfy some Zero trait.

@pczarn

pczarn commented Apr 14, 2014

Don't you think allocators should be cloneable? Other than that, the design looks fine as long as low-level allocators aren't used directly.

Allocators that clear the memory belong to a special kind, I suppose. A separate trait ZeroAllocator : Allocator could guarantee that the memory is initialized.

@eduardoleon Containers should store the allocator by value of type A. Then the handle can access its inner state in any flexible way it wants to.

I believe in most cases, the Allocator trait will be implemented for a unit-like struct such as Heap or a custom stateful allocator such as Rc<RefCell<MyAllocatorState>>.

In addition to the pluggable default allocator (with lang item?), some allocators could implement Default, which will be useful once collections start using an Allocator trait.

#[deriving(Clone, Default)]
pub struct DefaultAllocator;

impl<T, A: Allocator + Default> Vec<T, A> {
    #[inline(always)]
    pub fn new() -> Vec<T, A> {
        let alloc: A = Default::default();
        Vec::with_alloc(alloc)
    }

    #[inline(always)]
    pub fn with_capacity(capacity: uint) -> Vec<T, A> {
        let alloc: A = Default::default();
        Vec::with_alloc_capacity(alloc, capacity)
    }
}

@thestinger
Author

@tbu-: Zero initialization of the memory is covered in the RFC. It can be added in a backwards compatible way, so I think it's out of the scope of this discussion. It's useful because mmap always hands over zeroed memory, but it will be much less useful in the future when mmap is no longer the best allocation API exposed by operating systems (see the Linux vrange work).

@thestinger
Author

@pczarn: Inheriting from Clone makes sense, I'll add that in.

@thestinger
Author

I'm not really sure what cloning an allocator with state would be expected to do though.

@pczarn

pczarn commented Apr 15, 2014

@thestinger: It would just clone its immutable state. Clone is already implemented for Rc.

However, containers that require an allocator to be cloneable could use trait bounds, for example when implementing Clone for Vec<T, A: Allocator + Clone> and HashMap<K: Hash + Eq, V, A: Allocator + Clone>.
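A minimal sketch of the bound-based approach (Allocator, Heap and MyVec are stand-ins, not proposed API): the container is cloneable exactly when its element and allocator types are.

```rust
// Sketch of the trait-bound idea: Clone for the container requires
// Clone for the allocator, so stateless handles like a global-heap
// unit struct clone for free, while stateful allocators opt in.

trait Allocator {}

#[derive(Clone, Default)]
struct Heap; // zero-sized handle to the global heap
impl Allocator for Heap {}

struct MyVec<T, A: Allocator> {
    data: Vec<T>, // Vec storage stands in for allocator-managed memory
    alloc: A,
}

impl<T: Clone, A: Allocator + Clone> Clone for MyVec<T, A> {
    fn clone(&self) -> Self {
        MyVec {
            data: self.data.clone(),
            alloc: self.alloc.clone(), // requires A: Clone, as suggested
        }
    }
}
```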

@thestinger
Author

What would Clone on an allocator implementing an arena do? I guess it would need to be wrapped behind & or Rc. To be useful, most allocators do need to implement Clone, but there are cases like linked containers where a single arena may be reserved for a specific container, and Rc would be wasteful.

@pczarn

pczarn commented Apr 15, 2014

@thestinger: Clone on a linked container would also clone the allocator of type RefCell<ArenaAllocatorState>*. Cloning the state must allocate a new arena. This way, every container can either use an arena wrapped behind Rc or keep its own specific arena that should be cloneable so that the container is cloneable.

Still, allocators may or may not need to implement Clone. A single arena reserved for a single container doesn't need to.

* or &mut ArenaAllocatorState?

@eduardoleon

I found an interesting possibility regarding how to handle allocators. At the present moment, Rust has two fundamental notions: types and terms, where every term has a type. I think that safely talking about allocation phenomena requires a third notion: regions.

In some ways, regions are like types:

  • Under my proposal, every term, in addition to a type, has a region, which is statically known, either by explicit annotation or via inference.
  • It ought to be possible to write parametrically polymorphic functions that provide static guarantees regarding the specific region in which a parameter or a return value is allocated.
  • It ought to be possible for the compiler to reject programs on the basis that objects that live in different regions are passed as arguments to a function that expects both to live in the same region.
  • As an exercise, we can see that fn memmove(T/R1) -> T/R2 is a function that, merely by looking at the types of the argument and the return value, looks like an identity function, but, if we look at the regions, the region of the argument may be different from the region of the return value.

In some other ways, regions are like terms:

  • Unlike types, which exist purely for compile-time checking purposes, regions have a runtime manifestation: a collection of memory blocks that are guaranteed to be freed all at the same time.
  • Unlike types, which "exist forever", regions have lifetimes. Furthermore, the lifetime of every object is bounded by the lifetime of the region that contains it: this guarantees that every object must be destroyed before its region is freed.

A proof of concept of how a region system could be implemented in a language runtime is here: https://gist.github.com/eduardoleon/10785631 . My C is really bad, so there could be bugs in that code.

@thestinger
Author

@eduardoleon: This RFC just intends to create a standard interface for low-level allocators, rather than work out the high-level API exposed from the standard library to users. It needs to map well to the API provided by allocators like jemalloc, tcmalloc and hoard while also working well for more naive special cases like a free list per size class or arenas based on bitmaps.

Rust already has a concept of lifetimes based on region typing, and I don't really think it would benefit from more of this. It already has a working system for handling destruction and lightweight references efficiently. An arena allocator will already bestow a lifetime on the containers it's used in and the references created from those simply by storing references to the arena in the container. Alternatively, it can be done dynamically by using Rc<T>. I don't think any language changes are required.
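A minimal sketch of how an arena can bestow a lifetime today, with no language changes (illustrative, not from the RFC): references handed out by the arena borrow the arena itself, so the borrow checker rejects any use after the arena is gone.

```rust
// Minimal arena sketch: values live exactly as long as the arena.
// Boxes give stable heap addresses; they are freed only when the
// arena itself is dropped, which the returned borrow cannot outlive.

use std::cell::RefCell;

struct Arena<T> {
    items: RefCell<Vec<Box<T>>>,
}

impl<T> Arena<T> {
    fn new() -> Arena<T> {
        Arena { items: RefCell::new(Vec::new()) }
    }

    /// The returned reference borrows the arena, tying every allocated
    /// value's lifetime to the arena's lifetime.
    fn alloc(&self, value: T) -> &T {
        let boxed = Box::new(value);
        let ptr: *const T = &*boxed;
        self.items.borrow_mut().push(boxed);
        // Sound: the Box's heap address is stable, and it is freed only
        // when the Arena (which outlives this borrow) is dropped.
        unsafe { &*ptr }
    }
}
```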

pub trait Allocator {
/// Return a pointer to `size` bytes of memory.
///
/// A null pointer may be returned if the allocation fails.
Member

what's the alternative behavior, calling fail!? Just making sure I understand what the expectation is here.

(Or maybe this is just an English ambiguity; are you saying "A null pointer may be returned; in particular, a null pointer is returned if and only if the allocation fails"? Or are you saying "If the allocation fails, then a null pointer may be returned, or something else, such as fail!, may happen"?)

Author

I'll rephrase this to say that returning a null pointer indicates a failed allocation. The alternative would be calling abort or fail!(), preventing this trait from being used where handling out-of-memory is necessary.

@thestinger
Author

I need to update this based on the feedback, and some of the changes I made to the API landed in the transitional jemalloc-based heap API, so take the current proposal with a grain of salt (although more feedback is welcome). It's a bit tricky to create a good API mapping to the needs of both simpler user-defined allocators and jemalloc.

@brson
Contributor

brson commented Jun 19, 2014

Closing. Allocators are affected by GC, and this design is out of date. Will revisit soon.

@brson brson closed this Jun 19, 2014
@thestinger thestinger deleted the allocator branch September 18, 2014 20:22
@pnkfelix pnkfelix mentioned this pull request Dec 22, 2014
withoutboats pushed a commit to withoutboats/rfcs that referenced this pull request Jan 15, 2017
Implement IntoFuture for tuples of IntoFutures