Introduce `avector` crate at v0.0.0 (#1399)
Conversation
Co-authored-by: Tim Diekmann <21277928+TimDiekmann@users.noreply.github.com>
It turns out
@@ -0,0 +1,4 @@
[toolchain]
# Please also update the badges in `README.md`, `src/lib.rs`, and `macros/` when changing this
The README is empty, we don't mention the toolchain in `lib.rs`, and we also don't have a `macros/` folder.
I think it makes sense to add at least a small README, also containing the Rust version.
Codecov Report
@@ Coverage Diff @@
## main #1399 +/- ##
==========================================
+ Coverage 40.31% 45.19% +4.88%
==========================================
Files 309 316 +7
Lines 16328 17447 +1119
Branches 813 813
==========================================
+ Hits 6582 7885 +1303
+ Misses 9741 9557 -184
Partials 5 5
I'm not really sure about the implementation of the vector. I'm not sold on the idea of basically mixing a linked list with a vector approach and wrapping everything in a spin lock. Why not just use a spin-locked mutex and wrap a vector in it?
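The suggested alternative could look roughly like the following minimal sketch (hypothetical type; a real no-std crate would likely use `spin::Mutex` rather than a hand-rolled lock):

```rust
use std::cell::UnsafeCell;
use std::sync::atomic::{AtomicBool, Ordering};

// Minimal spin-locked Vec, as the reviewer suggests.
pub struct SpinVec<T> {
    locked: AtomicBool,
    data: UnsafeCell<Vec<T>>,
}

// SAFETY: all access to `data` is serialized by the `locked` flag.
unsafe impl<T: Send> Sync for SpinVec<T> {}

impl<T> SpinVec<T> {
    pub fn new() -> Self {
        Self {
            locked: AtomicBool::new(false),
            data: UnsafeCell::new(Vec::new()),
        }
    }

    fn lock(&self) {
        // Spin until we win the compare-exchange.
        while self
            .locked
            .compare_exchange(false, true, Ordering::Acquire, Ordering::Relaxed)
            .is_err()
        {
            std::hint::spin_loop();
        }
    }

    fn unlock(&self) {
        self.locked.store(false, Ordering::Release);
    }

    pub fn push(&self, value: T) {
        self.lock();
        // SAFETY: the spinlock guarantees exclusive access.
        unsafe { (*self.data.get()).push(value) };
        self.unlock();
    }

    pub fn get(&self, index: usize) -> Option<T>
    where
        T: Copy,
    {
        self.lock();
        // SAFETY: the spinlock guarantees exclusive access.
        let result = unsafe { (*self.data.get()).get(index).copied() };
        self.unlock();
        result
    }
}
```

Note that in this design readers also take the lock, which is exactly the trade-off the PR's bucket approach tries to avoid.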
@@ -0,0 +1,551 @@
//! Concurrent Read Optimized Vector
What do you mean by "read optimized"?
As I can see from the description below, you have a linked list with N elements per bucket. Read-optimized to me means that both index operations and iterating are fast, but neither of them is if N is small (`size_of::<T>() * N` < cache line). For an index operation you need to go through i / N pointers to reach index i mod N; for iterating you have to dereference length / N pointers. Compared to a vector, where N is effectively infinite, this does not sound read-optimized.
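The pointer-chasing cost described above can be made concrete (illustrative arithmetic only, not the crate's code): with N elements per bucket, reaching index i means following i / N bucket pointers and then offsetting by i mod N within the final bucket.

```rust
/// For a linked list of buckets holding `n` elements each, return
/// (number of bucket pointers to follow, offset within that bucket)
/// to reach element `i`.
fn locate(i: usize, n: usize) -> (usize, usize) {
    (i / n, i % n)
}
```

For example, with 8-element buckets, element 1000 sits 125 pointer dereferences deep, whereas a plain `Vec` reaches it with a single offset computation.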
Yes, the description isn't accurate; I will replace it. As you outlined, this type won't ever be as fast as `Vec` in read-only operations (this also depends on factors like whether an `Arc`, a `RwLock`, or a `Mutex` is used). I mean that `AVec` is optimized for read access instead of write access: `AVec` never re-allocates, and writes take a lock while reads do not.
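The described trade-off (locked writes, lock-free reads, no re-allocation) can be sketched like this; a single fixed-capacity bucket stands in for the bucket list, and all names are hypothetical, not the crate's API:

```rust
use std::cell::UnsafeCell;
use std::mem::MaybeUninit;
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Mutex;

const CAP: usize = 64; // fixed capacity: storage is never re-allocated

pub struct AVecSketch<T> {
    // Slots below `len` are initialized and never written again.
    slots: [UnsafeCell<MaybeUninit<T>>; CAP],
    len: AtomicUsize,
    write_lock: Mutex<()>,
}

// SAFETY: readers only touch slots below the published `len`; those slots
// were initialized before `len` was released and are never mutated again,
// so concurrent shared access is sound.
unsafe impl<T: Send + Sync> Sync for AVecSketch<T> {}

impl<T> AVecSketch<T> {
    pub fn new() -> Self {
        Self {
            slots: std::array::from_fn(|_| UnsafeCell::new(MaybeUninit::uninit())),
            len: AtomicUsize::new(0),
            write_lock: Mutex::new(()),
        }
    }

    /// Writers serialize on the mutex; the `Release` store of `len`
    /// publishes the freshly written slot to readers.
    pub fn push(&self, value: T) -> usize {
        let _guard = self.write_lock.lock().unwrap();
        let i = self.len.load(Ordering::Relaxed);
        assert!(i < CAP, "capacity exhausted");
        // SAFETY: slot `i` is not yet visible to readers (len <= i).
        unsafe { (*self.slots[i].get()).write(value) };
        self.len.store(i + 1, Ordering::Release);
        i
    }

    /// Readers never lock: one `Acquire` load is the only synchronization.
    pub fn get(&self, i: usize) -> Option<&T> {
        if i < self.len.load(Ordering::Acquire) {
            // SAFETY: slot `i` was initialized before `len` was published.
            Some(unsafe { (*self.slots[i].get()).assume_init_ref() })
        } else {
            None
        }
    }
}
// NOTE: this simplified sketch leaks initialized elements on drop;
// a real implementation needs a `Drop` impl.
```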
}
}

// SAFETY: We use `UnsafeCell`, the referred `Bucket` is `Send` if `T` is `Send`
This is not a proper safety comment: it should argue why it is safe to access the struct from two locations at the same time.
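A safety comment on an `unsafe impl` should justify the claim rather than restate it; a sketch with a hypothetical minimal `Bucket` might read:

```rust
use std::cell::UnsafeCell;

struct Bucket<T> {
    slots: UnsafeCell<Vec<T>>,
}

// SAFETY: `slots` is only ever mutated through `&mut self` or while the
// owning vector's write lock is held, so no two threads can mutate it
// concurrently. Transferring a `Bucket<T>` to another thread therefore
// only transfers the contained `T`s, which is sound whenever `T: Send`.
unsafe impl<T: Send> Send for Bucket<T> {}
```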
Co-authored-by: Tim Diekmann <21277928+TimDiekmann@users.noreply.github.com>
//! Benchmark usecases against:
//! * (alloc) built-in Vec
//! * (std) intrusive-collections
//! * (std) im
I think these benchmarks are missing the most important operation, where this crate could shine: indexed read operations. The benchmarks assume you have a batch of data to insert or iterate over, so globally a mutex/RwLock is used to lock the data and the operations run as if single-threaded. The larger the batch of data, the smaller the impact of the atomic operations.
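The missing benchmark shape could look roughly like this (a plain-std sketch, not the PR's criterion setup): each access takes and releases the lock individually, so per-access synchronization cost is actually part of what gets measured.

```rust
use std::sync::{Arc, RwLock};
use std::thread;
use std::time::{Duration, Instant};

/// Time `reads` pseudo-random indexed reads spread over `threads` threads.
/// Unlike iterating under one long-held lock, every access pays the
/// per-access locking overhead, which is what an indexed-read benchmark
/// should expose.
fn bench_indexed_reads(data: Vec<u64>, threads: usize, reads: usize) -> Duration {
    let shared = Arc::new(RwLock::new(data));
    let start = Instant::now();
    let handles: Vec<_> = (0..threads)
        .map(|t| {
            let shared = Arc::clone(&shared);
            thread::spawn(move || {
                let mut acc = 0u64;
                for k in 0..reads {
                    let guard = shared.read().unwrap();
                    // Cheap pseudo-random index, different per thread.
                    let i = t
                        .wrapping_mul(2654435761)
                        .wrapping_add(k.wrapping_mul(40503))
                        % guard.len();
                    acc = acc.wrapping_add(guard[i]);
                }
                // Keep the reads from being optimized away.
                std::hint::black_box(acc);
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    start.elapsed()
}
```

Comparing this harness over `RwLock<Vec<T>>` versus the crate's lock-free reads would directly test the "read optimized" claim.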
start.wait();

let lock = slab.read().unwrap();
let items: Vec<_> = lock.iter().copied().collect();
Did you measure the difference when passing the actual data to `black_box` instead of a collected vector? I wonder if `collect()` has an impact on the benchmarks.
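The concern can be checked by comparing two consumption styles (sketch with hypothetical helper names): one collects into an intermediate `Vec` before calling `std::hint::black_box`, the other feeds each element to `black_box` directly with no allocation.

```rust
use std::hint::black_box;

/// Variant with the intermediate allocation, as in the benchmark above.
fn consume_collected(data: &[u64]) -> usize {
    let items: Vec<u64> = data.iter().copied().collect();
    black_box(items).len()
}

/// Variant without the allocation: black_box each item as it streams by.
fn consume_streaming(data: &[u64]) -> usize {
    let mut n = 0;
    for &x in data {
        black_box(x);
        n += 1;
    }
    n
}
```

If the two variants diverge noticeably, the benchmark is partly measuring `collect()`'s allocation rather than iteration itself.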
@@ -0,0 +1,430 @@
//! Benchmark usecases against:
//! * (alloc) built-in Vec
The crate claims to be efficient on read operations, but the only operations that show better performance in these benchmarks are write operations in a multi-threaded context. For all other operations, `alloc::Vec` is faster.
Co-authored-by: Tim Diekmann <21277928+TimDiekmann@users.noreply.github.com>
After talking with @TimDiekmann over DMs, we decided that the benefits of `avector` are not clear enough right now. We might want to resume this in the future if there are any clear benefits. For now, this PR is closed.

In case we see benefits for `avector` later, we can revisit this.
@TimDiekmann Fun old Stack Overflow question of yours! Might it be worth cross-linking from the original post to this closed PR? Not sure if that contravenes some rule, but it may provide useful inspiration for somebody in the future.
🌟 What is the purpose of this PR?

This PR introduces a new crate, `avector`: a read-optimized, concurrently accessible (for reads) vector that can only be pushed to. This crate:

* `deer`
* `serde` and `defmt`
* `no-std` compatible hooks for `error-stack`
* `Debug` hooks on `no-std` for `error-stack`.

Reasoning

This crate originates from the needs of `deer` for a thoroughly tested and verified append-only vector that is available on no-std systems. Previous crates like `scc`, `appendlist`, `intrusive-collections`, `sento`, or `append-only-vec` do not fit those requirements, as they are only available on `std`, are not concurrently accessible, or are poorly tested. This crate tries to remedy all those problems.

🔗 Related links

🚫 Blocked by

🔍 What does this change?

* `avector`

📜 Does this require a change to the docs?

Documentation is still needed, as this is a completely new crate.

Currently `append-vec` has no `std` option and uses a spinlock either way; one could remove the spinlock and replace it with a full-blown `Mutex` on `std`, though benchmarks have shown that, due to the speed of push, the impact of spinning is small.

🐾 Next steps

📹 Demo

Coming Soon™