- This repository currently requires Rust nightly.
- Build with `cargo build`
- Test with `cargo test`
- Bench with `cargo bench`
```
src
├── allocator.rs        // allocator used by lockfree_cchamt for static packing of entries
├── cchamt.rs           // simplest cache-conscious implementation, showing the optimal case for sequential reads
├── hamt.rs             // plain hash array mapped trie (HAMT) implementation
├── lib.rs
├── lockfree_cchamt.rs  // follows the concurrent trie paper, plus static data packing
├── mutex_cchamt.rs     // cchamt + one mutex per hash trie
└── rwlock_cchamt.rs    // cchamt + one read-write lock per hash trie
```
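The cache-conscious variants pack trie nodes into one contiguous allocation and link them by index instead of by pointer. The sketch below is not the crate's actual API; `PackedTrie`, `Node`, and `FANOUT` are hypothetical names used only to illustrate the static-packing idea with 4-bit key chunks.

```rust
// Minimal sketch of static packing: all nodes live in one Vec and refer to
// their children by index, so walks over nearby keys touch memory in order.
const FANOUT: usize = 16; // each level consumes 4 bits of the key

struct Node {
    children: [Option<usize>; FANOUT], // indices into `nodes`, not pointers
    value: Option<u64>,
}

struct PackedTrie {
    nodes: Vec<Node>, // node 0 is the root; every node shares one allocation
}

impl PackedTrie {
    fn new() -> Self {
        PackedTrie {
            nodes: vec![Node { children: [None; FANOUT], value: None }],
        }
    }

    fn insert(&mut self, key: u32, value: u64) {
        let mut cur = 0usize;
        for level in (0..8u32).rev() {
            let slot = ((key >> (level * 4)) & 0xF) as usize;
            let existing = self.nodes[cur].children[slot]; // Option<usize> is Copy
            cur = match existing {
                Some(next) => next,
                None => {
                    // Append a fresh node at the end of the same allocation.
                    let next = self.nodes.len();
                    self.nodes.push(Node { children: [None; FANOUT], value: None });
                    self.nodes[cur].children[slot] = Some(next);
                    next
                }
            };
        }
        self.nodes[cur].value = Some(value);
    }

    fn get(&self, key: u32) -> Option<u64> {
        let mut cur = 0usize;
        for level in (0..8u32).rev() {
            let slot = ((key >> (level * 4)) & 0xF) as usize;
            cur = self.nodes[cur].children[slot]?;
        }
        self.nodes[cur].value
    }
}

fn main() {
    let mut trie = PackedTrie::new();
    for k in 0..1_000u32 {
        trie.insert(k, u64::from(k) * 2);
    }
    assert_eq!(trie.get(42), Some(84));
    assert_eq!(trie.get(999_999), None);
}
```

Because children are indices into the same `Vec`, an ascending key scan visits the node array in a roughly sequential pattern, which is the behavior the sequential-read benchmarks are meant to exercise.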
- In the `dump` directory
- Trie
- Contiguous Trie
- Benchmark
  - Bench with different sizes (such as 1k, 10k, 1m, 10m, ...)
  - Bench with different reading sequences (currently sequential reads; other access patterns should be tried, see the benchmark sketch after this list)
  - Bench with different sizes across different numbers of threads
- Concurrent by lock (see the locking sketch after this list)
  - Mutex per trie
  - RwLock per trie
  - Mutex per element
  - RwLock per element
- Concurrent by lock-free
- Every kind of optimization
  - Customized allocator for cache-conscious data structures
  - Static data packing (clustering)
  - Dynamic data packing (http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.76.2169&rep=rep1&type=pdf)
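One possible shape for the size and read-order benchmarks, using the nightly `test` bencher (consistent with the nightly requirement above). The file name, the `SIZE` constant, and the use of `HashMap` as a stand-in for the crate's trie types are assumptions for illustration only.

```rust
// benches/read_order.rs -- hedged sketch of size / access-order benchmarks.
#![feature(test)]
extern crate test;

use std::collections::HashMap;
use test::{black_box, Bencher};

const SIZE: usize = 100_000; // vary this (1k, 10k, 1m, ...) per run

fn build_map(n: usize) -> HashMap<u64, u64> {
    (0..n as u64).map(|k| (k, k * 2)).collect()
}

#[bench]
fn read_sequential(b: &mut Bencher) {
    let map = build_map(SIZE);
    b.iter(|| {
        for k in 0..SIZE as u64 {
            black_box(map.get(&k));
        }
    });
}

#[bench]
fn read_pseudo_random(b: &mut Bencher) {
    let map = build_map(SIZE);
    // Cheap deterministic shuffle: a stride coprime with SIZE visits every
    // key once, approximating a random reading sequence.
    let stride = 48_271u64;
    b.iter(|| {
        let mut k = 0u64;
        for _ in 0..SIZE {
            k = (k + stride) % SIZE as u64;
            black_box(map.get(&k));
        }
    });
}
```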
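For the lock-based variants, the coarsest option is one lock around the whole structure. The sketch below only illustrates the "RwLock per trie" idea with standard library primitives and a `HashMap` stand-in; swapping `RwLock` for `Mutex` gives the "Mutex per trie" behavior, where readers serialize instead of sharing.

```rust
// Hedged sketch: a whole-structure RwLock shared by several reader threads.
use std::collections::HashMap;
use std::sync::{Arc, RwLock};
use std::thread;

fn main() {
    let map: HashMap<u64, u64> = (0..100_000u64).map(|k| (k, k * 2)).collect();
    let shared = Arc::new(RwLock::new(map));

    let handles: Vec<_> = (0..4u64)
        .map(|t| {
            let shared = Arc::clone(&shared);
            thread::spawn(move || {
                // Each thread reads a disjoint slice of the key space; with an
                // RwLock all readers hold the shared guard concurrently.
                let guard = shared.read().unwrap();
                let mut sum = 0u64;
                for k in (t * 25_000)..((t + 1) * 25_000) {
                    sum += guard.get(&k).copied().unwrap_or(0);
                }
                sum
            })
        })
        .collect();

    let total: u64 = handles.into_iter().map(|h| h.join().unwrap()).sum();
    println!("sum over all threads: {}", total);
}
```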
- See here.
- With the contiguous layout, sequential access orders such as ascending or descending keys perform very well once there are more than 10^5 elements.
- Another point worth noting: when elements are read in random order, the standard library HashMap degrades quite badly, while our contiguous cctrie only gets slightly worse.