Use task local random when initializing LinearMap #4687

Merged: 1 commit merged into rust-lang:incoming on Jan 30, 2013

Conversation

alexcrichton (Member)

I have a project for Rust which makes extensive use of hash maps. I recently changed everything over to LinearMap, and was then profiling some very slow portions of the program.

I ended up finding that an absurd amount of time was spent in the read syscall, all from invocations of the function rng::Rng: each time a hash map was created, a new random-number generator was created along with it. To get around this, this change just uses the task local random instead of creating a new one every time.

I'm not super familiar with how the hashing algorithm uses these random numbers, so I'm not sure whether this affects the quality of hashing or anything like that. If it does mess something up there, I imagine there should still be a faster way of creating a new map?

Regardless, the one test I was running went from a runtime of 1 minute to 12 seconds, so it was a pretty big win in that respect!
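
For illustration, here is a minimal sketch of the pattern this PR describes, written in modern Rust with std-only stand-ins rather than the 2013 LinearMap/rng code. The names THREAD_SEED and fresh_map_seed are hypothetical, and RandomState::new() merely stands in for the expensive OS-entropy read: the idea is to pay for OS entropy once per thread and derive each map's seed cheaply from that cached state.

```rust
use std::cell::Cell;
use std::collections::hash_map::RandomState;
use std::hash::{BuildHasher, Hasher};

thread_local! {
    // RandomState::new() pulls OS entropy on construction; it is used here
    // only as a stand-in for the one-time expensive seeding step.
    static THREAD_SEED: Cell<u64> = Cell::new({
        let mut h = RandomState::new().build_hasher();
        h.write_u64(0x9e37_79b9_7f4a_7c15);
        h.finish()
    });
}

// Cheap path: derive a fresh per-map seed from the cached thread-local state,
// with no syscall per map.
fn fresh_map_seed() -> u64 {
    THREAD_SEED.with(|s| {
        let next = s
            .get()
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        s.set(next);
        next
    })
}

fn main() {
    // Every map still gets its own seed, but only the first call on this
    // thread touches the OS entropy source.
    let a = fresh_map_seed();
    let b = fresh_map_seed();
    println!("seeds: {a:016x} {b:016x}");
}
```

The trade-off is exactly the one raised in the discussion below: per-map seeds are now only as unpredictable as the cached per-thread state they are derived from.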

catamorphism added a commit that referenced this pull request Jan 30, 2013
Use task local random when initializing LinearMap
@catamorphism catamorphism merged commit 77f2aac into rust-lang:incoming Jan 30, 2013
@catamorphism (Contributor)

Thanks!

@alexcrichton deleted the hashmap-speedup branch on January 31, 2013 at 00:56
@graydon (Contributor) commented Jan 31, 2013

A note on this: the TLS ISAAC we keep is, itself, a security risk. It does not currently ever reseed after it's constructed. So it's strictly worse than /dev/urandom: not only is it not entropy-calibrated (as /dev/random would be), it's not even reseeding on a best-effort basis (as /dev/urandom does). As a result, this patch makes our hashtables strictly weaker at resisting complexity attacks.

That said, ISAAC is supposedly "as strong" a CPRNG as any other we're likely to get access to via /dev/urandom, so I think the patch can stay so long as we fix up the TLS instance of ISAAC to reseed periodically (which we should do anyway; all users currently suffer from the non-reseeding). The speed issue is real and is the reason we have ISAAC in-process with us in the first place. I'll file a bug on reseeding the TLS ISAAC.

Please be sure to subject all pull requests concerning randomness to extremely detailed review (preferably with multiple reviewers who are solidly familiar with CPRNGs). This sort of thing is very easy to get wrong: most randomness "looks the same".
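
To make the reseeding suggestion above concrete, here is a minimal sketch, assuming modern Rust and std-only stand-ins: ReseedingRng, RESEED_EVERY, os_seed, and the xorshift step are all illustrative (the real generator would be ISAAC), and the constants are arbitrary. Only the shape matters: a task/thread-local generator that mixes fresh OS-derived material back into its state after a fixed number of outputs.

```rust
use std::cell::RefCell;
use std::collections::hash_map::RandomState;
use std::hash::{BuildHasher, Hasher};

// Reseed after this many outputs; the threshold is illustrative only.
const RESEED_EVERY: u64 = 32 * 1024;

// RandomState pulls OS entropy on construction; used here as a stand-in
// for /dev/urandom. The `| 1` keeps the toy generator state nonzero.
fn os_seed() -> u64 {
    let mut h = RandomState::new().build_hasher();
    h.write_u64(0);
    h.finish() | 1
}

struct ReseedingRng {
    state: u64,
    outputs_since_reseed: u64,
}

impl ReseedingRng {
    fn new() -> Self {
        ReseedingRng { state: os_seed(), outputs_since_reseed: 0 }
    }

    fn next_u64(&mut self) -> u64 {
        if self.outputs_since_reseed >= RESEED_EVERY {
            // Best-effort reseed from the OS, on a fixed schedule.
            self.state ^= os_seed();
            self.outputs_since_reseed = 0;
        }
        // Toy xorshift step standing in for ISAAC.
        self.state ^= self.state << 13;
        self.state ^= self.state >> 7;
        self.state ^= self.state << 17;
        self.outputs_since_reseed += 1;
        self.state
    }
}

thread_local! {
    static TASK_RNG: RefCell<ReseedingRng> = RefCell::new(ReseedingRng::new());
}

fn main() {
    let (a, b) = TASK_RNG.with(|r| {
        let mut r = r.borrow_mut();
        (r.next_u64(), r.next_u64())
    });
    println!("{a:016x} {b:016x}");
}
```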

@graydon mentioned this pull request Jan 31, 2013
@cpeterso (Contributor) commented Feb 3, 2013

When I timed Rust's make check before and after this change, hashmap with task_rng() had a 2x slowdown! Why would task_rng() seemingly have more contention? (My MacBook Pro has 4 hyperthreaded cores, so RUST_THREADS=8.)

  • BEFORE: hashmap using rand::Rng():
$ time RUST_THREADS=8 make check &> /dev/null
real    4m9.721s
user    24m54.128s
sys     4m2.524s
$ time RUST_THREADS=1 make check &> /dev/null
real    9m32.939s
user    38m22.876s
sys     3m32.681s
  • AFTER: hashmap using rand::task_rng() has a 2.3x slowdown with 8 threads, but a 1.3x speedup with 1 thread:
$ time RUST_THREADS=8 make check &> /dev/null
real    9m28.818s
user    41m50.217s
sys     4m10.457s
$ time RUST_THREADS=1 make check &> /dev/null
real    7m11.443s
user    14m13.074s
sys     2m13.267s
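
For context, here is a hypothetical microbenchmark of the effect being measured, in modern Rust with std HashMap standing in for LinearMap. This is only a sketch of the shape of the experiment, not the original make check workload, and the iteration counts are arbitrary: map construction is where any per-instance RNG access would happen, so poor scaling from 1 to 8 threads in a loop like this would point at the construction path rather than at map use.

```rust
use std::collections::HashMap;
use std::thread;
use std::time::Instant;

fn main() {
    for threads in [1, 8] {
        let start = Instant::now();
        let handles: Vec<_> = (0..threads)
            .map(|_| {
                thread::spawn(|| {
                    for _ in 0..100_000 {
                        // Map creation is where the per-instance hash keys
                        // (and any RNG access) would be set up.
                        let mut m: HashMap<u64, u64> = HashMap::new();
                        m.insert(1, 1);
                        std::hint::black_box(&m);
                    }
                })
            })
            .collect();
        for h in handles {
            h.join().unwrap();
        }
        println!("{threads} thread(s): {:?}", start.elapsed());
    }
}
```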

@graydon (Contributor) commented Feb 4, 2013

Not sure. Reopening.
