optimize dep-graph serialization #35232

Closed
nikomatsakis opened this issue Aug 3, 2016 · 7 comments
Labels
A-incr-comp Area: Incremental compilation T-compiler Relevant to the compiler team, which will review and decide on the PR/issue.

Comments

@nikomatsakis
Contributor

Looking at the syntex-syntax crate, I see that dep-graph serialization is pretty expensive:

time: 13.588; rss: 424MB        serialize dep graph
incremental: re-using 50 out of 50 modules

We're saving way more data than we really need to. I expect we can make this a lot cheaper by pruning the graph. But it'd be worth doing some profiling.

cc @michaelwoerister
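
A minimal sketch of the kind of pruning suggested above, under the assumption of a heavily simplified graph representation (the `DepGraph` type, node IDs, and edge layout here are hypothetical, not rustc's actual data structures): only nodes transitively reachable from the roots we care about are kept, so the serialized graph stays proportional to what the next compilation session can actually reuse.

```rust
use std::collections::{HashMap, HashSet};

/// Hypothetical, simplified dep-graph: node indices and directed edges
/// from each node to the nodes it reads from.
struct DepGraph {
    edges: HashMap<u32, Vec<u32>>,
}

impl DepGraph {
    /// Keep only the nodes transitively reachable from `roots`
    /// (e.g. the work products we actually want to reuse) and drop
    /// everything else before serializing.
    fn prune(&self, roots: &[u32]) -> DepGraph {
        let mut keep = HashSet::new();
        let mut stack: Vec<u32> = roots.to_vec();
        while let Some(node) = stack.pop() {
            // `insert` returns true only the first time we see a node,
            // so each node is visited at most once.
            if keep.insert(node) {
                if let Some(succs) = self.edges.get(&node) {
                    stack.extend(succs.iter().copied());
                }
            }
        }
        DepGraph {
            edges: self
                .edges
                .iter()
                .filter(|(n, _)| keep.contains(n))
                .map(|(n, succs)| {
                    (*n, succs.iter().copied().filter(|s| keep.contains(s)).collect())
                })
                .collect(),
        }
    }
}
```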

@nikomatsakis nikomatsakis added T-compiler Relevant to the compiler team, which will review and decide on the PR/issue. A-incr-comp Area: Incremental compilation labels Aug 3, 2016
@nikomatsakis
Contributor Author

Also surprising, if unrelated: even with 100% reuse, translation takes ~4s on my machine. I think the only thing it's doing is re-generating the metadata.

@nikomatsakis nikomatsakis added this to the Incremental compilation alpha milestone Aug 3, 2016
@nikomatsakis
Contributor Author

OK, so, my first attempt at optimizing this was kind of a failure, in that it caused us to take more time than before. =)

Some things I learned:

  • the expensive part is the metadata hash computation
  • some part of this is computing the set of HIR inputs for a given metadata item; some part is probably creating the strings, sorting, and so forth. I haven't quite gotten to the bottom of it yet.

Also worth noting:

When I rewrote this code, some of the problems I'd encountered with non-deterministic re-use basically went away. So that's interesting.

@michaelwoerister
Member

Those seem like very valuable pieces of information! Sounds like some kind of caching on intermediate graph nodes might be a speedup opportunity. But that's just off the top of my head, I haven't really investigated.
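
For illustration only, a rough sketch of that caching idea, assuming a hypothetical node and input representation and plain std hashing (none of these names come from rustc's incremental-compilation code): each node's hash is computed once and memoized, so shared subgraphs reached through multiple paths are not re-hashed.

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};

/// Hypothetical node identifier in the dependency graph.
type NodeId = u32;

struct HashCache<'g> {
    /// node -> the nodes whose contents feed into its hash
    inputs: &'g HashMap<NodeId, Vec<NodeId>>,
    /// leaf hashes, e.g. hashes of the HIR items themselves
    leaf_hash: &'g HashMap<NodeId, u64>,
    /// memoized results so shared subgraphs are hashed only once
    cache: HashMap<NodeId, u64>,
}

impl<'g> HashCache<'g> {
    /// Compute the hash of `node`, assuming the graph is acyclic.
    fn hash_of(&mut self, node: NodeId) -> u64 {
        if let Some(&h) = self.cache.get(&node) {
            return h; // already computed via some other path
        }
        let mut hasher = DefaultHasher::new();
        if let Some(&leaf) = self.leaf_hash.get(&node) {
            leaf.hash(&mut hasher);
        }
        if let Some(inputs) = self.inputs.get(&node) {
            // Sort for determinism, then fold in each input's (cached) hash.
            let mut inputs = inputs.clone();
            inputs.sort_unstable();
            for input in inputs {
                self.hash_of(input).hash(&mut hasher);
            }
        }
        let h = hasher.finish();
        self.cache.insert(node, h);
        h
    }
}
```

Sorting the inputs before folding them in is also one way to keep non-deterministic iteration order out of the final hash, which ties into the re-use determinism issue mentioned earlier.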

@nikomatsakis
Contributor Author

I have some thoughts about how to improve performance. I'll experiment tomorrow (and/or we can discuss).


@nikomatsakis
Contributor Author

My latest version runs in 2.5s, but it still breaks the unit tests. I have to fix that next, as we discussed, @michaelwoerister.

@nikomatsakis
Contributor Author

(Still seems like we could go faster...)

@nikomatsakis
Contributor Author

OK, got it down to 1.8s.

@bors bors closed this as completed in 02a4703 Aug 9, 2016