Migrate a slew of metadata methods to queries #44142
Conversation
r? @pnkfelix (rust_highfive has picked a reviewer for you, use r? to override)
CI timed out after 3 hours, with:
src/librustc/dep_graph/dep_node.rs (outdated)
[] DeriveRegistrarFn(DefId),
[] ForeignCrateDisambiguator(DefId),
[] CrateHash(DefId),
[] ForeignOriginalCrateName(DefId),
@nikomatsakis @michaelwoerister Is there not a better way to deal with this?
The problem being the duplication? I'd like to combine this macro with the query macro, I'm not sure if something is blocking that.
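For context, the duplication in question is that each new query currently has to be listed twice: once as a `DepNode` variant and once in the query-definition macro. A minimal, self-contained sketch of the shape of the problem follows; the names and key types are simplified stand-ins, not the actual rustc macros:

```rust
// Simplified illustration of the duplication under discussion; the real
// macros in dep_node.rs and ty/maps.rs take many more parameters.
macro_rules! define_dep_nodes {
    ($($variant:ident($key:ty),)*) => {
        #[derive(Clone, Copy, Debug, PartialEq, Eq, Hash)]
        pub enum DepNode {
            $($variant($key),)*
        }
    };
}

// Each variant listed here also has to be declared again in the query
// macro (and vice versa); that repetition is what the comment is about.
define_dep_nodes! {
    CrateHash(u32),       // u32 stands in for CrateNum in this sketch
    PanicStrategy(u32),
}

fn main() {
    let node = DepNode::CrateHash(0);
    println!("{:?}", node);
}
```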
@@ -140,7 +138,27 @@ provide! { <'tcx> tcx, def_id, cdata,
is_panic_runtime => { cdata.is_panic_runtime(&tcx.dep_graph) }
is_compiler_builtins => { cdata.is_compiler_builtins(&tcx.dep_graph) }
has_global_allocator => { cdata.has_global_allocator(&tcx.dep_graph) }
is_sanitizer_runtime => { cdata.is_sanitizer_runtime(&tcx.dep_graph) }
is_profiler_runtime => { cdata.is_profiler_runtime(&tcx.dep_graph) }
panic_strategy => { cdata.panic_strategy(&tcx.dep_graph) }
Aren't a bunch of these just reading crate-level attributes? Why can't we do that directly through `tcx.get_attrs`?
We could, but it would mean that, if we allow "free" access, any change to crate attributes will require recompiling everything. Not sure if that's a problem.
Hmm, but that could be solved by putting a `Symbol` (for the name of the attribute) in the query, which is what you want most of the time anyway.
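A rough, self-contained sketch of that idea, using a plain `HashMap` and `String` keys in place of rustc's query machinery and `Symbol` (all names here are hypothetical):

```rust
use std::collections::HashMap;

// Toy model of "put the attribute name in the query key": each attribute
// name gets its own cache entry (and, in rustc, its own dep node), so a
// change to one crate attribute need not invalidate users of another.
#[derive(Clone, Debug, PartialEq)]
struct Attr {
    name: String,
    value: String,
}

fn find_crate_attr<'a>(attrs: &'a [Attr], name: &str) -> Option<&'a Attr> {
    attrs.iter().find(|a| a.name == name)
}

fn main() {
    let crate_attrs = vec![
        Attr { name: "crate_name".into(), value: "foo".into() },
        Attr { name: "panic_runtime".into(), value: String::new() },
    ];

    // The query cache, keyed by attribute name (a Symbol in rustc).
    let mut cache: HashMap<String, Option<Attr>> = HashMap::new();
    let result = cache
        .entry("panic_runtime".to_string())
        .or_insert_with(|| find_crate_attr(&crate_attrs, "panic_runtime").cloned());
    println!("{:?}", result);
}
```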
Not all of these are crate attributes, either; some of them, like the panic strategy, can also be affected by command-line flags. The others, sure, but perhaps that's a refactoring for another time?
Hmm if there aren't many of them then this PR is probably fine.
This all seems ok to me. Regarding @eddyb's comments, my take is that:
- we could (and should) revisit the question of how to autogenerate dep-node variants and query variants (I'd prefer if you did not have to duplicate), but that seems orthogonal to this PR;
- we could consider accessing the crate attributes directly, but I don't see a strong reason to do so.
In particular, accessing crate attributes directly would imply that we would have more coarse-grained incremental. This is not in and of itself a problem, as I don't know that modifying crate attributes was ever intended to be a "supported" incremental use case, but I also don't see much reason to actively make it not work, now that @alexcrichton already did the work.
If the concern is that we are making too many hashmaps or something like that, I think I'd prefer to keep the queries, but add some flags to make a query "direct" or something, in which it is not cached (and presumably then also doesn't help much with incremental). That way we can control this trade-off with minimal effort. (Or does that feel like overkill?)
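A toy sketch of what such a "direct" flag could mean, modeled with a plain enum rather than the real query infrastructure (the names are hypothetical, not an actual rustc API):

```rust
use std::collections::HashMap;
use std::hash::Hash;

// Toy model of the suggestion: a query is either memoized as usual, or
// flagged "direct", in which case it is recomputed on every call and
// never allocates a cache map.
enum Query<K, V> {
    Cached(HashMap<K, V>),
    Direct,
}

impl<K: Hash + Eq, V: Clone> Query<K, V> {
    fn get(&mut self, key: K, compute: impl Fn(&K) -> V) -> V {
        match self {
            Query::Direct => compute(&key),
            Query::Cached(map) => {
                if let Some(v) = map.get(&key) {
                    return v.clone();
                }
                let v = compute(&key);
                map.insert(key, v.clone());
                v
            }
        }
    }
}

fn main() {
    let mut cached: Query<u32, u64> = Query::Cached(HashMap::new());
    let mut direct: Query<u32, u64> = Query::Direct;
    println!("{}", cached.get(7, |k| u64::from(*k) + 1));
    println!("{}", direct.get(7, |k| u64::from(*k) + 1));
}
```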
@nikomatsakis The thing is that there's a handful of methods that were never required; all they do is read the crate attributes, and IMO having the extra method in between muddles that.
I tracked down some incremental test failures with the help of @nikomatsakis and the conclusion is that this PR will regress some current incremental tests due to new dependency nodes getting introduced. We decided, though, that the regression is ok for now. The red/green system will fix the regressions, so for now I'll just tag them with a FIXME.
Force-pushed from 40831eb to 30ea918 (compare)
Ok, I've updated the queries to work. re-r? @nikomatsakis
It appears I never responded to this. I know I started one... anyway, I actually think the queries do serve (potentially) a purpose. First off, they are clearly (to me) better documentation of the intent than "open-coding" some logic that trawls through the attributes on the crate. That is, I'd prefer that most of the compiler doesn't directly deal in attributes, but uses a helper method to encapsulate that logic (that doesn't necessarily have to be a query). Second, in terms of incremental logic, having queries does serve a real purpose since it allows us to control change propagation. This is what I was trying to say: having a query means that, once red-green is working, if the user changes the crate attributes, we will be able to avoid recompiling everything (since many things consult these crate attributes, e.g. symbol creation), unless they happen to have changed the result of one of those queries. Now, it may be that we could just have helper methods instead of the queries: I wouldn't be opposed to that, but it will result in coarser grained incremental support.
I've pushed up a fix for the broken test and translated one more query.
Force-pushed from 94fd925 to a520552 (compare)
☔ The latest upstream changes (presumably #44186) made this pull request unmergeable. Please resolve the merge conflicts.
Force-pushed from 668d624 to 7279bec (compare)
Force-pushed from 7279bec to 9892072 (compare)
},
extern_mod_stmt_cnum: |tcx, id| {
    let id = tcx.hir.definitions().find_node_for_hir_id(id);
    tcx.sess.cstore.extern_mod_stmt_cnum_untracked(id)
This method and `postorder_cnums` below are something I wanted to ask about here. What's happening is that this query reaches into the cstore via the `CrateStore` trait and calls various untracked methods to actually perform the computation. Is that ok? Does that mean that these aren't tracked properly? Should the dep nodes be manually created here? Just wanted to make sure to flag this!
I think what should happen is that we should treat these as "input" queries -- and hence always re-execute them. Really, I think that probably all the metadata-poking queries should be this way, and we should just lean on red-green to let us only invalidate things that are truly invalidated by changing metadata.
(Cc @michaelwoerister)
cc #44234 as well, where I believe we're discussing this basically
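For readers unfamiliar with the red/green scheme being referenced: the idea is that an "input" query is simply re-executed, and dependents are only invalidated if the recomputed result actually changed. A toy illustration of that check (hypothetical names, not the rustc implementation):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

fn fingerprint<T: Hash>(value: &T) -> u64 {
    let mut hasher = DefaultHasher::new();
    value.hash(&mut hasher);
    hasher.finish()
}

// Re-run an "input" query and decide whether its dependents can be marked
// green: if the recomputed result hashes to the same fingerprint as the
// previous run, nothing downstream needs to be recomputed.
fn result_still_green<T: Hash>(prev_fingerprint: u64, recompute: impl FnOnce() -> T) -> bool {
    fingerprint(&recompute()) == prev_fingerprint
}

fn main() {
    let previous = fingerprint(&vec!["no_std", "panic_runtime"]);
    // The crate attributes did not change, so the node stays green even
    // though the query itself was re-executed.
    let green = result_still_green(previous, || vec!["no_std", "panic_runtime"]);
    println!("green: {}", green);
}
```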
// Once red/green incremental compilation lands we should be able to
// remove this because while the crate changes often the lint level map
// will change rarely.
self.dep_graph.with_ignore(|| {
I wanted to make sure I called this out as well. This `with_ignore` was necessary to not instantly break all of the incremental tests! I imagine, though, that with red/green this'll always generate the same value, so the node's always green even though it needs to be recomputed?
I think that's the idea, yeah
src/librustc_metadata/cstore_impl.rs (outdated)
item_body => {
    if let Some(cached) = tcx.hir.get_inlined_body_untracked(def_id) {
        return cached;
    }
You can remove this `if` and the `get_inlined_body_untracked` method, as the result will be cached anyway.
src/librustc/ty/maps.rs (outdated)
@@ -1313,6 +1343,14 @@ define_maps! { <'tcx>
[] get_lang_items: get_lang_items_node(CrateNum) -> Rc<LanguageItems>,
[] defined_lang_items: DefinedLangItems(CrateNum) -> Rc<Vec<(DefIndex, usize)>>,
[] missing_lang_items: MissingLangItems(CrateNum) -> Rc<Vec<LangItem>>,
[] item_body: ItemBody(DefId) -> &'tcx hir::Body,
Can this be named `extern_const_body`?
Force-pushed from 5de8ffc to 1485ed4 (compare)
☔ The latest upstream changes (presumably #44233) made this pull request unmergeable. Please resolve the merge conflicts.
Force-pushed from 1485ed4 to d7d5c59 (compare)
💔 Test failed - status-travis
Same error as #43742 (comment)? Something's wrong with the sanitizers. @bors retry
Everything passed except two builders on OSX which are timing out due to travis issues. I'm going to merge and deal with fallout if there's any (which seems highly unlikely).
rustc: Preallocate when building the dep graph
This commit alters the `query` function in the dep graph module to preallocate memory using `with_capacity` instead of relying on automatic growth. In rust-lang#44576 it was discovered that, for the syntex_syntax clean incremental benchmark, peak memory usage occurred while the dep graph was being saved, particularly in the `DepGraphQuery` data structure itself. PRs like rust-lang#44142 which add more queries just make this even larger! I didn't see an immediately obvious way to reduce the size of the `DepGraphQuery` object, but it turns out that `with_capacity` helps quite a bit! Locally 831 MB was used [before] this commit, and 770 MB is in use at the peak of the compiler [after] this commit. That's a nice 7.5% improvement! This won't quite make up for the losses in rust-lang#44142 but I figured it's a good start. [before]: https://gist.github.com/alexcrichton/2d2b9c7a65503761925c5a0bcfeb0d1e [after]: https://gist.github.com/alexcrichton/6da51f2a6184bfb81694cc44f06deb5b
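The change itself is the usual preallocation trick; a minimal standalone illustration of why it helps (not the actual `DepGraphQuery` code):

```rust
fn build_with_growth(n: usize) -> Vec<(u32, u32)> {
    let mut edges = Vec::new(); // reallocates repeatedly as it doubles
    for i in 0..(n as u32) {
        edges.push((i, i + 1));
    }
    edges
}

fn build_preallocated(n: usize) -> Vec<(u32, u32)> {
    let mut edges = Vec::with_capacity(n); // one allocation of exactly n slots
    for i in 0..(n as u32) {
        edges.push((i, i + 1));
    }
    edges
}

fn main() {
    let grown = build_with_growth(1_000);
    let exact = build_preallocated(1_000);
    assert_eq!(grown, exact);
    // The doubling strategy can leave the vector holding noticeably more
    // memory than it needs at peak; with_capacity avoids the overshoot.
    println!("grown capacity: {}, preallocated capacity: {}", grown.capacity(), exact.capacity());
}
```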
This PR intends to make more progress on #41417, knocking off some low-hanging fruit.
Closes #44190
cc #44137