Regression in trait bounds that are redundant with associated type's HRTB #57639
I think this is a minimal reproduction example. It compiles on stable (1.31.1) but not on nightly. Playground:

```rust
trait Foo<'a> {}

trait Bar {
    type Item: for<'a> Foo<'a>;
}

fn foo<'a, T>(_: T)
where
    T: Bar,
    T::Item: Foo<'a>
{}
```
This is cleaner syntax and also gets around a compiler bug outlined here: rust-lang/rust#57639
Changed the title because this is not specific to custom derives. The characteristic behavior seems to be a trait bound that is redundant with the associated type's HRTB.
Mentioning @nikomatsakis and @scalexm because this seems likely to have to do with universes.
triage: P-high, assigning to @nikomatsakis
OK, I did some digging into this. I'm not sure yet what I think is the correct fix. The problem begins when we are asked to prove that `T::Item` satisfies the higher-ranked bound `for<'a> Foo<'a>` declared on the associated type. Here we have two choices: we can use the where-clause `T::Item: Foo<'a>`, which mentions one specific lifetime and so cannot actually satisfy the higher-ranked bound, or the `for<'a> Foo<'a>` bound itself, which can.

Now, in the universe PR I kind of took a maximally simplistic view of higher-ranked unification errors, deferring detection as long as possible. This was in part to give us room to get smarter. It seems like that may have been too strong. I think we could add some logic here that allows us to reject the where-clause, though I'd have to look to find the best spot.

@rozbb @dtolnay -- one question I have: how severe is this regression? It's hard for me to tell whether this affects one particular pattern, or something that is very common (I would've expected the latter to show up in the crater runs, but maybe not).
I don't know of this being a common pattern. It is startling to me that in @rozbb's code, replacing the higher-ranked bound with an equivalent helper trait changes whether it compiles. In the minimal repro, the same rewrite looks like this:

```rust
trait Foo<'a> {}

trait FooAll: for<'a> Foo<'a> {}
impl<T: ?Sized> FooAll for T where T: for<'a> Foo<'a> {}

trait Bar {
    type Item: FooAll;
}

fn foo<'a, T>(_: T)
where
    T: Bar,
    T::Item: Foo<'a>
{}
```
I mean it's definitely a shortcoming of the trait solver, so it's hard to "explain" the behavior. I'm mostly trying to judge how quickly to try and hack up a fix, not whether to do so.
I've been thinking about how to handle region solving in any case, and I think some of the changes I have in mind would help here.
Makes sense! If it didn't appear in crater, and there isn't an easy fix, it seems fine to postpone until the trait solver is better prepared to support this case.
Triage question for @nikomatsakis: how should I interpret the recent comments between you and @dtolnay? Is the idea that we do not need a fix tomorrow, but would instead aim for a deadline like ... well, I just noticed that the bug has already leaked out to beta. So we want to find a backportable fix before the train hits stable?
@pnkfelix I am not sure yet. I haven't really had time to try and tinker with a fix, though I have some strategies in mind. I guess it would be good to prevent this regression from leaking out though. So, yes, let's say that a goal is to get a fix before the Rust 1.33 release on Thu Feb 28 2019. On a shorter time frame, I aim to have some notes and/or experiments here before the next compiler meeting. =)
Visiting for triage; @nikomatsakis How realistic is it to think that you might get around to this, with a beta-backportable fix, in the next two weeks (i.e. before the deadline you established in the previous comment)? |
I'm working with @aturon and other members of @rust-lang/wg-traits on exploring this issue. We did our first call yesterday talking over the details (recording available here) and we have to schedule a follow-up call to keep working on it. Don't yet know the best fix but I think it's still plausible we can have one. |
Assigning @aturon as well as myself. |
[WIP] Re-implement leak check in terms of universes

This PR temporarily restores the leak-check, but implemented in terms of universes. This is not because the leak check behavior was necessarily **correct**, but because (a) we may want to have a transition period and (b) we want to have more breathing room to work through the full implications of handling higher-ranked types correctly. Note that this PR builds atop #58056.

Fixes #58451
Fixes #46989
Fixes #57639

r? @aturon
cc @arielb1, @lqd

~~Temporary note: I've not finished running `./x.py test` locally -- I'm confident a lot of error messages in tests will need updating. I sort of expect them to revert to the older, (imo) less good error messages, which is mildly unfortunate. There might be a way to preserve the new error messages, not sure.~~
replace the leak check with universes, take 2

This PR is an attempt to revive the "universe-based region check", which is an important step towards lazy normalization. Unlike before, we also modify the definition of `'empty` so that it is indexed by a universe. This sidesteps some of the surprising effects we saw before -- at the core, we no longer think that `exists<'a> { forall<'b> { 'b: 'a } }` is solvable. The new region lattice looks like this:

```
static ----------+-----...------+       (greatest)
|                |              |
early-bound and  |              |
free regions     |              |
|                |              |
scope regions    |              |
|                |              |
empty(root)   placeholder(U1)   |
|            /                  |
|           /         placeholder(Un)
empty(U1) --                   /
|                             /
...                          /
|                           /
empty(Un) --------          (smallest)
```

This PR has three effects:

* It changes a fair number of error messages, I think for the better.
* It fixes a number of bugs. The old algorithm was too conservative and caused us to reject legal subtypings.
* It also causes two regressions (things that used to compile, but now do not):
  * `coherence-subtyping.rs` gets an additional error. This is expected.
  * `issue-57639.rs` regresses as before, for the reasons covered in #57639.

Both of the regressions stem from the same underlying property: without the leak check, the instantaneous "subtype" check is not able to tell whether higher-ranked subtyping will succeed or not. In both cases, we might be able to fix the problem by doing a 'leak-check like change' at some later point (e.g., as part of coherence).

This is a draft PR because:

* I didn't finish ripping out the leak-check completely.
* We might want to consider a crater run before landing this.
* We might want some kind of design meeting to cover the overall strategy.
* I just remembered I never finished 100% integrating this into the canonicalization code.
* I should also review what happens in NLL region checking -- it probably still has a notion of bottom (empty set).

r? @matthewjasper
In particular, it no longer occurs during the subtyping check. This is important for enabling lazy normalization, because the subtyping check will be producing sub-obligations that could affect its results. Consider an example like

```
for<'a> fn(<&'a as Mirror>::Item) = fn(&'b u8)
```

where `<T as Mirror>::Item = T` for all `T`. We will wish to produce a new subobligation like

```
<'!1 as Mirror>::Item = &'b u8
```

This will, after being solved, ultimately yield a constraint that `'!1 = 'b`, which will fail. But with the leak-check being performed on subtyping, there is no opportunity to normalize `<'!1 as Mirror>::Item` (unless we invoke that normalization directly from within subtyping, and I would prefer that subtyping and unification are distinct operations rather than part of the trait solving stack).

The reason to keep the leak check during coherence and trait evaluation is partly for backwards compatibility. The coherence change permits impls for `fn(T)` and `fn(&T)` to co-exist, and the trait evaluation change means that we can distinguish those two cases without ambiguity errors. It also avoids recreating rust-lang#57639, where we were incorrectly choosing a where clause that would have failed the leak check over the impl which succeeds.

The other reason to keep the leak check in those places is that I think it is actually close to the model we want. To the point, I think the trait solver ought to have the job of "breaking down" a higher-ranked region obligation like `'!1: '2` into region obligations that operate on things in the root universe, at which point they should be handed off to polonius. The leak check isn't *really* doing that -- these obligations are still handed to the region solver to process -- but if/when we do adopt that model, the decision to pass/fail would be happening in roughly this part of the code.

This change had somewhat more side-effects than I anticipated.
It seems like there are cases where the leak-check was not being enforced during method proving and trait selection. I haven't quite tracked this down, but I think it ought to be documented, so that we know precisely what we are committing to.

One surprising test was `issue-30786.rs`. The behavior there seems a bit "fishy" to me, but the problem is not related to the leak check change as far as I can tell; it has more to do with the closure signature inference code and perhaps the associated type projection, which together seem to be conspiring to produce an unexpected signature. Nonetheless, it is an example of where changing the leak-check can have some unexpected consequences: we're now failing to resolve a method earlier than we were, which suggests we might change some method resolutions that would have been ambiguous to be successful.

TODO:

* figure out remaining test failures
* add new coherence tests for the patterns we ARE disallowing
The title is a mouthful, so let me explain and give some motivation. Say I have a trait `MyTrait` with an associated type `MessageType`. This message type must be deserializable (in the serde sense), i.e., `MessageType: for<'a> Deserialize<'a>`. Now suppose I have a generic struct `GenericStruct<T>` where `T: MyTrait`. I want the struct to contain a message, so I put a thing of type `T::MessageType` inside the struct. I would like `GenericStruct` to also be `Deserialize`able now, so I `#[derive(Deserialize)]`.

Concretely, the code compiles in 1.31.1-stable and fails in 1.33.0-nightly with an error message. Furthermore, if the commented lines are uncommented, and the first `type` line in `MyTrait` is commented out, the code now compiles in stable and nightly.

I don't believe this is version-specific behavior in serde. I would like to make a neater minimal testcase, but I don't know of any auto-derivable traits that only take a lifetime as a parameter.