async fn should support multiple lifetimes #56238
In case this helps, either with ideas on how to fix this or for others who get stuck needing this, here's a way to write this that compiles currently, using the normal multi-lifetime `impl Trait` workaround:

```rust
fn foo<'a: 'c, 'b: 'c, 'c>(
    x: &'a mut i32,
    y: &'b mut i32,
) -> impl Future<Output = i32> + Captures<'a> + Captures<'b> + 'c {
    async move {
        *x += *y;
        *y += *x;
        *x + *y
    }
}
```

Playground: https://play.rust-lang.org/?version=nightly&mode=debug&edition=2018&gist=d8199b6efc90fe1c3e5df485a5006d15
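For the snippet above to compile outside the linked playground, it also needs the usual `Captures` marker trait. The definition below is the standard idiom and an assumption about what the playground contains, not code quoted from the comment:

```rust
use std::future::Future;

// Marker trait used only to mention a lifetime in an `impl Trait` return
// type, without adding an outlives requirement on the returned future.
trait Captures<'a> {}
impl<'a, T: ?Sized> Captures<'a> for T {}
```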
(note that in order for that to work you still have to introduce a third lifetime, …)
Isn't it a new lifetime that is outlived by all the lifetimes used elsewhere (i.e. the returned future can only live for the intersection of all the incoming data's lifetimes)?
Yes, you're correct.
I ran into this on my first attempt to make some small async programs with nightly; the `Captures` workaround gets unwieldy fast as a fn has more elided lifetimes.
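To make that concrete, here is a hypothetical three-argument variant (reusing the `Captures` trait sketched above; an illustration, not code from the thread): every otherwise-elided lifetime has to be named, bounded by a fresh common lifetime, and given its own `Captures` bound.

```rust
fn sum3<'a: 'min, 'b: 'min, 'c: 'min, 'min>(
    x: &'a i32,
    y: &'b i32,
    z: &'c i32,
) -> impl Future<Output = i32> + Captures<'a> + Captures<'b> + Captures<'c> + 'min {
    async move { *x + *y + *z }
}
```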
Marking as a blocker. We spent quite a lot of time discussing how to fix this (recorded here). The short version is that we will alter the desugaring of `async fn`. I think @cramertj was planning to take this on, but I'm not 100% sure.
Refactor async fn return type lowering

async fn now lowers directly to an existential type declaration rather than reusing the `impl Trait` return type lowering. As part of this, it lowers all argument-position elided lifetimes using the in-band-lifetimes machinery, creating fresh parameter names for each of them, using each lifetime parameter as a generic argument to the generated existential type.

This doesn't currently successfully allow multiple argument-position elided lifetimes since `existential type` doesn't yet support multiple lifetimes where neither outlives the other:

```rust
existential type Foo<'a, 'b>:; // error: ambiguous lifetime bound in `impl Trait`
fn foo<'a, 'b>(_: &'a u8, _: &'b u8) -> Foo<'a, 'b> { () }
```

This requires a separate fix.

Fix #59001
Fix #58885
Fix #55324
Fix #54974
Progress on #56238

r? @nikomatsakis
So @cramertj and I had a big meeting about this (video). We took some notes in this dropbox paper. For the time being, we are hoping to make a "fairly small" change and see how far it takes us. The basic idea is that we will add a new form of "in" constraint. As a working example, consider this one:

```rust
existential type T<'a, 'b>; // = (&'a (), &'b ())

fn foo<'a, 'b>(x: &'a (), y: &'b ()) -> T<'a, 'b> { (x, y) }
```

Here, what would happen is roughly like this (borrowed from the dropbox paper): …
Our goal is basically to convert these "in" constraints into equality constraints, in the case where exactly one of the choices should work. In this case, if we look at … At present, we can ensure that we only add these "in" bounds for cases where we have …

**Extending the lexical solver.** The idea would be to do a sort of pre-pass, I imagine -- we would walk the region graph starting from R0 and find all successors (Rs) and predecessors (Rp) of R0, filtering down to just free regions (as well as `'static`). I think I would just do this as a pre-pass, even though it's sort of inefficient in some sense -- i.e., we'll be walking the region graph once per constraint. There won't be many of these constraints, so I don't think it will be a particular efficiency concern.

**Extending the NLL solver.** This is perhaps a more interesting question. The NLL solver works rather differently than the lexical solver. It begins with a constraint set, which it converts into a graph. For this graph, it computes the SCCs, resulting in a tree. It then walks from the leaves of the tree up, merging the bits from the children into the parent. We have a few options here. We could do a similar graph walk, but before the SCC computation. We could then convert "in" constraints into new edges in the graph, and ultimately do the SCC computation on this combined graph. (Note that the …)

An alternative would be to try to convert "in" constraints after the SCCs are computed. The advantage here is that the "depth-first search" just becomes walking all the parents and all the children in the SCC tree. If we found we could convert an "in" constraint, we would presumably "set" the value of the given region to contain the appropriate bits. However, this would come with a complication for diagnostics -- if that value resulted in an error, we would need to include some indication that those bits arose from an "in" constraint and adjust some of the diagnostic code appropriately (currently, it assumes that bits come either from liveness constraints or from the constraint graph, and this would kind of be a new thing). I'm not 100% sure which bits of code would need to change. This feels like a more complex but ultimately more efficient way to go about things.

OK, these are some high-level notes; I can try to lower them into more actionable steps. One thing I'm not sure about, for example, is where to detect that these "in" constraints are needed and how to "materialize" them. Presumably the code should live somewhere in the vicinity of the "opaque types" code.
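A minimal sketch of that pre-pass idea, assuming a toy region graph (the types and names here are invented for illustration and are not rustc's): walking the forward outlives edges from R0 collects the free-region successors (its lower bounds), and walking the reverse edges collects the free-region predecessors (its upper bounds).

```rust
use std::collections::{HashMap, HashSet, VecDeque};

type Region = u32;

// An edge `a -> b` records the outlives constraint `a: b`.
struct RegionGraph {
    forward: HashMap<Region, Vec<Region>>,
    reverse: HashMap<Region, Vec<Region>>,
    free: HashSet<Region>, // free regions: named lifetime parameters and 'static
}

impl RegionGraph {
    // Collect the free regions reachable from `start`, following either the
    // forward edges (successors, i.e. lower bounds) or the reverse edges
    // (predecessors, i.e. upper bounds).
    fn reachable_free_regions(&self, start: Region, forwards: bool) -> HashSet<Region> {
        let edges = if forwards { &self.forward } else { &self.reverse };
        let mut seen = HashSet::new();
        let mut found = HashSet::new();
        let mut queue = VecDeque::from([start]);
        while let Some(r) = queue.pop_front() {
            if !seen.insert(r) {
                continue; // already visited
            }
            if r != start && self.free.contains(&r) {
                found.insert(r);
            }
            if let Some(next) = edges.get(&r) {
                queue.extend(next); // Region is Copy, so this pushes the values
            }
        }
        found
    }
}
```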
I spent some time reading into the related code etc. and refreshing my memory of how things work. Presumably the plan is going to look something like this:

- Extend the …
- Modify the code that constrains opaque type values. If there is no "least" constraint, we presently report an error, but I think we ought to be adding an "in" constraint.
- Then we have to modify the solvers: we may be able to avoid modifying the lexical solver, since async fn is 2018 only -- we could say that if we are not using the NLL borrowck, then we continue to report an error unless there is a "least region". We'll have to pass these "in" constraints into the NLL solver. I think I would probably begin by converting them into the simpler constraints as I described before, versus integrating them as "first class" things.
…ync-fn-region-solver, r=MatthewJasper

generalize impl trait to permit multiple lifetime bounds

Generalizes the region solver to support "pick constraints". These have the form:

```
pick R0 from [R1..Rn]
```

where `R1..Rn` are called the "option regions". The idea is that `R0` must be equal to *some* region in the set `R1..Rn`. These constraints are then used to handle cases like this:

```rust
fn foo<'a, 'b>(...) -> impl Trait<'a, 'b> { .. }
```

The problem here is that every region R in the hidden type must be equal to *either* `'a` *or* `'b` (or `'static`) -- in the past, the only kinds of constraints we had were outlives constraints, and since `'a` and `'b` are unrelated, there was no outlives constraint we could issue that would enforce that (`R: 'a` and `R: 'b` are both too strict, for example). But now we can issue a pick constraint: `pick R from ['a, 'b]`.

In general, solving pick constraints is tricky. We integrate them into the solver as follows. In general, during the propagation phase, we are monotonically growing a set of inference regions. To handle a case like `pick R from [O...]`, where `O...` represents the option regions, we do the following:

- Look for all the *lower bounds* of the region R -- that is, every region LB such that `R: LB` must hold.
- Look for all the *upper bounds* of the region R -- that is, every region UB such that `UB: R` must hold.
- Let the *viable options* be each option region O such that `UB: O` and `O: LB` for each UB, LB bound.
- Find the *minimal viable option* M, where `O: M` holds for every option region O.

If there is such a *minimal viable option*, then we make `R: M`. (This may in turn influence other bits of inference.) If there is no minimal viable option, either because all options were eliminated or because none of the remaining options are minimal, we do nothing. Ultimately, if the pick constraint is not satisfied, an error is reported.

For this logic, we currently require that the option regions O are always lifetime parameters. To determine the bounds, we walk the various outlives edges that were otherwise introduced.

r? @matthewjasper
cc @cramertj

Fixes #56238

TODO:

- [ ] Error messages include region variable info sometimes, how to fix?
- [ ] Tests for bare `existential type` and other impl Trait usage
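As a rough model of the selection rule in the list above (an illustrative sketch under invented names -- `Region`, `resolve_pick`, and the toy `outlives` oracle are not rustc's API):

```rust
type Region = u32;

// Resolve `pick R0 from options`, given the lower bounds LB (each `R0: LB`
// must hold) and upper bounds UB (each `UB: R0` must hold) collected from
// the other outlives constraints. Returns the minimal viable option, if any.
fn resolve_pick(
    lower_bounds: &[Region],
    upper_bounds: &[Region],
    options: &[Region],
    outlives: impl Fn(Region, Region) -> bool, // does `a: b` hold?
) -> Option<Region> {
    // An option O is viable if `UB: O` for every upper bound and `O: LB`
    // for every lower bound.
    let viable: Vec<Region> = options
        .iter()
        .copied()
        .filter(|&o| {
            upper_bounds.iter().all(|&ub| outlives(ub, o))
                && lower_bounds.iter().all(|&lb| outlives(o, lb))
        })
        .collect();

    // The minimal viable option M is one that every viable option outlives.
    // If there is none, the pick constraint stays unsatisfied and an error
    // is reported later.
    viable
        .iter()
        .copied()
        .find(|&m| viable.iter().all(|&o| outlives(o, m)))
}

fn main() {
    // Toy universe: region 0 stands in for 'static and outlives everything;
    // otherwise only reflexive outlives facts hold.
    let outlives = |a: Region, b: Region| a == b || a == 0;

    // `pick R0 from ['static, 'a]` with no other bounds on R0:
    // 'a (region 1) is the minimal viable option.
    assert_eq!(resolve_pick(&[], &[], &[0, 1], outlives), Some(1));
}
```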
According to the RFC, the `async fn` syntax is supposed to capture all input lifetimes in the implicit outer return type. Currently it only captures one elided lifetime, and returns errors regarding other lifetimes. This needs to be changed so that all lifetimes are captured in `async fn`.

Example:
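A minimal sketch of the kind of signature affected (based on the description and the workaround discussed above, not necessarily the exact contents of the playground link below):

```rust
// Two borrowed arguments, each with its own elided lifetime. Per the RFC,
// the returned future should capture both; at the time of this issue the
// compiler rejected more than one elided input lifetime here.
async fn add(x: &mut i32, y: &mut i32) -> i32 {
    *x += *y;
    *y += *x;
    *x + *y
}
```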
Playground: https://play.rust-lang.org/?version=nightly&mode=debug&edition=2015&gist=1953befb5a939c1379468aae9788dac8
cc @cramertj @Nemo157