
Leak and Destructor Guarantees #1085

Closed
reem wants to merge 3 commits

Conversation

@reem commented Apr 23, 2015

Add a new default unsafe trait, Leak. If a type does not implement Leak, it can be
memory unsafe to fail to run the destructor of a value of this type and to continue
execution of the program past the type's lifetime.

Additionally, cause all panics in destructors to immediately abort the process,
solving two other notable bugs that allow leaking arbitrary data.

Add a safe variant of mem::forget (e.g. mem::leak) which requires Leak.
The existing mem::forget remains unbounded, but unsafe.

This proposal also requires a slight breaking change to a few std APIs to add
Leak bounds where none exist currently. This is unfortunate so close to 1.0,
but in the author's opinion it is better than committing to a safe, unbounded
mem::forget forever.

This RFC is largely an alternative to RFC PR 1066, which makes an unbounded mem::forget
safe.

EDIT: An implementation of Leak, including std and compiler changes, can be found here: https://github.com/reem/rust/tree/leak
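
For a concrete picture, here is a minimal sketch of the proposed pieces using today's nightly auto-trait syntax. The item names and signatures are illustrative assumptions based on the summary above, not taken from the linked sample implementation.

// Requires nightly: auto traits and negative impls.
#![feature(auto_traits, negative_impls)]

// Default ("opt-out") unsafe trait: implemented for everything unless a
// type opts out with a negative impl.
pub unsafe auto trait Leak {}

// A type whose destructor must run for memory safety would opt out:
pub struct JoinGuardLike;
impl !Leak for JoinGuardLike {}

// Safe variant of mem::forget, bounded by Leak.
pub fn leak<T: Leak>(value: T) {
    // Sound because `T: Leak` promises that skipping the destructor
    // cannot cause memory unsafety.
    std::mem::forget(value);
}

fn main() {
    leak(vec![1, 2, 3]);    // fine: Vec<i32> is Leak
    // leak(JoinGuardLike); // error: JoinGuardLike does not implement Leak
}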

@pythonesque
Contributor

Very much in favor of this RFC.

- `sync::mpsc::Sender` (possibly not, see unresolved questions)
- Possibly other APIs. Please point any others out if you think of them.

_Cause all panics in destructors to immediately abort the process._
Member

In the past I have personally been worried about the implementation details of a strategy such as this, so it would be nice to expand on how you expect this to be implemented.

Author

I would like to, but am not familiar enough with the internals of panicking to really say anything intelligent here. I included the details of this under unresolved questions, and would be happy to hash it out with someone more familiar and include their recommendation in the RFC.

Contributor

@alexcrichton Can you specifically elaborate on what concerns you? It seems to me that even a naive solution would only affect destructors that can panic, which I'm not convinced is anything like a majority of them.

Member

@pythonesque transitively a lot of code calls panic! (e.g. internal asserts), even though those code paths will literally never be taken for the code run by most destructors.

Contributor

@huonw The two most common collections (Vec and HashMap) don't, outside of debug mode, AFAICT. I agree that Rust in general generates lots of landing pads, but in destructors specifically I suspect it is less common.

Contributor

Having read some of the discussion, it sounds like the impact of replicating noexcept semantics might not be too bad. I'd vote for attempting an implementation based on that and seeing what kind of performance impact it has in practice.

Contributor

A non-native and 0-overhead solution might look like something along the lines of running the stack unwinder and if a frame of a destructor is encountered then an abort happens, otherwise the panic happens normally. I don't know how this would be implemented, nor if it could be implemented reliably in the face of inlining.

Wouldn't this be as simple as inserting abort() into the landing pads while generating Drop::drop()?

Member

I hope this refers only to libstd's panic semantics and does not force it on the language as a whole. If it does, then a strong 👎 from me on this.

As for implementation, it is trivial without keeping any state in TLS or using exception tables:

  1. Introduce a new lang item drop_panic (or adjust panic to receive a boolean flag indicating it has been called from a destructor);
  2. If panic! is called inside a destructor, call the drop_panic lang item instead of panic;
  3. In libstd, implement the drop_panic lang item to abort the process (a behavioural sketch follows at the end of this comment).

Now, the nice thing about this scheme is:

  • No additional runtime cost over current panic!();
  • If you’re not using libstd, you get to choose behaviour of panic-in-destructor yourself.
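
For illustration only, the observable behaviour being proposed (not the lang-item machinery itself) can be approximated in ordinary code with today's std::panic API, as a destructor that catches any unwind from its body and aborts:

use std::panic::{catch_unwind, AssertUnwindSafe};
use std::process;

struct Guard;

impl Drop for Guard {
    fn drop(&mut self) {
        // Run the "real" destructor body; if it starts unwinding,
        // abort the process instead of letting the panic escape.
        let result = catch_unwind(AssertUnwindSafe(|| {
            // ... destructor logic that might panic ...
            panic!("boom");
        }));
        if result.is_err() {
            process::abort();
        }
    }
}

fn main() {
    let _g = Guard; // dropping this aborts the whole process
}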

Contributor

@nagisa This would not work, because:

(1) To be able to rely on this for memory safety it must be a language-wide guarantee.
(2) Destructors may call functions that are not destructors.

Panics within destructors can already cause your program to abort, and in fact will do so if they occur after another panic (Rust's exception implementation does not allow for safe handling of double panics). You cannot rely on them not to do this in current Rust. As a result, panics in destructors are already a bad idea and I strongly advise you not to rely on them being implemented in a particular way.

(This is not to say that Rust couldn't have something like C++'s terminate, which would let you do other stuff before you aborted, but it wouldn't be able to unwind the stack).

Member

Panics within destructors can already cause your program to abort, and in fact will do so if they occur after another panic (Rust's exception implementation does not allow for safe handling of double panics). You cannot rely on them not to do this in current Rust. As a result, panics in destructors are already a bad idea and I strongly advise you not to rely on them being implemented in a particular way.

I am aware of that. However, the fact that you could no longer begin unwinding from a destructor at all, even when not using libstd/core, seems like a big and unfortunate loss of flexibility.

@Diggsey
Contributor

Diggsey commented Apr 23, 2015

I'm also very much in favour of this. I have a couple of minor concerns:

  • I think there will be confusion over when it is necessary to opt out of Leak, and that people will do so over-eagerly: it's strictly never necessary to explicitly opt out of Leak if you're only writing safe code, even if you're writing a RAII guard. Out of interest, do unsafe default traits require "unsafe impl" to opt out of them?
  • Having all panics leaving destructors cause the process to abort may be too heavy-handed. For Leak types it's not terminal if a destructor fails to complete successfully (at worst, the type is partially or fully leaked). For !Leak types it makes complete sense for destructors exiting via a panic to cause the process to abort, as to do anything else would break one of the guarantees.

As an example, I think it should be possible to make a RAII guard which just calls a lambda when it goes out of scope (this is notably different from thread::scoped in that it's safe for the guard to be Leak). In this situation, if the lambda panics, it should propagate out normally, stopping the task rather than the whole process.
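
A minimal sketch of the guard described above (the type and field names are made up for illustration):

// A Leak-able RAII guard that just runs a closure on scope exit.
// If the closure panics, the panic propagates out of drop() as usual.
struct ScopeGuard<F: FnMut()> {
    on_exit: F,
}

impl<F: FnMut()> Drop for ScopeGuard<F> {
    fn drop(&mut self) {
        (self.on_exit)();
    }
}

fn main() {
    let _cleanup = ScopeGuard { on_exit: || println!("leaving scope") };
    println!("doing work");
}
// prints "doing work", then "leaving scope"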

@pythonesque
Contributor

@Diggsey I can't think of a reliable way for the compiler to know whether a Leak or !Leak type is (potentially) being used by a function called by drop without some sort of effect system, which Rust doesn't have. If all such panics were disallowed now, but in the future Rust were able to make panics in drops safe in some cases, it would be a backwards-compatible change AFAICT.

@Diggsey
Contributor

Diggsey commented Apr 23, 2015

Yeah, it could also be solved at some point later by adding the concept of "finalizers", which are called by Rust when an instance of that type goes out of scope via normal execution, and thus are allowed to panic, but are not given any of the same guarantees as destructors. That would also be completely backwards compatible.

@sfackler
Member

I may have been in support of this if it had been proposed 6 or 9 months ago. As it is, I have several major concerns:

  • Can the old thread::scoped API even exist in its current form if this RFC was accepted? It panics in its destructor to propagate a child's failure to the parent: https://github.com/rust-lang/rust/blob/master/src/libstd/thread/mod.rs#L705-L711. See JoinGuard::join returning an Err is **really** unsafe rust#20807 for discussion of why not doing that is potentially dangerous. Do we think it's acceptable to abort the process in that case?
  • I think this is something that absolutely needs a complete implementation before it can be accepted, and that would need to exist very soon (i.e. in the next couple of days).
  • I have very little confidence that we're going to get this right the first time. Is Leak the right abstraction here in the long term? Who knows! The fact that "should there be a Leak bound on Sender?" is an unanswered question is not a good sign. A chain of breaking "oops, we forgot a Leak bound" releases is not something I'm excited for.
  • There is exactly one function in the entire standard library that's affected by this memory safety issue (drain_range would be affected as well if it existed, but it doesn't). That function didn't even exist until a couple of months ago (when the 'static bound was pulled off of Send) and if I remember correctly there aren't that many consumers of it. Its functionality is trivially preserved by adding a closure (c.f. RFC: Scoped threads, take 2 #1084) which is moderately unfortunate but does not seem to be that big of a deal. It seems like there's somewhat of a mad scramble to preserve a "ooh, that's a cool idea!" function rather than carefully stepping back and actually evaluating the situation. How confident are we that thread::scoped is actually the right API for the kinds of data parallel tasks that it seems (?) it was designed for? Will people really spawn and then throw away <num_cores> threads every time they want to fork and join?
  • Speaking of RFC: Scoped threads, take 2 #1084, I would like to see a convincing case made in the motivation section of this RFC of why that API is bad enough to force this kind of extremely accelerated, large language change. If thread::scoped isn't the only reason to pursue this, what are some other concrete use cases of !Leak, why are they important, and why would hypothetical lack of Leak be really bad to work around?
  • In a world where we add Leak after 1.0 and make it a default bound, how much generic code would we actually need to change in practice? Is anyone going to do anything with a JoinGuard other than assigning it to a variable and then join on it? We could (I think?) always add ?Leak bounds on demand.

@pythonesque
Contributor

@sfackler I am personally fine with thread::scoped aborting on panic (in fact, I originally thought that it would require a stack bomb like that for safety), but I would also be fine with the other API. It's easy enough to recover the original panicking behavior by joining, but either way I don't see this RFC as a referendum on scoped. To me it has two parts:

  • Fix the extremely surprising behavior of panics in destructors.
  • Future-proof the language so that we can preserve a property of "transitively safe" Rust (that all destructors are run on stack unwind) in a way that does not require excessive annotations.

Both of these seem independently useful to me. There are many APIs written under the assumption that RAII "works" for more than just memory (e.g. transactions). That Rust can "leak" these types is surprising. They can certainly be rewritten as closures, but it's not as ergonomic; it makes RAII a much less useful (and less reliable) feature. At least for me, it's not about this one API, it's about improving the predictability of Rust as a language.

As far as your points about timing: those are mostly my concerns as well. But I'm still in favor of pushing forward with a trial implementation. I do want to highlight this comment:

We could (I think?) always add ?Leak bounds on demand.

This wouldn't be the end of the world, but what I fear is people eventually just mechanically adding ?Leak to all their generic bounds after a while, because someone always asks for it. Kind of like IO in old Rust. Rust has a lot of what will be perceived as "noise words" already and this is an opportunity to potentially avoid that. In fact, I would personally be willing to accept this RFC without the destructor changes just to get Leak in the signatures of the appropriate types. That would give us lots of time to work out the precise destructor implementation details before we flipped the "you can rely on this for memory safety" switch.

Edit: Also, since I hadn't yet commented on this: I don't see any reason for such a bound on Sender. Channels are correctly bounded by their lifetimes already. It might be relevant if the proposal were for "real" leak freedom, but as I've stated elsewhere I don't think that's a realistic goal for Rust. Rc being a source of surprising behavior that is either rare or nonexistent in the rest of the language is pretty common; it doesn't seem unlikely to me that it and Arc (and structs containing them) are the only types we need to be concerned about in the stdlib.

@bstrie
Contributor

bstrie commented Apr 23, 2015

@sfackler brings up a good point that this still wouldn't let us bring the old scoped API back. And for soundness' sake we probably wouldn't even want to bring back the old API, given that we're still unsure whether or not there are leaking mechanisms out there unaccounted for.

I'm also concerned by the vagueness of how we'd implement abort-on-panic-in-destructors.

Frankly, the old scoped API was not so sublime as to warrant this much machinery (the new API is arguably better in some ways than the old one). I would like to see some more motivation expressed in this RFC as to why this mechanism would be useful in general. In the absence of any compelling general motivation, I'm inclined to say we should simply use the tools we already have (in this case, closures) rather than add another layer of complexity.

@hanna-kruppe

@sfackler is spot on. Perfect is the enemy of good, as they say. The current situation is somewhat unsatisfying but most of it can be fixed after 1.0 (Leak, the closure-based scoped API) and the remaining stuff is IMHO not significant enough to take the risks of all the open questions and the risk that mistakes sneak in in the scramble to quickly fix this. Not to mention that 1.0 would probably have to be pushed back another six or twelve weeks.

-1 from me.

@bstrie
Contributor

bstrie commented Apr 23, 2015

I would like to also say that I do think that the abort-on-panic-in-a-destructor behavior sounds desirable in its own right, and I could get behind a last-minute RFC to deliberately unspecify the current behavior such that we can experiment with such an idea for a future version of Rust.


Change several `std` APIs to adjust for the guarantees now provided to types
which do not implement `Leak`:

Contributor

Hm, with this definition Leak seems to be implemented for almost everything, including Vec<T> for arbitrary T. Does that mean that I can put JoinGuard into a vector and then leak it freely?

Author

No, for the same reason Vec<T> where T is not Send is not Send.

Contributor

I guess it means that Leak is not an ordinary trait and has to be special-cased in the compiler like Send, i.e. it has to be a lang item.

Contributor

Nope, we can define traits like this directly in Rust now, e.g. see Reflect.

Contributor

Oops, it turns out all traits with default+negative impls behave like Send in this regard, and it is just a part of OIBIT design. Sorry for the noise.
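
To make the structural behaviour discussed in this thread concrete, here is a small sketch (nightly features; the trait and the !Leak type are stand-ins, not taken from the sample implementation):

#![feature(auto_traits, negative_impls)]

unsafe auto trait Leak {}

struct NotLeakable;
impl !Leak for NotLeakable {}

fn assert_leak<T: Leak>() {}

fn main() {
    // Vec<T> is Leak exactly when T is Leak, with no compiler special-casing:
    assert_leak::<Vec<String>>();
    // assert_leak::<Vec<NotLeakable>>(); // error: NotLeakable is not Leak,
    //                                    // so Vec<NotLeakable> is not Leak
}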

@steveklabnik
Member

This proposal also requires a slight breaking change

Huuuuge 👎 to me. The time for this has passed.

@nikomatsakis
Contributor

So, I considered this sort of design briefly. I think that, even with OIBIT, the pain of adding a "default trait" is often underestimated. For example, if we added Leak as proposed, that would imply that when you write Box<Writer>, it would not be considered to be Leak, just as Box<Writer> is not Send by default. Hence, you'd have to write Box<Writer+Leak> in order to place such an object into an Rc value. And so on. It's too late to try to squeeze this design in before 1.0. Note, though, that having libstd rely on destructors running (i.e. on values not being leaked) is a relatively recent phenomenon. (I know that in the past we've avoided such APIs precisely because of Rc leaks, but clearly, when it comes to scoped, mistakes were made.)

I think there is plenty of room to enable "guaranteed destruction" in the future. Closures are available today. If we want RAII-like APIs, I think there are good options there too, such as having some notation for a fn to return a pre-borrowed value (so the caller gets back a &Foo, not a Foo, and hence can't transfer the Foo away), or perhaps some other more explicit way for a callee to schedule code to run on exit from a scope.

Now, we've long said that the semantics of panic during a destructor are explicitly unsettled. At this point, I favor making destructor bodies "catch" panics and simply abort, which I think has come up on this thread, though it took me some time to come around to this. But I think we can do this after 1.0.

@pcwalton
Contributor

Yeah, I lean toward viewing the costs as outweighing the benefits here. It sounds like there are ways around the thread::scoped issue, such as closures, so this ends up being primarily a syntactic win.

I am sympathetic to concerns that we shouldn't let 1.0 prevent us from making emergency changes if we need to (though the barrier is high, of course). But even notwithstanding 1.0, adding an extra built-in trait to the language has to be very beneficial in order to be worth the cognitive and complexity burden. Send/Sync/Copy are used all the time (at least in concurrent code), and they provide a lot of benefit. The question "Why do I need to worry about Send in Rust?" has a clear, convincing answer for anyone who has done threaded programming: it prevents data races. On the other hand, the question "Why do I need to worry about Leak in Rust?", would have the answer "well, there's this edge case in the thread::scoped API..." which isn't as convincing. So, given that there seem to be other ways to solve thread::scoped, Leak feels like a bit of a sledgehammer.

@krdln
Contributor

krdln commented Apr 23, 2015

I'm really in favor of implementing this RFC for 1.0. Why? I don't think of this RFC as introducing a new feature, but rather as fixing a bug in Rc. While maybe there is little usage of "must drop"/!Leak types in std, I think a lot of people are using RAII this way in their own crates. I remember talking a few times on IRC about designs involving !Leak types, and I have used such types myself too.

Please also notice that while introducing Leak is a breaking change, removing it won't be – we can just mark Leak as a deprecated no-op trait implemented for every type. So we can create the infrastructure for Leak for 1.0 and prepare std for it, but not encourage users to actually use it until the issues with panics in destructors are resolved and the concept is proved sound.

So that's why we should implement this RFC, even if we aren't sure it's a good idea: just so as not to close off the possibility.

@pcwalton
Contributor

I think that we should be clear that this design does not eliminate leaks in Rc. It just allows the compiler to reason about "possibly leaking" data types. That's much less of a benefit.

@Diggsey
Contributor

Diggsey commented Apr 23, 2015

@krdln Most uses of RAII in other crates should be fine even without this RFC. The problem with "scoped" was not that it used RAII, but that it allowed the new thread to continue running after the lifetime of closed-over locals had expired. Any RAII guards which can be implemented safely are guaranteed to be OK.

@pcwalton It eliminates leaks of types for which leaking would be unsafe, thereby allowing such types to be used in safe code, and at the same time does not restrict the use of Rc to create cycles. Seems like a fairly large benefit to me?

@nikomatsakis How reasonable is the idea of making all traits extend "Leak" by default? To remove the dependency you'd write "Trait : ?Leak". If the Leak trait were added now as a "do nothing" trait which was impossible to opt out of (for the moment), would it allow the correct behaviour to be implemented later in a backwards-compatible way (since you're guaranteed that all pre-existing types and trait objects are Leak)?

@pcwalton
Contributor

This RFC doesn't fix a safety hole in current Rust. It fixes a safety hole in a hypothetical future version of Rust exposing functions that depend on not leaking for safety. But a simpler approach is to just not expose those kinds of functions. Given that closures exist today, it's not even clear that that hypothetical future Rust would have any benefit over current Rust other than (essentially) syntax.

@Diggsey
Contributor

Diggsey commented Apr 23, 2015

Closures are not a direct replacement: they prevent various control flow constructs which would otherwise be valid. Code which depends on that necessarily becomes more complicated without a RAII style API.
(However I would be interested to see what improvements could be made to allow better interaction between closures and control flow, it seems like there could be some equivalence set up between return values and control flow exit modes)

@rkjnsn
Contributor

rkjnsn commented Apr 23, 2015

While I agree with the goal of allowing reliable RAII guards, I disagree with the solution outlined in this RFC. Quoting from another post:

It seems to me the fundamental problem here is that garbage collection (Rc) is at odds with ownership. Rust's ownership model is very powerful, and means that at any given moment it is clear who owns an object, who is responsible for cleaning it up, and what its lifetime will be. Garbage collection is fundamentally different: the data does not have a specific owner, and its lifetime is much harder to determine (in the case of reference cycles, it could be forever). Therefore, I think it absolutely makes sense for Rc to have a 'static bound, not just as a solution to the problem of guards, but conceptually as well. It's the only lifetime that honestly describes how long the contents might last.

I disagree with suggestions to add some kind of Leak trait and bounds on Rc. It adds complexity and headaches all around for something that only applies to Rc. It feels like a kludge that addresses the problem from the wrong side. The problem isn't that some APIs want to ensure that their objects are dropped within a given lifetime (this is otherwise well supported by the language); the problem is that Rc claims it can enforce objects of a limited lifetime getting dropped soon enough, when in reality it cannot. The fact that it cannot is fine; it is a result of trade-offs made in its design (specifically, allowing reference cycles). However, it should be honest about that fact.

@pcwalton, closures are often much less convenient than guards. For example, you can't use try! or otherwise return early from the parent function in a closure. Also, if you need several resources, each would require its own nested closure, which would get unwieldy quite quickly.

@pcwalton
Contributor

@rkjnsn @Diggsey Everything you are talking about boils down to syntax.

@pcwalton
Contributor

Anyway, whether you want to call it syntax or not, the only benefit is that control-flow-affecting constructs work better with thread::scoped. The downside is that we have to make a big breaking change right now and burden every user of Rust with having to learn about this new trait Leak for all their objects. It's pretty clear in my mind that the cost/benefit ratio skews toward the costs here.

It's better to make folks who want to use complex control flow with thread::scoped have a more awkward API than to burden every user of Rust with having to learn and think about Leak. The former set of users is a small percentage of all Rust users.

@aturon
Member

aturon commented Apr 23, 2015

@Diggsey

Closures are not a direct replacement: they prevent various control flow constructs which would otherwise be valid. Code which depends on that necessarily becomes more complicated without a RAII style API.

Agreed. However, I actually feel that for scoped threads, this is a benefit -- using try!, for example, with a scoped thread could easily lead to deadlock due to the implicit join at the end of the scope. The closure-based API is much more explicit.

(However I would be interested to see what improvements could be made to allow better interaction between closures and control flow, it seems like there could be some equivalence set up between return values and control flow exit modes)

Agreed, I think this kind of thing could be very interesting to explore later on.

@nikomatsakis self-assigned this Apr 23, 2015
@krdln
Contributor

krdln commented Apr 23, 2015

@Diggsey

@krdln Most uses of RAII in other crates should be fine even without this RFC. The problem with "scoped" was not that it used RAII, but that it allowed the new thread to continue running after the lifetime of closed-over locals had expired.

By "using RAII this way" I meant using it in cases that would imply !Leak under this RFC – which is exactly scoped-like usage.

Any RAII guards which can be implemented safely are guaranteed to be OK.

I disagree here. There are things like database transaction handlers or, e.g., some kind of secret-protection handlers, which you can implement without unsafe but which conceptually should be !Leak even if they don't cause memory unsafety. I would be happy if Rust prevented me from accidentally putting such things in a not-cycle-proofed Rc.

@Diggsey
Contributor

Diggsey commented Apr 23, 2015

By making such a RAII guard !Leak, you'd be disallowing perfectly desirable behaviour. Putting a RAII guard inside a not-cycle-proofed Rc is a completely legitimate thing to do: just because you may not want to do that in your particular use-case doesn't mean Rust should forbid it, or that a library author writing such a RAII guard should try to impose that restriction on their users.

Rust should only try to prevent things that are objectively wrong (e.g. segfaults and other memory unsafety), not subjectively wrong like what you're describing.

@carllerche
Member

I'm a 👎 on this for all the reasons that have already been mentioned.

The RFC would definitely need to be updated with an analysis of how libraries would need to change to factor in ?Leak, a more thorough analysis of the benefits (more concrete examples), and an exploration of cases where Leak would block libraries from doing things that should be permitted.

This change would introduce a non-negligible amount of cognitive overhead to users and library authors. Rust already has a steep learning curve. There needs to be a high bar to add more complexity and as pointed out by other commenters, the benefit of this change does not seem to meet this bar right now.

@reem
Author

reem commented Apr 24, 2015

An implementation of Leak, including std and compiler changes, can be found here: https://github.com/reem/rust/tree/leak

Also updated the OP with the above link.

@theemathas

Sender needs to require Leak, since it can leak via an Rc cycle.

See #1066 (comment)

Code: https://gist.github.com/anonymous/6e4c9fe67283da121c97

(Note that it triggers a timeout on playpen for some reason, although it silently terminates on my machine)

@reem
Author

reem commented Apr 25, 2015

@theemathas it is sufficient to require Leak for the impl of Clone for Sender (as is done in the sample implementation) and SyncSender, since you cannot create a cycle without having multiple Senders.

The vast majority of breakage can be avoided by very careful placement of Leak bounds on APIs. For instance, both Arc and Rc only require Leak on new, which allows most code using these types to continue to function without changes.

The most problematic remaining case is trait objects, since Rc<Box<io::Read>>, for instance, will now require a + Leak bound, like so: Rc<Box<io::Read + Leak>>, in order to be constructible.

By only requiring Leak on new it is also very easy to add an escape hatch, e.g. new_leak in the sample implementation, and use that for the internals of structures which will maintain the Leak invariants manually.
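
A hedged sketch of that bound placement, using a toy wrapper (MyRc) rather than the real std::rc::Rc; the new_leak escape hatch mirrors the name used above:

#![feature(auto_traits)]

use std::rc::Rc;

unsafe auto trait Leak {}

struct MyRc<T> {
    inner: Rc<T>,
}

impl<T: Leak> MyRc<T> {
    // Safe constructor: only Leak data may be reference-counted, since a
    // cycle could keep the value alive past any particular lifetime.
    fn new(value: T) -> Self {
        MyRc { inner: Rc::new(value) }
    }
}

impl<T> MyRc<T> {
    // Escape hatch: the caller promises to uphold the Leak invariants
    // manually (e.g. by never forming a cycle with this value).
    unsafe fn new_leak(value: T) -> Self {
        MyRc { inner: Rc::new(value) }
    }
}

fn main() {
    let _ok = MyRc::new(String::from("leakable"));
}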

@theemathas

@reem Here is a variant of leaking that does not require clone

http://is.gd/39IhIi

@reem
Author

reem commented Apr 25, 2015

@theemathas you're right, I hadn't considered forming a cycle just between the sender and receiver. I will adjust the sample implementation and measure the breakage.

This is a good example of where we have to be careful and think quite hard about where Leak is necessary when writing unsafe code (safe code will require the bounds naturally), which is a knock against the proposal. That said, this is true of the existing unsafe traits, Send and Sync, as well; it's quite easy to write unsafe code which one incorrectly thinks makes their data structure Sync.

Note that this example only occurred because I tried to be clever and use the unsafe constructors for Arc in the implementation - if I had instead just listened to the compiler and added appropriate Leak bounds where prompted, the implementation would have been correct.

@comex

comex commented Apr 25, 2015

I claim that for 99% of use cases of scoped, you want to create a bunch of threads and have them all join at the end of the scope. If the number of threads is fixed, you use local variables. If it's not, you use a Vec<JoinGuard> and let it go out of scope. For any use case this doesn't work for, you can always create a Vec and use indices into it instead of the guards themselves. (Is there a library to easily manage a free list of Vec indices yet? Vec-as-heap is a general pattern.)
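
For illustration, the same fork/join shape written with today's closure-based std::thread::scope (used here only as a stand-in, since the 2015 JoinGuard API this comment refers to no longer exists):

fn sum_in_parallel(data: &[i64]) -> i64 {
    std::thread::scope(|s| {
        // Spawn a variable number of threads and collect their handles,
        // much like collecting JoinGuards into a Vec.
        let handles: Vec<_> = data
            .chunks(1024)
            .map(|chunk| s.spawn(move || chunk.iter().sum::<i64>()))
            .collect();
        // Joining happens explicitly here; with a Vec<JoinGuard> it would
        // have happened implicitly when the Vec went out of scope.
        handles.into_iter().map(|h| h.join().unwrap()).sum::<i64>()
    })
}

fn main() {
    let data: Vec<i64> = (1..=10_000i64).collect();
    println!("{}", sum_in_parallel(&data));
}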

In other words, make Leak work like Sized, allow !Leak in Vec and maybe a small number of other types, and tell everyone else to suck it (and hope this sets a precedent not to include ?Leak all over the place). Join guards are just not important enough to make a lot of unsafe code worry about them.

Edit: As for making panics that unwind out of destructors abort, isn't this equivalent to having a variable at the beginning of all destructors that panics in its own destructor (and defusing it at the end)? For safety purposes, doesn't Rust need to get destructor order right even in the presence of unwinding and inlining? - so that should just work...

(Perhaps this should only happen for destructors of objects containing !Leak data?)

@imccrum

imccrum commented Apr 27, 2015

I believe this is the right thing to do. I'm conscious that this is a breaking change, but it's better to fix it now than to live with an inferior solution indefinitely - that's what betas are for, after all. The implementation has existed for a few days now, and it seems like the fallout could be addressed before 1.0. 👍

@pythonesque
Contributor

I just thought of another kind of interesting use case for Leak. Currently, types with destructors are not allowed in statics because (IIRC) they are guaranteed to leak. Allowing such types in statics is a pretty commonly requested feature. By requiring the types of such statics to be Leak, this feature could be added in a principled way (similar to how Sync lets us allow safe access to statics in a principled way). This doesn't necessarily affect the "before 1.0" part of this, of course.

@lilyball
Contributor

👎

I feel like everyone who's proposing changes to Rust beyond simply altering thread::scoped (such as in #1084) is doing so because they believe that leaking RAII objects is a more serious problem than just the thread::scoped API. And I think that's wrong. thread::scoped is not an example of a common API that demonstrates this problem; it is the sole exception that has the problem. Every other case of a RAII object that produces memory unsafety when leaked is due to it being relied upon to maintain invariants around other unsafe code, and I believe that these cases can be fixed without affecting the API (the only example I can even think of right now is Vec::drain_range, which was in fact fixed without affecting the API; all other RAII objects I can think of off the top of my head don't even come close to having an issue like this). thread::scoped is a rather special case, and I just wrote down my thoughts about it in this long comment on #1084.

@reem
Author

reem commented Apr 28, 2015

@kballard it's certainly not the only example, see, for instance, hlua's stack handling for an example from the community. This would be made quite a bit more onerous to use if the library instead had to use nested closures.

@reem
Author

reem commented Apr 28, 2015

The standard library is a tiny, tiny percentage of all Rust code that will be written, and likely a small percentage of all unsafe Rust code too. We have to look at this issue not through the lens of "how does this affect the std APIs" but "how does this affect Rust, and the sorts of APIs that can be written in it."

@lilyball
Contributor

@reem Looking over that linked post, it does not appear that the stack handling RAII values have anything to do with memory safety, they only need to have their destructors run in order to guarantee program correctness. Rust only cares about memory safety, not about whether your program has a bug in it.

Furthermore, that Lua wrapper could actually be adjusted to work correctly even in the face of leaked destructors. It just requires reordering the internals slightly, but the API can remain the same. Adjust Lua to keep a size: i32 field, and a private method that looks something like

fn pop_stack_to(&mut self, size: i32) {
    if self.size > size {
        lua_pop(self.as_mut_lua(), self.size - size);
        self.size = size;
    }
}

Then make it so that the creation of a PushGuard doesn't record the number of values to pop, but rather records the current size and then modifies the Lua's size to include the newly-pushed values. On destruction, it pops the stack back to that recorded depth. Finally, all of the methods on the Lua object itself that touch the stack in any way (including creating guards) should first call self.pop_stack_to(0). This way destruction still pops the stack back to the state before the PushGuard, but leaking a PushGuard doesn't actually cause any problems (the values will be popped when the enclosing PushGuard is destructed, or alternatively when the root Lua value is used again if the outermost guard is leaked). And the API doesn't have to change at all.
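
A hedged sketch of that reordering with stand-in types (not hlua's actual API): each guard records the depth to restore, every drop pops back to it, and a leaked guard merely defers the pop to the enclosing guard or to the root Lua value.

struct Lua {
    size: i32, // number of values currently on the (simulated) stack
}

struct PushGuard<'a> {
    lua: &'a mut Lua,
    restore_to: i32, // stack depth recorded when the guard was created
}

impl Lua {
    fn pop_stack_to(&mut self, size: i32) {
        if self.size > size {
            // The real wrapper would call lua_pop(...) here.
            self.size = size;
        }
    }

    fn push_values(&mut self, n: i32) -> PushGuard<'_> {
        let restore_to = self.size;
        self.size += n; // pretend we pushed n values onto the Lua stack
        PushGuard { lua: self, restore_to }
    }
}

impl<'a> Drop for PushGuard<'a> {
    fn drop(&mut self) {
        let target = self.restore_to;
        self.lua.pop_stack_to(target);
    }
}

fn main() {
    let mut lua = Lua { size: 0 };
    {
        let _guard = lua.push_values(3);
    } // guard dropped here: stack popped back to depth 0
    assert_eq!(lua.size, 0);
}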

@glaebhoerl
Contributor

Rust only cares about memory safety, not about whether your program has a bug in it.

That's a bit extreme. There are many ways in which Rust tries to help ensure correctness even where memory safety is not implicated (mut checking, overflow checking, match exhaustiveness checking, lints, ... the list is probably long). Rust makes an iron-clad guarantee of safety, but that doesn't mean it shouldn't also try very hard for correctness. People who write Rust code care a whole lot about whether their code is correct. Every guarantee that Rust provides makes that job easier. It just happens to be the case that in unsafe blocks, correctness and safety often coincide.

@pythonesque
Contributor

@reem Rust only cares about memory safety, not about whether your program has a bug in it.

That's not true at all, and you know it. That's why we have Ord and PartialOrd, Eq and PartialEq, for example. Rust provides plenty of tools for program correctness that aren't related to memory safety. Leak does have a purpose in providing memory safety, and can coincidentally be used to provide other useful safety guarantees; these guarantees should be taken under consideration since these are also guarantees that exist in transitively safe Rust (so it's an explicit choice to opt out of them in unsafe code).

In this particular case I especially disagree. It would have been memory safe for Rust to just make deallocation unsafe. In fact, IIRC this is what Ada does. Rust didn't want to do that. Just because it's not an absolute rule that all destructors will run on stack unwind doesn't mean Rust can't guarantee that property in select cases. It's also not a guarantee that every API that can be "fixed" to work around this is going to have the same performance properties as the version that can just assume the destructor will be run; in fact, it seems almost inevitable to me that this will happen. Using unsafe traits to allow for safe performance optimizations is something that's come up several times now and looks likely to be useful going forward. Finally, at least in some cases it seems to me that the solution to avoid memory safety necessarily leads to an overall increase in unsafe code (for example, preleaking means you can't use any of the checked vector access methods); this, too, needs to be weighed against the argument that it's not strictly necessary.

@pcwalton
Contributor

I agree that Rust cares about bugs other than memory safety, but I wouldn't necessarily hold up PartialEq/Eq as a consequence of core language principles. That one was a tough call and was motivated as much by performance/consistency as anything. (To be specific, we wanted to allow hardware IEEE 754-compliant instructions to be usable to implement comparison on floats without changing language semantics when they're used, as for example happens in Haskell.)

A better example, IMO, would be match exhaustiveness. Even there, though, it's as much motivated by safety as it is by code size and control (not wanting the programmer to have to worry about the Rust compiler inserting panics everywhere).

I think safety features are strongest when they double as performance features. Exhaustiveness checking, the lack of null pointers and zero values, and the mutability restrictions all function not only as safety features but also as optimization enablers. Those kinds of features have a much different cost-benefit ratio, in my mind, than features such as Leak which purely aid safety of relatively rare types of code.

@Valloric

@pcwalton I'm curious; how does exhaustiveness checking enable better optimization? For the other examples you cited I can see the connection, but not for that one.

@comex

comex commented Apr 28, 2015

I think he's saying that the panic that would otherwise have to be inserted for missing cases costs code size. (You can also save two instructions for the jump table dispatcher if you're totally sure the value is in range, but I doubt LLVM actually does that...)

@pcwalton
Contributor

@comex It actually does elide the bounds checks, I believe. (That's why we have range metadata on our loads of enum discriminants.)

@Valloric Yeah, it's primarily the code size. (But it's also that it makes more code nounwind, which helps in various other ways by reducing CFG complexity.)

@comex

comex commented Apr 28, 2015

Hmm...

pub enum Op { A, B, C, D, E }
pub fn do_something(op: Op, n: i32) -> i32 {
    match op {
        Op::A => n * 2,
        Op::B => n + 1,
        Op::C => n / 2,
        Op::D => n | 1,
        Op::E => n & 0xffff,
    }
}

rustc --crate-type dylib -O test.rs
do_something::h23aaf44d8d2e9fbelaa:
[...snip...]
000000000000148d        movzbl  %dil, %ecx
0000000000001491        cmpl    $0x3, %ecx
0000000000001494        jbe     0x149b [**jump table**]
0000000000001496        movzwl  %si, %esi  [**the last case, & 0xffff**]
0000000000001499        jmp     0x14c3
000000000000149b        leaq    0x26(%rip), %rcx
00000000000014a2        movslq  (%rcx,%rax,4), %rax
00000000000014a6        addq    %rcx, %rax
00000000000014a9        jmpq    *%rax

It puts the last case as the default case and omits it from the jump table, but it could have saved a jump (and saved one entire byte, net - and sacrificed safety) by removing the cmp/jbe and keeping it in the jump table.

@rkjnsn
Contributor

rkjnsn commented Apr 28, 2015

In case anyone watching this is interested, I wrote up an alternative proposal: #1094

@nikomatsakis
Contributor

So, re-reading the thread here, I realize that I was so exhausted by writing this blog post that I failed to post it on all the relevant discussion threads. For the record, I wrote a blog post addressing my current feelings on this RFC and others:

http://smallcultfollowing.com/babysteps/blog/2015/04/29/on-reference-counting-and-leaks/

@aturon
Member

aturon commented May 7, 2015

Thanks @reem for the RFC, and everyone for the great discussion!

Of course, this needs to be settled prior to the 1.0 release next week, and the core team met yesterday to come to a final decision on the matter. This is truly a thorny problem with multiple reasonable paths to take, but in the end the analysis of the tradeoffs presented in Niko's blog post and the follow up represents the core team's consensus, which emerged through the discussion on this thread and others.

As such, we are going to close this RFC, to settle the matter for 1.0.

There is still discussion to be had about what to do with the scoped API (which might be best done through iteration in crates.io first). And in the future, we may want to consider making this kind of deferred computation a first-class feature.
