Revisiting specialization: Complementary traits #1658
Conversation
IIRC, a big drawback that came up when this was last discussed was that implementing traits becomes a breaking change. This isn't a problem in specialization (the lattice rule) because you can't use it to "punch holes".

@Stebalien No, that's not correct. In this RFC,

Oh, you mean breaking
For example:

```rust
// crate_1:
pub struct Bar;
pub trait Foo {}
```

```rust
// crate_2:
extern crate crate_1;
use crate_1::{Bar, Foo};

trait Baz {}
impl<A: !Foo> Baz for A {}
```

If -- Edit --
My take is that it is no different from e.g. removing the implementation, but I get the point. I think I have a solution, but I am heading for sleep now.

My point is that now removing or adding an implementation of a trait is a breaking change. You would have to implement all traits up-front. One solution is to say "no negative constraints on foreign traits". Another is to say "no hole punching" (if you have an impl with a negative constraint, you have to have a matching impl with the corresponding positive constraint); this is basically specialization.
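To make the hazard concrete, here is a hedged sketch in the RFC's hypothetical `!Trait` syntax (not compilable Rust; `Bar`, `Foo`, and `Baz` are the names used elsewhere in this thread):

```rust
// crate_1, version 1.0: Bar does not implement Foo.
pub struct Bar;
pub trait Foo {}

// crate_2 relies on that absence via a negative bound:
trait Baz {}
impl<A: !Foo> Baz for A {}   // applies to Bar today
fn use_baz<T: Baz>(_: T) {}  // use_baz(Bar) compiles against 1.0

// crate_1, version 1.1, adds only a new impl:
impl Foo for Bar {}
// Now `Bar: !Foo` no longer holds, the blanket impl of Baz no longer
// applies to Bar, and `use_baz(Bar)` in crate_2 stops compiling:
// adding an impl broke downstream code.
```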
@Stebalien I added the requirement of explicitly guaranteeing the absence of some bound.
It's quite different to add an impl than to remove an impl. The general principle is that increasing the information available to the typechecker should never be a breaking change in Rust. This is the same general principle behind the orphan rule, and it really defines Rust's coherence system. Prior to the most recent commit, which adds the requirement of an explicit impl, this RFC was a reiteration of #586, which was closed in part because of this issue. After that commit, it's a reiteration of #1148, which was closed because the language team wanted more experience with specialization before adding more coherence rules. The same issues remain, then. The big ones I recall:

All that said, since the previous RFC was postponed only a few months ago, I don't know if now is the time to raise the issue again.
On one and two, I have thought about this a lot, but have not yet hit any issues that aren't fixed by the rules in the current RFC. On your last point: I believe you are referring to the

It seems to me that these were postponed for multiple reasons:
I don't really know what this means; those two points are both design questions, not questions about soundness. There are multiple sound formalisms, the question is about which is the most useful.
The issue of
I'm not referring to any formalism in particular (and don't really like discussing things in terms of formalisms). The RFC text contains this statement:
My read of this statement (and some other comments, but this statement is most direct) is that inside a crate, one can assume that
If that's the case, I don't really see how this is messing with those.
Ah, ok. Sorry for misunderstanding. Well, I view this in a similar manner to the orphan rules: crates "control" their own implementations, and might break them. If you add an overlapping implementation in your own crate, you get an error, so this is already an existing issue today.
@withoutboats I agree. I think it still makes sense to let specialization stabilize, get used in practice, and have idioms emerge before considering the interaction with negative bounds. Personally, I see the appeal of negative reasoning (and have occasionally wanted the feature), but it's hard for me to predict how specialization will shake out without first getting a chance to use (and write!) libraries that make use of the feature. @ticki Do you think there's something that we've learned in the meantime that should make it easier to revisit this issue now? Or do you think we made a mistake in holding off until we get more experience with specialization in the first place? (I realize specialization has taken some time to land, and maybe that's a factor?)

Right. There was discussion on the last RFC about modifying these rules, especially because introducing negative bounds makes it more likely for users to run aground here.

The point is exactly to bring up multiple ideas to compete before stabilizing. I think that stabilizing without investigating orthogonal solutions would be unfortunate.
I was trying to read some of this RFC and basically found that I don't really understand what it's trying to say. For example, this inference rule:
What does this mean? I think of a bound

Also, @ticki, have you seen https://gist.github.com/nikomatsakis/d5fe4a4b5b075b3d0ec9? That contains roughly the state of my thinking about negative bounds the last time I thought about them. =) It still roughly describes what I believe to be an implementable and plausible system (and one I might like to see) -- but I'll note that negativity doesn't really replace specialization, due to the limitations imposed by not using classical logic (that is, in particular,
The misunderstanding here is the meaning of Having
I cannot think of anything where complementary traits aren't sufficient as a replacement for specialization. Do you have a concrete example on hand?
@ticki Would this be legal?

Yes.
I'm very confused by this comment. First, this is the exact backcompat hazard that #586 ran aground on. Second, this seems to me to be contradictory to our prior conversation and this comment in the RFC:
Necessarily, this means that nonlocal types without any impl of
I was speaking about local items. For non-local ones, it is not true.

@ticki I'm even more confused by that comment. Surely the type parameter of

Yeah, sorry. That example isn't local.
So does that mean that this RFC would not (in itself) allow violating parametricity? (The example I posted would be a classic example of a violation, and apparently it would not be allowed; but maybe there are other examples?) As far as I'm aware, this RFC is being pitched as an alternative to the specialization RFC, which has been accepted but not, as of yet, stabilized. If this RFC is in fact a viable replacement for it and preserves parametricity, then I would definitely prefer it; on the other hand, I strongly suspect that if examples like the one I posted are disallowed then @aturon and @nikomatsakis would regard it as not being a viable replacement. (Indeed, I just happened to scroll up and saw that @nikomatsakis wrote roughly the same thing.)

@glaebhoerl I very strongly want Rust to gain the ability to express that the types which implement two traits are disjoint sets, and I do not see it as an alternative to specialization for exactly the reason you describe. Specialization and disjointness can sometimes sort of imitate one another, but they are different features with different behaviors. In particular, I think disjointness is useful for the ability to describe a mutually exclusive 'trait hierarchy' defining abstractly the behavior of different scalar and collection types while explicitly declaring which traits cannot coherently be implemented by the same type (classic examples:

@withoutboats Yeah, I wasn't making any comment either way on that aspect of it.

@withoutboats I think I get your point, but I really cannot think of an example of specialization which strictly requires LEM.

@ticki How would you implement the specialization of
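For reference, the shape of example usually raised in this part of the discussion is a blanket impl with a specialized override. This is a hedged sketch using the unstable `specialization` feature (nightly-only; the trait and method names are invented for illustration). Expressing the blanket impl with complementary traits instead would require a bound naming the complement of every future specialization up front, which is exactly the classical-logic step the intuitionistic reading disallows:

```rust
#![feature(specialization)]

trait Describe {
    fn describe(&self) -> String;
}

// Blanket impl: covers every type, with an overridable default body.
impl<T> Describe for T {
    default fn describe(&self) -> String {
        "something".to_string()
    }
}

// Specialized impl for one concrete type. Under complementary traits,
// the blanket impl would instead need something like `T: !Special`,
// fixing the set of specializable types in advance.
impl Describe for String {
    fn describe(&self) -> String {
        format!("the string {:?}", self)
    }
}
```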
You gave two motivations:
So, IMO, a good first step would be a smaller RFC that introduces negative equality bounds. As an added bonus, that might motivate someone to actually implement equality constraints in where clauses...

† By "a lot simpler", I mean Rust can already do this (although I don't know how robust this is; copied from my post on internals):

```rust
#![feature(optin_builtin_traits)]

pub trait NotSame {}
impl NotSame for .. {}
impl<T> !NotSame for (T, T) {}

// Example usage:
trait Trait<T> {}
impl<T> Trait<T> for T {}
impl<A, B> Trait<A> for B where (A, B): NotSame {}
```
A reddit user has pointed out that even with the intuitionistic form, this feature could make adding impls a breaking change - specifically, blanket impls. Here's the example.

Here's crate A:

```rust
// crate A
trait Bar { }
trait Baz { }
```

Here's crate B, which depends on A:

```rust
struct Foo;
impl Bar for Foo { }
impl !Baz for Foo { }
```

Crate A is updated to include this impl:

```rust
impl<T> Baz for T where T: Bar { }
```

This is a breaking change.
Interesting. I hadn't thought of this before, but it seems obvious. In particular, I had been thinking about how the "intuitionistic" form can be modeled as two distinct traits.

I'll have to think on that. Not sure how to bring this together in my mind. It suggests that we may want a more limited form of negative reasoning, more targeted at specific use cases.
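The "two distinct traits" modeling can be sketched in plain, compilable Rust today: instead of a negative bound `!Foo`, a second positive marker trait stands in for "not Foo", opted into explicitly. All names here (`Foo`, `NotFoo`, `needs_not_foo`) are invented for illustration; note that nothing enforces that a type doesn't implement both traits.

```rust
// Sketch: modeling a negative bound `!Foo` as a separate positive
// marker trait `NotFoo` that types must opt into explicitly.
trait Foo {}
trait NotFoo {}

// A function requiring the *absence* of Foo becomes a positive
// requirement on NotFoo.
fn needs_not_foo<T: NotFoo>(_: &T) -> &'static str {
    "definitely not Foo"
}

struct A;
struct B;
impl Foo for A {}     // A opts into Foo
impl NotFoo for B {}  // B opts into "not Foo"

fn main() {
    assert_eq!(needs_not_foo(&B), "definitely not Foo");
    // needs_not_foo(&A) would not compile: A does not implement NotFoo.
}
```

The drawback is the one discussed in this thread: every type has to opt into one side up front, and changing sides later is a breaking change.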
Is it? Wouldn't this update give you the same kind of coherence issues if your code was this?

```rust
struct Foo;
impl Bar for Foo { }
impl NotBaz for Foo { }
```
So I believe the basic statement - "adding an impl is not a breaking change" - is not compatible with auto traits, no matter what. For example:

```rust
struct Foo;

trait Bar { }
auto trait Baz { }

impl<T: Baz> Bar for T { }
```

Adding

So we've got to accept that the statement is actually "implementing a non-auto trait is not a breaking change." It might be a worthwhile trade-off to modify it further, so that we have an enumerable list of cases in which implementing a trait is a breaking change:

This adds to the complexity of the rules, but fortunately, at least, it seems like blanket impls like this would be a very rare addition.
I don't understand what you are showing here. =( Similarly, this auto trait example feels incomplete?

What is the breaking change exactly?
@nikomatsakis The blanket impl presents the same problem if you just use a mutually exclusive marker trait vs having

```rust
// crate A
trait Bar { }

#[exclusive(NotBaz)]
trait Baz { }

#[exclusive(Baz)]
trait NotBaz { }
```

```rust
// crate B
struct Foo;
impl Bar for Foo { }
impl NotBaz for Foo { }
```

Now adding this impl to crate A is a breaking change:

```rust
impl<T> Baz for T where T: Bar { }
```

I don't know what you mean when you say there's a difference between this and having literal negative impls; they seem the same to me.

The breaking change in the other case is this: if you have an auto trait, negatively implementing it for a type is a breaking change, because people could have been using that type as implementing the auto trait. Negative impls of auto traits are inherently breaking changes. I think you could construct a case where doing a positive impl of Send for a type that contained a
Are you referring to this comment that I made?
If so, what I was trying to say is that they are the same thing, not that they are different. What I meant is that, if you did not have any
OK. I agree with this, but it doesn't worry me. That is, it seems obvious. =) Auto traits are basically the place where we traded explicit opt-in for implicit opt-in. Part of that is that opting out is a breaking change. A more worrisome -- but also known in advance -- case would be that adding a private field can cause you to (e.g.) no longer implement
ping @nikomatsakis @ticki status?
I had a conversation about specialization with @aturon a few weeks ago in which he convinced me of something related to this (well I don't think he even said anything to this effect, but he still convinced me). So the big issue with the coherence rules and making them more flexible is this - we want very badly to be able to say:
This is, unfortunately, quite impossible to say. So we want the next best thing - a small, easy to grok, easy to remember list of exceptions. With the changes to specialization that Niko has talked about on his blog, I believe we will be able to say:
The second of these gets wrecked by the introduction of arbitrary negative impls, because of the example in this comment. My position used to be that losing the ability to add blanket impls for pre-existing types was worth this feature, but now my position has changed. :-) What I realized during the conversation with @aturon was that in every use case I've had where I want negative bounds, I've wanted them as the supertrait for a trait. That is, I want this:

```rust
trait Rectangle: !Circle { }
trait Circle: !Rectangle { }
```

I do not ever want this:

```rust
struct Square;
impl !Circle for Square { }
```

In other words, I'm increasingly in favor of taking an approach where we only allow negative bounds as the bounds on the self type of a trait, and nowhere else.
I like this approach where a In particular, we could recognize the existing disjointness of
I have not yet attempted to make this work with specialization, but if that fails then one must currently write two separate types like:
So if we only allow traits to prove disjointness, we need to provide an orphan rule restriction on that. That is, we can't allow this to compile:

```rust
trait Foo: !ToString { }
impl Foo for Vec<char> { }
```

Because that would make it a breaking change to add

One way we could make this more self-justifying is by just making it incoherent to have
Basically the right way to say this is that we can't infer
Talked to @nikomatsakis a bit about this today. We talked about: if we only support these super trait negative bounds, it might be more sensible to conceptualize this in terms of a named "mutually exclusive group" of traits, and then you declare your trait is a part of that group. A problem is that it's hard to come up with a natural syntax. Some pretty wild syntaxes follow:

```rust
// negative bound syntax (equivalent to subsequent examples)
trait Circle: !Rectangle { }
trait Rectangle: FooBar + !Circle { }
```

```rust
trait enum Shape;
trait Circle: Shape { }
trait Rectangle: Shape + FooBar { }
```

```rust
trait enum Shape {
    trait Circle { }
    trait Rectangle: FooBar { }
}
```

```rust
group Shape;
trait Circle in Shape { }
trait Rectangle in Shape: FooBar { }
```
@rfcbot fcp postpone

I'm proposing we postpone this and #1672 for a later date, when chalk is up and running, and we can put forward a unified proposal for how to expand the negative reasoning performed by our coherence system. I still want something like this RFC someday, but keeping this RFC open is not tracking progress toward that goal at all, and I think these two RFCs will want to be majorly revised into a single proposal.

Team member @withoutboats has proposed to postpone this. The next step is review by the rest of the tagged teams: No concerns currently listed. Once these reviewers reach consensus, this will enter its final comment period. If you spot a major issue that hasn't been raised at any point in this process, please speak up! See this document for info about what commands tagged team members can give me.

🔔 This is now entering its final comment period, as per the review above. 🔔

The final comment period is now complete.

Closing as postponed. Thanks for the RFC @ticki, I'm really optimistic that we'll see a feature in this space someday.
We revisit specialization through 'complementary traits'. This has been considered previously, but postponed mainly due to coherence issues. The basic idea is to add negative trait bounds, denoted `!Trait`, which are satisfied by types that do not implement some trait.
Rendered.
cc @Manishearth, @withoutboats, @nikomatsakis, @glebm, @aturon.