Tracking issue for const_eval_limit: configurable, per-item instruction limit for CTFE #67217
I'd like to give it a go, although I will have to dig into this matter first to better understand it. Wouldn't it help to make #![recursion_limit] applicable at the item level?
Upon reflection, this is more like E-medium. I still think it's a good first issue because it doesn't require too much familiarity with the compiler.

rust/src/librustc_mir/const_eval.rs, line 32 (at a233302) is where the instruction threshold for the infinite loop detector is hard-coded. Search for where this constant is used, and it will take you to the spot where the count can be checked. Note that terminators, not statements, are being counted.
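A minimal standalone sketch of the mechanism being described (not the actual rustc internals; the names and the threshold value here are illustrative): a hard-coded threshold, checked once per executed terminator, after which the loop detector would kick in.

```rust
// Illustrative model only; the real constant lives in librustc_mir/const_eval.rs.
const STEPS_UNTIL_DETECTOR_ENABLED: usize = 1_000_000;

struct StepCounter {
    /// Number of MIR terminators (not statements) executed so far.
    terminators_executed: usize,
}

impl StepCounter {
    /// Called once per executed terminator. Once the hard-coded threshold is
    /// exceeded, the (expensive) infinite-loop detector would start running.
    fn on_terminator(&mut self) -> bool {
        self.terminators_executed += 1;
        self.terminators_executed > STEPS_UNTIL_DETECTOR_ENABLED
    }
}
```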
Before implementing anything, I think there needs to be a discussion about what the language design should be here, at least if this is going to be user-facing.
Of course!
If we have an instruction limit, why do we need a loop detector at all? FWIW, I'd not count instructions but blocks -- each block is of finite length and this way we are somewhat less sensitive to long swaths of straight-line code.
We wouldn't. In the medium term, I would like to remove it. In the short term, I think #67216 should be blocked on providing some recourse to the user when their code runs long enough to trigger the detector. It seems like an attribute in the style of …
We have such a recourse right now, in the form of the current loop detector, right? If you think that's not good enough, implementing a limit on the number of basic blocks should be fairly trivial -- and then removing the loop detector also isn't very hard. So I am not sure what big piece of work you are planning here that mandates a short-term strategy with further medium-term adjustments?
I mean that, if I have a very expensive const initializer that I know will terminate, I want a way to run it to completion without traversing the heap every 2^8 basic blocks. Whatever system of attributes we end up with will require an actual design, even if it's not hard (some might even say E-easy). Meanwhile, beta branches in about a week (I think), and I'd like to avoid another …
But it seems you want to land an instruction/block counter before the beta branches? At which point the loop detector becomes redundant. But you want to keep it anyway? I don't get it.^^
I'd like to do something quickly that's better than a …
Okay, so the plan of action is this?
I guess I don't see why step 3 can't be done together with step 1, but I also don't care strongly enough to push for that.^^
Step 1 is more about picking the syntax for the attribute, while step 2 would be to fully specify its semantics. If we can't settle on syntax in the next week, I'll just add a temporary …

A strawman proposal is … I think we should use the word "instruction" (or something similar) in the attribute, even though we are actually counting basic blocks.
By the way, I'm assuming that we will put the attribute itself behind a feature gate. We don't have to stabilize something in a week, just get something merged.
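Purely as an illustration of the kind of user-facing surface being discussed (syntax and naming are not settled here; the attribute and feature names below are simply the ones this issue ended up tracking, shown as a crate-level knob in the style of `#![recursion_limit]`):

```rust
// Hypothetical usage sketch, not a stabilized interface.
#![feature(const_eval_limit)]
#![const_eval_limit = "4000000"]

// A const initializer that loops at compile time; with a configurable limit,
// the author can raise the cap instead of relying on a hard-coded detector.
const SUM: u64 = {
    let mut total = 0u64;
    let mut i = 0u64;
    while i < 100_000 {
        total += i;
        i += 1;
    }
    total
};
```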
Other suggestions: … But yeah, this would be unstable for now anyway.
I don't see how we can backwards-compatibly add these (unless we add them only if the code has a loop or branch anywhere, which seems like the only important case), because if a user has code that compiles right now but would trigger the limit when it is introduced, this breaks their code.
But yeah, please add the unstable attribute so it gets into beta, even if it doesn't do anything yet.
Okay, this comment assumes that everyone is comfortable with adding some kind of attribute to limit the time spent in const-eval behind a feature flag and hashing out the exact semantics later. Please object if this is not the case.

@TheSamsa, my intention is still to mentor someone through implementing the attribute, although the time pressure is not ideal. Do you have some time to spare this week?

For anyone who wants to take this on, the first step would be to add a new feature gate, with a descriptive name, at the end of the following list. I would create a new tracking issue specifically for the attribute, since this issue is too broad.

rust/src/librustc_feature/active.rs, lines 526 to 531 (at 90b957a)

You'll want to add a symbol for the feature gate, as well as the name of the attribute itself. I believe the name of the attribute must begin with …

rust/src/libsyntax_pos/symbol.rs, lines 111 to 117 (at 90b957a)

You'll then need to actually parse and store the attribute. I don't have detailed instructions for this, but looking at the code that handles … Once this is complete, I can instruct you how to incorporate this information into the compile-time interpreter.
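A very rough sketch of the first two edits described above, based on the general shape of those files at the time; every name, version number, and issue number here is a placeholder rather than something taken verbatim from the codebase:

```rust
// In src/librustc_feature/active.rs, a new entry at the end of the
// `declare_features!` list of active (unstable) features:
declare_features! (
    // ... existing entries ...

    /// Allows limiting the number of steps executed during const-eval.
    (active, const_eval_limit, "1.41.0", Some(67217), None),
);

// In src/libsyntax_pos/symbol.rs, the attribute/feature name is added to the
// pre-interned `symbols!` list (bare identifiers, kept in order):
//
//     const_eval_limit,
```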
Also, instead of having a sentinel value, we could have a proper lint for long-running const-eval that defaults to warn.
@ecstatic-morse I'd really, really like to tackle it. BUT I am not sure if I understand the parts well enough to wrap this up in a week. Besides, I guess mentoring could be harder due to timezones (UTC+1 here). Nonetheless, I'll try my own implementation (for my learning) of the first steps in the next day or two.
For context, the RFC states that: …
I would prefer to start out with a coarse per-VM granularity and consider per-item as needed. I also don't have an intuition for what per-item means in the face of mutually recursive functions (where e.g. both functions use …).
I agree with @RalfJung's idea here, and it seems consistent with what the RFC text above states.
I don't understand why we need to block #67216 (especially for beta bootstrap) on having this attribute in nightly as well. I would prefer not to rush things. We can still block stabilization of …
I would prefer something vague and brief, e.g. @RalfJung's suggested …
@oli-obk It has to be a lint to be backwards compatible, as @ecstatic-morse suggests:
I see this as a two-step thing: …
I'm not, but I've nominated this issue for language team discussion to ensure there's agreement about the general direction.
Denying a lint won't suddenly stop the loop detector from running; this will require some additional logic beyond that. We'll also need to take care to report the lint at the root of the evaluation and not, e.g., somewhere in another crate when calling const fns from other crates.
I'm removing the nomination here because there doesn't seem to be a specific lang-team topic. If y'all would like some input, can somebody summarize the precise question? For example, should we evaluate #67260? If so, can someone summarize the design implemented there?
Status update: #67260 landed, but from what I can see that is a global limit, not per-item as the issue here tracks. It's just the first step, then.
I'm gonna nominate this issue for @rust-lang/lang discussion, since this is blocking #52000.

At the moment, we have a global const-eval limit that will execute a fixed number of MIR basic blocks before emitting a …

In the issue title and summary, I mentioned a "per-item" limit. I had envisaged this working the same as the current, "per-crate" limit, except that each …

@nikomatsakis has expressed the desire for this to work on …

Which of these variations do people prefer? Personally, I'm still partial to the version that works on …
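To make the variations concrete, here is an illustrative comparison, reusing the attribute name from #67260. The per-item and per-function forms are hypothetical; exactly which items the attribute would attach to is the open question.

```rust
// Variant 1: per-crate (what #67260 landed) -- one limit for the whole crate.
#![const_eval_limit = "2000000"]

// Variant 2: per-item -- the limit applies while evaluating this particular
// initializer. (Hypothetical syntax; such an attribute position does not exist.)
#[const_eval_limit = "10000000"]
const EXPENSIVE: u64 = expensive();

// Variant 3: per-function -- the limit is attached to a `const fn` and applies
// whenever that function runs during const-eval, wherever it is called from.
// (Also hypothetical.)
#[const_eval_limit = "10000000"]
const fn expensive() -> u64 {
    // stand-in for a computation that executes many basic blocks
    0
}
```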
It should be easy to make this work on … Maybe what would make more sense is to add it to the current limit and revert to the previous step value after executing a function that has a limit override. This way, the function can make itself easier to execute, and the user can still bump the value further in case the function screwed up the limit.
The problem with adding up limits is that it can result in no effective limit for recursive …

There's also the problem of explainability. Limits at the crate or const-eval root have a clear interpretation (attribute at crate root overrides default, attribute on …).

For the record, I think it would be possible to disallow …
One thing we could do is: when we enter a function that has a per-function limit and that function is not yet on the call stack (to handle recursion), we store the current limit... somewhere, and then we make that function's limit the new limit. When this function returns, we restore the old limit. That prevents changes to this limit, and indeed any changes to what this function does, from "leaking" to the callers. The main issue I see with this scheme is higher-order functions: …
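A minimal, self-contained model of this save/restore scheme (toy code, not rustc internals; the recursion check, i.e. only applying the override when the function is not already on the call stack, is omitted for brevity):

```rust
struct Evaluator {
    /// The limit currently in force.
    active_limit: usize,
    /// Steps executed so far across the whole evaluation.
    steps: usize,
}

impl Evaluator {
    /// Charge one step against the active limit.
    fn step(&mut self) -> Result<(), &'static str> {
        self.steps += 1;
        if self.steps > self.active_limit {
            Err("const-eval step limit exceeded")
        } else {
            Ok(())
        }
    }

    /// Run `body` with the callee's limit (if any) in force, then restore the
    /// caller's limit so the override cannot "leak" outward.
    fn with_limit<R>(&mut self, callee_limit: Option<usize>, body: impl FnOnce(&mut Self) -> R) -> R {
        let saved = self.active_limit;
        if let Some(limit) = callee_limit {
            self.active_limit = limit;
        }
        let result = body(self);
        self.active_limit = saved;
        result
    }
}
```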
I considered that, but that would mean that library authors could set a low limit on a function, get it wrong, and then callers have no control over setting the limit higher.
OK, so there are a few considerations, I guess. One of them is that the caller may supply some input that requires a lot of execution, and hence the callee may not know that an annotation would be required (because they only ever used the fn with small inputs). The other is that, as I frequently suspect, the callee may know that the fn is very computationally heavy and thus needs a "high limit". However, it's true that it may be hard to determine an appropriate limit without knowing what inputs the caller is using.

My concern was initially about the latter point: I think the experience with macro-rules is that people often define "complex macros" and then have to document that all consuming crates have to add these esoteric annotations, which seems suboptimal. But the former point is very true.

This suggests, though, that something like what @RalfJung suggested -- where we remember the limit as we enter a function -- might be good, but perhaps with a "max" instead? I.e., we remember a max, and when we return from the function we pop it back to the old max, but the limit never goes down?
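Continuing the toy model from the sketch a few comments up, the "max" variant would only ever raise the effective limit for the duration of the call, and would still restore the caller's value on return:

```rust
impl Evaluator {
    /// Like `with_limit`, but a callee can never lower the limit already in force.
    fn with_limit_max<R>(&mut self, callee_limit: Option<usize>, body: impl FnOnce(&mut Self) -> R) -> R {
        let saved = self.active_limit;
        if let Some(limit) = callee_limit {
            // Take the maximum, so the limit never goes down.
            self.active_limit = saved.max(limit);
        }
        let result = body(self);
        self.active_limit = saved;
        result
    }
}
```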
This was discussed at the @rust-lang/lang meeting. One concern was that, by tying the step limit to MIR basic blocks, we make it possible for changes to MIR building or optimization to break code. One way to mitigate this is to count only function calls and jumps to already-executed basic blocks, as opposed to all basic block terminators. My concern with this change was that there would be even less correlation between the wall-clock time spent in const-eval and the …

Separately, there seems to be consensus that making this attribute work per-function is worth doing. The specifics were not decided on, but something like Ralf's approach above seemed promising.
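A toy model (not rustc code) of the mitigation mentioned above: count only calls and jumps back to blocks that have already been executed, so that splitting or merging straight-line blocks during MIR building does not change the count.

```rust
use std::collections::HashSet;

#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct BasicBlock(u32);

enum Terminator {
    Goto(BasicBlock),
    Call,
    Return,
}

#[derive(Default)]
struct FrameCounter {
    visited: HashSet<BasicBlock>,
    counted_steps: usize,
}

impl FrameCounter {
    fn on_terminator(&mut self, term: &Terminator) {
        match term {
            Terminator::Goto(target) => {
                // Only back edges count: a jump to a block we have already visited.
                let first_visit = self.visited.insert(*target);
                if !first_visit {
                    self.counted_steps += 1;
                }
            }
            // Every function call counts.
            Terminator::Call => self.counted_steps += 1,
            Terminator::Return => {}
        }
    }
}
```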
There is some relevant discussion happening at #93481 |
Discussed in T-lang backlog bonanza -- tagging with design concerns; the discussion of the concerns raised in #93481 (comment) needs to be driven to conclusion. One suggestion brought up in the bonanza was a timeout (for const fn or just general compilation) at 10 minutes or 30 minutes or whatever.
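For comparison, a tiny sketch of the wall-clock alternative (illustrative only; the figures below are just the ones floated in the meeting):

```rust
use std::time::{Duration, Instant};

/// Abort const-eval (or the whole compilation step) after a fixed wall-clock
/// timeout instead of counting executed MIR terminators.
struct Deadline {
    started: Instant,
    timeout: Duration,
}

impl Deadline {
    /// e.g. Deadline::new(Duration::from_secs(30 * 60)) for the 30-minute suggestion.
    fn new(timeout: Duration) -> Self {
        Deadline { started: Instant::now(), timeout }
    }

    /// Checked periodically from the interpreter loop.
    fn check(&self) -> Result<(), &'static str> {
        if self.started.elapsed() > self.timeout {
            Err("const evaluation timed out")
        } else {
            Ok(())
        }
    }
}
```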
Currently, the const evaluator will continue forever if the program it is executing never terminates. There's a mechanism for detecting trivial infinite loops during const-eval, but this can't catch every non-terminating program.
We should have a configurable instruction limit for CTFE that's set to a relatively low initial value. Furthermore, disabling the instruction limit should also disable the infinite loop detector, since the user has indicated that they expect their program to execute for a long time.
The existing `#![recursion_limit]` and `#![type_length_limit]` attributes are somewhat related, but it would be ideal if the instruction limit could be applied per-item as well.

cc @rust-lang/wg-const-eval
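For reference, the two existing crate-level knobs mentioned above; both take a string literal and apply to the whole crate, which is the model the proposed limit would start from (the values shown are just examples):

```rust
#![recursion_limit = "256"]
#![type_length_limit = "1048576"]
```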
As of 2022-01-31, the default `const_eval_limit` is 1_000_000:

rust/compiler/rustc_middle/src/middle/limits.rs, line 40 (at af46699)