Increase default test timeout #32973
Comments
comment:1
What about having a "hard" timeout of 7 minutes, but keeping a "soft" timeout of 5 minutes (or less)? Meaning: tests run to the end for 7 minutes, but files taking more than 5 are reported at the end as "taking too long". In effect, the whole file is doctested, but it still shows as an error/warning at the end.
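The hard/soft split described above could be sketched roughly as follows (a hypothetical illustration in plain Python, with invented names like `run_with_timeouts`; the real sage doctest runner is structured differently):

```python
import subprocess
import time

def run_with_timeouts(argv, hard=7 * 60, soft=5 * 60):
    """Run ``argv``; return (ok, elapsed_seconds, too_slow).

    The ``hard`` timeout kills the process outright, while the ``soft``
    one only flags the run so it can be reported as "taking too long"
    in the final summary -- the whole file still gets doctested.
    """
    start = time.monotonic()
    try:
        ok = subprocess.run(argv, timeout=hard).returncode == 0
    except subprocess.TimeoutExpired:
        ok = False  # hard timeout: the run is an outright failure
    elapsed = time.monotonic() - start
    return ok, elapsed, elapsed > soft
```

A driver would collect the `too_slow` flags and print them at the end, so slow files show up as warnings rather than hard failures.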
comment:2
Replying to @tornaria:
I don't think a warning about files that take a long time is too useful, because what would we do to fix them? We have [...] As far as I know, the [...] So, ultimately, I think we should be setting the default timeouts so that [...]
comment:3
What you say makes perfect sense, thanks for the explanation. I was worried that a slow doctest like the one in #32964 would be missed. In my experience it only occasionally timed out on i686, and it never gave any warning on x86_64. Maybe this just means the [...]
comment:4
It does look like [...] But I think the main reason that I found #32964 and others like it is that nobody else is running the test suite without [...]
Author: Michael Orlitzky
Commit:
Branch: u/mjo/ticket/32973
comment:5
I've gone all the way to 10 minutes. With [...] I still plan to check the timings on my laptop, probably tonight, to make sure that 10 is enough. New commits:
comment:6
The laptop (Core 2 Duo @ 1.8 GHz) actually fares better than my desktop, but with fewer threads (2 vs 4). So ten minutes looks like a good number.
comment:7
Replying to @orlitzky:
On my desktop, [...] is set to ~36s (it varies a little bit).
In the void package I'm running the test suite using [...] It might be useful if:

a. The slow doctest warning is summarized at the end (just mentioning the file is useful enough so one can search for it in the log).
b. There's an option so that the check exits with an error on warnings, so this is caught by the CI.
c. Even if the test is run with [...]
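Items (a) and (b) might look something like this (a hypothetical sketch, not the actual sage-runtests code; `summarize_slow` and the tuple format are invented):

```python
def summarize_slow(results, fail_on_slow=False):
    """Summarize slow doctest files at the end of a run.

    ``results`` is an iterable of (filename, seconds, too_slow) tuples;
    the return value is a process exit code (0 or 1).
    """
    slow = [(name, secs) for (name, secs, too_slow) in results if too_slow]
    for name, secs in slow:
        # Just naming the file is enough: one can grep for it in the log.
        print(f"Warning: {name} took too long ({secs:.1f}s)")
    # With fail_on_slow (item b), slow files become a CI failure.
    return 1 if (slow and fail_on_slow) else 0
```

The driver would end with `sys.exit(summarize_slow(results, fail_on_slow=...))` so that CI notices the warnings.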
comment:8
Replying to @orlitzky:
LGTM, although I should mention that I have a box where [...] Maybe the time can be adaptive (based on [...]).
comment:9
Replying to @tornaria:
There is some voodoo that happens in [...] My problem is that I usually have too many test failures for it to use the stored timings (testing development branches and package upgrades all the time), so it gives me no [...]
I think this one already happens? The other two are good ideas; I opened #32995 for those.
comment:10
Replying to @tornaria:
We would still need a fallback if the timing information isn't usable -- if [...] I'm sure it happened once, but I don't recall any instances where someone, accidentally or otherwise, uploaded a branch with dangerous code and sent it to the patchbots.
comment:11
Replying to @orlitzky:
Aha! That's it. I run tests in a clean chroot with a just-built sagemath. Hence the warn_long time defaults to infinity, which is bad. Can we make the default a sane value? Even 60s would be small enough to catch #32964 on a fast machine, but maybe it should be smaller. It's kind of weird that doctesting needs [...]
There ought to be a simple way to get a good-enough approximation to second_on_modern_computer() that is usable when stats have not been collected. This is just a timeout, not a benchmark, so anything approximate enough that gives a reasonably small value on a fast computer, while still allowing enough time on slower computers, would be good. Then both warn_long and the per-file timeout can be multiples of that, so both fast and slow computers are happy. And it can still be "more precise" when doctest timings are available. I mean: just bogomips may be good enough for this purpose (i.e. better than infinity or a constant).
I didn't check, so it may not be true.
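A bogomips-based fallback along these lines could look like the following (Linux-only, as noted later in the thread; the function names and the baseline constants are made up for illustration):

```python
def bogomips():
    """Return the first bogomips figure from /proc/cpuinfo (Linux),
    or None when it can't be read (other platforms, containers, ...)."""
    try:
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.lower().startswith("bogomips"):
                    return float(line.split(":")[1])
    except (OSError, ValueError, IndexError):
        pass
    return None

def default_warn_long(baseline_bogomips=5000.0, baseline_warn=60.0):
    """Scale a 60s warn-long default by the machine's bogomips.

    Falls back to the plain constant when bogomips is unavailable --
    either way, better than a default of infinity.
    """
    b = bogomips()
    if b is None or b <= 0:
        return baseline_warn
    return baseline_warn * baseline_bogomips / b
```

Slower machines (lower bogomips) get a proportionally larger warn-long threshold, while fast machines stay near the baseline.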
comment:12
TL;DR the effective default for [...]
comment:13
Replying to @tornaria:
This is a good solution, but the devil is in the details. BogoMIPS won't be available on Windows, for example. What else can we use? The whole thing is really a mess: when you allow sage to choose your [...] The [...]
comment:14
I've switched [...]
comment:15
Replying to @orlitzky:
I meant any stupid benchmark; whatever it is, I don't think it's very important. This is just so the default timeout can be reasonably small on fast CPUs while being large enough on your slow computer. First silly thing that I could think of: [...]
This gives me: [...]
What do you get? Does it look like a reasonable approximation to [...]? Would K * secs_approx() be a reasonable timeout? I'm guessing your slow cpu (Core 2?) will give more than 2, so K = 300 or less should give you more than 10 minutes while giving 3 minutes or less on recent CPUs.
I fail to see how this is a wall vs cpu time issue. I agree 100% on your switch to cputime, but this is orthogonal: different cpus can do different amounts of work in the same cpu time. The stupid benchmark above is trying to do a "fixed amount of work" (whatever that means) and measuring the cputime taken gives a rough conversion factor between cpu time and work.
Fair enough; that's ok on the assumption that each doctest runs in a single thread. AFAIK the doctest runner parallelizes by running multiple independent doctests in parallel, and OMP_NUM_THREADS is set to 1 (or 2?). It's still useful to have a reasonable approximate conversion factor, as above, that depends on the CPU (not on the number of threads), so the timeout on a Core 2 is still 10 minutes wall time but the timeout on a Coffee Lake is 2-3 minutes at most.

NOTE: my suggestion above is a REALLY dumb proxy for seconds_on_modern_computer(). We could keep the factor calculated using doctest stats if available. OTOH, this is also broken, since the doctest stats are stored in $HOME/.sage without the hostname; my $HOME is shared between my Nehalem and my Coffee Lake, and the doctest times have a factor of 2 between them, so using my stupid benchmark might be better in some cases (I do like to build on Coffee Lake and then test on Nehalem, so I know if there are illegal instructions).

It might be better to do it in cython (with a larger value of p) to get a more meaningful value (i.e. closer to what sage actually does), but I bet this is already good enough.
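The benchmark snippet and its output did not survive the migration from Trac, so the following is only a hedged reconstruction of the idea -- a fixed amount of work measured in CPU time -- reusing the `secs_approx` name from the comment above; the loop body and the modulus are invented:

```python
import time

def secs_approx(n=2_000_000):
    """CPU seconds needed for a fixed amount of pure-Python work.

    The absolute number is meaningless; what matters is that the ratio
    between two machines tracks their relative single-thread speed,
    giving a rough conversion factor between CPU time and "work".
    """
    start = time.process_time()  # CPU time, not wall time
    acc = 0
    for i in range(n):
        acc = (acc + i * i) % 1000003  # arbitrary fixed work
    return time.process_time() - start

def timeout_for(K=300, n=2_000_000):
    """Timeout as a multiple of the calibration, per the suggestion:
    K = 300 gives ~10 minutes on a machine where secs_approx() ~ 2
    (an old Core 2) and ~3 minutes or less on a recent CPU."""
    return K * secs_approx(n)
```

The warn-long threshold could then be a smaller multiple of the same calibration, so both defaults scale together.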
comment:16
Replying to @orlitzky:
After your patch in #32981, doctests warn if > 10s when not using [...] I would expect that "non-long" doctests warn at the same default time regardless of [...] Maybe: [...]
Also, IMHO all default values should be multiples of [...]
comment:17
Replying to @tornaria:
I get between 1.4 and 1.5 on my desktop (the slower of the two).
Disclaimer: I still think this is overkill for the timeout value, which is intentionally used maybe once a decade. But yes, in general, I think the idea of a fixed amount of work (rather than a fixed amount of time) is preferable.
I know, but wall time is simply the worst. I can mess up the wall time by, e.g., running the computation at 2am when daylight savings time switches on/off. CPU time is still flawed, but it is always going to be a better metric than wall time.
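The difference is easy to demonstrate: anything that blocks the process (I/O wait, suspension) inflates wall time while CPU time stays flat. A small illustration (the `measure` helper is invented for this sketch):

```python
import time

def measure(fn):
    """Return (wall_seconds, cpu_seconds) spent running fn()."""
    wall0, cpu0 = time.monotonic(), time.process_time()
    fn()
    return time.monotonic() - wall0, time.process_time() - cpu0

# Sleeping stands in for any blocked state: wall time accumulates,
# while CPU time (the work actually done) barely moves.
wall, cpu = measure(lambda: time.sleep(0.5))
```

A timeout keyed on `process_time` would not fire just because the machine was busy or suspended, which is the point being made above.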
There are some things we'll never be able to account for. New versions of python may change how much work is involved in the "fixed" amount of work. Many sage doctests depend on Cython or C libraries whose efficiency depends on CFLAGS, the linker used, etc. Python itself (being written in C) is subject to the same uncertainties. Still, I think if we're careful, we can do a lot better than the existing [...]
comment:18
Replying to @tornaria:
I think a separate option is the best way to do it cleanly. We'll probably wind up with less code that way than if we add a bunch of special cases to guess at what should happen. (These options are for developers only, so I'm not too worried about overcomplicating the UI.) I added another item to #32995 for this.
Yes, this makes more sense for [...] I propose the following: [...]
…t timeout

When running the test suite on an older machine, many files time out. For example,

```
$ sage -t src/sage/manifolds/differentiable/tensorfield.py
...
File "src/sage/manifolds/differentiable/tensorfield.py", line 248, in sage.manifolds.differentiable.tensorfield.TensorField
Warning: Consider using a block-scoped tag by inserting the line 'sage: # long time' just before this line to avoid repeating the tag 4 times
    s = t(a.restrict(U), b) ; s # long time
Timed out (and interrupt failed)
...
----------------------------------------------------------------------
Total time for all tests: 360.3 seconds
    cpu time: 0.0 seconds
    cumulative wall time: 0.0 seconds
```

This has run over the default (non-long) test timeout of 300s. This commit doubles that default, a change that should be unobjectionable for a few reasons:

1. This timeout is a last line of defense intended to keep the test suite from looping forever when run unattended. For that purpose, ten minutes is as good as five.
2. As more tests get added to each file, those files take longer to test on the same hardware. It should therefore be expected that we will sometimes need to increase the timeout. (Basically, if anyone is hitting it, it's too low.)
3. We now use Github CI instead of patchbots for most automated testing, and Github has its own timeout.
4. There is a separate mechanism, --warn-long, intended to catch tests that run for too long. The test timeout should not be thought of as a solution to that problem.

Closes: sagemath#32973
URL: sagemath#36223
Reported by: Michael Orlitzky
Reviewer(s):
Having just run through the default (non-`--long`) test suite for the first time in a while, I've hit ... which runs afoul of our default timeout in `src/sage/doctest/control.py`:

There's nothing wrong in that file; it just has a lot of tests. Even after #32967, I think it would be safer to go straight to 7 minutes, because the runtime is more or less nondecreasing.
Component: doctest framework
Author: Michael Orlitzky
Branch/Commit: u/mjo/ticket/32973 @ c3b0124
Issue created by migration from https://trac.sagemath.org/ticket/32973