Convert python test suite to pytest #949
Thanks for all those good ideas. I don't have experience with pytest, but that sounds like a reasonable solution, and some prototyping to get a sense of it would be good.
Craig's going to bring it up on gdal-dev when he's back from leave next week, but I can probably answer most of these:
Yes, you can have pretty advanced test selectors, e.g.:
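For instance (pytest's `-k` and `-m` options are real; the paths and mark name are hypothetical):

```
# run everything under a directory
pytest autotest/gcore/

# run a single test function in a single file
pytest autotest/gcore/misc.py::test_misc_1

# select by keyword expression, or by mark
pytest -k "tiff and not bigtiff"
pytest -m slow
```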
Tags are pretty powerful: you can mark tests with anything and then use that to include/exclude sets. There's also the usual skip, skip-if, and expected-fail handling on top of that.
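A quick sketch of those marking features (the `skip`/`skipif`/`xfail` decorators are real pytest APIs; the custom mark name `require_curl` is invented for illustration):

```python
import sys

import pytest


@pytest.mark.skip(reason="not implemented yet")
def test_not_ready():
    pass


@pytest.mark.skipif(sys.version_info < (3,), reason="py3-only behaviour")
def test_py3_only():
    pass


@pytest.mark.xfail(reason="known driver bug")
def test_known_failure():
    assert False


# A custom tag: include/exclude at run time with `pytest -m require_curl`
# (custom marks would normally be registered in pytest configuration).
@pytest.mark.require_curl
def test_http_download():
    pass
```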
Python supports re-framing tracebacks, so between that and pytest's custom assertions I think it will be good. We could also use custom assertions to do e.g. consistent image/metadata/etc comparisons across the test suite. Since pytest collapses output for successful tests by default, tests can print/log in much more detail; I think the gdaltest.reason stuff could [eventually] go away in a lot of cases, since the assertion itself would provide the "where". Enabling a custom gdal log handler for tests, so we can turn on CPL_DEBUG/CPL_CURL_VERBOSE/etc automatically for most tests, might be helpful too: it means the output is automatically on hand when tests fail. (Though I'm debugging a CPL_DEBUG-only segfault at the moment, so it might not always be helpful!)
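As a sketch of that idea (the config-option names come from the comment above; whether GDAL should instead use `gdal.SetConfigOption` is an open question), a stdlib-only helper that flips the variables on for the duration of a test and restores them afterwards:

```python
import contextlib
import os


@contextlib.contextmanager
def gdal_debug_env():
    """Hypothetical helper: enable verbose GDAL logging around a test.

    Saves the previous values so the environment is restored even if
    the test body raises.
    """
    keys = ("CPL_DEBUG", "CPL_CURL_VERBOSE")
    old = {k: os.environ.get(k) for k in keys}
    os.environ["CPL_DEBUG"] = "ON"
    os.environ["CPL_CURL_VERBOSE"] = "YES"
    try:
        yield
    finally:
        for k, v in old.items():
            if v is None:
                os.environ.pop(k, None)
            else:
                os.environ[k] = v
```

In a pytest conversion this would likely become an (optionally autouse) fixture rather than an explicit context manager.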
pytest supports both py2 & py3, so there's no reason we can't support both. This is a different idea from eg. tox which runs your test suite under different python versions — I can't see a lot of benefit for GDAL for something like that except making sure SWIG is doing its job? But tox & pytest can work together if there is specific python compatibility we want to test.
Sure, at work we have tests that fork subprocesses and start/stop servers and all sorts of crazy stuff. If we can wrap it in a function that checks the return code/output/etc it should be good.
Note that GDAL tests assume they are run with the current working directory set to the directory containing the test file, because of relative test filenames. The pymod/gdaltest.py infrastructure changes the CWD automatically.
I agree. Having to write `gdaltest.post_reason("some painful message"); return "fail"` is the worst part of the current testing experience (I'm super lazy and in general my reason is just "fail", and I look at the line number in the report to figure out where, and then why, it failed). My remark was just to make sure that for tests not yet upgraded and using the compatibility shim, we would still get this line number, because the "fail" reason might not be sufficient to locate the precise point of failure.
My question was to make sure that there aren't pytest APIs/constructs that differ between py2 & py3, so we can have the same autotest code. Testing py2 vs py3 is needed since we have SWIG typemaps that are fairly different in the two cases (string vs byte array handling, of course...)
By default, pytest runs under whichever Python interpreter invokes it. You can also invoke it under a specific interpreter explicitly, e.g. `python3 -m pytest`.
Sure, so in CI we could test under both interpreters just by running pytest with each one. Not sure most developers need to run all tests against both python versions during dev though? We can skip tests based on python interpreter version if there are particular oddities we're testing.
Sure, so the compat wrapper could fix that by changing the cwd as part of a fixture.
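A minimal sketch of that, stdlib only (in a real pytest conversion this would likely be wrapped in an autouse fixture that derives the directory from the collected test file):

```python
import contextlib
import os


@contextlib.contextmanager
def run_in(directory):
    """Change CWD to the given test directory, as pymod/gdaltest.py does,
    and restore the previous CWD afterwards even if the test fails."""
    previous = os.getcwd()
    os.chdir(directory)
    try:
        yield
    finally:
        os.chdir(previous)
```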
In the current CI, we have a dedicated config to run with Py3.
No, generally the current python version is good enough. But it is good to be able to run "python my.test" or "python3 my.test" manually if needed (that involves tweaking PYTHONPATH when working with a dev, non-installed environment).
An issue I just thought of: pytest supports python 2.7+. I see compatibility code for Python 2.3 in GDAL (!) Is there any reason to keep this for GDAL 2.4+? Python 2.6 has been unsupported for years now... If not, I love ripping things out 😄
Almost all of the calls to `gdaltest.post_reason` follow the pattern:

```python
if condition:
    gdaltest.post_reason(reason_message)
    return 'fail'
```

I think it would be straightforward to automate turning that into:

```python
assert not condition, reason_message
```

pytest gives a stack trace at the assertion point, including line numbers, stdout/stderr, warning messages and relevant context variables, which in addition to the reason message is generally plenty to diagnose the issue. Perhaps the remaining calls to `gdaltest.post_reason` could be handled case by case.
As far as I know, none of our CI targets test 2.6 or older. I presume the bindings themselves can support older Python, but for me requiring 2.7+ is fine at this point.
A naïve first pass at auto-translating those. Update: 12139/14487 converted now (84%; nearly there already!)
I've posted this to gdal-dev.
(Spun off from this comment: #947 (comment))
GDAL's python test suite is a little bit crazy, and in need of an overhaul.
We've been using pytest lately with great success. If there's interest in this I could probably work on something to convert the GDAL test suite to use it.
To give an idea of the pytest style, consider this test
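The referenced test isn't reproduced here; as a stand-in, a hypothetical test in the old gdaltest style might look like this (the `gdaltest` stub and dataset are invented so the sketch is self-contained; real tests use `gdal.Open` and the real `pymod/gdaltest.py`):

```python
class gdaltest:  # minimal stand-in for pymod/gdaltest.py
    reason = None

    @staticmethod
    def post_reason(msg):
        gdaltest.reason = msg


class _FakeDataset:  # stand-in for what gdal.Open would return
    RasterCount = 1


def misc_1():
    ds = _FakeDataset()  # real tests: ds = gdal.Open('data/byte.tif')
    if ds is None:
        gdaltest.post_reason('open failed')
        return 'fail'
    if ds.RasterCount != 1:
        gdaltest.post_reason('wrong band count')
        return 'fail'
    return 'success'
```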
As a strawman, I'd probably rewrite that something like this using pytest:
That's quite a bit of refactoring, so doing that to the entire test suite is a largish goal. I also doubt much of that could be automated easily.
However, I think we could very easily make a compatibility shim to make the old tests work with it in the meantime. Something like this:
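The shim from the original comment isn't shown above; a minimal sketch of what it might look like, assuming old-style tests signal failure by returning the string 'fail':

```python
import functools


def old_style_test(func):
    """Hypothetical shim: run an old gdaltest-style function under pytest.

    Old tests return 'fail' on failure; convert that into an assertion
    so pytest reports it like any other failing test.
    """
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        result = func(*args, **kwargs)
        assert result != 'fail', 'old-style test reported failure'
        return None
    return wrapper
```

Legacy test functions would be wrapped with this explicitly, which also makes the remaining old-style tests easy to count by grepping for the decorator name.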
In fact, we could even override a hook in the pytest configuration file to wrap all the tests with this, but I suspect it will be nice having tests wrapped explicitly with it rather than magically. That way you can see how many old tests remain by just grepping for it.
If this all makes sense, I can probably whip up a PR for the initial compatibility step.