Raise CollectError if pytest.skip() is called during collection #1519
Conversation
Good thinking to just catch it - that way around is far easier to detect and requires far less logic, by simply requiring the user to correct his misuse. I'm a bit surprised by the "unrelated" test removals. I'm also surprised that most testing is via junit-xml; I believe/hope this can be done with the normal inline run at least. Could you take a look at how easy/feasible that is? |
@RonnyPfannschmidt I know what you mean by 'unrelated tests removal' (even though they are not really unrelated, because the removed/modified tests all rely on pytest.skip at module level). This is what I previously wrote to @nicoddemus by mail:

He recommended creating a pull request and discussing it here :-) I agree that this commit breaks external contracts, because currently it is possible to use pytest.skip at module level. This could be legitimate:

```python
import pytest

pytest.skip("This whole module is currently broken")

def test_one():
    ...
```

This is misleading:

```python
import pytest

@pytest.skip("Test is broken")
def test_one():
    ...
```
The main test for my change is Is |
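The misuse in the "misleading" example above can be demonstrated without pytest at all. The sketch below uses a minimal `Skipped`/`skip` stand-in (not pytest's real internals) to show that the decorator call raises immediately while the module is being imported, so the decorated test is never even defined:

```python
# Plain-Python illustration: Skipped and skip are stand-ins for the
# exception raised by pytest.skip(); they are NOT pytest internals.

class Skipped(Exception):
    """Minimal stand-in for the exception raised by pytest.skip()."""

def skip(msg=""):
    raise Skipped(msg)

# Simulates a test module that misuses skip as a decorator.
module_source = '''
@skip("Test is broken")
def test_one():
    pass
'''

namespace = {"skip": skip}
try:
    exec(module_source, namespace)  # simulates importing the test module
except Skipped:
    pass

# The decorator expression raised during import, so the whole module
# aborted and test_one was never defined:
assert "test_one" not in namespace
```

This is exactly why the misuse looks like "the whole module was skipped" rather than one test.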
Thanks for the clarifications. It makes me wonder if we should test junitxml output in a more localized manner in the future. With the "unrelated" test I meant the change in result-log - it's basically a legacy test result output format. I would appreciate the return of the removed test in runner, as it is very local. Also, well done 👍 @hpk42 and @nicoddemus: I think we need a target branch for this one, as it can't be part of just a feature release, since it's a "breaking change" - alternatively we can bump the next version in the features branch to 3.0? |
You mean avoid removing
I don't understand what you mean. Could you give some more details? |
Hi @omarkohl, first of all thanks for all the work!

That example could easily be changed to:

```python
import pytest

pytestmark = pytest.mark.skip(reason="This whole module is currently broken")

def test_one():
    ...
```

So IMHO it would be reasonable to ask users to change this rather unusual use-case in the face of the benefits of catching someone misusing pytest.skip.

I'm not entirely sure this qualifies as an API change; it seems to me more that it fixes an API misuse. On the other hand, pytest's own test suite relied on this, so one might argue that it is actually a feature change. 😁 I'm fine with this going to 3.0 only. |
@nicoddemus I agree that the old behavior was unintended/bad; however, fixing it required a change in how certain exceptions propagate, and thus how py.test interacts with the code under test. This will turn skips in user projects into collection errors, thus "breaking" their builds on update. If we want to be able to push this into a feature release, it would have to use a different mechanism to warn the user of his misuse. |
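The backward-compatible alternative alluded to here (warn about the misuse instead of failing collection) could look roughly like the following plain-Python sketch. `collect_module`, `Skipped`, and the warning text are illustrative stand-ins, not pytest internals:

```python
import warnings

class Skipped(Exception):
    """Stand-in for the exception raised by pytest.skip()."""

def skip(msg=""):
    raise Skipped(msg)

def collect_module(import_module):
    """Lenient collection: honor a module-level skip but warn about it."""
    try:
        import_module()
        return "collected"
    except Skipped as exc:
        warnings.warn(
            "pytest.skip() called at module level; use "
            "pytestmark = pytest.mark.skip(...) instead (%s)" % exc,
            UserWarning,
        )
        return "skipped"

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = collect_module(lambda: skip("whole module broken"))

# The module is still skipped, but the user is warned about the misuse,
# so existing builds keep passing during a feature release.
assert result == "skipped"
assert len(caught) == 1
```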
@RonnyPfannschmidt @nicoddemus can this be merged (for 3.0)? I can have a look at the conflict in that case... |
Yes please... we probably won't break anything serious, but this type of misbehavior should change. |
Hmm, that is a lot of commits... I just created a merge commit to merge my two commits into 'features'. Two minor merge issues were fixed... Next time I'll rebase instead. |
@omarkohl, could you please rebase this branch now? I think we all can agree that this will go into 3.0. 😁 |
pytest.skip() must not be used at module level because it can easily be misunderstood and used as a decorator instead of pytest.mark.skip, causing the whole module to be skipped instead of just the decorated test. This is unexpected for users accustomed to the @unittest.skip decorator, and therefore it is best to bail out with a clean error when it happens. The pytest equivalent of @unittest.skip is @pytest.mark.skip.

Adapt existing tests that were actually relying on this behaviour and add a test that explicitly checks that collection fails.

fix #607
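The mechanism this commit message describes can be sketched in plain Python: a `Skipped` exception escaping module import during collection is converted into a collection error. `Skipped`, `CollectError`, and `collect_module` below are illustrative stand-ins, not pytest's real internals:

```python
# Hedged sketch of turning a module-level skip into a collection error.

class Skipped(Exception):
    """Stand-in for the exception raised by pytest.skip()."""

class CollectError(Exception):
    """Stand-in for pytest's collection error."""

def skip(msg=""):
    raise Skipped(msg)

def collect_module(import_module):
    """Import a test module; a Skipped escaping import becomes a CollectError."""
    try:
        import_module()
    except Skipped as exc:
        raise CollectError(
            "Using pytest.skip outside of a test is not allowed: %s" % exc
        )

def broken_module():
    # Simulates pytest.skip("...") at module level.
    skip("This whole module is currently broken")

try:
    collect_module(broken_module)
except CollectError as exc:
    error_message = str(exc)

# Collection fails loudly instead of silently skipping the module:
assert "not allowed" in error_message
```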
@nicoddemus I rebased as we discussed... |
I'll add the CHANGELOG and documentation once I get some feedback on whether this is correct. This change effectively disables using pytest.skip at module level (even as a normal function, not only as a decorator), because that is how I understand issue #607.

@nicoddemus:

This earlier comment by @hpk42 is also an option:

I tried checking the type of the msg parameter in pytest.skip and it achieves the desired result.

Here's a quick checklist that should be present in PRs:

- Target: for bugfixes, target master; for new features, target features
- Add yourself to AUTHORS
- Add an entry to CHANGELOG (choose any open position to avoid merge conflicts with other PRs)
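The type-check idea mentioned above (validating the `msg` parameter of `pytest.skip`) can be sketched as follows. It catches the bare `@pytest.skip` misuse (without arguments), where the decorated function object itself arrives as `msg`; the `Skipped`/`skip` names are stand-ins, not pytest internals:

```python
class Skipped(Exception):
    """Stand-in for the exception raised by pytest.skip()."""

def skip(msg=""):
    # When skip is misused as a bare decorator (@skip with no arguments),
    # the decorated function object arrives here as msg, so a simple
    # type check catches the misuse.
    if not isinstance(msg, str):
        raise TypeError(
            "skip() argument must be a string; "
            "did you mean @pytest.mark.skip(reason=...)?"
        )
    raise Skipped(msg)

# Legitimate call: raises Skipped with the given reason.
try:
    skip("broken on Windows")
except Skipped as exc:
    skipped_msg = str(exc)

# Misuse: @skip (no parentheses) would pass the function as msg.
def test_one():
    pass

try:
    skip(test_one)
except TypeError as exc:
    type_error = str(exc)

assert skipped_msg == "broken on Windows"
assert "must be a string" in type_error
```

Note the limitation: `@pytest.skip("some reason")` with a string argument would still slip past this check, which is why the collection-error approach in this PR is more thorough.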