Runtime ignoring #19
Comments
Could you provide an example of using that in 0.4? If you have an actual project doing that, you can just link it here. A minimal example is fine, too.
Having thought it through, my more immediate needs can be worked around
I do know of other places where runtime ignoring is needed by a test case. For example, cargo's tests have nightly and git dependencies and are skipped otherwise. That logic all lives in the test function, and libtest just sees "pass". If a case like this were moved to libtest-mimic, it would have the same deficiency.
I'm still slightly confused by the "runtime" part of it, because the list of …
Sometimes you can bake conditionals into the framework around a test. Sometimes you can't, and want to allow the test the flexibility to decide when it can be skipped. For example, in my Python days I used …
I would also like to be able to have tests decide at runtime whether to be ignored. In the environment I'm working in, tests are built (along with the rest of the system) on a build server, and then run on various different devices. Some tests require particular hardware support, and so should be skipped on devices which don't have that hardware. I'd like to make this check part of the test rather than the runner, so the test can decide partway through that it should be ignored, rather than pass or fail. This seems to be a fairly common feature in test frameworks for other languages, such as Java and C++.
Just chiming in to say that I'd also love to be able to return

```rust
enum Success {
    Passed,
    Ignored,
}
```
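To illustrate the pattern being asked for, here is a minimal stdlib-only sketch; the `Success` enum is the one suggested above, while `gpu_test` and its `has_gpu` parameter are hypothetical names, not part of libtest-mimic's API:

```rust
// Hypothetical outcome type a runner could accept: a test reports at
// runtime that it was skipped, distinct from passing or failing.
#[derive(Debug, PartialEq)]
enum Success {
    Passed,
    Ignored,
}

// A test that decides partway through whether it can run at all,
// e.g. because the required hardware is missing on this device.
fn gpu_test(has_gpu: bool) -> Success {
    if !has_gpu {
        return Success::Ignored; // skipped, not failed
    }
    // ... the actual test body would go here ...
    Success::Passed
}

fn main() {
    assert_eq!(gpu_test(false), Success::Ignored);
    assert_eq!(gpu_test(true), Success::Passed);
    println!("ok");
}
```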
We've switched to this library too, and this is very much a feature we'd love to have as well. Calculating upfront whether a test will be ignored slows down creation of the entire runner, compared with creating every test and letting it figure out at runtime whether it can run.
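The "calculate upfront" workaround described above can be sketched as follows; `Trial`, `expensive_probe`, and `build_trials` are hypothetical stand-ins here, not libtest-mimic's actual API:

```rust
// Stand-in for a trial whose ignored flag must be known at
// construction time, before the runner starts.
struct Trial {
    name: &'static str,
    ignored: bool,
}

// The probe must run for every test while *building* the trial list,
// which is what slows down creation of the entire runner.
fn expensive_probe(name: &str) -> bool {
    name.ends_with("_gpu") // stand-in for a real hardware check
}

fn build_trials() -> Vec<Trial> {
    ["add_cpu", "mul_gpu"]
        .iter()
        .map(|&name| Trial { name, ignored: expensive_probe(name) })
        .collect()
}

fn main() {
    let trials = build_trials();
    assert!(!trials[0].ignored);
    assert!(trials[1].ignored);
    println!("built {} trials", trials.len());
}
```

Runtime ignoring would instead let each test run its own probe only when (and if) it is actually executed.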
I think I have a better understanding of this feature request now. I will look into this! (No promises as to when, though, sorry!)
Piling on to this request. If there is a branch somewhere, maybe I can look at finishing it?
In 0.5, `Outcome` was made private, and callers lost the ability to mark a test as ignored/skipped at runtime. This is very useful for test runners built on top of this library that support runtime skipping, so they can communicate to the user what happened.