Runtime ignoring #19

Open · epage opened this issue Sep 1, 2022 · 9 comments

@epage commented Sep 1, 2022

In 0.5, Outcome was made private, so callers lost the ability to mark a test as ignored/skipped at runtime. That ability is very useful for test runners built on top of libtest-mimic that support runtime skipping, so they can communicate to the user what actually happened.

@LukasKalbertodt (Owner)

Could you provide an example of using that in 0.4? If you have an actual project doing that, you can just link it here. A minimal example is fine, too.

@epage (Author) commented Sep 1, 2022

Having thought it through, my more immediate needs can be worked around:

  • snapbox's wrapper, which takes an action environment variable, can instead set the ignored flag upfront
  • If I were to switch trycmd to using libtest-mimic, I could pre-parse everything and track which cases are ignored.

I do know of other places where runtime ignoring is needed by a test case. For example, some of cargo's tests depend on a nightly toolchain or on git and are skipped otherwise. That logic all lives in the test function, so libtest just sees "pass". If a case like this were moved to libtest-mimic, it would have the same deficiency.
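For illustration, here is a rough sketch of how such a test looks when ported to libtest-mimic 0.5 (the has_nightly helper is hypothetical): the only way to bail out of the test body early is to return Ok(()), which the runner then reports as a pass.

```rust
use libtest_mimic::{Arguments, Trial};

// Hypothetical helper: checks whether a nightly toolchain is available.
fn has_nightly() -> bool {
    std::process::Command::new("cargo")
        .args(["+nightly", "--version"])
        .output()
        .map(|out| out.status.success())
        .unwrap_or(false)
}

fn main() {
    let args = Arguments::from_args();
    let tests = vec![Trial::test("build_with_nightly_feature", || {
        if !has_nightly() {
            // There is no way to report "ignored" from inside the test,
            // so the best we can do is return early -- which shows up
            // as a pass.
            return Ok(());
        }
        // ... the actual test body ...
        Ok(())
    })];
    libtest_mimic::run(&args, tests).exit();
}
```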

@LukasKalbertodt (Owner)

I'm still slightly confused by the "runtime" part of it, because the list of Trials is constructed at runtime too. So you can perform whatever check you want while constructing the list of tests, right?
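For instance, something along these lines (a minimal sketch; the device_has_gpu probe is hypothetical, and it assumes Trial::with_ignored_flag as in 0.5):

```rust
use libtest_mimic::{Arguments, Trial};

// Hypothetical probe, evaluated while the test list is being built.
fn device_has_gpu() -> bool {
    std::path::Path::new("/dev/dri").exists()
}

fn main() {
    let args = Arguments::from_args();
    let tests = vec![
        // The check runs before the test does; the trial is simply
        // marked as ignored up front.
        Trial::test("gpu_smoke_test", || {
            // ... test body that assumes a GPU is present ...
            Ok(())
        })
        .with_ignored_flag(!device_has_gpu()),
    ];
    libtest_mimic::run(&args, tests).exit();
}
```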

@epage (Author) commented Sep 15, 2022

Sometimes you can bake conditionals into the framework around a test. Sometimes you can't, and you want to give the test itself the flexibility to decide when it should be skipped.

For example, in my Python days I used pytest to test hardware. To know which hardware to test, I injected some test fixtures via command-line options. The tests could then decide whether the hardware had the needed capabilities and skip themselves if not.

@qwandor commented Nov 29, 2022

I would also like tests to be able to decide at runtime whether to be ignored. In the environment I'm working in, tests are built (along with the rest of the system) on a build server and then run on various different devices. Some tests require particular hardware support and so should be skipped on devices that don't have that hardware. I'd like to make this check part of the test rather than the runner, so the test can decide partway through that it should be ignored rather than pass or fail. This seems to be a fairly common feature in test frameworks for other languages, such as Java and C++.

@brendanzab

Just chiming in to say that I’d also love to be able to return Ignored dynamically from a test function. Maybe tests could return Result<Success, Failure>, where Success is defined as:

```rust
enum Success {
    Passed,
    Ignored,
}
```
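A test using that hypothetical API might then look like the sketch below; nothing here exists in libtest-mimic today, and Failure is just a stand-in error type.

```rust
// Entirely hypothetical: a sketch of the proposed return type and how a
// test would use it. None of this is part of libtest-mimic today.
enum Success {
    Passed,
    Ignored,
}

struct Failure {
    message: String,
}

fn gpu_test() -> Result<Success, Failure> {
    // Runtime check inside the test itself.
    if !std::path::Path::new("/dev/dri").exists() {
        return Ok(Success::Ignored);
    }
    // ... exercise the hardware, returning Err(Failure { .. }) on failure ...
    Ok(Success::Passed)
}
```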

@Dinnerbone

We've switched to this library too, and this is very much a feature we'd love to have as well. Calculating upfront whether a test will be ignored takes more time and slows down creation of the entire runner, versus creating every test and letting it figure out at runtime whether it can run.

@LukasKalbertodt (Owner)

I think I have a better understanding of this feature request now. I will look into this! (No promises as to when though, sorry!).

@bwidawsk

Piling on to this request. If there is a branch somewhere, maybe I can look at finishing it?
