How do I report a test that's not run - add ExplicitTestNodeStateProperty
#2538
Note: I did try using …, but its documentation says: "You should not report this state for a test that is not 'acting'."
I think the same concept exists in NUnit: https://docs.nunit.org/articles/nunit/writing-tests/attributes/explicit.html. In my view this should map to a skipped test + reason in Test Explorer. This way it does not make the categories of tests too complicated, and it is easy for the user to understand what happened. Definitely easier than seeing the blue "not run" icon, which is often a symptom of an error running tests (at least currently).
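For readers unfamiliar with the NUnit feature being referenced, here is a minimal sketch of an explicit NUnit test, based on the linked docs; the fixture name, test name, and reason text are invented for illustration:

```csharp
using NUnit.Framework;

[TestFixture]
public class IntegrationTests
{
    // Runs only when selected explicitly (by name or via a filter);
    // a normal "run all" pass reports it as not run.
    [Test, Explicit("Requires a live database")]
    public void ImportsFullDataset()
    {
        Assert.That(2 + 2, Is.EqualTo(4));
    }
}
```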
Because of extensibility, from an execution pipeline perspective we may not know that a test isn't going to run until quite late; which is to say, we may believe we are starting a test that we then discover won't actually be running. It's philosophically different from skipped. An explicit test can be run if you ask it to, but unless you explicitly ask for it (or ask for all explicit tests to be run), it won't. There's no way to force-run a test which is skipped.
I'm fine reporting it that way, but I do think it might be confusing to try to express both "this skipped test can't be run even if you ask" and "this skipped test can be run if you ask" with a single UI expression. I understand if you'd rather not make that distinction in the UI of Test Explorer, though it would still be nice if there were some additional metadata (if not a different state entirely) that some other test runner may be able to take advantage of.
I would keep current and future IDEs' representations and behavior out of the discussion. I think the platform should be written without a specific "caller" implementation in mind; I'm more in favor of custom, tailor-made states that a caller needs to recognize and either use or ignore.
I also think it's better to have an explicit model that represents those states even if current IDEs are not able to handle it. I am still thinking that the ideal would be to somehow have some kind of handshake between a client and the framework to align on supported states and the mapping between them; we are still a bit too early in the design of this concept for now. About this "explicit" state, it seems to me to be some kind of specialized discovery more than a special run state. From the few tests I have done, the current experience is that explicit tests are only run if there are only explicit tests in the list of tests you receive; otherwise they are "skipped" (no result returned for them), which seems to confirm my feeling of specialized discovery. @bradwilson do you think this is a good definition of this state?
Sorry for the confusion. My remark was not purely about Test Explorer, but about keeping the number of possible results (or possible result "categories") to a minimum, so that implementing a client does not require implementing 100 different flavors of skipped. In the platform, a skipped test result would represent a test that:
It would NOT represent a test that is not runnable even if included in a run by a filter. That is a detail of the current test frameworks, and can be conveniently represented as some condition + reporting a skipped result. This allows the platform to continue working as-is and to distinguish only these steps:
- find tests ("discovery", e.g. via reflection)
- run tests
In the run-tests phase, the particular framework (or user) can represent the skipped flavors as condition + result, as it does now:
The result would then be …, or it would be some flavor of *SkippedTestNodeStateProperty, but there would be a Category (or some other metadata) that would make this map to a skipped test when the client does not know about the details. If the current way that xunit works is that [explicit] tests are not reported at all (ignored), then I would suggest changing that and reporting them as skipped. This all also nicely aligns with the run & discover command, which cannot use previously reported (discovered) tests.
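A minimal sketch of what that "condition + skipped result" reporting could look like with the message types discussed in this thread. The UID, display name, and reason text are invented, the namespaces and constructor shapes are from memory, and the producer is assumed to be a framework adapter implementing IDataProducer; verify against the current Microsoft.Testing.Platform API before relying on it:

```csharp
using System.Threading.Tasks;
using Microsoft.Testing.Platform.Extensions.Messages;
using Microsoft.Testing.Platform.Messages;
using Microsoft.Testing.Platform.TestHost;

// Report an explicit test that was not explicitly requested as skipped,
// with standardized language explaining why it was skipped.
static async Task ReportExplicitAsSkippedAsync(
    IMessageBus messageBus, SessionUid sessionUid, IDataProducer producer)
{
    var testNode = new TestNode
    {
        Uid = new TestNodeUid("MyTests.IntegrationTests.ImportsFullDataset"), // invented UID
        DisplayName = "ImportsFullDataset",
        Properties = new PropertyBag(
            new SkippedTestNodeStateProperty(
                "Explicit test: runs only when explicitly requested")),
    };

    await messageBus.PublishAsync(producer, new TestNodeUpdateMessage(sessionUid, testNode));
}
```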
If you mean when I run them in my own UI (aka, …): …
If you mean when I report them via the Testing Platform: I will report them as skipped based on this conversation, with some standardized language about why they were skipped.
I'm going to open a second issue related to metadata.
Circling back to this: has the decision been made whether "not run" is a category of test that Microsoft.Testing.Platform cares about, as distinct from "skipped"? Thinking about my implementation, there will be a change in behavior between TPv2 and MTP in terms of what the user sees in Test Explorer. With TPv2, a test which is not run (typically because it was marked as …) looks like this: [screenshot not preserved]. And this is what it looks like with MTP in 17.12 preview 1: [screenshot not preserved]. Also of note: the skip reasons aren't being reported into Test Explorer any more. So I guess I have unresolved questions:
Is test filtering hooked up in 17.12 Preview 1? When trying to explicitly run just one test, it still runs them all. I am linked against MTP …
I'll check this using your branch.
I'll check with @drognanar what the VS behavior is in this case.
If you "Debug" your single tests you should be able to see if the Uid filter is sent or not, if not looks like there's an issue, I'll test that too using my internal preview of VS and your branch. |
I think I see my bug on the UID filter. Hold off on that one. (In the process of trying to handle explicit tests, I forgot to filter the list 😂) Edit: Yep, that was my bug. Whew. 😄
I can see that Test Explorer is doing what I want (leaving the test as not run), and I don't see anything that looks like error messages in the Output > Tests window. I was asking more along the lines of whether Microsoft.Testing.Platform would be upset at me leaving these tests "hanging" without resolution when I call …
I tried to build the https://github.com/xunit/xunit/tree/microsoft-testing-platform branch in order to test out how the skipped message is sent back to VS; however, I'm hitting lots of nullability build errors. @bradwilson what setup do I need to test out xunit's protocol branch?
I use .NET SDK 8. If there are new nullability rules added in SDK 9, they may be triggering. Try adding a global.json.
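For anyone following along, a global.json that pins builds to an 8.x SDK would look roughly like this; the exact version number is an assumption and should match an SDK installed locally:

```json
{
  "sdk": {
    "version": "8.0.100",
    "rollForward": "latestFeature"
  }
}
```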
I'm building with …
Getting past the NuGet warnings, I see a lot of new analyzers that are triggering (…)
I added a global.json file and now I'm able to have the build succeed. Will take a further look into the VS integration later. |
FYI, I have fixed up the compiler issues (at least as much as I'm seeing with …)
I checked and the testing platform adds a skipped reason property to a test node, but not to the serialized JSON. |
Fixing in #3754 |
The testing platform won't be upset by it; when you push, the info will be consumed by …
I was wondering if this state should be returned as a new discovery state. UX (in general, not only VS) needs to know up front about it to distinguish it from the other kinds of test nodes (e.g. to show a different icon).
Obviously it remains to be seen if/how other third-party runners (Rider/ReSharper/CodeRush/etc.) adopt support for Microsoft.Testing.Platform. The only UX experiences I have right now are:
Owners of other third-party runners will obviously be limited by what kinds of execution results you can report. Obviously everybody is going to support Passed/Failed/Skipped, as that's the bare minimum. I don't know what other states they might be able to report if there were associated execution results, but doing a survey might help inform whether there are other states you might consider.
FWIW, our …
The TRX implementation is new and we have some inconsistencies and bugs that will be fixed to make it align 100% with the past; we have (or will have) the same issue for all other users inside AzDo. So it's fine if you provide the xunit one, and in the future, when both implementations are good, you can decide whether to drop ours and keep yours. It's fine with me, up to you; the important thing is that users can generate a TRX.
So I think we should add this concept, since it's already in xUnit and NUnit:
ExplicitTestNodeStateProperty
In the UX it can be described in an arbitrary way.
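A sketch of what such a property could look like, following the shape of the existing *TestNodeStateProperty types; this is purely illustrative, not the actual platform API, and the base-record shape and CachedInstance convention are assumptions from memory:

```csharp
// Hypothetical: a node state meaning "runs only when explicitly requested;
// otherwise intentionally left not-run". Mirrors the pattern of
// SkippedTestNodeStateProperty and friends; not part of the actual API.
public sealed record ExplicitTestNodeStateProperty(string? Explanation = null)
    : TestNodeStateProperty(Explanation)
{
    // Shared instance for the common no-explanation case, following the
    // CachedInstance convention of the other state properties.
    public static ExplicitTestNodeStateProperty CachedInstance { get; } = new();
}
```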
So this would be an execution result, not a discovery result, correct? |
I have been reading this thread a few times and I cannot really make up my mind. There is also some overlap with some questions raised by @thomhurst in #3699. I think it's interesting to have a separation of discovery "states" and execution "states". We should also strive for flexibility in the states produced by the framework, as it provides more capabilities for extensions; at the same time, we need a way to simplify or merge all these custom states into something a client can understand. My initial thought was to have some kind of handshake where the client could declare the list of supported states and the framework would then map its states to what is supported, but that seems to defeat most of the goals we would like to achieve. I am wondering if we should have some base states (extensible) that would correspond to the main basic states of a workflow:
Yes, it does not care.
I've got another scenario to complicate things. I've just pushed some TestNodeMessages for my "hooks", i.e. global/class set-ups and tear-downs. I want this so it's easy to see in Test Explorer and such whether a hook is running/passed/failed. However, now they're being logged as tests in the final output, and also being produced in the TRX report. So I either don't push test node updates and lose visibility of them running, or I deal with them being shown as tests.
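For context, a minimal sketch of the kind of hook updates being described, reusing the assumed adapter plumbing (messageBus, sessionUid, producer) from the earlier snippet; the hook UID and display name are invented, and the CachedInstance members are my recollection of the state-property convention:

```csharp
// Surface a class-level set-up hook as its own node so its progress is
// visible in UIs; the side effect is that it also shows up as a "test"
// in the final output and the TRX report.
var hookNode = new TestNode
{
    Uid = new TestNodeUid("MyTests.MyClass.GlobalSetUp"), // invented UID
    DisplayName = "GlobalSetUp (class set-up hook)",
    Properties = new PropertyBag(InProgressTestNodeStateProperty.CachedInstance),
};
await messageBus.PublishAsync(producer, new TestNodeUpdateMessage(sessionUid, hookNode));

// ... run the hook ...

var finishedNode = new TestNode
{
    Uid = hookNode.Uid,
    DisplayName = hookNode.DisplayName,
    Properties = new PropertyBag(PassedTestNodeStateProperty.CachedInstance),
};
await messageBus.PublishAsync(producer, new TestNodeUpdateMessage(sessionUid, finishedNode));
```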
It's not really complicating; this is something we anticipated, but given all the barriers and limitations caused by VSTest and Test Explorer, we had to disregard some scenarios. I think we are getting enough maturity to reconsider some extra scenarios, but I want to be clear that, given our collaboration experience with Test Explorer, I don't have much expectation that things will change on their side.
I'll keep my fingers crossed anyway! 🤞 |
Just as an extra data point, we'd be happy to push Test Collections as a node in the tree as well, if Test Explorer started supporting arbitrary hierarchy. I think part of that negotiation would be informing Test Explorer what the hierarchy of things is, so that they could not only present them in a tree but also so that the user could choose how they sort things. Many (most?) users might not choose to sort by Test Collection, since the default is Test Collection == Test Class, but for those who are using test collections in specific ways, being able to group them and run all the tests in a given collection would probably be a value-add. |
In the Microsoft.Testing.Platform object models under Microsoft.Testing.Platform.Extensions.Messages, I see messages for passed, failed, and skipped tests.
In xUnit.net v3, we also have a concept of a test that wasn't run because it's only ever run when explicitly asked to. It's philosophically closest to a skipped test, but it's not exactly the same. Is there a plan for anything like this in the new object model?
I'm leaning towards not reporting anything at the moment, because dotnet test does not need to say anything. However, I am wondering whether there's a way to highlight such tests in Test Explorer today (and if not, whether it's a feature under consideration). By not reporting it, it will always just stay in the tree as "not run", which is the most appropriate UX if there's no concept of "won't be run unless you explicitly ask it to be run".