
WIP: Test runner #789

Open · wants to merge 54 commits into master
Conversation

nickspoons (Member) commented Jun 18, 2022

This PR introduces a new test runner window. It displays tests and their state ("not run", "running", "passed" and "failed") using syntax highlighting. Previously run tests remain in the test runner window for reference, making it easy to re-run tests or test files.

Under each test, any error and stacktrace information is displayed, as well as any console output. Each of these sections is enclosed in a fold.

Any new tests run are added to the test runner, which groups tests by project/solution, then file.

[demo GIF: testrunner_demo]

Partially complete feature list:

  • Display tests and state
  • Display (user configurable) test spinner to show running tests
  • Navigate to tests
  • Navigate to failed test stack trace locations
  • Add stack trace locations as Vimspector breakpoints
  • Run test from test runner
  • Debug test from test runner
  • Run all displayed tests in file or project from test runner
  • Delete tests from test runner
  • Configurable glyph to display beside completed tests
  • Custom fold text for build failure, test failure and console output sections
  • Documentation
  • Discover tests (search for tests in solution)

There are already some customisation options, including whether to display a "running" spinner and which characters to use for it, whether to display the intro banner, whether to display all build output (or only build error output), and whether the quickfix list should still be populated as well. These will be documented, but have not been yet.

@DasOhmoff

This is great! Thank you for your efforts :)

nickspoons (Member, Author)

You can now run the test under the cursor, the tests in the file under the cursor, or all tests in all listed test files for the project/solution under the cursor, with <F5>.

nickspoons mentioned this pull request Jun 23, 2022
nickspoons (Member, Author) commented Jun 27, 2022

I've added a mapping (default dd) to remove a test, test file or project from the test runner. For projects or files this works as expected, but unfortunately not for individual tests. For tests, the test and its output are removed from the runner, but the test will still be run. This is due to the way that OmniSharp-roslyn runs tests: there is only an option to run all tests in a file, not a selection of tests. I'm thinking more and more that the best way forward may be to skip OmniSharp-roslyn for running tests, and build and run the `dotnet test ...` command manually.
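For what it's worth, `dotnet test` does support selecting individual tests via its `--filter` option, which is what makes the manual-invocation idea attractive. A minimal sketch of building such a command line (the helper name and argument shapes here are illustrative, not part of this PR; the plugin itself would do this in Vim script):

```python
def build_dotnet_test_cmd(project_path, test_names):
    """Build a `dotnet test` argv that runs only the given tests.

    Uses the VSTest filter syntax: conditions joined with `|` (OR),
    each matching one fully qualified test name.
    """
    test_filter = "|".join(f"FullyQualifiedName={name}" for name in test_names)
    return ["dotnet", "test", project_path, "--filter", test_filter]

cmd = build_dotnet_test_cmd(
    "Zuc.OmnisharpVimBugRepro.csproj",
    ["Zuc.OmnisharpVimBugRepro.Tests.Test1"],
)
# cmd is now an argv list suitable for job_start() / subprocess-style execution
```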

There are also a couple of refactors that I need to do:

  1. The dict holding the project file and test details is unwieldy and easy to write to but tricky to read from. (done)
  2. The way the runner is currently "painted" involves deleting the entire buffer contents and re-writing it, which means that folds are lost and then recreated, which makes it difficult to track which folds are open and which are closed. Appending/deleting/modifying the buffer in place is more complicated, but will result in a better user experience.

Melandel (Contributor) commented Sep 14, 2022

I have tried this using NUnit, with test data coming from another class's property.
The test execution never seems to end.

Repository for reproduction: https://github.com/Melandel/Zuc.OmnisharpVimBugRepro

nickspoons (Member, Author)

[screenshot]

Oh wow yeah, that's not good 😅

Thanks for the repro, I'll look into it.

nickspoons (Member, Author)

@Melandel give it a try now.

The issue had to do with how these NUnit TestCaseSource tests are reported: the reported names were inconsistent with how the MethodName is used elsewhere, including in the method signature:

```
// Test1:
Zuc.OmnisharpVimBugRepro.Tests.Test1

// Test2:
Zuc.OmnisharpVimBugRepro.Tests.Test2("foo")
```

So I have simply stripped the ("foo") from the handler so that the test name can be matched.
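The stripping amounts to removing a trailing parenthesized argument list from the reported name so it matches the plain MethodName form. A sketch of the idea in Python (the actual fix lives in the plugin's Vim script handler; this regex is an assumption about the approach, not the patch itself):

```python
import re

def normalize_test_name(reported_name):
    """Strip a trailing argument list like ("foo") so parameterized
    NUnit TestCaseSource names match the plain MethodName form."""
    return re.sub(r'\(.*\)$', '', reported_name)

normalize_test_name('Zuc.OmnisharpVimBugRepro.Tests.Test2("foo")')
# -> 'Zuc.OmnisharpVimBugRepro.Tests.Test2'
```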

Let me know if you run across any other bugs like this. I don't use these kinds of tests at all so it's great to see some different structures.

nickspoons (Member, Author)

Oh it doesn't show the test as 2 separate tests though, as Visual Studio does. This is Visual Studio:

[screenshot of Visual Studio]

nickspoons (Member, Author)

@Melandel The test runner now detects multiple NUnit TestCaseSource tests and runs them all, with correct outputs.

The downsides of the current approach are that test discovery takes longer (it takes a couple of seconds for tests to start, while the discovery request runs), and it is still only possible to run all source tests; you can't run e.g. Test2("foo") and not Test2("bar").

I have some more ideas for improvement for these downsides but I think some pretty hefty refactoring is needed. The real trouble is that the OmniSharp-roslyn responses aren't particularly consistent between the test discovery and the code structure requests - probably because the test discovery is coming directly from dotnet test. So quite a lot of "massaging" of results is required.

Melandel (Contributor) commented Oct 8, 2022

@nickspoons After updating the branch, I observe that running RunTestsInFile still results in test execution that never seems to end:
[screenshot]

nickspoons (Member, Author)

Urgh, you're right. They work correctly when running them individually though.

Melandel (Contributor) commented Oct 8, 2022

I have observed other unexpected behaviors for this branch.

Where would be the best place to report them?

nickspoons (Member, Author)

You're welcome to list them here but it's not "finished" at the moment so there will be changes anyway.

Melandel (Contributor) commented Oct 8, 2022

Aye.

I would find it interesting to see the workflow you had in mind when implementing the test runner, because I like using :OmniSharpRunTest and :OmniSharpRunTestsInFile rather than the F1/F5/F6 mappings. I basically think of the test runner window as a replacement for the old quickfix window, and in that workflow I do get errors related to the buffer already existing, for instance.

Also, I think I'll try to fork that branch and take some time to contribute some code; it's time I gave a bit of love to omnisharp-vim.

nickspoons (Member, Author) commented Oct 8, 2022

The motivations for the test runner were:

  • To provide a simple way to re-run tests across multiple files (all tests that have been discovered in the project)
  • To be able to display output (e.g. Console.WriteLine() lines generated during the test)
  • Better control over exception output and stack traces

The original behaviour all still works as before, including the quickfix output. You just have to configure it.

Edit: I also mostly only use :OmniSharpRunTest and :OmniSharpRunTestsInFile while working, but once the tests are displayed it would be weird not to have a way to run them from there I think.

Melandel (Contributor) commented Oct 8, 2022

> I also mostly only use :OmniSharpRunTest and :OmniSharpRunTestsInFile while working, but once the tests are displayed it would be weird not to have a way to run them from there I think.

Right, I agree; however, whenever I re-run :OmniSharpRunTest or :OmniSharpRunTestsInFile, it either:

  • display a buffer already exists error
  • run tests with the old school quickfix window (gotta double check)

Which is why I wondered if my workflow was off.
