
Add test discovery endpoint #3277

Merged
merged 11 commits
Dec 10, 2021

Conversation

kpodsiad
Member

@kpodsiad kpodsiad commented Nov 9, 2021

Adds an endpoint which returns information about the tests in each build target of a project. This information can be consumed by LSP clients.

Needs:

my-app-1637768302619

Tests are run using a DAP server, just like code lenses are. In addition, where possible (e.g. when running tests for the whole project), tests are batched and run in a single DAP session, because creating a DAP session takes some time.
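The batching idea can be sketched roughly as follows. Note that `TestClass`, `batchByTarget`, and the field names are purely illustrative, not Metals' actual API:

```scala
object TestBatching {
  // Illustrative sketch: group discovered test classes by build target so
  // that each target's tests can be run together in a single DAP session,
  // instead of paying the session-startup cost once per class.
  final case class TestClass(fullyQualifiedName: String, buildTarget: String)

  def batchByTarget(tests: List[TestClass]): Map[String, List[String]] =
    tests
      .groupBy(_.buildTarget)
      .map { case (target, classes) =>
        target -> classes.map(_.fullyQualifiedName)
      }
}
```

With that shape, one DAP session per map entry suffices when running a whole project's tests.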

Information about test execution is passed from the build server (for now it's only implemented for Bloop) to the DAP server, which then forwards this event to the DAP client. The event contains sufficient information about the execution of both the whole test suite and the individual test cases.
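A hedged sketch of what such a test-execution event might carry: enough data to summarize the whole suite as well as each test case. These case classes are hypothetical stand-ins, not the actual Metals or DAP types:

```scala
object TestEvents {
  // Hypothetical event payload: a status per test case, aggregated into a
  // suite-level summary, mirroring the "whole suite and particular test
  // cases" description above.
  sealed trait TestStatus
  case object Passed extends TestStatus
  final case class Failed(message: String) extends TestStatus

  final case class TestCaseResult(name: String, status: TestStatus)

  final case class SuiteSummary(suite: String, results: List[TestCaseResult]) {
    def failedCount: Int =
      results.count {
        case TestCaseResult(_, Failed(_)) => true
        case _                            => false
      }
  }
}
```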

There are a few things which would be great to have:

  • better outputs for failed test classes. IMO I did my best on this one. The only source of information is the Event from sbt-test-interface, and it doesn't carry enough information for a diff or other fancier output. For now, we'll have to live with plain throwable.getMessage()

Screenshot 2021-11-16 at 13 47 28

It turned out that this can be done even better: every test can have its own section in the test suite summary.
Screenshot 2021-12-02 at 19 52 51

  • finer granularity for test classes (grouped by packages, not only by project)

Screenshot 2021-11-24 at 16 34 01

  • finer granularity for tests; for now, the whole test class is the smallest runnable unit. Discovering a single test shouldn't be that hard, but there is an issue with running single tests: it's very hard at the moment without changes in e.g. BSP. I have to think about it, and I hope to come back with a proposal for how to address this problem.

@ckipp01
Member

ckipp01 commented Nov 9, 2021

This looks super cool @kpodsiad! I haven't looked super close yet, but what comes to mind right away is that I want to see if it's possible to use this with nvim-metals and nvim-dap. I'm surprised at how little code there is in here, and taking a look at the testing API I'm assuming VS Code is doing a whole ton of stuff behind the scenes. Any chance you'd be willing to capture the dap-client and dap-server logs for one of these runs? It would help me understand what's happening under the hood and figure out whether a non-VS Code client can utilize this without a ton of code.

@Arthurm1
Contributor

Excellent stuff @kpodsiad. I look forward to the day I don't have to scroll through log output to see why tests are failing.

The additional features you talk about would be great. But even if it's not currently possible to run individual tests, I think viewing test results at a finer granularity would be immensely helpful. In the tree view, having each individual test result as a leaf node below its test suite node, and then being able to select one and see the expected/obtained/diff, would speed up the development cycle. Being able to then re-run only that failing test would be icing on the cake.

@kpodsiad
Member Author

In the tree view - having each individual test result as a leaf node below its test suite node

It isn't easy, since the only information Metals gets from the build server is the test's class name. Using the class name, we should be able to determine which test framework the class belongs to and then apply some logic to extract each test's position and name. The fact that this extraction logic has to be implemented per test framework worries me, but I don't know if there's a better approach.

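The per-framework detection mentioned above could, very roughly, start from the test class's parent types. The package prefixes below are the frameworks' real base packages, but the `detect` helper itself is purely illustrative:

```scala
object FrameworkDetection {
  // Rough sketch: guess which test framework a class belongs to by
  // inspecting the fully qualified names of its parent classes/traits.
  // Each framework would then need its own logic to extract individual
  // test names and positions.
  sealed trait Framework
  case object MUnit     extends Framework
  case object ScalaTest extends Framework
  case object JUnit     extends Framework
  case object Unknown   extends Framework

  def detect(parentClasses: List[String]): Framework =
    parentClasses.collectFirst {
      case p if p.startsWith("munit.")         => MUnit
      case p if p.startsWith("org.scalatest.") => ScalaTest
      case p if p.startsWith("org.junit.")     => JUnit
    }.getOrElse(Unknown)
}
```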

and then being able to select them and see the expected/obtained/diff would speed up the development cycle.

Assuming that the former part is already done, this should be relatively easy, at least partially. Thanks to changes in scala-debug-adapter and Bloop, both sbt and Bloop will send "detailed" info about each test ("detailed" because for errors it will only contain an error string), so we'll be able to display it next to the test. Currently, the joined error info is displayed for the whole test suite (you can see it in one of the images).
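Displaying that per-test error string next to each test could look roughly like this. Both the `TestResult` shape and the rendered format are made up for illustration:

```scala
object SuiteReport {
  final case class TestResult(name: String, error: Option[String])

  // Illustrative rendering: each test gets its own line in the suite
  // summary, and a failed test shows its error string inline instead of
  // one joined error block for the whole suite.
  def render(suite: String, results: List[TestResult]): String = {
    val lines = results.map {
      case TestResult(name, Some(err)) => s"  ✗ $name: $err"
      case TestResult(name, None)      => s"  ✓ $name"
    }
    (suite :: lines).mkString("\n")
  }
}
```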

To be able to then re-run only that failing test would be icing on the cake.

Say no more! I have one idea of how to pursue this, but I don't know what the outcome will be, since it probably requires changes in a few places :/

Out of curiosity, what test framework do you use most frequently @Arthurm1 ?

@Arthurm1
Contributor

@kpodsiad I use JUnit 5 for Java code and ScalaTest for Scala code. Sadly, Bloop doesn't support JUnit 5 (see here), so I use IntelliJ.

It's probably fine to rely on the test framework's text output. JUnit displays actual & expected; MUnit does as well, if I remember correctly. The main thing I would like is the ability to click on a failed test and have the results of that specific test appear (whatever format that is), rather than having to scroll through logs to find out why it failed. I don't know about you, but I find something like GitHub's way of displaying test results quite slow to drill into and navigate, and that's what I'd like to get away from here.

@kpodsiad kpodsiad marked this pull request as ready for review December 3, 2021 15:14
@kpodsiad kpodsiad requested review from tgodzik, ckipp01 and dos65 December 3, 2021 15:14
Member

@ckipp01 ckipp01 left a comment


Very cool @kpodsiad! 🚀

Once I get a bit more time I'd like to see if it's possible for us to support this in nvim-metals. I'd love to have something like this.

Just a few nits, no blockers from my side.

@kpodsiad
Member Author

kpodsiad commented Dec 5, 2021

Once I get a bit more time I'd like to see if it's possible for us to support this in nvim-metals. I'd love to have something like this.

Even if it's not possible to add a structured view like this in other editors, I think they can still benefit from this and the follow-up PRs. Namely, if Metals becomes able to discover single tests inside a test suite, and build-server-protocol/build-server-protocol#249 gets added and implemented by build servers, then Metals clients will be able to run code lenses per single test, which will be freaking awesome. I'm so psyched about this possibility.

Contributor

@tgodzik tgodzik left a comment


Great work @kpodsiad!

I left a bunch of comments; I think we can simplify a couple of things.

@kpodsiad kpodsiad requested a review from tgodzik December 10, 2021 08:50
Contributor

@tgodzik tgodzik left a comment


Just some small comments; we should probably also add a bit more testing.

@kpodsiad kpodsiad requested a review from tgodzik December 10, 2021 14:12
Contributor

@tgodzik tgodzik left a comment


LGTM! Great work!
