
[WIP] Change presentation of test results #113

Closed
wants to merge 4 commits

Conversation

@drhumlen (Contributor) commented on May 7, 2017

I've tried to improve the presentation of the test results to increase readability and also (hopefully) make it look a little prettier. Both the presentation and the code are still very much work in progress, but I hope it's a step in the right direction(?)

Some feedback would be nice. Is it better or worse? Is it too much color, or should I try to colorize/syntax-highlight more?

One controversial/debatable change I've made is that I've skipped the first test-printing phase. Whereas before, the output was printed "twice" (first when a test actually runs, and then a second time with the "results" of the test run), you now only get the summary. Personally I strongly prefer just getting the summary (like now), but if anyone has a different opinion, I'm open to changing it back. Perhaps make it configurable somehow?

Before:
screen shot 2017-05-07 at 15 48 49

After:
screen shot 2017-05-07 at 15 47 32

@drhumlen changed the title from "Change presentation of test results" to "Change presentation of test results [WIP]" on May 7, 2017
@drhumlen changed the title from "Change presentation of test results [WIP]" to "[WIP] Change presentation of test results" on May 7, 2017
@drhumlen (Contributor, Author) commented on May 7, 2017

Here's what it looks like running on a real project ( https://github.com/japgolly/scalajs-react )
screen shot 2017-05-07 at 17 27 57

@japgolly (Collaborator) commented on May 9, 2017

> Here's what it looks like running on a real project

Oh @drhumlen, that's very nice! I like that and love that timing info as well.
Let me have a play with it this week but I definitely ❤️ where you're going with this. Have you gotten much feedback from the gitter channel?

@drhumlen (Contributor, Author) commented on May 9, 2017

> Oh @drhumlen, that's very nice!

Thanks 😄

> I like that and love that timing info as well.

It turned out that @lihaoyi had already captured all the timing information for each test 💰 -- so all that was left to do was to actually print it 😛 I'm not sure why he left it out in the first place, but I think it's nice secondary information as long as it doesn't steal too much attention. The "faint" (or "dim gray") ANSI code does just that 🎉 though I'm not sure if it's supported in all terminals(?). Perhaps including the timings should be a parameter/option/flag?
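
For reference, here is a minimal, self-contained sketch of the idea of dimming the timing with the standard ANSI "faint" SGR code (the helper name below is just for illustration, not anything in uTest):

```scala
object FaintTimingDemo {
  // SGR 2 is "faint"/"dim", SGR 0 resets; support varies between terminal emulators.
  private val Faint = "\u001b[2m"
  private val Reset = "\u001b[0m"

  // Hypothetical helper: render a test name with its duration dimmed.
  def withTiming(testName: String, millis: Long): String =
    s"$testName $Faint${millis}ms$Reset"

  def main(args: Array[String]): Unit =
    println(withTiming("HelloTests.world", 12))
}
```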

> Have you gotten much feedback from the gitter channel?

Not yet, so feel free to join. In particular it would be nice to hear if the immediate test-result printing is deeply missed, or if it's reasonable to skip it and only print the results at the end 🤔. I think that's how the others (specs2, ScalaTest, etc.?) do it -- or actually, I think they do the middle ground where they print the entire results for each suite, but one suite at a time. uTest, on the other hand, prints the results for all the suites in one go at the end. I think that looks pretty nice, but you have to wait a bit longer before you get the feedback. Making it a test option is probably a good idea(?), but also redundant if no one will use it.

My further plans for the PR are to render the titles for each spec (e.g. japgolly.scalajs.react.core.JsComponentEs6STest) a little nicer -- making it clearer that a new spec is starting. I also think the --------------Starting Suite FooBar ------------- and ----------------Results---------------- banners can be replaced with something nicer-looking too.

@lihaoyi (Member) commented on May 13, 2017

One possible resolution to the "avoid printing stack traces twice, but still allow them to be printed as they arrive" problem is to let the end user flip on as-they-arrive printing with a flag. Thus for small test suites you don't get duplication, but for large test suites that may take a few minutes, you can flip the switch and get live feedback as stuff goes wrong.

@japgolly (Collaborator) commented

Just checking in, what's the status of this?

For the record, I like @lihaoyi's suggestion about error messages, although I'd want to ensure I could configure SBT to set my preference; I wouldn't want to pass it on the CLI each time I run tests.

Unfortunately I don't have time to be in Gitter chatting back and forth, so if there's anything waiting on me, please let me know here and I'll get on it. I hoped to provide more feedback (and still hope that I eventually will), but let that not block this issue. Change can be incremental.

@drhumlen (Contributor, Author) commented on Sep 1, 2017

Sorry for disappearing all of a sudden. I've been busy with work, and then I forgot all about it unfortunately 😛. But I see there's been some activity since my last visit.

I'm revisiting the code now. I see a couple of tiny refactorings I'd like to do, but I'm also thinking this is good enough for now(?), and then we can open a new PR with further improvements, such as letting the end user flip on as-they-arrive printing with a flag. Does that sound reasonable? (I haven't studied sbt extensively enough to know how to add such flags, so it'll take me some time to read up on that.)

Also, if somebody wants to help me nitpick the code, that would be great 😛 If not, I'll take that to mean the code is good enough to merge.

@lihaoyi (Member) commented on Sep 9, 2017

I'm cherry-picking the code from this diff into a new release I'm working on. Won't go in exactly as-is, but it should end up pretty close to what you have here. Stay tuned

@lihaoyi (Member) commented on Sep 10, 2017

FWIW, it looks like the "as it comes in" logging is controlled by a setting in SBT: http://www.scala-sbt.org/1.0/docs/Howto-Logging.html#Print+the+output+of+tests+immediately+instead+of+buffering
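
The setting that page describes is logBuffered; a minimal build.sbt sketch (sbt 0.13/1.0-era syntax) would be:

```scala
// build.sbt: print test output as it arrives instead of buffering it
// per suite and dumping it only when the suite finishes.
logBuffered in Test := false
```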

@lihaoyi (Member) commented on Sep 11, 2017

Here's what I have in master. Large numbers of tests:

screenshot 2017-09-11 12 22 13

uTest still shows the results/failure summaries at the end, because the failure's own output can be quite far up and it's annoying to scroll up to copy-paste the full test path (which is what I always end up doing). For small numbers of tests (the threshold is configurable) it skips those, because you can probably already see the initial output that occurred when the test was run:

screenshot 2017-09-11 12 27 04

Some differences from your code:

  • I avoided using unicode bullets. Not a huge deal, but given how fundamental uTest is in many of my projects, I'll happily live with the slight ugliness of + and X rather than nice round circles

  • I killed the Starting Suite messages, since uTest now defaults to println instead of SBT's loggers, and println isn't buffered, so you can quickly see the tests in a suite begin running as the suite progresses

  • I added manual forced wrapping, to ensure that when a test return-value's toString is long, it doesn't spill onto the start of the next line and screw up the indentation outline (see the sketch after this list)

  • I avoided using background-color: red. To me, it makes the terminal look a bit too christmas-tree-y. Instead I used light-red + underline for the exception class, to make it stand out just a bit from the bulk of the red exception text

  • I did not hide the exception stack trace. This is core information to me, so I'm not sure how we could ever do without it. But I did make the stack-frame truncation cleverer: it truncates the "internal" stack frames within the utest.asserts asserts, so traces are often shorter, and when running on Scala.js it horizontally truncates the file-path column to just the file name, so traces aren't so crazily wide and don't wrap as much. The same failures in Scala.js look like this now:

screenshot 2017-09-11 12 34 33

Not super-pretty, but not as ugly as they used to be, and they still have the useful information

  • I didn't do special-cased rendering for the utest.AssertionError exception class. We could do that in future, but the way it was done here seems a bit too hacky/special-case-ish. Perhaps a more general registration-based system where people can put in nice renderers for common exception classes would work.
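
For the forced-wrapping point above, here is a small self-contained sketch of the general idea (an illustration, not the actual uTest implementation): hard-wrap a long value string at a fixed width and indent the continuation lines so they stay inside the outline.

```scala
object WrapDemo {
  // Hard-wrap `text` every `width` characters, prefixing continuation
  // lines with `indent` so a long toString stays aligned under its test.
  def forceWrap(text: String, width: Int, indent: String): String =
    text.grouped(width).mkString("\n" + indent)

  def main(args: Array[String]): Unit = {
    val longValue = "abc123" * 20
    println("+ MyTests.someTest " + forceWrap(longValue, 40, "    "))
  }
}
```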

The new Formatter is a lot more configurable, so you can override any or all of the things I describe above on a per-suite or per-run basis (by overriding them in a trait, then inheriting from the trait in all your test suites). In theory, in master you should be able to tweak it yourself in that way to make it look exactly like what you have here, and I think even the defaults look not-too-bad.
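
A toy, self-contained illustration of the trait-inheritance pattern being described (none of these names are uTest's actual Formatter API; they just show the mixin shape):

```scala
// Stand-in for the kind of formatter described above, with overridable knobs.
trait ToyFormatter {
  def formatColor: Boolean = true
  def formatTruncateHeight: Int = 15
  def render(testName: String, millis: Long): String = {
    val base = s"+ $testName ${millis}ms"
    if (formatColor) s"\u001b[32m$base\u001b[0m" else base
  }
}

// Shared preferences live in one trait...
trait MyFormatting extends ToyFormatter {
  override def formatColor = false
  override def formatTruncateHeight = 30
}

// ...and each "suite" just mixes it in.
object MySuite extends MyFormatting {
  def main(args: Array[String]): Unit =
    println(render("MySuite.hello", 12))
}
```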

That, I think, should be enough to satisfy the requirements of this PR.

@lihaoyi closed this on Sep 11, 2017