proposal: testing: add -json flag for json results #2981
Let's do this. Plan of attack.

Structs for output:

```go
type Result struct {
	Tests      []TestResult
	Benchmarks []BenchmarkResult
	About      struct {
		Version   string
		OS        string
		Arch      string
		Hostname  string
		StartTime time.Time
		EndTime   time.Time
	}
}

// Fields added to the existing BenchmarkResult:
//	Package string
//	Name    string

type TestResult struct {
	Package  string
	Name     string
	Pass     bool
	Output   string
	Duration time.Duration
}
```

Add flag -test.package=math/rand (import path of the package), for use in setting Package in TestResult and BenchmarkResult. Add -test.json (bool flag) to generate JSON instead of the standard output. Generated JSON should be indented.
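As a rough sketch of what the indented -test.json output could look like under this plan (the types are abbreviated from the structs above, and the populated values are placeholders, not output from any real test run):

```go
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

type TestResult struct {
	Package  string
	Name     string
	Pass     bool
	Output   string
	Duration time.Duration
}

type Result struct {
	Tests []TestResult
	About struct {
		Version   string
		OS        string
		Arch      string
		StartTime time.Time
		EndTime   time.Time
	}
}

func main() {
	// Placeholder result, standing in for what the testing package would collect.
	r := Result{
		Tests: []TestResult{
			{Package: "math/rand", Name: "TestIntn", Pass: true, Duration: 3 * time.Millisecond},
		},
	}
	r.About.Version = "go1.x"

	// -test.json would emit indented JSON like this instead of the usual text output.
	out, err := json.MarshalIndent(r, "", "\t")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```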
Looking over the JUnit stuff:
http://pzolee.blogs.balabit.com/2012/11/jenkins-vs-junit-xml-format/
https://svn.jenkins-ci.org/trunk/hudson/dtkit/dtkit-format/dtkit-junit-model/src/main/resources/com/thalesgroup/dtkit/junit/model/xsd/junit-4.xsd

Here are a few notes:
1. JUnit supports separate stdout and stderr, although we've found that this sometimes makes it harder to understand what happened, so a single Output field is probably fine.
2. JUnit distinguishes between errors and failures. Failures are probably t.* stuff and errors are panics, but we write a lot of tests that simply panic instead of passing t around everywhere, so maybe the distinction isn't important.

So I think the model above is probably fine.
I wrote https://bitbucket.org/tebeka/go2xunit. Will gladly rewrite it to work with JSON output (especially for parallel testing).
I've got the beginning sketches of an implementation at https://golang.org/cl/7061048/. I'm wondering, though, what the interaction of -test.json with cmd/go/test should be.

- If we print out summary info (e.g., ok archive/tar 0.006s), then the output of 'go test' is not valid JSON -- you'd have to pre-process it a bit.
- Ditto for multiple packages, if we just print out the results sequentially for each package.

But the flag is called -test.json, so 'go test' doesn't directly handle it, right?

Similarly, what should happen if -test.v and -test.json are both set? I'd suggest that -test.json disable chattiness, because the output is part of the JSON result anyway.
In JSON mode the testing framework should not print about starting and stopping each test. It cannot clear test.v, though, because tests might use testing.Verbose() (which looks at test.v) to decide how much log output to generate, and we do want to generate that output (and save it into the JSON).

The go command will need to know about the json flag regardless, in order to turn -json into -test.json. When in that mode, it should synthesize its own Result containing all the TestResults and BenchmarkResults from the individual packages. The ability to do this is the reason that Package appears in each TestResult/BenchmarkResult instead of in the top-level Result.

Russ
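As a rough illustration of that merge step (types abbreviated from the structs sketched at the top of the thread; mergeResults is a made-up name, not anything in the proposal):

```go
package testjson

// Abbreviated from the structs sketched at the top of this issue.
type TestResult struct {
	Package, Name string
	Pass          bool
}

type BenchmarkResult struct {
	Package, Name string
}

type Result struct {
	Tests      []TestResult
	Benchmarks []BenchmarkResult
}

// mergeResults combines the per-package Results produced by each test binary
// into one Result for the whole 'go test' run. Because every TestResult and
// BenchmarkResult carries its own Package field, merging is plain
// concatenation; nothing per-package has to live at the top level.
func mergeResults(perPackage []Result) Result {
	var all Result
	for _, r := range perPackage {
		all.Tests = append(all.Tests, r.Tests...)
		all.Benchmarks = append(all.Benchmarks, r.Benchmarks...)
	}
	return all
}
```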
I spent a while looking into this. I think it's too complex to get into Go 1.1. The hard part is making sure that every possible case ends up printing JSON and not text intended for humans.

First there is the problem that all test output needs to be caught, even output generated with fmt.Printf or the builtin println or panic traces, and even if the test binary exits unexpectedly. The only truly feasible way to do this is to make the go command interpret the test binary output and generate the JSON from that interpretation. I started to try to make that a little easier by adding a -test.outputprefix flag to specify a prefix string that the test binary will put at the beginning of each line that it generates, so that the structure of the test output can be seen more readily. That's in CL 7678044.

But then the next problem is making the go command parse that output and generate JSON. And the go command must also catch all its own output and convert it to JSON equivalents, not just for running tests but also for build failures and the like. That's a bigger task, and at some point it starts to seem like maybe we should make the go command just have a little bit of framing and have a separate program whose only job is to invoke the go command (or, perhaps, also a test binary directly) and parse the output and turn it into JSON.

This is too big a change for what time we have remaining for Go 1.1. Perhaps we can figure out what the story is by Go 1.2. For now, an external tool that parses the go command output (and therefore also the test binary output, if using go test -v) seems like the best choice. And then once that exists we can think about small framing changes to make the parsing less confusable.

Labels changed: added go1.2, removed go1.1.
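To make the framing idea concrete, here is a minimal sketch of how a consumer might split framed status lines from arbitrary test output; the prefix value and the "event"/"output" labels are purely illustrative, not what CL 7678044 actually does:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// prefix stands in for whatever string would be handed to a flag like
// -test.outputprefix; only lines starting with it are treated as framing.
const prefix = "##TESTFRAME## "

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		line := sc.Text()
		if strings.HasPrefix(line, prefix) {
			// A framed line: part of the test structure (start/pass/fail markers).
			fmt.Println("event:", strings.TrimPrefix(line, prefix))
			continue
		}
		// Everything else is ordinary output from the test (fmt.Printf, panics, ...)
		// and would be attached to whichever test is currently running.
		fmt.Println("output:", line)
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "read error:", err)
	}
}
```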
Comment 16 by bgarcia@google.com:

What about defining an interface:

```go
type TestReporter interface {
	Report(result Result)
}
```

(or something more involved that can obtain test results as they finish). Add a testing.SetReporter(reporter TestReporter) function to set it. Have a default implementation that outputs text to stdout. Allow users to pass in alternate implementations that can output alternate formats, as well as save the output to a file, database, etc.
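For illustration only, a sketch of what a user-supplied reporter could look like if the proposed TestReporter/SetReporter hooks existed; the Result type is abbreviated from the one sketched earlier, and none of this is in the actual testing package:

```go
package report

import (
	"encoding/json"
	"io"
	"log"
)

// Result stands in for the struct sketched earlier in this issue.
type Result struct {
	Tests []struct {
		Package, Name string
		Pass          bool
	}
}

// jsonReporter is a hypothetical implementation of the proposed TestReporter
// interface that writes each Result as indented JSON to an io.Writer.
type jsonReporter struct {
	w io.Writer
}

func (r jsonReporter) Report(result Result) {
	enc := json.NewEncoder(r.w)
	enc.SetIndent("", "\t")
	if err := enc.Encode(result); err != nil {
		log.Printf("reporting test result: %v", err)
	}
}

// With the proposed hook, a test binary might install it like this:
//
//	f, _ := os.Create("results.json")
//	testing.SetReporter(jsonReporter{w: f})
```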
What about letting the input parameter for `-json` be a filename rather than a boolean? Let the JSON format contain a boolean `Pass` as well as a log containing only the information logged through the T.Log and B.Log functions. Then let the command output contain what it normally contains (and let -test.v do what it normally does).
One problem I see with the "parsing `go test` output" solution is that there are third-party `go test` enhancements, which tend to change the output format. For example, github.com/stretchr/testify/assert uses \r to overwrite t.Errorf's "file:line: message" output with its own format and, more importantly, caller information. Such uses would break the parser, while they'd still work fine if the JSON generation were integrated into `testing`.

To me, it would make sense to store testing output directly in *T's methods. stdout and stderr could be captured separately. That way, `testing` can provide a core set of functionality, as it does today, allowing third-party enhancements to build on top of it without having to reinvent certain parts.
The problem with \r is twofold.

1) If the standard format is "file:line: message", and a custom format turns that into "file:line: \rfile:otherline: message", parsing will give me a message that (a) includes the \r and (b) contains more information than should be in the message.

2) The entire point behind the \r hack was to change the line number, because otherwise the test library, with its `assert` functions, would show up as the source of the failing test. This information would be lost to the parser.
Python's nose (https://nose.readthedocs.org/en/latest/) captures the stdout of the tests and emits it when the test ends (you can override this with the --nocapture flag). We might consider this as an option. If the consumer of the output is a machine (like Jenkins), it's probably a good approach.
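For reference, the capture trick nose uses can be sketched in Go roughly like this; captureStdout is a hypothetical helper, and a real test framework would also have to handle C-level writes and crashing tests:

```go
package capture

import (
	"io"
	"os"
)

// captureStdout runs f with os.Stdout redirected to a pipe and returns
// whatever f printed, restoring the original stdout afterwards.
func captureStdout(f func()) (string, error) {
	r, w, err := os.Pipe()
	if err != nil {
		return "", err
	}
	old := os.Stdout
	os.Stdout = w
	defer func() { os.Stdout = old }()

	done := make(chan struct{})
	var buf []byte
	var readErr error
	go func() {
		buf, readErr = io.ReadAll(r)
		close(done)
	}()

	f()       // run the "test"
	w.Close() // signal EOF to the reader
	<-done
	return string(buf), readErr
}
```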
I agree with #7 and like the interface idea in comment #16. This makes it more-or-less a dup of 3343.

I'm writing a generalized test result saving/reporting package based on TAP-Y/J as defined by Ruby's tapout (https://github.com/rubyworks/tapout/wiki/TAP-Y-J-Specification). The specification isn't well written, but it does optionally allow for a lot of useful information. A short list of useful information that I would like to see the testing package support:

- line numbers and stack traces of failures
- metadata (e.g., TODO means the test failure doesn't indicate a general failure, good for TDD); not yet supported by testing.T, but would be nice
- skipped tests w/reason
- elapsed time (can be calculated if tests are run in series)
- test coverage

It would be nice to have a formal notion of test suites, test cases and tests, but for now packages (or perhaps files?) and functions can be used as test cases and tests respectively. TAP-Y/J (and perhaps JUnit as well) has a notion of embedding test cases within other test cases, but I don't think this is useful enough to complicate the testing package.
This would definitely be a useful feature. In addition, having a specified format for the benchmark data would be good as well. What I'm trying to put together is a system where not only do I have historical information from build to build on unit tests & coverage data, but also information on how the performance of my code has gotten better or worse over time. That way, we know ahead of time if some new feature/bug fix impacts performance, and we can go through a new round of capacity planning if necessary. Thanks.
@nodirt thanks for the update. I can probably take this over and work on it (for the 1.10 timeframe) but there's no decision on the proposal yet. AFAICT the existing concerns have been addressed. @golang/proposal-review what else would you like to see?
This looks really cool. I wonder whether it would also be useful to emit a state change to indicate blocking. For example,

```go
func TestA(t *testing.T) { t.Parallel() }
func TestB(t *testing.T) { t.Parallel() }
func TestC(t *testing.T) {}
func TestD(t *testing.T) {}
func TestE(t *testing.T) { t.Parallel() }
```

would result in tests A and B each starting and blocking immediately; tests C then D each starting and completing in turn; test E starting and blocking; and finally tests A, B, and E all unblocking and completing in parallel. Perhaps we could add an extra couple of new constants:

```go
const (
	RUN State = iota + 1
	...
	PAUSE
	RESUME
)
```

which for TestA would give:

```json
{"Package": "github.com/user/repo", "Name": "TestA", "State": "RUN", "Procs": 4}
{"Package": "github.com/user/repo", "Name": "TestA", "State": "PAUSE"}
{"Package": "github.com/user/repo", "Name": "TestA", "State": "RESUME"}
{"Package": "github.com/user/repo", "Name": "TestA", "State": "PASS"}
```

Maybe we can think of a more explicit name to indicate why the test is paused.

On a related note, have I correctly understood that each event is emitted as a line-delimited JSON object, as above (but with RS chars), without other line breaks throughout the object? Sure would make using grep and the like easier. There's always …

Separately: start, end, and execution times seem to have been omitted from the proposal, despite being discussed above. ✌️
I'd like to return to this proposal after some other work in the go command. |
Planning to work on this in the Go 1.10 cycle. Will want to fix #19280 first. |
@rsc I would be happy to work on this as well (and any guidance from you would be good). Also this can have the |
Thoughts about implementation strategy, per discussion with proposal-review.
|
The subject line on this issue is to add a -json flag. If there's a -json flag, it needs to use a tool from the standard repo - we want Go to be self-contained. |
Change https://golang.org/cl/76873 mentions this issue: |
Change https://golang.org/cl/76872 mentions this issue: |
Also add cmd/internal/test2json, the actual implementation, which will be called directly from cmd/go in addition to being a standalone command (like cmd/buildid and cmd/internal/buildid).

For #2981.

Change-Id: I244ce36d665f424bbf13f5ae00ece10b705d367d
Reviewed-on: https://go-review.googlesource.com/76872
Run-TryBot: Russ Cox <rsc@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
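For readers landing here later: the test2json approach is what shipped, and `go test -json` streams one JSON event per line. A small consumer sketch follows; the TestEvent shape is based on the cmd/test2json documentation (check `go doc cmd/test2json` for your Go version), and the file name summarize.go is just an example:

```go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"io"
	"log"
	"os"
	"time"
)

// TestEvent follows the event shape documented for cmd/test2json.
type TestEvent struct {
	Time    time.Time
	Action  string // e.g. "run", "output", "pass", "fail", "skip"
	Package string
	Test    string
	Elapsed float64 // seconds
	Output  string
}

// Run as: go test -json ./... | go run summarize.go
func main() {
	dec := json.NewDecoder(bufio.NewReader(os.Stdin))
	fail := 0
	for {
		var ev TestEvent
		err := dec.Decode(&ev)
		if err == io.EOF {
			break
		}
		if err != nil {
			log.Fatal(err)
		}
		// Only report final per-test verdicts, not package events or output lines.
		if ev.Test != "" && (ev.Action == "pass" || ev.Action == "fail") {
			fmt.Printf("%s %s.%s (%.2fs)\n", ev.Action, ev.Package, ev.Test, ev.Elapsed)
			if ev.Action == "fail" {
				fail++
			}
		}
	}
	fmt.Println("failed tests:", fail)
}
```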
I've tried to apply this feature in GoLand and this is the list of issues I faced:
The third line is barely parseable. |
@zolotov Would you be willing to open new issues for the problems you have found? Thanks. |
@iangudger I can, but I believe these problems show that this issue is not solved. What's the point of creating one more "support json output" issue?
@zolotov, you got the wrong Ian. We don't reuse bugs. The bug is closed and is not tracked on our dashboards, so comments here are likely to go into the void and never be seen. |
@zolotov The general problem of JSON output is, we believe, solved. You are talking about specific problems with the solution. |
I filed #22734 |
@bradfitz got it. thanks |