
ValueError: Descending test id on line ... #50

Open · ForbiddenEra opened this issue Mar 30, 2023 · 6 comments

ForbiddenEra commented Mar 30, 2023

Trying to use this to convert the TAP output from Node's built-in test runner to JUnit for CI/CD use.

# ~/.local/bin/tap2junit -i tests/report.txt -o tests/report.xml
Traceback (most recent call last):
  File "~/.local/bin/tap2junit", line 8, in <module>
    sys.exit(main())
  File "~/.local/pipx/venvs/tap2junit/lib/python3.8/site-packages/tap2junit/__main__.py", line 94, in main
    convert(
  File "~/.local/pipx/venvs/tap2junit/lib/python3.8/site-packages/tap2junit/__main__.py", line 68, in convert
    result = parse(name or input_file, data, package)
  File "~/.local/pipx/venvs/tap2junit/lib/python3.8/site-packages/tap2junit/__main__.py", line 58, in parse
    tap_parser.parse(data)
  File "~/.local/pipx/venvs/tap2junit/lib/python3.8/site-packages/tap2junit/tap13.py", line 238, in parse
    self._parse(io.StringIO(source))
  File "~/.local/pipx/venvs/tap2junit/lib/python3.8/site-packages/tap2junit/tap13.py", line 193, in _parse
    raise ValueError("Descending test id on line: %r" % line)
ValueError: Descending test id on line: 'ok 1 - fileSize checks'

report.txt (as output by node --test-reporter=tap --test-reporter-destination=report.txt tests/run-tests.js):

TAP version 13
# Subtest: System Configuration Required Tests
ok 1 - System Configuration Required Tests # TODO System Configuration Required Tests
  ---
  duration_ms: 3.723
  ...
# Subtest: System Configuration Optional Tests
    # Subtest: System Configuration Optional Tests
    ok 1 - System Configuration Optional Tests
      ---
      duration_ms: 3.2683
      ...
    1..1
ok 2 - System Configuration Optional Tests
  ---
  duration_ms: 4.5664
  ...
# Subtest: Main Validation Tests
    # Subtest: fileSize checks
        # Subtest: fileSize is too small
        ok 1 - fileSize is too small
          ---
          duration_ms: 20.9785
          ...
        # Subtest: fileSize is too large
        ok 2 - fileSize is too large
          ---
          duration_ms: 27.1384
          ...
        1..2
    ok 1 - fileSize checks
      ---
      duration_ms: 51.8895
      ...
    1..1
ok 3 - Main Validation Tests
  ---
  duration_ms: 53.4117
  ...
1..3
# tests 3
# pass 2
# fail 0
# cancelled 0
# skipped 0
# todo 1
# duration_ms 81.1599

Not entirely sure why this is happening, but if I had to guess it's nesting depth/recursion related?

If I take out the fileSize checks wrapping around those tests, it seems to work.

There are no issues anywhere else, even with the built-in experimental node test runner (a previous version of node gave bad output when nesting hit the third level, but on the latest [v19.8.1] the TAP output looks correct to me and is parsed and handled fine by other tools such as tap itself), so I'd rather not lose the 'nice' grouping of the tests just to get a JUnit XML file.
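Just to illustrate my guess (this is not tap2junit's actual code, just a hypothetical JS sketch): in the nested report above, "ok 2 - fileSize is too large" is followed by "ok 1 - fileSize checks" one indentation level up, so a check that only compares test ids without tracking nesting would see the id go backwards:

// hypothetical sketch, NOT tap2junit's implementation: a flat id check over the report above
const lines = [
    '        ok 2 - fileSize is too large',
    '    ok 1 - fileSize checks',
];

let lastId = 0;
for (const line of lines) {
    const match = line.match(/^\s*(?:not )?ok (\d+)/);
    if (!match) continue;
    const id = Number(match[1]);
    // lastId is never reset when the indentation (nesting) level changes,
    // so "ok 1" after "ok 2" looks like a descending test id
    if (id < lastId) throw new Error(`Descending test id on line: ${line.trim()}`);
    lastId = id;
}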

cclauss (Collaborator) commented Mar 30, 2023

@MoLow Any ideas?

@ForbiddenEra We made a new release this morning. Can you please try pip install tap2junit==0.1.5 vs. pip install tap2junit==0.1.6 and see if you get the same or different results?

MoLow (Member) commented Mar 30, 2023

this seems to reproduce on 0.1.6.
validation of test numbering seems to ignore nesting.
@ForbiddenEra you can take a look at https://www.npmjs.com/package/@reporters/junit

ForbiddenEra (Author) commented Apr 2, 2023

this seems to reproduce on 0.1.6. validation of test numbering seems to ignore nesting. @ForbiddenEra you can take a look at https://www.npmjs.com/package/@reporters/junit

Oo, will take a peek - nice to see an alternative. I was tempted to whip up my own since the XML doesn't look too complex, but no sense in re-inventing any wheels, so I'll take a look at that. Fortunately I can still have my tests run properly and specify a reporter on the command line without using --test or using run() (nodejs/node#47314 describes in more detail what/why etc., and it seems like commits are coming to improve this for my case).
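Presumably the invocation would be along these lines, following the same reporter flags I'm already using above (I haven't actually run this yet):

node --test-reporter=@reporters/junit --test-reporter-destination=report.xml tests/run-tests.js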

I should also mention that the XML I did get out of it when removing the nesting wasn't the greatest either; it didn't pick up any 'filenames', and it split up any of my test names/descriptions that had a / character in them:

<?xml version="1.0" encoding="utf-8"?>
<testsuites disabled="0" errors="0" failures="0" tests="17" time="4.4751723">
	<testsuite disabled="0" errors="0" failures="0" name="tests/report" skipped="2" tests="17" time="4.4751723" hostname="ForbiddenEra">
		<testcase name="- System Configuration Required Tests " time="0.003377">
			<skipped type="skipped" message="System Configuration Required Tests"/>
		</testcase>
		<testcase name="- System Configuration Optional Tests" time="0.001361"/>
		<testcase name="- System Configuration Optional Tests" time="0.002587"/>
		<testcase name="- fileSize is too small" time="0.023804"/>
		<testcase name="- fileSize is too large" time="0.033457"/>
		<testcase name="- fileType is invalid" time="0.096726"/>
		<testcase name="- fileType mismatch" time="0.003848"/>
		<testcase name="- jpeg vs jpeg handling" time="0.831943"/>
		<testcase name="- image dimensions too small" time="0.003943"/>
		<testcase name="- image dimensions too large, resize (x&gt;y)" time="0.444136"/>
		<testcase name="- image dimensions too large, resize (y&gt;x)" time="0.421050"/>
		<testcase name="metadata stripping" time="0.368525" classname="- image exif"/>
		<testcase name="- Main Validation Tests" time="2.231721"/>
		<testcase name="- VirusTotal FAIL test - EICAR-STANDARD-ANTIVIRUS-TEST-FILE" time="0.001433"/>
		<testcase name="1000x750-JPG.jpg" time="0.001964" classname="- VirusTotal PASS test - ..images"/>
		<testcase name="- VirusTotal Tests" time="0.004483"/>
		<testcase name="- PDF-Related Validation Tests " time="0.000814">
			<skipped type="skipped" message="verify PDF was exported as JPG"/>
		</testcase>
	</testsuite>
</testsuites>

Which results in meh output on GitLab (which is ultimately my goal here):

[screenshot: GitLab test report generated from the XML above]

Any comments on this? None of my tests are actually named with a - at the beginning; it's simply picking that up from node:test's output.

I never tried 0.1.5 as I had only started testing with it that same day. I can give that a shot too if it might be worth it, though I worry that even if it handles nesting appropriately, this "secondary" issue may still be present.

I'll definitely try the mentioned reporter, and if I find time I may also try 0.1.5, but ultimately if the reporter works for me then I'm happy and might not have time otherwise.

@MoLow As I see you're the maintainer of the linked reporter, I figure it may be easier to leave my comments here, especially considering that tap2junit is maintained under nodejs and I'm attempting to make use of the experimental node:test stuff, so it probably wouldn't hurt visibility to those who might care; let me know, though, if you want me to create an issue in that repo regarding this.

ForbiddenEra (Author) commented Apr 2, 2023

I tried installing 0.1.5 but can't (easily) 100% confirm that it overrode 0.1.6, as it doesn't provide a --version or similar option; it installed fine, though, and I got the same "Descending test id" result.

Tried the @reporters/junit quickly and the result seems maybe better in some ways, worse in others?

<?xml version="1.0" encoding="utf-8"?>
<testsuites>
        <testsuite name="Test Suite #1a" time="0.003282" disabled="0" errors="0" tests="2" failures="0" skipped="0">
                <testcase name="Test Suite #1a/Test #1" time="0.000616" classname="test"/>
                <testcase name="Test Suite #1a/Test #2" time="0.000163" classname="test"/>
        <testsuite name="Test Suite #1b" time="0.000524" disabled="0" errors="0" tests="2" failures="0" skipped="0">
                <testcase name="Test Suite #1b/Test #1" time="0.000183" classname="test"/>
                <testcase name="Test Suite #1b/Test #2" time="0.000114" classname="test"/>
        </testsuite>
        <testsuite name="Test Suite #2" time="0.001196" disabled="0" errors="0" tests="2" failures="0" skipped="0">
                <testsuite name="Test Group #1" time="0.000553" disabled="0" errors="0" tests="2" failures="0" skipped="0">
                        <testcase name="Test Suite #2/Test Group #1/Test #1" time="0.000199" classname="test"/>
                        <testcase name="Test Suite #2/Test Group #1/Test #2" time="0.000131" classname="test"/>
                </testsuite>
                <testsuite name="Test Group #2" time="0.000447" disabled="0" errors="0" tests="2" failures="0" skipped="0">
                        <testcase name="Test Suite #2/Test Group #2/Test #1" time="0.000136" classname="test"/>
                        <testcase name="Test Suite #2/Test Group #2/Test #2" time="0.000131" classname="test"/>
                </testsuite>
        </testsuite>
        <testsuite name="Test Suite #3" time="0.000541" disabled="0" errors="0" tests="2" failures="0" skipped="0">
                <testcase name="Test Suite #3/Test #1" time="0.000175" classname="test"/>
                <testcase name="Test Suite #3/Test #2" time="0.000082" classname="test"/>
        </testsuite>
        <testsuite name="Test Suite #4" time="0.000302" disabled="0" errors="0" tests="2" failures="0" skipped="0">
                <testcase name="Test Suite #4/Test #1" time="0.000100" classname="test"/>
                <testcase name="Test Suite #4/Test #2" time="0.000078" classname="test"/>
        </testsuite>
        </testsuite>
</testsuites>

So, all the other suites end up wrapped inside the first testsuite element..

I have a run_tests.js that imports 4 test scripts and runs them, e.g.:

import testSuite1 from './test/testSuite1.js';
import testSuite2 from './test/testSuite2.js';
import testSuite3 from './test/testSuite3.js';
import testSuite4 from './test/testSuite4.js';

(async()=>{
	await testSuite1();
	await testSuite2();
	await testSuite3();
	await testSuite4();
})();

Each of these scripts exports a default function, e.g.:

testSuite1.js:

import test from 'node:test';
import assert from 'node:assert/strict';

export default () => {
    test('Test Suite #1a', async(t) => {
        await t.test('Test Suite #1a/Test #1', async(tt) => {});
        await t.test('Test Suite #1a/Test #2', async(tt) => {});
    });

    test('Test Suite #1b', async(t) => {
        await t.test('Test Suite #1b/Test #1', async(tt) => {});
        await t.test('Test Suite #1b/Test #2', async(tt) => {});
    });
}

testSuite2.js:

import test from 'node:test';
import assert from 'node:assert/strict';

export default () => {
    test('Test Suite #2', async(t) => {
        await t.test('Test Group #1', async(t) => {
            await t.test('Test Suite #2/Test Group #1/Test #1', async(tt) => {});
            await t.test('Test Suite #2/Test Group #1/Test #2', async(tt) => {});
        });
        await t.test('Test Group #2', async(t) => {
            await t.test('Test Suite #2/Test Group #2/Test #1', async(tt) => {});
            await t.test('Test Suite #2/Test Group #2/Test #2', async(tt) => {});
        });
    });
}

The TAP output looks fine, I think?

TAP version 13
# Subtest: Test Suite \#1a
    # Subtest: Test Suite \#1a/Test \#1
    ok 1 - Test Suite \#1a/Test \#1
      ---
      duration_ms: 0.6158
      ...
    # Subtest: Test Suite \#1a/Test \#2
    ok 2 - Test Suite \#1a/Test \#2
      ---
      duration_ms: 0.1631
      ...
    1..2
ok 1 - Test Suite \#1a
  ---
  duration_ms: 3.2816
  ...
# Subtest: Test Suite \#1b
    # Subtest: Test Suite \#1b/Test \#1
    ok 1 - Test Suite \#1b/Test \#1
      ---
      duration_ms: 0.1831
      ...
    # Subtest: Test Suite \#1b/Test \#2
    ok 2 - Test Suite \#1b/Test \#2
      ---
      duration_ms: 0.1139
      ...
    1..2
ok 2 - Test Suite \#1b
  ---
  duration_ms: 0.5244
  ...
# Subtest: Test Suite \#2
    # Subtest: Test Group \#1
        # Subtest: Test Suite \#2/Test Group \#1/Test \#1
        ok 1 - Test Suite \#2/Test Group \#1/Test \#1
          ---
          duration_ms: 0.1986
          ...
        # Subtest: Test Suite \#2/Test Group \#1/Test \#2
        ok 2 - Test Suite \#2/Test Group \#1/Test \#2
          ---
          duration_ms: 0.1305
          ...
        1..2
    ok 1 - Test Group \#1
      ---
      duration_ms: 0.5529
      ...
    # Subtest: Test Group \#2
        # Subtest: Test Suite \#2/Test Group \#2/Test \#1
        ok 1 - Test Suite \#2/Test Group \#2/Test \#1
          ---
          duration_ms: 0.1363
          ...
        # Subtest: Test Suite \#2/Test Group \#2/Test \#2
        ok 2 - Test Suite \#2/Test Group \#2/Test \#2
          ---
          duration_ms: 0.1311
          ...
        1..2
    ok 2 - Test Group \#2
      ---
      duration_ms: 0.4465
      ...
    1..2
ok 3 - Test Suite \#2
  ---
  duration_ms: 1.1957
  ...
# Subtest: Test Suite \#3
    # Subtest: Test Suite \#3/Test \#1
    ok 1 - Test Suite \#3/Test \#1
      ---
      duration_ms: 0.1747
      ...
    # Subtest: Test Suite \#3/Test \#2
    ok 2 - Test Suite \#3/Test \#2
      ---
      duration_ms: 0.0815
      ...
    1..2
ok 4 - Test Suite \#3
  ---
  duration_ms: 0.5411
  ...
# Subtest: Test Suite \#4
    # Subtest: Test Suite \#4/Test \#1
    ok 1 - Test Suite \#4/Test \#1
      ---
      duration_ms: 0.1002
      ...
    # Subtest: Test Suite \#4/Test \#2
    ok 2 - Test Suite \#4/Test \#2
      ---
      duration_ms: 0.078
      ...
    1..2
ok 5 - Test Suite \#4
  ---
  duration_ms: 0.3025
  ...
1..5
# tests 5
# pass 5
# fail 0
# cancelled 0
# skipped 0
# todo 0
# duration_ms 37.9615

If I wrap the calls in run_tests.js in its own test, then it seems to be OK; of course I have to pass the test handle to each sub-test and await everything, but then I get better-looking XML?

<?xml version="1.0" encoding="utf-8"?>
<testsuites>
        <testsuite name="All Tests" time="0.006668" disabled="0" errors="0" tests="5" failures="0" skipped="0">
                <testsuite name="Test Suite #1a" time="0.002252" disabled="0" errors="0" tests="2" failures="0" skipped="0">
                        <testcase name="Test Suite #1a/Test #1" time="0.000739" classname="test"/>
                        <testcase name="Test Suite #1a/Test #2" time="0.000166" classname="test"/>
                </testsuite>
                <testsuite name="Test Suite #1b" time="0.000493" disabled="0" errors="0" tests="2" failures="0" skipped="0">
                        <testcase name="Test Suite #1b/Test #1" time="0.000155" classname="test"/>
                        <testcase name="Test Suite #1b/Test #2" time="0.000130" classname="test"/>
                </testsuite>
                <testsuite name="Test Suite #2" time="0.001196" disabled="0" errors="0" tests="2" failures="0" skipped="0">
                        <testsuite name="Test Group #1" time="0.000487" disabled="0" errors="0" tests="2" failures="0" skipped="0">
                                <testcase name="Test Suite #2/Test Group #1/Test #1" time="0.000183" classname="test"/>
                                <testcase name="Test Suite #2/Test Group #1/Test #2" time="0.000111" classname="test"/>
                        </testsuite>
                        <testsuite name="Test Group #2" time="0.000518" disabled="0" errors="0" tests="2" failures="0" skipped="0">
                                <testcase name="Test Suite #2/Test Group #2/Test #1" time="0.000225" classname="test"/>
                                <testcase name="Test Suite #2/Test Group #2/Test #2" time="0.000111" classname="test"/>
                        </testsuite>
                </testsuite>
                <testsuite name="Test Suite #3" time="0.000436" disabled="0" errors="0" tests="2" failures="0" skipped="0">
                        <testcase name="Test Suite #3/Test #1" time="0.000137" classname="test"/>
                        <testcase name="Test Suite #3/Test #2" time="0.000107" classname="test"/>
                </testsuite>
                <testsuite name="Test Suite #4" time="0.000423" disabled="0" errors="0" tests="2" failures="0" skipped="0">
                        <testcase name="Test Suite #4/Test #1" time="0.000131" classname="test"/>
                        <testcase name="Test Suite #4/Test #2" time="0.000116" classname="test"/>
                </testsuite>
        </testsuite>
</testsuites>
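For reference, the wrapped run_tests.js looks roughly like this (a sketch, not my exact file; each suite module is adjusted to accept the parent test handle and call t.test() internally instead of test()):

import test from 'node:test';

import testSuite1 from './test/testSuite1.js';
import testSuite2 from './test/testSuite2.js';
import testSuite3 from './test/testSuite3.js';
import testSuite4 from './test/testSuite4.js';

// every suite now runs as a subtest of one top-level "All Tests" test
test('All Tests', async (t) => {
    await testSuite1(t);
    await testSuite2(t);
    await testSuite3(t);
    await testSuite4(t);
});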

Might be a workaround, though I don't really want everything wrapped into one test suite like that; what I'm after should look like this:

<?xml version="1.0" encoding="utf-8"?>
<testsuites>
        <testsuite name="Test Suite #1a" time="0.002252" disabled="0" errors="0" tests="2" failures="0" skipped="0">
                <testcase name="Test Suite #1a/Test #1" time="0.000739" classname="test"/>
                <testcase name="Test Suite #1a/Test #2" time="0.000166" classname="test"/>
        </testsuite>
        <testsuite name="Test Suite #1b" time="0.000493" disabled="0" errors="0" tests="2" failures="0" skipped="0">
                <testcase name="Test Suite #1b/Test #1" time="0.000155" classname="test"/>
                <testcase name="Test Suite #1b/Test #2" time="0.000130" classname="test"/>
        </testsuite>
        <testsuite name="Test Suite #2" time="0.001196" disabled="0" errors="0" tests="2" failures="0" skipped="0">
                <testsuite name="Test Group #1" time="0.000487" disabled="0" errors="0" tests="2" failures="0" skipped="0">
                        <testcase name="Test Suite #2/Test Group #1/Test #1" time="0.000183" classname="test"/>
                        <testcase name="Test Suite #2/Test Group #1/Test #2" time="0.000111" classname="test"/>
                </testsuite>
                <testsuite name="Test Group #2" time="0.000518" disabled="0" errors="0" tests="2" failures="0" skipped="0">
                        <testcase name="Test Suite #2/Test Group #2/Test #1" time="0.000225" classname="test"/>
                        <testcase name="Test Suite #2/Test Group #2/Test #2" time="0.000111" classname="test"/>
                </testsuite>
        </testsuite>
        <testsuite name="Test Suite #3" time="0.000436" disabled="0" errors="0" tests="2" failures="0" skipped="0">
                <testcase name="Test Suite #3/Test #1" time="0.000137" classname="test"/>
                <testcase name="Test Suite #3/Test #2" time="0.000107" classname="test"/>
        </testsuite>
        <testsuite name="Test Suite #4" time="0.000423" disabled="0" errors="0" tests="2" failures="0" skipped="0">
                <testcase name="Test Suite #4/Test #1" time="0.000131" classname="test"/>
                <testcase name="Test Suite #4/Test #2" time="0.000116" classname="test"/>
        </testsuite>
</testsuites>

I did previously have a reason for using a separate script to initiate the tests (e.g. the run_tests.js), but there are commits coming to node that should negate those reasons.

Ditching that (and, of course, moving all the tests outside of the exported function) and using node --test with @reporters/junit still doesn't give me the result I was looking for, however; everything gets wrapped in one of the suites for no real reason?

<?xml version="1.0" encoding="utf-8"?>
<testsuites>
        <testsuite name="Test Suite #4" time="0.002965" disabled="0" errors="0" tests="2" failures="0" skipped="0">
                <testcase name="Test Suite #4/Test #1" time="0.000654" classname="test"/>
                <testcase name="Test Suite #4/Test #2" time="0.000122" classname="test"/>
        <testsuite name="Test Suite #1a" time="0.003078" disabled="0" errors="0" tests="2" failures="0" skipped="0">
                <testcase name="Test Suite #1a/Test #1" time="0.000916" classname="test"/>
                <testcase name="Test Suite #1a/Test #2" time="0.000145" classname="test"/>
        </testsuite>
        <testsuite name="Test Suite #1b" time="0.000626" disabled="0" errors="0" tests="2" failures="0" skipped="0">
                <testcase name="Test Suite #1b/Test #1" time="0.000185" classname="test"/>
                <testcase name="Test Suite #1b/Test #2" time="0.000131" classname="test"/>
        </testsuite>
        <testsuite name="Test Suite #2" time="0.002739" disabled="0" errors="0" tests="2" failures="0" skipped="0">
                <testsuite name="Test Group #1" time="0.001328" disabled="0" errors="0" tests="2" failures="0" skipped="0">
                        <testcase name="Test Suite #2/Test Group #1/Test #1" time="0.000424" classname="test"/>
                        <testcase name="Test Suite #2/Test Group #1/Test #2" time="0.000087" classname="test"/>
                </testsuite>
                <testsuite name="Test Group #2" time="0.000530" disabled="0" errors="0" tests="2" failures="0" skipped="0">
                        <testcase name="Test Suite #2/Test Group #2/Test #1" time="0.000107" classname="test"/>
                        <testcase name="Test Suite #2/Test Group #2/Test #2" time="0.000102" classname="test"/>
                </testsuite>
        </testsuite>
        <testsuite name="Test Suite #3" time="0.002906" disabled="0" errors="0" tests="2" failures="0" skipped="0">
                <testcase name="Test Suite #3/Test #1" time="0.000641" classname="test"/>
                <testcase name="Test Suite #3/Test #2" time="0.000147" classname="test"/>
        </testsuite>
        </testsuite>
</testsuites>

Here's the TAP output from that attempt:

TAP version 13
# Subtest: Test Suite \#4
    # Subtest: Test Suite \#4/Test \#1
    ok 1 - Test Suite \#4/Test \#1
      ---
      duration_ms: 0.6541
      ...
    # Subtest: Test Suite \#4/Test \#2
    ok 2 - Test Suite \#4/Test \#2
      ---
      duration_ms: 0.1224
      ...
    1..2
ok 1 - Test Suite \#4
  ---
  duration_ms: 2.9651
  ...
# Subtest: Test Suite \#1a
    # Subtest: Test Suite \#1a/Test \#1
    ok 1 - Test Suite \#1a/Test \#1
      ---
      duration_ms: 0.9159
      ...
    # Subtest: Test Suite \#1a/Test \#2
    ok 2 - Test Suite \#1a/Test \#2
      ---
      duration_ms: 0.145
      ...
    1..2
ok 2 - Test Suite \#1a
  ---
  duration_ms: 3.0777
  ...
# Subtest: Test Suite \#1b
    # Subtest: Test Suite \#1b/Test \#1
    ok 1 - Test Suite \#1b/Test \#1
      ---
      duration_ms: 0.1851
      ...
    # Subtest: Test Suite \#1b/Test \#2
    ok 2 - Test Suite \#1b/Test \#2
      ---
      duration_ms: 0.131
      ...
    1..2
ok 3 - Test Suite \#1b
  ---
  duration_ms: 0.6262
  ...
# Subtest: Test Suite \#2
    # Subtest: Test Group \#1
        # Subtest: Test Suite \#2/Test Group \#1/Test \#1
        ok 1 - Test Suite \#2/Test Group \#1/Test \#1
          ---
          duration_ms: 0.4242
          ...
        # Subtest: Test Suite \#2/Test Group \#1/Test \#2
        ok 2 - Test Suite \#2/Test Group \#1/Test \#2
          ---
          duration_ms: 0.0873
          ...
        1..2
    ok 1 - Test Group \#1
      ---
      duration_ms: 1.3277
      ...
    # Subtest: Test Group \#2
        # Subtest: Test Suite \#2/Test Group \#2/Test \#1
        ok 1 - Test Suite \#2/Test Group \#2/Test \#1
          ---
          duration_ms: 0.1065
          ...
        # Subtest: Test Suite \#2/Test Group \#2/Test \#2
        ok 2 - Test Suite \#2/Test Group \#2/Test \#2
          ---
          duration_ms: 0.1022
          ...
        1..2
    ok 2 - Test Group \#2
      ---
      duration_ms: 0.5298
      ...
    1..2
ok 3 - Test Suite \#2
  ---
  duration_ms: 2.7388
  ...
# Subtest: Test Suite \#3
    # Subtest: Test Suite \#3/Test \#1
    ok 1 - Test Suite \#3/Test \#1
      ---
      duration_ms: 0.6406
      ...
    # Subtest: Test Suite \#3/Test \#2
    ok 2 - Test Suite \#3/Test \#2
      ---
      duration_ms: 0.147
      ...
    1..2
ok 4 - Test Suite \#3
  ---
  duration_ms: 2.9056
  ...
1..5
# tests 5
# pass 5
# fail 0
# cancelled 0
# skipped 0
# todo 0
# duration_ms 1150.6397

It seems like whichever test shows up first is considered the root of all tests, even if the subsequent tests are siblings. While some of what I'm doing may not be the exact proper way / may be slightly weird, I don't think that running node --test in a directory with test/*.js files is in any way weird or wrong; from the documentation I've read, this seems like, if anything, the most official/proper way to do it besides using run()?

Also, for some reason GitLab seems to use the classname="test" as the test suite name. I'm not super familiar with the JUnit format, so I'm not sure if this is more of an issue on that side, but in the generated XML every testcase has classname="test". I'm not sure where that value comes from or how it's generated (I haven't peeked at the reporter's code yet) - I assume it's just responding to the events on a TestsStream and outputting XML based on what it gets from node, but I wonder if perhaps the classname should actually be the suite name?

[screenshot: GitLab test report showing the suites grouped under the "test" classname]

For Test Suite #3, it shows 'Test Suite #3/Test #1' because that's the actual name of the test, e.g.:

    await t.test('Test Suite #3',async(t) => {
        await t.test('Test Suite #3/Test #1', async(tt) => {});
        await t.test('Test Suite #3/Test #2', async(tt) => {});
    });

I had taken out the Test Suite #x/ prefix for all the other ones and had missed that one; I had only put it like that in the first place because it seemed like tap2junit was using the / to parse sub-test names.

I did briefly look into what gets fired on a TestsStream, but since there's presently no way to use the internal reporters when using run() - which is the only way to get a handle on the TestsStream - it's a bit awkward. This has already been fixed in a commit I found, but I don't know when it's planned to land in a release.
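For what it's worth, this is roughly how I was poking at it - just a sketch, and the exact event payloads (name/nesting/etc.) are my assumption from what I saw, not something I've verified against the docs:

import { run } from 'node:test';

// run() returns a TestsStream for the given test files
const stream = run({ files: ['./test/testSuite1.js'] });

// I'm assuming the 'test:pass' / 'test:fail' events carry at least a name
// and a nesting level in their payload
stream.on('test:pass', (data) => console.log('pass', data.name, data.nesting));
stream.on('test:fail', (data) => console.log('fail', data.name, data.nesting));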

MoLow (Member) commented Apr 2, 2023

@ForbiddenEra can you please open an issue at https://github.com/MoLow/reporters/tree/main describing what the desired output would be and what node version you are using?

ForbiddenEra (Author) commented Apr 2, 2023

@ForbiddenEra can you please open an issue at https://github.com/MoLow/reporters/tree/main describing what the desired output would be and what node version you are using?

No problem; I did ask if you would prefer that and will do..

Edit: done - MoLow/reporters#42

Thanks again for your time/help/reporter and other work :)
