
Add a way for single layout test to report all assertions and continue execution after the first failing assertion #14598

Open · wants to merge 1 commit into master
Conversation

@youennf (Contributor) commented Dec 19, 2018

This PR records all executed assertions for single test files.
These assertions are passed to the completion callback so that results can be displayed in various ways.

This PR adds a method that enables the test to continue executing even if an assertion fails.
A test opts in by calling that method.

A test using this new mode is added to validate the mechanism.
This test is a testharness-based version of https://github.com/WebKit/webkit/blob/master/LayoutTests/fast/frames/detached-frame-property.html
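
For context, a minimal sketch of the intended usage, based on the method name and the diff excerpts quoted later in this thread. The "assertions" property read in the completion callback is an assumed name, inferred from the add_assertion call in the diff:

<script src="/resources/testharness.js"></script>
<script src="/resources/testharnessreport.js"></script>
<script>
// Opt in: keep executing this single-page test past a failing assertion.
execute_single_page_test_after_failing_assertion();

assert_true(true, "first assertion passes");
assert_true(false, "second assertion fails, but execution continues");
assert_true(true, "third assertion still runs and is recorded");

// The recorded assertions are passed to completion callbacks, so a
// vendor harness (e.g. WebKitTestRunner) can report them as it sees fit.
// The "assertions" property is a hypothetical name for the recorded list.
add_completion_callback(function(tests, status) {
    for (const a of tests[0].assertions || [])
        console.log((a.passed ? "PASS: " : "FAIL: ") + a.message);
});
</script>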

Commit: Add a way for single layout test to report all assertions and continue execution after the first failing assertion
@foolip (Member) commented Dec 20, 2018

This PR is blocked on the required "Travis CI - Pull Request" check after #14499. In order to trigger it, I will close and reopen this PR.

@foolip closed this Dec 20, 2018
@foolip reopened this Dec 20, 2018
test.add_assertion({passed: !!expected_true, message: message});
if (continue_execution_after_failing_assertion) {
    if (!expected_true && test.status !== test.FAIL) {
        test.set_status(test.FAIL, "At least one assertion failed: " + message);
Member:

Looks like this will list the first failure. Is the idea that the treatment of assertions is entirely vendor-specific, or how will one make use of the list of assertions?

For wpt.fyi, how should this feature change what is shown? Since this is limited to single-page tests, we could try to treat the assertions as tests in themselves, but then we'll also have to require that each assertion's message is unique, or possibly make it unique automatically if it isn't. The only other option I see is to require the assertion order to be stable across runs or between browsers, as otherwise there's no way to compare results.

@lukebjerring for more ideas.

Contributor:

My 5c: subtests are different from assertions, which are more like preconditions (such as asserting that something is non-null before asserting the correct value of a property on it).
If you want separate failures for multiple things that don't cascade, then we should be creating subtests.
Something like lukebjerring#14 is what I mean in this specific case.
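
For example (a hypothetical snippet): the first assertion below is a precondition for the second, so splitting them into independent subtests would only make the second fail with a confusing error whenever the first does:

// Precondition: if this fails, the next assertion cannot run meaningfully.
assert_not_equals(frame.contentWindow, null, "detached frame still exposes a window");
// Depends on the precondition above, so the two failures cascade.
assert_equals(frame.contentWindow.length, 0, "detached window has no subframes");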

Contributor:

I agree with @lukebjerring here; this model looks like it should work by wrapping each assert() in an implicit test().

Contributor (Author):

> Looks like this will list the first failure. Is the idea that the treatment of assertions is entirely vendor-specific, or how will one make use of the list of assertions?

Yes, this would be vendor-specific.
WebKittens want richer reporting in what is produced by WebKitTestRunner.
Ideally, for every failed test (single-file test or not), it would be good to have the list of assertions that passed for the given test.
testharnessreport.js could then display that information in whatever form is appropriate.
For instance, WebKitTestRunner could dump it to stderr for debugging purposes, and a browser could log it to the inspector console.

The current PR puts the first error message in the test's failing status.
To decrease flakiness, it might be better not to report it there and to let testharnessreport.js decide how it wants to output the information.

An implicit test() is another valid approach.
My main concern is how much flakiness this would end up creating in the output.
Test writers would need to ensure that assertions are executed the exact same number of times in the exact same order; I am not sure how practical this is to require in WPT.
WebKit's js-test actually requires this, so I am fine trying it here as well.

Contributor:

The order shouldn't be important; it's not part of the API. You may need to find a way to create a unique identity for each assertion to ensure the order doesn't leak.

@@ -28,6 +28,15 @@ function dump_test_results(tests, status) {
                 stack: status.stack};
    results_element.textContent = JSON.stringify(data);

    if (tests.length === 1) {
Member:

Is this entirely for debugging and intended to be reverted, or is it an important part of this patch?

Contributor (Author):

I did it to ensure this was working.
I do not care strongly about this particular change, but I think it might be convenient for people running tests in a browser.
If we go with implicit test(), this change is meaningless.
Otherwise, providing additional information to people running tests might be useful.

@lukebjerring (Contributor) commented:

When comparing the original WebKit test to the proposed equivalent, I notice that you don't avoid having to translate the assertion statements (regardless of whether assert_true behaves as you desire). So I'm just curious whether you can change the mapping from

shouldBeTrue(c, m) => assert_true(c, m)

to

shouldBeTrue(c, m) => test(() => { assert_true(c); }, m)

which should have the desired outcome?
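
Concretely, the two mappings would look something like this (the condition and message are hypothetical stand-ins):

// Direct mapping: a failing assert aborts the whole single-page test.
assert_true(frame.contentWindow === null, "content window is null after detach");

// Subtest mapping: each assertion becomes its own test(), so one failure
// does not prevent the remaining assertions from executing.
test(() => {
    assert_true(frame.contentWindow === null);
}, "content window is null after detach");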

<script src="/resources/testharness.js"></script>
<script src="/resources/testharnessreport.js"></script>
<script>
execute_single_page_test_after_failing_assertion();
Contributor:

If there has to be a big global switch (which I don't love), it should be a property passed to setup().
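
That could look something like the following; the option name is a hypothetical illustration, while passing a properties object to setup() is existing testharness.js usage:

// Hypothetical option name; setup() already accepts a properties object.
setup({ continue_after_failed_assertion: true });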

Contributor (Author):

ok


@foolip (Member) commented Dec 20, 2018

Please take a look at #14616 (review); apparently some tests have already added their own helpers for something very similar. There, the wrappers are being removed because they were implicated in flakiness.

@youennf (Contributor, Author) commented Dec 21, 2018

Maybe I should split the PR into two parts: one for being able to store assertions, and one for a mode that continues running a single-file test after an assertion failure.

If we go with test-wrapping asserts, I think it should be done for all single-file tests, not just the ones that opt in to the new mode.

Similarly, we should decide whether the order of running assertions is important or not, independently of the single-file test mode. If we test-wrap asserts, the execution order becomes meaningful; for instance, the number of wrapped asserts might change from run to run (see the sketch after this comment).

Adding more determinism to tests is usually a good thing.
I can have a look at how existing single-file tests would cope with this new constraint.
I would prefer reaching consensus on this approach before spending time there.
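
To illustrate the determinism concern (a hypothetical example, not from this PR): a test that asserts inside a timing-dependent callback executes a run-dependent number of assertions, so implicitly wrapped test()s would appear and disappear between runs:

// The number of timeupdate events fired before currentTime reaches one
// second varies from run to run, so the number of assertions varies too.
const video = document.querySelector("video");
let lastTime = 0;
const t = async_test("currentTime is monotonic during playback");
video.ontimeupdate = t.step_func(() => {
    assert_greater_than_equal(video.currentTime, lastTime);
    lastTime = video.currentTime;
    if (video.currentTime >= 1)
        t.done();
});
video.play();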
