
[presentation-api] Refine getAvailability() test #4257

Merged

Conversation

@tomoyukilabs (Member) commented Nov 29, 2016

This PR partially fixes #4181.



@wpt-pr-bot (Collaborator)

Notifying @louaybassbouss, @tidoust, and @zqzhang.

@tidoust (Contributor) commented Dec 2, 2016

Thanks @tomoyukilabs,

Reviewing this test made me read the getAvailability algorithm again, which triggered a few questions:

I would suggest waiting until these issues are resolved.

Other comments on the pull request:

  1. I believe we should try to create automated tests as much as is practical and reduce the scope of manual tests to a bare minimum. Most steps of the getAvailability algorithm should be testable without requiring any user interaction. Typically, I would take for granted that the starting point is an environment with at least one available presentation display and without restrictions (i.e. without "disabled" features). With that in mind, I think you can stick to an automated test here that eventually expects a PresentationAvailability object whose value is true (or a NotSupportedError exception); see the sketch after this list.
  2. Former step 6. has now moved to the monitoring algorithm. To test it, I would create a separate manual test that asks the user to disable monitoring and then calls getAvailability. That test should be marked as "optional" because user agents may not support disabling monitoring.
  3. The first call to getAvailability will create the presentation availability promise. Calling getAvailability a second time will return that promise in step 2. In other words, the test should rather check that the returned Promise is the same in both cases, not that the PresentationAvailability instance is the same.
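
For illustration, a minimal sketch of the kind of automated test suggested in point 1 (hypothetical, assuming testharness.js's promise_test and a placeholder presentation URL, not the PR's actual code):

promise_test(function() {
  // Pre-requisite: at least one presentation display is available to the UA under test.
  var request = new PresentationRequest('presentation.html'); // placeholder URL
  return request.getAvailability().then(function(availability) {
    assert_true(availability instanceof PresentationAvailability,
        'getAvailability() resolves with a PresentationAvailability object.');
    assert_true(availability.value,
        'The availability value is true when a presentation display is available.');
  }, function(err) {
    // The only acceptable failure: the UA can find displays only when starting a connection.
    assert_equals(err.name, 'NotSupportedError',
        'getAvailability() may only reject with a NotSupportedError exception.');
  });
}, 'getAvailability() resolves with an available display or rejects with NotSupportedError');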

@tomoyukilabs (Member, Author)

@tidoust Okay, I'll wait for the remaining two issues to be resolved. Now that former step 6 has been removed from the getting-availability algorithm, the test does not have to ensure that any presentation display is available, and I can safely restore this test to an automated one.

> With that in mind, I think you can stick to an automated test here that eventually expects a PresentationAvailability object whose value is true (or a NotSupportedError exception).

Do you mean that the test itself does not have to prompt users to prepare an available presentation display (as "crystal clear on test conditions")?

@tidoust (Contributor) commented Dec 5, 2016

> Do you mean that the test itself does not have to prompt users to prepare an available presentation display (as "crystal clear on test conditions")?

Yes, I was thinking that we could add a "Pre-requisites" section to the README of the Presentation API test suite that warns the tester that there must be at least one presentation display available to the user agent under test before tests may run, or something along these lines.

@tomoyukilabs (Member, Author)

I've updated the tests to follow the recent changes in the spec:

  • They revert to automated tests, and the file name reverts to getAvailability.html.
  • A test has been added to check that getAvailability returns the same Promise while the PresentationRequest object's availability Promise is still unsettled.
  • The test that checks whether a Promise from a second getAvailability call is resolved with the same PresentationAvailability instance remains, since the current spec still says "If the presentation display availability for presentationRequest is not null, then resolve P with the request's presentation display availability" (step 6 of Getting the presentation displays availability information). A sketch of both checks follows this list.
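
A hypothetical sketch of those two checks (assuming testharness.js and a placeholder presentation URL, not the actual test code):

promise_test(function() {
  var request = new PresentationRequest('presentation.html'); // placeholder URL
  var firstPromise = request.getAvailability();
  // While the first Promise is still unsettled, a second call returns the very same Promise.
  assert_equals(request.getAvailability(), firstPromise,
      'getAvailability() returns the same Promise while it is unsettled.');
  return firstPromise.then(function(firstAvailability) {
    // After settlement, a later call resolves with the stored PresentationAvailability object.
    return request.getAvailability().then(function(secondAvailability) {
      assert_equals(secondAvailability, firstAvailability,
          'A second getAvailability() call resolves with the same PresentationAvailability instance.');
    });
  });
}, 'getAvailability() reuses its Promise and its PresentationAvailability object');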

@tidoust (Contributor) left a comment

See the inline comment about the Promise flow, which I think needs fixing. It looks good otherwise and is aligned with the latest version of the spec, many thanks!

}, function(err) {
  // getAvailability rejects a new Promise with NotSupportedError if the browser can find displays only when starting a connection
  assert_equals(err.name, 'NotSupportedError', 'getAvailability() rejects a Promise with a NotSupportedError exception, if the browser can find presentation displays only when starting a connection.');
}).then(function (newAvailability) {
@tidoust (Contributor):

I may be lost in promises, but given that you catch a possible getAvailability failure before this .then, isn't that .then part also going to run if a failure occurs? Catching the NotSupportedError should end the test.

I'm not sure what the cleanest way to write this is:

  • You could catch NotSupportedError in a final .catch(function (err) {}) block. That would look great, but it would make the test succeed in the (unlikely) event that only the second call to getAvailability fails with a NotSupportedError. I'd be fine with that personally; the implementation would be seriously buggy if it only rejected the second call.
  • You could move that .then part to return newPromise.then(function (newAvailability) { ... }); (a sketch of this option follows).
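
For what it's worth, a hypothetical sketch of that second option (assuming the PR's createRequestObject helper and testharness.js, not the actual fix):

promise_test(function() {
  var request = createRequestObject(); // helper assumed from the PR's test file
  return request.getAvailability().then(function(availability) {
    // Chain the follow-up step onto the second call so it only runs on success.
    var newPromise = request.getAvailability();
    return newPromise.then(function(newAvailability) {
      assert_equals(newAvailability, availability,
          'Both calls resolve with the same PresentationAvailability instance.');
    });
  }, function(err) {
    // Catching NotSupportedError here ends the test instead of falling through.
    assert_equals(err.name, 'NotSupportedError',
        'getAvailability() rejects with NotSupportedError if the browser can find displays only when starting a connection.');
  });
}, 'a second getAvailability() call resolves with the same PresentationAvailability instance');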

@tomoyukilabs (Member, Author):

Okay, I've fixed it as follows:

  • The .then part is moved to return newPromise.then(function (newAvailability) { ... });.
  • The updated tests catch NotSupportedError thrown by either getAvailability call.

// -----------------------
promise_test(function () {
  var request = createRequestObject();
  // instance of PresentationRequest
@tidoust (Contributor):

I'm a fan of comments in code, but I note that all the "assert" comments already appear in the assert calls themselves, so I think you can simply drop them throughout. That's just a suggestion, of course :)

@tomoyukilabs (Member, Author):

Thanks. I totally agree with your comment :-)

@tidoust merged commit 5be497b into web-platform-tests:master on Dec 29, 2016
@tomoyukilabs deleted the refine-getavailability branch on January 4, 2017