
fix: auto-pass main landmark requirements for no-landmark pages #2644

Merged · 24 commits · May 14, 2020

Conversation

@dbjorge (Contributor) commented May 8, 2020

Description of changes

#2521 tracks an issue where it wasn't clear what was expected of users when testing the Landmarks "primary content" and "no repeating content" requirements on pages with no landmarks.

The proposed fix was a text update plus making the requirements pass automatically on pages that have no landmarks.

This fix involves a significant piece of new behavior. We wanted to keep the 2 requirements in question as manual requirements rather than assisted requirements, since a page might have no main landmarks and a user might need to manually enter a failure not associated with any instance we could show in an instance list. However, this is the first time we're assigning auto-pass behavior to a manual test; in all other cases where we support auto-pass, it works by looking for an assisted requirement with no matching instances.

This new behavior is especially challenging because the set of instances required for auto-passing ("are there any landmarks on the page?") is different from the set of instances required for visualization ("what are all the main landmarks on the page?").

To support this, I've added 2 new features to AssessmentStore's Requirement processing:

  • A new isVisualizationSupportedForResult property which Requirements can use to indicate that certain results returned by their analyzers should not support being visualized (by default, all results support visualization)
  • A new getInitialManualTestStatus property which manual Requirements can use to infer an initial pass/fail state after scanning for results (by default, a manual requirement is put in the UNKNOWN state regardless of analysis results)

The updated requirements use these features as follows: their analyzers now use the unique-landmark rule to find all landmarks (rather than only main landmarks, as before); isVisualizationSupportedForResult limits visualization to just the main landmarks (matching the old behavior); and getInitialManualTestStatus sets a "PASS" status when the scan completes and no instances report landmark role data from the unique-landmark rule.
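As a sketch of how a requirement might wire these two hooks together (the type names and shapes below are simplified, illustrative stand-ins, not the actual accessibility-insights-web types):

```typescript
// Simplified stand-ins for the real Requirement/result types (illustrative only).
type ManualTestStatus = 'PASS' | 'FAIL' | 'UNKNOWN';

interface LandmarkResult {
    role: string; // landmark role reported by the unique-landmark rule
}

// "Primary content"-style requirement: the analyzer scans for ALL landmarks
// (so auto-pass can detect landmark-free pages), but only main landmarks
// are visualized, and the requirement starts as PASS only when the scan
// finds no landmarks at all.
const primaryContent = {
    key: 'primary-content',
    isVisualizationSupportedForResult: (result: LandmarkResult): boolean =>
        result.role === 'main',
    getInitialManualTestStatus: (results: LandmarkResult[]): ManualTestStatus =>
        results.length === 0 ? 'PASS' : 'UNKNOWN',
};

const noLandmarks: LandmarkResult[] = [];
const bannerOnly: LandmarkResult[] = [{ role: 'banner' }];
const mixed: LandmarkResult[] = [{ role: 'banner' }, { role: 'main' }];

console.log(primaryContent.getInitialManualTestStatus(noLandmarks)); // PASS
console.log(primaryContent.getInitialManualTestStatus(bannerOnly)); // UNKNOWN
console.log(mixed.filter(primaryContent.isVisualizationSupportedForResult).length); // 1
```

This captures why the two hooks are separate: auto-pass needs to see every landmark, while visualization filters the same result set down to main landmarks only.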

New behavior:

  • No landmarks: "Primary content" and "No repeating content" automatically pass
    (screen recording of no-landmarks interactions)

  • Only non-main landmarks: "Primary content" and "No repeating content" don't auto-pass and show no matching instances to visualize
    (screen recording of banner-landmark-only interactions)

  • Mix of main and non-main landmarks: "Primary content" and "No repeating content" don't auto-pass and visualize only the main landmark
    (screen recording of mixed-landmarks interactions)

Pull request checklist

  • Addresses an existing issue: Landmarks requirements confusing for target page with no (or multiple) main landmarks #2521
  • Ran yarn fastpass
  • Added/updated relevant unit test(s) (and ran yarn test)
  • Verified code coverage for the changes made. Check coverage report at: <rootDir>/test-results/unit/coverage
  • PR title AND final merge commit title both start with a semantic tag (fix:, chore:, feat(feature-name):, refactor:). See CONTRIBUTING.md.
  • (UI changes only) Added screenshots/GIFs to description above
  • (UI changes only) Verified usability with NVDA/JAWS

@dbjorge dbjorge marked this pull request as ready for review May 12, 2020 22:25
@dbjorge dbjorge requested a review from a team as a code owner May 12, 2020 22:25
@dbjorge dbjorge changed the title fix: clarify main landmark requirements for no-landmark pages fix: auto-pass main landmark requirements for no-landmark pages May 12, 2020
@dbjorge (Contributor, Author) commented May 14, 2020

Per discussion with Karan, Fer, and Peter: as a follow-up to this work, we'd like to look into similar auto-pass behavior for Sequence > CSS positioning and Semantics > CSS content when they have nothing available to visualize. I'm leaving them out of this PR since it's big enough already and has one motivating example. @ferBonnin will double-check whether there are any other requirements that might similarly benefit.

@dbjorge (Contributor, Author) commented May 14, 2020

I think this new behavior would benefit from an e2e test validating it. I have one mostly implemented already, but I'm leaving it for a separate PR to keep the scope of this one limited; it's already quite large.

@ferBonnin (Contributor) commented:
> @ferBonnin will double check whether there are any other requirements that might similarly benefit.

Checked: right now, those two are the only requirements that will benefit from this.

@karanbirsingh (Contributor) left a review comment:

one question about `<main>`

@dbjorge dbjorge merged commit 010f34e into microsoft:master May 14, 2020
@dbjorge dbjorge deleted the 2521-clarify-main-landmark-reqs branch May 14, 2020 21:58
dbjorge added a commit that referenced this pull request May 15, 2020
#### Description of changes

Adds new end-to-end tests that verify the auto-pass-on-no-landmarks behavior introduced in #2644. Updates the product code to add a new data-automation-id; no UI impact.

#### Pull request checklist
<!-- If a checklist item is not applicable to this change, write "n/a" in the checkbox -->
- [x] Addresses an existing issue: e2e test for behavior in #2521
- [x] Ran `yarn fastpass`
- [x] Added/updated relevant unit test(s) (and ran `yarn test`)
- [x] Verified code coverage for the changes made. Check coverage report at: `<rootDir>/test-results/unit/coverage`
- [x] PR title *AND* final merge commit title both start with a semantic tag (`fix:`, `chore:`, `feat(feature-name):`, `refactor:`). See `CONTRIBUTING.md`.
- [n/a] (UI changes only) Added screenshots/GIFs to description above
- [n/a] (UI changes only) Verified usability with NVDA/JAWS
dbjorge added a commit that referenced this pull request May 15, 2020
…ning (#2687)

#### Description of changes

Per discussion in #2644, the new manual auto-pass behavior we added for some of the Landmarks requirements would also be nice to have for the Semantics > CSS content and Sequence > CSS positioning requirements when their visualization analyzers find no relevant elements.

This PR generalizes the "auto-pass if no landmarks" behavior to an "auto-pass if no results relevant to the requirement" behavior and adds it to the two CSS requirements. To do this, it updates the assessment store to pass only instance data with results relevant to the current requirement when it calls a requirement's getInitialManualTestStatus function.

I considered an alternative approach: leaving the assessment store as-is and creating a factory function that lets each requirement specify its own `autoPassIfNoResultsFor(key)` rather than a single `autoPassIfNoResults`. I liked that it avoided adding complexity to `assessment-store`, which is already a very complex class, but I felt it was more valuable for data encapsulation to systematically prevent requirements from seeing or depending on other requirements' data.
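A minimal sketch of the generalized behavior described above (the helper, function, and type names here are illustrative assumptions, not the actual product code):

```typescript
type ManualTestStatus = 'PASS' | 'UNKNOWN';

// Each scanned instance records results keyed by requirement.
interface InstanceData {
    testStepResults: { [requirementKey: string]: unknown };
}

// Shared hook body: auto-pass a manual requirement when the scan found
// nothing relevant to it.
const autoPassIfNoResults = (relevant: InstanceData[]): ManualTestStatus =>
    relevant.length === 0 ? 'PASS' : 'UNKNOWN';

// The store filters instances down to the current requirement's results
// before invoking the hook, so a requirement never sees (or can come to
// depend on) other requirements' data.
function getInitialStatus(
    requirementKey: string,
    allInstances: InstanceData[],
): ManualTestStatus {
    const relevant = allInstances.filter(
        instance => requirementKey in instance.testStepResults,
    );
    return autoPassIfNoResults(relevant);
}

console.log(getInitialStatus('css-positioning', [])); // PASS
// An instance belonging to a different requirement doesn't block auto-pass:
console.log(
    getInitialStatus('css-positioning', [{ testStepResults: { 'css-content': {} } }]),
); // PASS
```

Because the filtering lives in the store rather than in each requirement's hook, every requirement can share the same simple `autoPassIfNoResults` default, which is the encapsulation trade-off the paragraph above describes.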

![gif of 2 CSS requirements auto-passing on a page with no matching instances](https://user-images.githubusercontent.com/376284/81997746-89da2800-9605-11ea-8617-787a6b4b3f1b.gif)

#### Pull request checklist
<!-- If a checklist item is not applicable to this change, write "n/a" in the checkbox -->
- [n/a] Addresses an existing issue: #0000
- [x] Ran `yarn fastpass`
- [x] Added/updated relevant unit test(s) (and ran `yarn test`)
- [x] Verified code coverage for the changes made. Check coverage report at: `<rootDir>/test-results/unit/coverage`
- [x] PR title *AND* final merge commit title both start with a semantic tag (`fix:`, `chore:`, `feat(feature-name):`, `refactor:`). See `CONTRIBUTING.md`.
- [x] (UI changes only) Added screenshots/GIFs to description above
- [x] (UI changes only) Verified usability with NVDA/JAWS