
Potential setup script improvements #369

Open
jscholes opened this issue Dec 18, 2020 · 8 comments
Labels: Agenda+Community Group (to discuss in the next workstream summary meeting, usually the last teleconference of the month); enhancement (new feature or request); Requirements Specified (applied after community group consensus has been reached on an issue); test-runner; tests (about assistive technology tests)

Comments

@jscholes
Contributor

On a recent ARIA-AT CG call, we were discussing how page state should be reset between tests (see #358). Tied into that discussion were some thoughts about possible improvements to setup scripts, both in terms of how they are written and executed:

  • We need some method of sharing logic across setup scripts, or of chaining them for a test. For instance, the updated test plan for the editable combobox example (Create updated tests for APG design pattern example: Editable Combobox With Both List and Inline Autocomplete #355) includes 11 separate scripts which set focus on the combobox, and 8 which also expand the control. It would be great to write shared logic once, and indicate that it applies to a test in addition to any custom code for that test. Sequencing scripts may be the simplest approach.
  • Do we need a "tear down"/"reset to known good state" mechanism? In the related issue about resetting page state, the most recent proposed solution that multiple people agree with is a button on test pages to run the setup scripts. But there is a risk that any steps carried out by a previous script invocation won't be sufficiently undone, such as hiding an element or changing the accessible name of something. Note that we don't currently do the latter in any tests, but we may in the future.
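As a sketch of the script-sharing idea above: assuming setup scripts keep a one-argument shape (a function receiving the test page document), shared steps could be written once and sequenced per test. The helper and step names here are hypothetical, not existing ARIA-AT code.

```javascript
// Hypothetical helper: compose shared setup steps into one setup script.
// Each step receives the test page document, matching the assumed
// one-argument shape of existing setup scripts.
function chainSetupSteps(...steps) {
  return testPageDocument => {
    for (const step of steps) {
      step(testPageDocument);
    }
  };
}

// Shared steps, written once:
const focusCombobox = doc => {
  doc.querySelector('[role="combobox"]').focus();
};
const expandCombobox = doc => {
  doc.querySelector('[role="combobox"]').setAttribute('aria-expanded', 'true');
};

// Individual tests then just declare their sequence, plus any custom code:
const setupFocusedCombobox = chainSetupSteps(focusCombobox);
const setupExpandedCombobox = chainSetupSteps(focusCombobox, expandCombobox);
```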
@jscholes added the enhancement, Agenda+Community Group, and tests labels on Dec 18, 2020
@robfentress
Contributor

> But there is a risk that any steps carried out by a previous script invocation won't be sufficiently undone, such as hiding an element or changing the accessible name of something. Note that we don't currently do the latter in any tests, but we may in the future.

Do you have an example of where this may be the case or is this a more hypothetical concern?

@jscholes
Contributor Author

jscholes commented Mar 4, 2021

@robfentress

> Do you have an example of where this may be the case

Changes are made to pages by setup scripts in a number of our test plans to date. E.g.:

  • In select-only or editable combobox, the combobox value is set to something other than the default.
  • For checkboxes and similar controls, the state is set on page load and users are then prompted to change it. But tests are targeting a specific state transition like checked to not checked, so the state must be reset.
  • In disclosure FAQ, one of the setup scripts purposefully hides the "Navigate backwards from here" link so it doesn't get in the way during a test for which it isn't relevant.
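For illustration, a setup script for the checkbox case above might look like the following. The selector and function name are assumptions for this sketch, not the actual test plan code.

```javascript
// Hypothetical setup script for a "checked to not checked" transition test:
// the checkbox is forced into the checked state during setup, so the
// tester's action unchecks it. Running the test again without a reset
// would leave the checkbox unchecked, hence the need for a reset mechanism.
const setupCheckedCheckbox = testPageDocument => {
  const checkbox = testPageDocument.querySelector('[role="checkbox"]');
  checkbox.setAttribute('aria-checked', 'true');
  checkbox.focus();
};
```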

@jscholes
Contributor Author

Actionable next step for myself: write up an APG issue explaining the current problem and a suggested route forward, namely a global object for interacting with APG components.

@jscholes jscholes self-assigned this Mar 10, 2021
@jscholes
Contributor Author

> Do we need a "tear down"/"reset to known good state" mechanism?

The consensus from the March 4, 2021 community group meeting seemed to be: yes, we do need a mechanism for resetting page state between commands. But if we try to explicitly create this, it will complicate the test writing process and most likely leave out edge cases anyway.

As such, @mfairchild365 suggested the following approach:

  1. When a test page is initially loaded, provide a button to run the setup script(s). Give it an autofocus attribute.
  2. When that button is activated, execute the setup script(s), then change the button's name and function to "Reset" (or similar; it was discussed whether the label should be more descriptive, e.g. "Press to reset the page between commands", or whether that extra context should live in the accessible description instead).
  3. When the reset button is activated, completely reload the test page, returning the button to its default state and purpose (i.e. for running the setup script(s)). The autofocus attribute will ensure that it receives focus.
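A minimal sketch of the button behaviour described above, with the page reload injected as a callback so the logic can be exercised outside a browser; all names here are assumptions:

```javascript
// Hypothetical wiring for the setup/reset button. On first activation it
// runs the setup script(s) and relabels itself "Reset"; on the next
// activation it reloads the page (in a browser, reloadPage would call
// window.location.reload()), which restores the button's default state.
function initSetupButton(button, runSetupScripts, reloadPage) {
  button.addEventListener('click', () => {
    if (button.textContent === 'Reset') {
      reloadPage();
    } else {
      runSetupScripts();
      button.textContent = 'Reset';
    }
  });
}
```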

With the above in mind, a tester's journey through a test will look like:

  1. Open the test page.
  2. Activate the button to run the setup script(s).
  3. Carry out the test using the first command.
  4. Return to the button, which now says "Reset", and press it.
  5. When the test page reloads, activate the button to execute the setup script(s) again.
  6. Repeat until all commands have been tested.

@mcking65
Contributor

Another thing I love about this approach is that the "shortcut" for the reset button is simply the browser's refresh key. So, the process is explicitly defined on the page, but an experienced tester can easily use the shortcut if they prefer.

@jscholes added the Requirements Specified label on Mar 18, 2021
@mzgoddard
Contributor

As I wrote today in #450 (comment), I think we need to change the process of how test plans are created to provide a way to reset a test. A technical solution without a process solution would be hard to create and maintain and likely buggy.

I think there are two non-exclusive process solutions.

The first process solution is to use a copy of the reference page in place of each setup script. Instead of a script modifying the page in the browser, the test author makes a copy of the reference page with those modifications and uses it with each relevant individual test. Any scripting still needed, such as calling the focus method on an element, would be done by the reference copy for that test.

The second process solution is to add a small inline script to the head element of the reference page. This script calls a predetermined callback on the parent window. In effect, it emits an event like the load event; the difference is how the listener is set up. Listeners that the parent attaches to the test page are lost when the test page reloads, whereas a callback on the parent window can be set once and called by the child test page window on every load.
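The callback idea might be sketched like this, with the window objects passed in explicitly; the callback name and overall shape are assumptions for illustration:

```javascript
// Parent side (test runner): set the callback once. Unlike an event
// listener attached to the child window, this survives child reloads
// because it lives on the parent.
function registerTestPageCallback(parentWindow, callback) {
  parentWindow.onTestPageReady = callback;
}

// Child side: the equivalent of the small inline script in the reference
// page's head element, run on every load, including reloads.
function notifyParentReady(childWindow) {
  const parent = childWindow.parent;
  if (parent && typeof parent.onTestPageReady === 'function') {
    parent.onTestPageReady(childWindow);
  }
}
```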

I think in either case a change in process is needed. Knowing how to set up or reset a test's reference page is deeply tied to that specific reference page and test plan.

@jscholes
Contributor Author

> The first process solution is to use a copy of the reference page in place of each setup script.

This is a non-starter. It would add a ton of extra work, not only when creating the tests but also when modifying them, because there would be multiple copies of the entire page.

> The second process solution is to add a small inline script into the head element of the reference page.

Question: why can't the head section just contain a direct reference to the setup script on the server, plus some code to run it when the button is clicked? Then the example page would be self-contained.

This aspect of the parent window is what concerns me most, because it creates a dependency on the page invoking the example, and that dependency is why we're struggling to refresh it. Why is the parent window, i.e. the test runner, 100% required by the example page? Is it so we can close the window automatically when someone navigates to another test, as requested in last week's community group meeting?

@s3ththompson
Member

@jscholes I think we would benefit from a discussion of these approaches before writing anything off wholesale. @mzgoddard has put a lot of thought into the architecture here (as I know you have too), and I think he's aware of, and interested in discussing, the tradeoffs inherent in the tension between self-contained tests (for simplicity's sake), code reuse (to lower the cost of contributing new test plans), and modularity (for the sake of technical flexibility and upgradability).

I'll send an email to set up some time for an audio call where we can discuss the above issues in a bit more depth.
