Scoping automated testing steps #95
I've started a draft spreadsheet to help me keep track of everything we can do as we scope it down. I've already marked some WCAG guidelines as not being relevant to JupyterLab (like having captions for videos).
We had a longer discussion around a sub-task for this issue at our April 20 meeting (#99): which JupyterLab “pages” (axe-core’s understanding of a single state in JupyterLab) do we want to start testing? The proposal in #97 listed this as 2–5 “pages” for JupyterLab to begin testing with. I asked for feedback on my initial thoughts, and we discussed a few different approaches to this decision-making:
After discussion, we agreed to begin this first six weeks of testing with a focus on
since they are the states that precede all others in a user’s interaction. We acknowledge that this approach does not yet include major areas of the interface that will be critical to JupyterLab’s accessibility (such as the top menus or the settings editor) and that it will need to in the future. I in particular want to make sure these are covered, but I also agree with the counterargument that this would involve more states, sooner, than we have the structure to handle.
I have a first pass at ideas for what @gabalafou called the "Three to five handwritten machine-as-a-user tests" (in #97). This truly is my first attempt, so I expect to rework this a ton based on feedback. But now we have something to critique! Also, feedback on format is as welcome as content; I'm not sure this is the best way to communicate this to y'all.

## How I chose these options

For a little background, I chose the following based on:
If you find any issue with this approach, it'd be good to know. (This was all done in the aforementioned draft spreadsheet.)

## Test proposals

These are broken up into the WCAG area they reference, how I think we could interpret them as success criteria specifically in JupyterLab (rather than the success criteria they define for all web content), and a list of steps I think would help us test for this success criteria (written from a manual testing perspective).

### 1.3.4 - Orientation

#### Proposed JLab success criteria

JupyterLab is responsive. When switched to portrait orientation or viewed on mobile, no UI content is lost.

#### Proposed step-by-step
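As a rough sketch of how this criterion could eventually become an automated assertion: the helper below compares the UI regions visible in each orientation. Everything here is hypothetical — in a real test, the region names would come from querying the page (e.g. via Playwright) at landscape and portrait viewport sizes.

```python
def lost_regions(visible_landscape, visible_portrait):
    """Return the UI regions visible in landscape but missing in portrait.

    The success criterion passes when this set is empty: switching to
    portrait orientation (or a mobile viewport) loses no UI content.
    Region names are whatever identifiers the test harness collects.
    """
    return set(visible_landscape) - set(visible_portrait)
```

A test would fail with the names of the missing regions, which also makes the report easy to act on.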
Note: I think the current portrait and/or mobile mode could be improved, but we can definitely start testing with what we have now.

### 2.1.2 No keyboard trap

#### Proposed JLab success criteria

Focusable areas in JupyterLab can all be unfocused. This will need to test multiple regions long term. For now, I think our success criteria should be that JupyterLab's menu bar can be focused and unfocused.

#### Proposed step-by-step
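One way this "no keyboard trap" criterion could be expressed as an assertion (a sketch only — the event log and the `menu-bar` identifier are placeholders for whatever a scripted keyboard walk-through would actually record):

```python
def menu_bar_is_not_a_trap(event_log):
    """Check that keyboard focus both entered and later left the menu bar.

    event_log is a sequence of (element, event) pairs recorded while a
    scripted user tabs through the interface, e.g.
    [("menu-bar", "focus"), ("menu-bar", "blur"), ...].
    Returns True only if focus reached the menu bar and escaped it again.
    """
    focused = False
    for element, event in event_log:
        if element == "menu-bar" and event == "focus":
            focused = True
        elif element == "menu-bar" and event == "blur" and focused:
            return True  # focus entered and left again: no trap
    return False
```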
### 2.4.3 Focus Order

#### Proposed JLab success criteria

In JupyterLab, areas can be focused in the following order:
(Giving credit! I was informed by this discussion in a past JupyterLab accessibility meeting when proposing this order.)

#### Proposed step-by-step
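The focus-order check above could boil down to comparing a recorded Tab sequence against the agreed order. The helper below is a sketch; the region names in `EXPECTED_FOCUS_ORDER` are illustrative placeholders, not the actual order proposed here.

```python
# Placeholder order for illustration only; the real list is whatever
# order this proposal settles on.
EXPECTED_FOCUS_ORDER = ["menu-bar", "left-sidebar", "main-area", "status-bar"]

def focus_order_matches(recorded, expected=EXPECTED_FOCUS_ORDER):
    """True when the regions reached by repeated Tab presses appear in
    the expected order, ignoring consecutive stops inside one region."""
    # Collapse consecutive duplicates (several tab stops inside one
    # region still count as that region once), then compare.
    collapsed = [r for i, r in enumerate(recorded)
                 if i == 0 or recorded[i - 1] != r]
    return collapsed == expected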
### 2.5.6 Concurrent input mechanisms

#### Proposed JLab success criteria

In JupyterLab, a single task can be completed using mouse, keyboard, and touch screen inputs, and the input mechanism can be switched even while completing a single task. This will need to test multiple regions long term. For now, I think our success criteria should be that JupyterLab can open a new notebook from the launcher with mouse, keyboard, and touch screen inputs.

#### Proposed step-by-step
I'm not totally sure how this works in terms of testing, but it is my understanding that the type of input can be simulated. Please let me know if I'm wrong about this.
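Assuming input types can indeed be simulated, the pass/fail condition for this criterion might reduce to "the same task gives the same result under every mechanism." A hypothetical helper (the mechanism names and result strings are placeholders for whatever the harness records):

```python
def task_outcomes_match(outcomes):
    """Check that one task produced the same result under every input
    mechanism.

    outcomes maps mechanism -> observed result, e.g.
    {"mouse": "notebook-opened", "keyboard": "notebook-opened",
     "touch": "notebook-opened"}.
    Fails if a mechanism is missing or any result differs.
    """
    required = {"mouse", "keyboard", "touch"}
    if not required <= outcomes.keys():
        return False
    return len({outcomes[m] for m in required}) == 1
```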
I had a few other thoughts while working on this that I wanted to write down. Most of these are about testing patterns that may help us long term.

## Categories of automated tests

Based on the content we are testing for accessibility, I think I'm seeing a pattern of certain approaches suiting different types of content. This isn't well thought out yet, but I could see it helping us know what kind of test we need for what.
## What needs to be tested

In my above comment and in last week's team update meeting, we talked a lot about what needs to be tested. Scoping tests is important both for our small team and for respecting contributor time (as has been mentioned in other issues). Through this work, I've also come to think certain accessibility needs fall under certain categories of how they can be optimally tested. I broke this up into tests that need to run on an entire "page," tests that do not, and tests that would benefit from being run on a section of a page or a single UI component.
I love this! I'm going to jot down a few quick thoughts before our team meeting:
Nope, just order of appearance in WCAG. I think we have a number of obstacles to overcome in order to get any handwritten tests like this started, and at this phase I'd rather focus on that infrastructure. Whichever test works the best with the rest of the work is what I'd prioritize for now.
Agreed! Because I don't see it in any other notes, this may also end up fitting nicely with the work already done at ACT rules.
I personally do not see the isolated UI as high priority. I do think long term it is ideal, but I think we're early enough in this process that I don't consider it a blocker or something worth changing your current goals for. I pointed it out only because I wanted to gauge how you all felt about that/just to generally share my thoughts so this is done as openly as possible. Other notes:
## What's changed

There is repeated content from above just so all the info stays together. Because I know this can feel like a lot at once, I want to point out what changed.
@gabalafou @tonyfast @trallard I'm @/ing you for review.

## Test proposals

### 1.3.4 - Orientation

#### Proposed JLab success criteria

JupyterLab is responsive. When switched to portrait orientation or viewed on mobile, no UI content is lost.

#### Proposed testing script
### 2.1.2 No keyboard trap

#### Proposed JLab success criteria

Focusable areas in JupyterLab can all be unfocused. This will need to test multiple regions long term. For now, I think our success criteria should be that JupyterLab's menu bar can be focused and unfocused.

#### Proposed testing script
### 2.4.3 Focus Order

#### Proposed JLab success criteria

In JupyterLab, areas can be focused in the following order:
#### Proposed testing script
### 2.5.6 Concurrent input mechanisms

#### Proposed JLab success criteria

In JupyterLab, a single task can be completed using mouse, keyboard, and touch screen inputs, and the input mechanism can be switched even while completing a single task. This will need to test multiple regions long term. For now, I think our success criteria should be that JupyterLab can open a new notebook from the launcher with mouse, keyboard, and touch screen inputs.

#### Proposed testing script
Questions!
Based on synchronous feedback, this is close to complete (for this first round of test development).
Responding to @trallard's comment, I think any of these scenarios can be tested first based on what's easiest for development to start with. If it's helpful for me to choose, I think
@isabela-pf to add the scripts as a PR to the accessibility repo
Re: myself, current testing script proposals and template are in Quansight-Labs/accessibility #6
Summary
Especially for the first iteration of what gets tested through CI, we need a path for what needs testing.
For example - menu bars, contrast
From @gabalafou
Tasks to complete
Roughly:
Format: I would suggest a lightweight version of our RFD template