diff --git a/conformance-challenges/index.html b/conformance-challenges/index.html index 678e3ccde0..68e2a5ed9a 100644 --- a/conformance-challenges/index.html +++ b/conformance-challenges/index.html @@ -215,79 +215,174 @@

Ensuring keyboard operability of all functionality currently requires a human to manually navigate through content, verifying that all interactive elements are in the tab order and can be fully operated using the keyboard.

-

Character Key Shortcuts (Success Criterion 2.1.4)

+

Character Key Shortcuts +

+ +

Single key presses can be applied to content via a script, but whether these key presses trigger anything, and what they trigger, can only be determined by additional human evaluation.

-

Timing Adjustable (Success Criterion 2.2.1)

+

Timing Adjustable +

+ +

There is currently no easy way to automate checking whether timing is adjustable. Ways of controlling timing differ in naming, position, and approach (including dialogs/pop-ups shown before the time-out) and depend on how the server registers user interactions (e.g. for automatically extending the time-out).

-

Pause, Stop, Hide (Success Criterion 2.2.2)

+

Pause, Stop, Hide +

+ +

Typically the requirement to control moving content is met by interactive controls placed in the vicinity of the moving content, or occasionally at the beginning of the content. Since position and naming vary, this assessment cannot currently be automated (it involves checking that the control works as expected).

-

Three Flashes or Below Threshold (Success Criterion 2.3.1)

+

Three Flashes or Below Threshold +

+ +

There are currently no known automated tests that can accurately assess areas of flashing on a web page.

-

Bypass Blocks (Success Criterion 2.4.1)

+

Bypass Blocks +

+ +

While it can be determined that native elements or landmark roles are used, there is currently no automated way to determine whether they adequately structure the content (e.g. whether they omit sections that should be included). The same assessment would be needed when other Techniques are used (structuring by headings, skip links).
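The automatable half of this check (detecting that landmark elements or roles exist at all) can be sketched with Python's standard-library HTML parser. The tag and role sets below are illustrative assumptions, not an exhaustive mapping, and whether the detected landmarks adequately structure the page remains a human judgment:

```python
from html.parser import HTMLParser

# Landmark-bearing native elements and explicit ARIA landmark roles.
# These sets are an assumption for illustration, not the complete mapping.
LANDMARK_TAGS = {"main", "nav", "header", "footer", "aside"}
LANDMARK_ROLES = {"main", "navigation", "banner", "contentinfo",
                  "complementary", "search", "region"}

class LandmarkScanner(HTMLParser):
    """Records each landmark encountered, by explicit role or tag name."""
    def __init__(self):
        super().__init__()
        self.landmarks = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        role = attrs.get("role")
        if tag in LANDMARK_TAGS or role in LANDMARK_ROLES:
            self.landmarks.append(role or tag)

def find_landmarks(html):
    scanner = LandmarkScanner()
    scanner.feed(html)
    return scanner.landmarks
```

A tool can report that `find_landmarks('<main></main><div role="navigation"></div>')` yields `['main', 'navigation']`, but only a human can say whether those landmarks cover all the sections they should.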

-

Page titled (Success Criterion 2.4.2)

+

Page titled +

+ +

Automating a check for whether the page has a title is simple; ensuring that the title is meaningful and provides adequate context is not.
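The simple half, verifying that a non-empty title element exists, can be sketched in a few lines of standard-library Python (the helper name is hypothetical); judging whether the title is meaningful still takes a human:

```python
from html.parser import HTMLParser

class TitleChecker(HTMLParser):
    """Collects the text content of any <title> element encountered."""
    def __init__(self):
        super().__init__()
        self._in_title = False
        self.title = ""

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self._in_title = True

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title += data

def has_nonempty_title(html):
    """True when the page has a <title> with non-whitespace content."""
    checker = TitleChecker()
    checker.feed(html)
    return bool(checker.title.strip())
```

This passes a page titled "Home" and fails a page whose title is blank, but it cannot tell "Home" apart from "Untitled document".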

-

Focus Order (Success Criterion 2.4.3)

+

Focus Order +

+ +

There is currently no way to automate checking that focus handling with dynamic content (e.g. moving focus to a custom dialog, keeping focus in the dialog, returning focus to the trigger) follows a logical order.

-

Link Purpose (Success Criterion 2.4.4)

+

Link Purpose (In Context) +

+ +

Automated tests can check for the existence of links with the same name, as well as check whether links are qualified programmatically, but checking whether the link text adequately describes the link purpose still involves human judgment.
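The automatable part, finding link texts that point at more than one distinct URL, might look like the following Python sketch (function names are hypothetical); whether any given link text adequately describes its purpose still calls for human judgment:

```python
from collections import defaultdict
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collects (text, href) pairs for every <a> element."""
    def __init__(self):
        super().__init__()
        self._href = None
        self._text = []
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append(("".join(self._text).strip(), self._href))
            self._href = None

def ambiguous_link_names(html):
    """Return link texts that point at more than one distinct URL."""
    collector = LinkCollector()
    collector.feed(html)
    targets = defaultdict(set)
    for text, href in collector.links:
        targets[text].add(href)
    return sorted(text for text, hrefs in targets.items() if len(hrefs) > 1)
```

Given two "Read more" links with different targets, the check flags "Read more"; it cannot decide whether the surrounding context disambiguates them.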

-

Multiple ways (Success Criterion 2.4.5)

+

Multiple ways +

+ +

Automated tests can validate whether pages can be reached in multiple ways (e.g. navigation and search), but will miss cases where exceptions apply (e.g. when all pages can be reached from anywhere) and still require human validation.

-

Headings and Labels (Success Criterion 2.4.6)

+

Headings and Labels +

+ +

Determining whether headings and labels are descriptive depends on an assessment of the context of the web content being headed or labeled and is therefore predominantly a human assessment.

-

Pointer Gestures (Success Criterion 2.5.1)

+

Pointer Gestures +

+ +

There are currently no known automated checks that would accurately detect complex gestures; even when a script indicates the presence of particular events like touchstart, the handler invoked would need to be checked by human evaluation.

-

Pointer Cancellation (Success Criterion 2.5.2)

+

Pointer Cancellation +

+ +

The use of mouse-down events can be detected automatically, but checking that one of the four conditions that satisfy the criterion is met remains a human task.

-

Motion Actuation (Success Criterion 2.5.4)

+

Motion Actuation +

+ +

Suspect events may be detected automatically, but whether equivalents exist for achieving the same result via user interface components will require a human check.

-

On Focus (Success Criterion 3.2.1)

+

On Focus +

+ +

There is currently no reliable way to accurately automate checking whether a change caused by moving focus should be considered a change of content or context.

-

On Input (Success Criterion 3.2.2)

+

On Input +

+ +

There is currently no reliable way to accurately automate checking whether changing the setting of any user interface component should be considered a change of content or context, or to automatically detect whether relevant advice exists before using the component in question.

-

Error Identification (Success Criterion 3.3.1)

+

Error Identification +

+ +

Ensuring that an error message correctly identifies and accurately describes an error currently requires human judgment.

-

Labels or Instructions (Success Criterion 3.3.2)

+

Labels or Instructions +

+ +

Edge cases (e.g. whether labels are close enough to a component to be perceived as its visible label) will require a human check. Some labels may be programmatically linked but hidden or visually separated from the element to which they are linked. Whether instructions are necessary and need to be provided depends on the content, again requiring a human check.

-

Error Suggestion (Success Criterion 3.3.3)

+

Error Suggestion +

+ +

Whether a suggestion is helpful and correct currently requires human judgment.

-

Name, Role, Value (Success Criterion 4.1.2)

+

Name, Role, Value +

+ +

Incorrect use of ARIA constructs can be detected automatically, but constructs that appear correct may still not work, and widgets that have no ARIA (but need it to be understood) can go undetected. A human post-check of automated results is still necessary.
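A minimal sketch of such an automatic check, using Python's standard-library parser: it flags elements whose role plausibly requires an accessible name but that carry neither aria-label nor aria-labelledby. The role subset is an assumption for illustration, and real accessible-name computation also considers element content, label association, and title, so results like these always need the human post-check described above:

```python
from html.parser import HTMLParser

# A few ARIA widget roles that require an accessible name.
# This subset is an illustrative assumption, not the full ARIA spec.
NAME_REQUIRED_ROLES = {"button", "checkbox", "radio", "slider", "tab"}

class RoleNameChecker(HTMLParser):
    """Flags elements whose role requires a name but that have neither
    aria-label nor aria-labelledby. Real accessible-name computation
    also considers content, <label> association, title, etc., so this
    check over-reports and needs human review."""
    def __init__(self):
        super().__init__()
        self.unnamed = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        role = attrs.get("role")
        if role in NAME_REQUIRED_ROLES:
            if not attrs.get("aria-label") and not attrs.get("aria-labelledby"):
                self.unnamed.append((tag, role))

def unnamed_widgets(html):
    checker = RoleNameChecker()
    checker.feed(html)
    return checker.unnamed
```

For example, a bare `<div role="button">` is flagged while one carrying `aria-label="Save"` is not; whether a flagged element actually lacks a name in the accessibility tree is for the human reviewer to confirm.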