From 1eb5df394432cb3e1bc787cc983938edce62c627 Mon Sep 17 00:00:00 2001 From: sajkaj Date: Thu, 14 Nov 2019 18:59:16 -0500 Subject: [PATCH] Submissions from Detlev Fischer on 15 November. --- conformance-challenges/index.html | 113 ++++++++++++++++++++++++++++++ 1 file changed, 113 insertions(+) diff --git a/conformance-challenges/index.html b/conformance-challenges/index.html index 8fe14196fb..f0dbdc27ff 100644 --- a/conformance-challenges/index.html +++ b/conformance-challenges/index.html @@ -84,6 +84,7 @@

Identify Input Purpose (Success Criterion 1.3.5)

Use of Color (Success Criterion 1.4.1)

This poses the same challenges as Sensory Characteristics (Success Criterion 1.3.3). Programmatically discerning whether a page fails this criterion requires understanding the full meaning of the related content on the page and whether any meaning conveyed by color is also conveyed in another fashion (e.g. whether the meaning of the colors in a bar chart is conveyed in the body of associated text, with a striping/stippling pattern on the bars, or in some other way).

+

Audio Control[36] (Success Criterion 1.4.2)

An automated test tool would be able to identify media/audio content in a website, identify whether auto-play is turned on in the code, and also determine the duration. However, an automated checking tool
@@ -124,11 +125,123 @@
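The automatable portion described above can be sketched as a static scan of the markup. A minimal sketch using Python's standard-library `html.parser`, assuming static HTML input; the helper name `find_autoplay_candidates` is illustrative, and actual playback duration cannot be determined without loading the media itself:

```python
from html.parser import HTMLParser

class AutoplayFinder(HTMLParser):
    """Flags <audio>/<video> elements that autoplay without visible controls."""
    def __init__(self):
        super().__init__()
        self.flagged = []

    def handle_starttag(self, tag, attrs):
        if tag in ("audio", "video"):
            names = {name for name, _ in attrs}
            # autoplaying media without controls is a candidate failure of
            # SC 1.4.2 if it plays longer than 3 seconds (duration needs a
            # human or a media-loading step to verify)
            if "autoplay" in names and "controls" not in names:
                self.flagged.append(tag)

def find_autoplay_candidates(html: str):
    finder = AutoplayFinder()
    finder.feed(html)
    return finder.flagged
```

A candidate list like this still needs a human pass to confirm duration and the presence of an audio-control mechanism elsewhere on the page.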

Non-text Contrast (Success Criterion 1.4.11)

Text Spacing[45] (Success Criterion 1.4.12)

This success criterion involves using a tool or method to modify the text spacing and then checking to ensure no content is truncated or overlapping. This is purely a manual process.
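Although the truncation/overlap check is manual, the override values themselves come straight from SC 1.4.12: line height at least 1.5 times, spacing after paragraphs at least 2 times, letter spacing at least 0.12 times, and word spacing at least 0.16 times the font size. A small helper, with an illustrative name, that computes the minimums a tester should apply before the visual check:

```python
def text_spacing_minimums(font_size_px: float) -> dict:
    """Minimum spacing values (in px) that SC 1.4.12 requires content to
    survive, given the element's font size in px."""
    return {
        "line_height": 1.5 * font_size_px,
        "paragraph_spacing": 2.0 * font_size_px,
        "letter_spacing": 0.12 * font_size_px,
        "word_spacing": 0.16 * font_size_px,
    }
```

Applying these values (e.g. via a bookmarklet or user stylesheet) and then looking for clipped or overlapping content remains the manual part.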

+
+

Content on Hover or Focus (Success Criterion 1.4.13)

+

As the content needs to be brought up by hovering the pointer over it (or focusing it with the keyboard) before one can determine whether the three conditions of 1.4.13 are met, this is by necessity a human test.

+

Keyboard Operable[47] (Success Criterion 2.1.1)

Ensuring keyboard operability of all functionality requires a very manual process of navigating through the content to verify that all interactive elements are in the tab order and can be fully operated using the keyboard.
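A rough automated pre-filter is still possible before that manual pass: statically flag elements that carry a click handler but are neither natively focusable nor given a tabindex. A sketch using Python's `html.parser` (names illustrative; it only sees inline `onclick` attributes, so handlers attached from scripts are missed):

```python
from html.parser import HTMLParser

# elements that are natively keyboard-focusable (rough: an <a> without
# href is not actually focusable, so this is only a pre-filter)
FOCUSABLE = {"a", "button", "input", "select", "textarea"}

class ClickableScanner(HTMLParser):
    """Flags clickable elements that appear not to be keyboard-operable."""
    def __init__(self):
        super().__init__()
        self.suspects = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if "onclick" in a and tag not in FOCUSABLE and "tabindex" not in a:
            self.suspects.append(tag)

def find_unfocusable_clickables(html: str):
    scanner = ClickableScanner()
    scanner.feed(html)
    return scanner.suspects
```

Anything flagged still needs the manual tab-through to confirm it cannot be reached or operated by keyboard.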

+
+

Character Key Shortcuts (Success Criterion 2.1.4)

+

Single key presses can be applied to content via a script, but whether and what these keypresses trigger can only be determined by additional human checks.

+
+
+

Timing Adjustable (Success Criterion 2.2.1)

+

There is no easy way to check automatically whether timing is adjustable. Ways of controlling the time-out differ in naming, position, and approach (including dialogs/popups before the time-out) and depend on the way the server registers user interactions (e.g. for automatically extending the time-out).
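One narrow case that can be automated is the legacy meta refresh, which imposes a time limit the user cannot adjust. A sketch, assuming static HTML input (the helper name is illustrative):

```python
from html.parser import HTMLParser

class MetaRefreshFinder(HTMLParser):
    """Flags <meta http-equiv="refresh"> with a nonzero timeout."""
    def __init__(self):
        super().__init__()
        self.timeouts = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and (a.get("http-equiv") or "").lower() == "refresh":
            # content is "<seconds>" or "<seconds>;url=..."
            seconds = (a.get("content") or "").split(";")[0].strip()
            if seconds.isdigit() and int(seconds) > 0:
                self.timeouts.append(int(seconds))

def find_meta_refresh_timeouts(html: str):
    finder = MetaRefreshFinder()
    finder.feed(html)
    return finder.timeouts
```

Script-driven time-outs, by contrast, remain the human territory described above.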

+
+
+

Pause, Stop, Hide (Success Criterion 2.2.2)

+

Typically, the requirement to control moving content is met by interactive controls placed in the vicinity of the moving content or, more rarely, at a general level at the beginning of the content. The fact that position and naming vary means that the assessment is mostly a human task (it will involve checking that the function works as expected).
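A pre-check can at least flag legacy moving-content markup and likely animated images for the human review described above. A sketch with Python's `html.parser` (the `.gif` check is only a heuristic; names illustrative):

```python
from html.parser import HTMLParser

class MovingContentFinder(HTMLParser):
    """Flags markup that commonly produces moving content."""
    def __init__(self):
        super().__init__()
        self.found = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        # legacy elements that move content with no built-in pause control
        if tag in ("marquee", "blink"):
            self.found.append(tag)
        # heuristic only: a .gif may be animated; a human must verify
        if tag == "img" and (a.get("src") or "").lower().endswith(".gif"):
            self.found.append("img (possibly animated gif)")

def find_moving_content(html: str):
    finder = MovingContentFinder()
    finder.feed(html)
    return finder.found
```

CSS- and script-driven animation is not visible to a scan like this, which is part of why the assessment stays mostly human.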

+
+
+

Three Flashes or Below Threshold (Success Criterion 2.3.1)

+

This may be automatable, but it involves an assessment of the area of flashing, so it is probably a human task. As this failure hardly ever occurs, I mention it only for completeness.

+
+
+

Bypass Blocks (Success Criterion 2.4.1)

+

You can determine whether native elements or landmark roles are used, but probably not whether they adequately structure the content (are they missing sections that should be included?). The same assessment would be needed when other techniques are used (structuring by headings, skip links).
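The detectable part (are native landmark elements, landmark roles, headings, or same-page links present at all?) can be sketched as follows, assuming static HTML; whether they adequately structure the content remains the human judgment (names illustrative):

```python
from html.parser import HTMLParser

class BypassScanner(HTMLParser):
    """Records mechanisms commonly used to satisfy SC 2.4.1."""
    def __init__(self):
        super().__init__()
        self.mechanisms = set()

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag in ("main", "nav", "header", "footer", "aside"):
            self.mechanisms.add("landmark")
        if a.get("role") in ("main", "navigation", "banner", "contentinfo"):
            self.mechanisms.add("landmark")
        if tag in ("h1", "h2", "h3", "h4", "h5", "h6"):
            self.mechanisms.add("heading")
        # same-page links are skip-link candidates, nothing more
        if tag == "a" and (a.get("href") or "").startswith("#"):
            self.mechanisms.add("same-page link")

def find_bypass_mechanisms(html: str):
    scanner = BypassScanner()
    scanner.feed(html)
    return sorted(scanner.mechanisms)
```

An empty result is a strong failure signal; a non-empty one only tells the human reviewer where to start.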

+
+
+

Page titled (Success Criterion 2.4.2)

+

It is easy to check automatically whether the page has a title, but checking that the title is descriptive is a human task. Matching the title against the h1 will not cover passing cases where title and h1 differ.
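The automatable part (title present and non-empty, plus an advisory comparison against the h1) might look like this, assuming static HTML input; the "descriptive" judgment stays manual, and a mismatch with the h1 is only a hint, not a failure (names illustrative):

```python
from html.parser import HTMLParser

class TitleH1Collector(HTMLParser):
    """Collects the text of <title> and the first <h1>."""
    def __init__(self):
        super().__init__()
        self._in = None
        self.title = ""
        self.h1 = ""

    def handle_starttag(self, tag, attrs):
        if tag in ("title", "h1"):
            self._in = tag

    def handle_endtag(self, tag):
        if tag == self._in:
            self._in = None

    def handle_data(self, data):
        if self._in == "title":
            self.title += data
        elif self._in == "h1":
            self.h1 += data

def check_page_title(html: str) -> dict:
    c = TitleH1Collector()
    c.feed(html)
    return {
        "has_title": bool(c.title.strip()),
        "matches_h1": c.title.strip() == c.h1.strip(),
    }
```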

+
+
+

Focus Order (Success Criterion 2.4.3)

+

Focus handling with dynamic content (moving content into a custom dialog, keeping focus in the dialog, returning focus to the trigger) does not look like it will be automatable any time soon.

+
+
+

Link Purpose (Success Criterion 2.4.4)

+

Automated tools can check for the existence of identically named links and probably check whether they are qualified programmatically, but checking whether the context adequately serves to describe the link purpose still seems to involve human judgment.
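Collecting identically named links that point at different targets, as candidates for the human context check, can be sketched like this (assuming static HTML; names illustrative):

```python
from collections import defaultdict
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Maps visible link text to the set of targets it points at."""
    def __init__(self):
        super().__init__()
        self._href = None
        self._text = ""
        self.links = defaultdict(set)

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href") or ""
            self._text = ""

    def handle_data(self, data):
        if self._href is not None:
            self._text += data

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links[self._text.strip().lower()].add(self._href)
            self._href = None

def find_ambiguous_links(html: str):
    c = LinkCollector()
    c.feed(html)
    # same visible text, different targets: needs the human context check
    return {text: sorted(hrefs) for text, hrefs in c.links.items()
            if len(hrefs) > 1}
```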

+
+
+

Multiple ways (Success Criterion 2.4.5)

+

Automatically checking for the presence of several ways (e.g. navigation and search) helps, but may miss cases where the exception holds (all pages can be reached from anywhere), so a human check is needed; then again, algorithms may already exist to check that.
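Detecting two common "ways", a navigation landmark and a search facility, can be sketched as follows (static HTML; the exception that all pages are reachable from anywhere is not covered; names illustrative):

```python
from html.parser import HTMLParser

class WaysScanner(HTMLParser):
    """Detects two common 'ways' to locate pages: navigation and search."""
    def __init__(self):
        super().__init__()
        self.ways = set()

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "nav" or a.get("role") == "navigation":
            self.ways.add("navigation")
        if a.get("role") == "search" or (tag == "input" and a.get("type") == "search"):
            self.ways.add("search")

def count_ways(html: str):
    scanner = WaysScanner()
    scanner.feed(html)
    return sorted(scanner.ways)
```

Fewer than two detected ways is only a prompt for human review, since site maps, tables of contents, and link lists would also count.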

+
+
+

Headings and Labels (Success Criterion 2.4.6)

+

Determining whether headings and labels are descriptive depends on an assessment of the context of the web content headed or labeled and is therefore predominantly a human assessment.

+
+
+

Pointer Gestures (Success Criterion 2.5.1)

+

I am not aware of an automated check that would detect complex gestures; even when a script indicates the presence of particular events such as touchstart, the handler called would need to be checked in a human evaluation.
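The "script indicates the presence of particular events" pre-check could be a simple scan of script source for gesture-related event names; the handler logic itself still needs the human evaluation (the event-name list is an assumed, non-exhaustive selection; names illustrative):

```python
import re

# event names whose presence suggests path-based or multipoint gestures;
# non-exhaustive, and presence alone proves nothing about SC 2.5.1
GESTURE_EVENTS = re.compile(
    r"\b(touchstart|touchmove|gesturestart|gesturechange|pointermove)\b")

def find_gesture_events(script_source: str):
    """Returns the gesture-related event names mentioned in script text."""
    return sorted(set(GESTURE_EVENTS.findall(script_source)))
```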

+
+
+

Pointer Cancellation (Success Criterion 2.5.2)

+

Where mouse-down events are used (this can be detected automatically), checking for one of the four options that satisfy the criterion definitely looks like a human task.

+
+
+

Motion Actuation (Success Criterion 2.5.4)

+

Suspect events may be detected automatically, but whether there are equivalents for achieving the same thing via user interface components will require a human check.

+
+
+

On Focus (Success Criterion 3.2.1)

+

I suspect it is hard to check automatically whether a change caused by +focusing should be considered a change of content or context.

+
+
+

On Input (Success Criterion 3.2.2)

+

I suspect it is hard to check automatically whether a change caused by user input should be considered a change of content or context, or to automatically detect whether relevant advice exists before the component in question.

+
+
+

Error Identification (Success Criterion 3.3.1)

+

Whether the error message correctly identifies and describes the error will +often involve human judgment.

+
+
+

Labels or Instructions (Success Criterion 3.3.2)

+

Edge cases (is a label close enough to a component to be perceived as its visible label?) will need a human check. Some labels may be programmatically linked but hidden or positioned far away. Whether instructions are necessary and need to be provided will hinge on the content; a human check is needed.
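The programmatic-linking part can be pre-checked automatically; closeness and perceivability cannot. A sketch that flags form fields with neither a `label[for]` association nor an ARIA label (static HTML; names illustrative):

```python
from html.parser import HTMLParser

class LabelScanner(HTMLParser):
    """Collects label targets and form fields to find unlabeled inputs."""
    def __init__(self):
        super().__init__()
        self.labeled_ids = set()
        self.fields = []  # (tag, id, has_own_label)

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "label" and "for" in a:
            self.labeled_ids.add(a["for"])
        if tag in ("input", "select", "textarea") and a.get("type") != "hidden":
            own = "aria-label" in a or "aria-labelledby" in a
            self.fields.append((tag, a.get("id"), own))

def find_unlabeled_fields(html: str):
    scanner = LabelScanner()
    scanner.feed(html)
    return [(tag, fid) for tag, fid, own in scanner.fields
            if not own and fid not in scanner.labeled_ids]
```

A clean result here still says nothing about whether the labels are visible, nearby, or sufficient, which is the human part.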

+
+
+

Error Suggestion (Success Criterion 3.3.3)

+

Whether a suggestion is helpful and correct will often involve human judgment.

+
+
+

Name, Role, Value (Success Criterion 4.1.2)

+

Incorrect use of ARIA constructs can be detected automatically, but constructs that appear correct may still not work, and widgets that have no ARIA (but would need it to be understood) can go undetected. A human post-check of automatic checks seems necessary.
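One detectable class of incorrect ARIA use is a role missing its required state, as defined in the WAI-ARIA specification. A sketch covering a few roles (the REQUIRED_STATES table is a small excerpt, not the full spec; names illustrative):

```python
from html.parser import HTMLParser

# a small excerpt of required states per role from the WAI-ARIA spec
REQUIRED_STATES = {
    "checkbox": "aria-checked",
    "radio": "aria-checked",
    "slider": "aria-valuenow",
    "combobox": "aria-expanded",
}

class AriaScanner(HTMLParser):
    """Flags elements whose ARIA role lacks its required state attribute."""
    def __init__(self):
        super().__init__()
        self.violations = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        needed = REQUIRED_STATES.get(a.get("role"))
        if needed and needed not in a:
            self.violations.append((tag, a["role"], needed))

def find_missing_aria_states(html: str):
    scanner = AriaScanner()
    scanner.feed(html)
    return scanner.violations
```

This catches only one failure class; widgets that carry no ARIA at all, or whose attributes are syntactically correct but not updated by script, are exactly the cases that require the human post-check.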

+

Challenge #2: Large, complex, and dynamic websites are always “under construction”