
Human review isn't all bad #986

Closed
sajkaj opened this issue Dec 6, 2019 · 15 comments
Labels
Challenges with Conformance Issues relating to the document at https://w3c.github.io/wcag/conformance-challenges/

Comments

@sajkaj
Contributor

sajkaj commented Dec 6, 2019

Note: This issue was isolated out of several issues that came up during discussion
in issue #956, where everything else is now addressed in the document:
#956

comment by @alastc on 2019-11-19 16:55:11 +0000 UTC

The challenges are stated as though there is a flaw in having human
review, which I disagree with. (Generally the solutions are likely to be
based on sampling and/or process, rather than the core requirement.)

Each of the challenges has a flip-side, e.g.:

  1. Relying on automated results would result in a poor experience.
  2. Dynamic/personalised areas that don't get tested could be
    problematic. (In the real world we don't let people into the
    under-construction areas!)
  3. 3rd party content is just moving the responsibility of certain
    content, and could open a loophole (e.g. for adverts under the EU
    directive).
  4. Straying out of 'web' is a charter issue, that's another
    conversation.

We need to acknowledge there is another side to each of these issues.

Also, needs to be clear that the goal is still to provide a good
experience for people with disabilities, just without the binary
pass/fail that is applied to the whole site at once.

I can do a suggested PR if that helps?

@sajkaj sajkaj added the Challenges with Conformance Issues relating to the document at https://w3c.github.io/wcag/conformance-challenges/ label Dec 6, 2019
@sajkaj sajkaj added this to the Conformance Challenges FPWD milestone Dec 6, 2019
@sajkaj
Contributor Author

sajkaj commented Dec 6, 2019

Hmmm, the mysteries of the assignees form? Let's see whether this comment will assign this issue to @peterkorn

@alastc
Contributor

alastc commented Dec 6, 2019

It doesn't appear that Peter has an account attached to this repo, which is odd... I can't get that account to appear in the assignees anyway.

@sajkaj
Contributor Author

sajkaj commented Dec 6, 2019

Alastair, something's weird. Peter Korn was in the list a few minutes ago. It now numbers 93; it used to be 94. Very odd.

@alastc
Contributor

alastc commented Dec 6, 2019

Hmm, I hope it wasn't due to me moving the card around in the project, but I can't see why it would be.

@michael-n-cooper we seem to have lost @peterkorn from our list of assignees, is this something you could help with?

@michael-n-cooper
Member

I don't think @peterkorn ever was on the list of assignees. I've added him; he will need to accept an invite for it to activate.

@peterkorn

peterkorn commented Dec 6, 2019 via email

@peterkorn

peterkorn commented Dec 8, 2019

@alastc - I've taken a stab at what you are asking. See the paragraph below, which I propose inserting as the new 2nd paragraph in the new "Goals" section of the 2 Dec Editor's Draft at https://w3c.github.io/wcag/conformance-challenges/. I'm not thrilled with the closing sentence of the paragraph, but I personally feel it is good enough for a FPWD.

"It is important to recognize that the places where WCAG 2.x conformance applies poorly, and the places where conformance verification scales poorly, nonetheless remain important to achieving accessibility. For example, while requiring human judgement to validate a page may not scale (the core of Challenge #1 below), absent that human judgement it may not be possible to deliver a fully accessible web page. Similarly, while it may not be possible to ensure that all 3rd party content is fully accessible (the subject of Challenge #3 below), absent review of that content by a human sufficiently versed in accessibility it may again not be possible to ensure that content is fully accessible. Human judgement is a core part of much of WCAG 2.x for good reasons, and the challenges that arise from it are important to successfully grapple with."

@detlevhfischer
Contributor

Never shy to argue with a native speaker :) Let me suggest a simplified version:

"There are many aspects of web content where human assessment is needed for WCAG 2.x conformance judgments. While these judgments may scale poorly, they are nonetheless important for achieving accessibility. When human judgment is needed to validate a page and this validation does not scale (the core of Challenge #1 below), that judgment can nevertheless be critical for ensuring a fully accessible web page. Similarly, while it may not be possible to ensure that all 3rd party content is fully accessible (the subject of Challenge #3 below), if that content cannot be audited by a competent human evaluator it may not be possible to establish whether or not a particular 3rd party content is fully accessible. Human judgement is a core part of much of WCAG 2.x for good reasons and can only partly be replaced by fully automated accessibility evaluation methods."

@alastc
Contributor

alastc commented Dec 10, 2019

Hi Peter,

That generally looks good. For some reason the phrase "applies poorly" sticks out, though; to me that means it doesn't fit, that somehow the guidelines are not valid. I think what we mean is that it is difficult (or impossible) to apply in some scenarios.

I.e. Some sites have used technology to scale the number of pages to a very large degree without direct human intervention, so needing human assessment of those pages is very challenging.

Another aspect is there is an assumption (including Detlev's version) that post-implementation assessment is the only place that human judgement can be applied. However, if you have very strict templates & processes that prevent issues, you can apply human judgement prior to publication of a page. (In my mind this is one of the potential solutions.)

Looking at the latest version, I would suggest:

  • In the introduction, above the last paragraph (starting "Large") add something like:

"The success criteria in WCAG 2.x describe aspects of content that are known to cause issues for people with disabilities. If a website does not meet any of the success criteria, it is very likely that some people with disabilities will find that content difficult or impossible to use; therefore the conformance model tries to encourage a scenario where each page is shown to be free of issues."

  • Under Goals, in the 1st para:

"We believe that a better understanding of the situations in which [ins]the WCAG 2.x conformance is difficult or impossible to apply[/ins] can lead to more effective conformance models and testing approaches in the future."

  • Under that, proposing a new version of your additional paragraph:

"It is important to recognize the places where WCAG 2.x conformance is difficult to apply, but is nonetheless important to achieving accessibility. For example, while requiring human judgement to validate a page may not scale (the core of Challenge #1 below), absent human judgement during the content creation process it may not be possible to deliver a fully accessible web page. Similarly, while it may not be possible for the site claiming conformance to ensure that all 3rd party content is fully accessible (the subject of Challenge #3 below), that content could be a source of issues. Ensuring that the content the user receives is accessible was a key aim of the WCAG 2.x conformance model, so the challenges that arise from it are important to successfully grapple with."

@peterkorn

Hi Alastair, Detlev,

Hmmm… I don’t intend to suggest that the guidelines lack validity – and especially not the success criteria. Rather, that page-level conformance specifically scales poorly (if at all), and in far more than just some scenarios. This conformance model is a poor fit for a great many websites (not just some).

Your example of sites that “scale the number of pages to a very large degree without human intervention” doesn’t cover all large sites. I daresay that the largest sites have generally gotten to be very large precisely through human intervention – many millions of humans creating pages and content on existing pages, who haven’t been schooled in how to create accessible content, and for which programmatic filters and templates aren’t sufficient.

I appreciate the ideas for how sites can address some of these challenges, and would like to collect these together for review and inclusion in Silver if not sooner. But I would ask that we hold off on proffering solutions until we’ve done a full round of gathering public feedback on all of the challenges.

@detlevhfischer
Contributor

detlevhfischer commented Dec 13, 2019 via email

@NeilMilliken

NeilMilliken commented Dec 13, 2019 via email

@alastc
Contributor

alastc commented Dec 13, 2019

Hi Peter,

I take the point about the scale often being due to human contributors, but I still think there is a point to make about human intervention/decision making being necessary in some way.

I think the crux is that it needs to draw out the difference between the guidelines and the conformance model. This aspect is missing:
“The success criteria in WCAG 2.x describe aspects of content that are known to cause issues for people with disabilities. If a website does not meet any of the success criteria it is very likely that some people with disabilities will find that content difficult or impossible to use.”

I’m not stuck on drawing a conclusion from that, but we need to be clear about the impact.

This conformance model is a poor fit for a great many websites (not just some).

Well, it depends how you use it. In the UK there is (usually) a pragmatic approach, you aim high but accept it’s an ongoing process. However, I can see how that is problematic if you have a dogmatic legal system.

Where the doc says:

situations in which WCAG 2.x conformance applies poorly if at all,

Logically the conformance model can and does apply (conceptually), but it is difficult or impossible to apply in practice. That’s different from it “not applying”. The problem is the difficulty of applying it, not whether it fits.

There are a couple of instances in the doc where it says “applies poorly” that I think should be changed.

Also, I see now there are two different assumptions about “3rd party content” as stated in the doc:

This is especially the case where third parties are actively populating and changing site content.

When it says 3rd party content, I think of external organisations. E.g. ad networks, twitter streams, 3rd party product-review providers etc. In this scenario the 3rd party defines the content & interface.

If this is intended to include individuals (e.g. customers, users, individuals selling something), I think another term is needed, as they are using an interface provided by the site. E.g. “third parties and/or users of the site”.

I would ask that we hold off on proffering solutions

Ok, but where you said “if that content cannot be audited by a competent human evaluator it may not be possible to establish whether or not a particular 3rd party content is fully accessible” you’re baking-in an assumption that it requires post-hoc testing.

@JAWS-test

JAWS-test commented Dec 13, 2019

I wonder if it's not possible to include parts of ATAG in WCAG 3.0 to better handle user-generated content? The EU standard (Chapter 11.8) has a few ATAG rules integrated. I would welcome that, because WCAG is well known while ATAG is little respected. If ATAG were integrated into WCAG, the accessibility of many pages would be significantly improved.

@peterkorn

Hi Alastair,

Please see the current editor's draft (17Dec19). I tried to address your points. Specifically:

  • I did a scrub for "poor". The remaining references should either be about "verification scales poorly", or the word appears in other contexts (e.g. "text that is of poor contrast"; text in the Silver Research Problem Statements).
  • I updated the discussion of 3rd party content in Challenge #3 to make clear this explicitly includes updates from potentially all website visitors.
  • There is no mention in the doc of "audited by a competent human evaluator" (text that I believe was suggested by Detlev in this issue thread).

Please let me know if this addresses your concerns.


7 participants