Challenges: Test Early & Often #1182

Open
mgifford opened this issue Jun 27, 2020 · 5 comments
Labels
Challenges with Conformance — issues relating to the document at https://w3c.github.io/wcag/conformance-challenges/

Comments

@mgifford

I've looked at a few of the prior discussions about automated testing.

I do think it would be neat to map each SC to whether it should be tested manually or can be tested effectively by code, but I don't think that is necessary.

I believe it is obvious that some things can be easily verified by automated testing. Automation may only catch a third of the barriers that affect people, but making it easier to catch a third of the accessibility errors should be seen as a good thing for everyone. It is important to note that this rate of effectiveness may change over time, and when applied to things other than the web. It is safe to assume that automated tools will never be sufficient on their own.

That said, to scale to large and dynamic websites, guidance on approaches to automated testing is key.

Doing this is just a best practice that ensures we are being responsible with precious human effort. Why spend hours tracking down bad IDs by hand when computers can do it in an instant (see the sketch below)? There are lots of things that people will always be better at than computers; let's focus human testing on those things.
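
As a concrete instance, here is a minimal sketch of the kind of mechanical check that is tedious by hand but trivial for a script: flagging duplicate `id` attributes, the classic WCAG 2.x 4.1.1 (Parsing) failure. This is illustrative only; the function name is mine, and real tools such as axe-core bundle this check along with many similar rules.

```ts
// Hypothetical helper: collect every id used more than once in a document.
// Runs in any browser context, e.g. pasted into the DevTools console.
function findDuplicateIds(doc: Document): Map<string, Element[]> {
  const byId = new Map<string, Element[]>();
  for (const el of Array.from(doc.querySelectorAll("[id]"))) {
    const list = byId.get(el.id) ?? [];
    list.push(el);
    byId.set(el.id, list);
  }
  // Keep only ids that appear more than once.
  return new Map([...byId].filter(([, els]) => els.length > 1));
}

findDuplicateIds(document).forEach((els, id) =>
  console.warn(`Duplicate id "${id}" used ${els.length} times`, els)
);
```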

It is also important to note that the EU's Web Accessibility Directive requires all member states' public sector websites to be monitored and publicly reported on by December 23, 2021. I suppose this could be done by hand, but that isn't likely.

I would like to see WCAG encourage:

  1. everyone involved in digital content to use some automated accessibility tools to catch the "low-hanging fruit" (see the sketch after this list)
  2. everyone to learn how to do basic keyboard-only testing
  3. teams to treat the role of accessibility auditor like that of an editor, so that they realize they need to bring in expertise from the outside
  4. sites to include accessibility statements that encourage constant feedback from users
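
For item 1, a hedged sketch of how an automated checker might be wired into a test suite, using the open-source axe-core engine through its Playwright integration (`@axe-core/playwright`). The URL and the choice to fail on any violation are assumptions for illustration, not a recommended policy.

```ts
// Run axe-core against a page inside a Playwright test and fail the
// build if any WCAG 2.x A/AA rule reports a violation.
import { test, expect } from "@playwright/test";
import AxeBuilder from "@axe-core/playwright";

test("home page has no detectable WCAG A/AA violations", async ({ page }) => {
  await page.goto("https://example.com/"); // placeholder URL
  const results = await new AxeBuilder({ page })
    .withTags(["wcag2a", "wcag2aa"]) // limit the scan to WCAG 2.x A/AA rules
    .analyze();
  expect(results.violations).toEqual([]);
});
```

Note that a clean run only means no *detectable* violations; it says nothing about the two-thirds of barriers that still need human judgment.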

People still see accessibility as a QA checkbox exercise, but this needs to be reframed so that people see it as a journey, not a destination. What can the W3C do to get organizations to shift their thinking from "we need to make our site accessible" to "we need to make our site more accessible"?

I do think that the W3C can make it clear that everyone involved in creating digital content has a role in accessibility. Accessibility experts shouldn't be pointing out missing alt-text and form labels that a bot can pick up easily. Experts are critical, but we don't need more giant remediation reports.

Appropriate use of technology is going to be key to this. Automated accessibility tools are as important to improving websites as spellcheck is to improving documents.

@patrickhlauke
Member

I still struggle to see how you think the W3C can "encourage" or shift anybody's thinking. They're not a lobbying group. Outreach and developer-relations activities are expensive and take a lot of effort and coordination, often with little end result.

@mgifford
Author

Yes, but the W3C writes the guidelines the world follows on accessibility. It would be possible to include a preamble that talks about how digital technology is generally built and the need to look at the systems (#1170, #1171, or #1172), or an appendix that outlines some of these ideas about testing.

I appreciate that it will be difficult for the WAI to come to agreement about wording, but this is supposed to be a challenge. Silver could be one of the best documents to encourage people to start thinking about accessibility in a "less clinical" way, and to start addressing how technology actually works and how people engage with the digital ecosystems we build.

@patrickhlauke
Member

By the time people/developers seek out W3C documentation/WCAG/etc, they already know they need to do it. You're preaching to the choir at that stage, no?

@mgifford
Author

I don't think so. Most people see WCAG as a giant QA checklist of things they should probably do before they get on to the next thing.

I have no stats on this, but I assume that most people who do accessibility reviews:

  • haven't done more than shake their fist at those who make the libraries, let alone contributed a bug report or patch to a library that caused the problem
  • aren't thinking of the user journey, or of supporting actual users in doing something
  • assume that if something can't be made accessible it should just be ignored or removed from the page
  • assume that once a site meets WCAG 2.0 AA, it's good and doesn't need to be tested again

Even simple mistakes like:

  • not having feedback loops (accessibility statements)
  • claiming to be 100% accessible

are things I see organizations making, even ones with professionals who have studied WAI documentation.

@peterkorn

@patrickhlauke - unless it is a large amount of work, I don't see any harm in providing additional (likely non-normative) guidance on best practices. At worst, few will look at it. At best, it will help a significant number of web authors do a better job making their sites more accessible.

@alastc added the Challenges with Conformance label on Jun 29, 2020