Accessibility Scorecard
Before you start testing and changing processes, it’s a good idea to take inventory of what you have completed so far and what resources are available to you. At Truss, we do this using a resource called the Accessibility Scorecard. It gives your team a set of metrics to work through to identify your biggest areas for improvement and to help prioritize where to start. Factors such as where a project is in its lifecycle, what resources a team has access to, and what skills are present among team members will all shape what matters most.
Especially if your project is not at its very start, there is likely a lot of work to be done. It is not realistic to think you will be able to tackle it all in one sprint. Building accessible software is not checking off boxes on a list—it requires changing processes and iterating upon them to do better over time. At Truss, we are big believers in making improvements via marginal gains, which is what this process is all about.
- Good: Team has access to a screen reader and tries out keyboard navigation
- Better: Team has access to the specific assistive tech identified in the accessibility target
- Best: Team regularly tests with the specific assistive tech identified in the accessibility target
When most folks hear about accessibility testing, their first instinct is to find an automated checking tool to scan their code against the WCAG criteria. Unfortunately, automated checks can’t catch even half of the issues that are going to cause problems for assistive tech users. A 2017 UK government study of the most popular automated testing tools found that even the best ones picked up only about 40% of the issues. While that was some time ago and surely these tools have made strides since, they still can’t be all you count on.
This is why your team must, to some degree, be doing manual testing. If you have never tried it before, a good starting place is to use a screen reader (VoiceOver on macOS, Chrome Screen Reader, or Narrator on Windows) on some of your pages.
Ultimately, it’s going to be important to figure out what technologies your end users actually use and hone your process toward them. For example, since a lot of the time at Truss we are building administrative systems for government employees that will primarily be used in desktop environments, we usually focus our in-house screen reader testing on the screen readers with larger market shares in those environments, such as JAWS or NVDA.
Since many of us work in macOS environments, we use a tool called Assistiv Labs to access those screen readers and emulate the Windows experience.
- Good: Support from other employees on a best-effort basis
- Better: Dedicated staff at company
- Best: Dedicated staff on project
In an ideal world, every project would have an assigned, trained accessibility specialist, or at least the company would have one shared across different projects. To date, we have not achieved this ourselves, except in circumstances where our client has been able to bring these specialists to us as resources.
For now, our remedy is the Truss accessibility guild: a company-wide, interdisciplinary group of volunteers that shares knowledge and sets standards to improve accessibility across our projects.
We meet once a month and share our wins, challenges, and ideas for improving accessibility processes across all our teams. It’s not a perfect solution, but since forming three years ago, we have developed a lot of resources and helped each other make the best choices possible on our projects (like all the awesome materials in this post!).
- Good: Everyone on the project has taken a basic accessibility course
- Better: Some members of the team have certifications or more extensive training
- Best: Team includes at least one accessibility expert
You don’t know what you don’t know. We can’t expect practitioners on our teams to inherently know everything about accessibility if they have never learned about it. At the base level, it helps for everyone to have completed some kind of Accessibility 101 course, whether that is from WebAIM or Deque, a talk at a conference, a good book or article, anything! It is at least a start, and when you advocate for accessibility on your project moving forward, it will help ensure you are listened to and that people understand what you are talking about.
We gave this a go back in 2021 by hiring WebAIM to teach a several-hour, instructor-led seminar, which helped a lot of folks build a foundational understanding!
What is even more helpful, though, is having at least one person, ideally a designer or engineer, complete a comprehensive accessibility certification. This is something we are currently working towards requiring at Truss. A few of us are working through the DHS Section 508 Trusted Tester Program, a free, self-paced online course covering the details of Section 508 compliance and how the US government wants us to test for it. The International Association of Accessibility Professionals (IAAP) also offers certification exams for Web Accessibility Specialists and Accessibility Core Competencies that are worth considering.
- Good: Team can hire external contractors as needed for testing
- Better: Team tests with disabled users regularly
- Best: Team includes members with disabilities
The next level, once your team is integrating manual testing into your acceptance and release processes, is reflecting on who is doing that testing. Folks who use assistive technology every day are power users: how they use a screen reader or an alternative input device is going to be different from (and most likely better than) how I might as a sighted person sitting in front of a traditional desktop setup with a keyboard and mouse. These folks are going to find what’s wrong with your product much more efficiently and reliably.
So why not hire them? A good place to start is asking your stakeholders whether they know any folks using assistive tech who might be interested in joining your usability testing group. You can also, in a more formalized way, hire an organization that employs disabled people and/or assistive technology users to take a swing at what you’ve built. They can even fill out a VPAT/ACR for you and tell you which issues deserve the most priority.
Even better, we can ask ourselves why it is that no one at our organization is an assistive technology user. How might we change that, especially considering how much easier all of this is when assistive tech users are at the table when team processes are developed?
- Good: Specific WCAG version and level
- Better: Specific WCAG level and assistive technology targets
- Best: Project exceeds expected level and targets
Most projects at the government level require Section 508 compliance, which at the time of writing means WCAG 2.0 AA. There are newer versions of the WCAG, and I wouldn’t be surprised if in the next few years the law is updated to require them.
While these guidelines are helpful, you are going to be best set up for success if you focus on meeting those requirements on a specific set of assistive technologies that you know most of your potential users are going to be using.
For example, if your product is a mobile app for iOS, your best bet is to do screen reader testing with VoiceOver on iOS. Your end users are not going to care that your VPAT has all its boxes checked if the product does not work as expected for them. Not all screen readers behave exactly the same, so it’s important to prioritize whichever one is used the most.
In an ideal world, you build something that works perfectly across all assistive technology, but it is more realistic to pick a set to start with; once you reach your goals there, expand full support to the next set of assistive tech targets.
Consider setting this target as a decision record on your team.
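As a rough sketch, such a decision record might look something like the following; the WCAG version and assistive tech named here are placeholders for illustration, not recommendations:

```markdown
# Decision record: Accessibility target

## Decision
We will meet WCAG 2.0 AA (per Section 508), verified against:
- JAWS + Chrome on Windows (via Assistiv Labs)
- NVDA + Firefox on Windows
- Keyboard-only navigation in all supported browsers

## Consequences
- PR and story acceptance checklists reference these targets.
- Issues found outside these targets are logged, but prioritized lower.
```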
- Good: Ideation considers accessibility
- Better: Intended features incorporate accessibility considerations
- Best: Story definitions include accessibility acceptance criteria
It’s possible you are starting to think about accessibility after a lot of code has been written, designs delivered, and tickets accepted. It is ideal to start your process on day one with accessibility in mind, but it’s never too late, and it is still worth determining a path to compliance in the long term, even if you are not there yet.
If you’re just starting out, make sure to identify accessibility targets and determine processes to meet them as you go. Some things we do on our team: including accessibility testing instructions in pull request templates and ticket acceptance criteria, using color contrast checkers in the design phase, and asking at the ideation stage how a feature might behave in assistive technology environments. You might summarize these for your team in a decision record.
If you are further along, identify how you can prevent accruing further accessibility tech debt. You’ll likely need to take inventory of the issues you currently have, document them in your backlog, and address them as you go. Maybe you dedicate a couple of sprints to it after enough people on the team have completed some accessibility education.
It can be helpful to track accessibility issues in a spreadsheet and prioritize them along two axes: how severe each issue is, and how widespread it is throughout the system. For example, a component repeated on 80% of the pages in the system would be worth addressing before a one-off issue on a single interior page that is not critical to the workflow.
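As a sketch, the spreadsheet might score each issue on both axes and sort by the product; the issues and numbers below are invented for illustration:

| Issue | Severity (1–3) | Prevalence (1–3) | Priority |
| --- | --- | --- | --- |
| Global nav not operable by keyboard | 3 | 3 | 9 |
| Form errors not announced on one page | 3 | 1 | 3 |
| Low-contrast caption on one interior page | 1 | 1 | 1 |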
- Good: PR templates include accessibility checks
- Better: CI is configured to check for accessibility issues
- Best: Story acceptance includes manual accessibility testing
At the base level, a project team should run through a few accessibility-centered acceptance criteria on a pull request before it can merge, especially if it introduces new functionality or a new component. Our accessibility wiki includes a PR template that many of our projects use as a starting point and then adapt to their individual accessibility targets.
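As a hypothetical, pared-down example, the accessibility section of such a template might look like:

```markdown
## Accessibility
- [ ] All new interactive elements are reachable and operable by keyboard alone
- [ ] Flows work as expected in a screen reader (per our assistive tech targets)
- [ ] Images and icons have appropriate alt text, or are hidden from assistive tech
- [ ] Text and UI colors meet WCAG 2.0 AA contrast ratios
- [ ] Automated accessibility checks pass
```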
Another good practice is to run automated testing at the commit, or at least the branch, level. There are integrations for CI tools such as Jest and Cypress that can catch the easy issues and call them out before you even request a review on your branch and, more importantly, won’t let easily avoidable glaring issues get merged in. These tools work in conjunction with manual testing; they are not a replacement for it.
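For instance, here is a minimal sketch of an automated check using the jest-axe integration, assuming a Jest setup with a jsdom environment (the form markup is hypothetical):

```typescript
import { axe, toHaveNoViolations } from 'jest-axe';

// Register the custom matcher (often done once in a Jest setup file).
expect.extend(toHaveNoViolations);

it('signup form has no detectable accessibility violations', async () => {
  // jest-axe accepts an HTML string or element; a real test would usually
  // render a component and pass its container instead.
  const html = `
    <form>
      <label for="email">Email</label>
      <input id="email" type="email" />
      <button type="submit">Sign up</button>
    </form>
  `;
  expect(await axe(html)).toHaveNoViolations();
});
```

A check like this only surfaces the automatically detectable subset of issues, which is exactly why the manual testing described above still matters.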
- Good: Filled out once at beginning of project
- Best: Kept up to date as project changes
Most projects procured by the US government are required to have a VPAT (Voluntary Product Accessibility Template) on file. You may also see a similar document, called an Accessibility Conformance Report (ACR), especially in the private sector; a lot of the time, the ACR exists so the company building the product can sell things like subscriptions to government clients. The document walks through each WCAG guideline and prompts your team to report to what extent you have met it at the time of filling it out. We keep a basic template of what that can include, and most teams tend to customize it.
A mistake many teams make is filling this out once, then building and iterating heavily on their system without keeping it up to date, at which point they essentially have to fill it out all over again. This is a problem we face quite frequently ourselves, and it’s an open question how we can treat our VPATs as living documents. Something I hope to try on my own project team is picking a cadence for updating it at a more granular level and sticking to it, but it’s a work in progress.
The Accessibility Scorecard covers a lot of ground, including some targets we ourselves have not achieved yet. The goal of the exercise is to keep improving over time. Our team is working towards having each project fill it out once a quarter and report on what improved since the last quarter. Some quarters, folks might jump four or five points, some maybe one, and some maybe none at all.
The score itself is not the main measure of success; rather, it’s the strides a team makes over time, maybe upping their score 10 points over the course of a year. Some teams have to be more aggressive, as they are on production contracts with impending go-live dates. Some of us are doing discovery or building MVPs that maybe don’t need to be perfect right away. The philosophy of our group at Truss is to focus on improving processes and finding sustainable solutions, rather than checking off lines on a VPAT.
After filling it out, some things may feel more attainable and actionable than others. The intent is for teams to strive to get their score up by a point or two every quarter. A lot of these metrics require fundamental process changes, and they won’t stick if you try to apply too many big changes all at once.