- From: Alastair Campbell <acampbell@nomensa.com>
- Date: Tue, 31 Mar 2020 14:02:11 +0000
- To: Jeanne Spellman <jspellman@spellmanconsulting.com>, Silver Task Force <public-silver@w3.org>, AG WG <w3c-wai-gl@w3.org>
> How do you test your digital products today to see if users with disabilities can actually complete the tasks that the page(s) or screen(s) are intended for?
We have two main levels:
1. Barrier score
This is part of a standard audit, and is the auditor's assessment of how big a barrier the issues we found present.
We give a score out of 25 for each of 4 categories (to give a pseudo-percentage). The categories are quite wide, and we tend to score in increments of 5 (0/5/10/15/20/25), so it's not very granular.
- If something is a blocker to a primary task (e.g. can't access 'add to basket' button with keyboard), that's an automatic 25/25.
- If the issues are not blockers (e.g. missing heading levels), that's typically a 10 or 15, with consideration that things like colour contrast or language issues can wear you down.
- I don't think we've ever scored less than 5 in a category; there's always something!
That's explained as 'how likely is someone with a disability to encounter an issue that prevents task completion?'. I.e. not *can* they complete the task, but how likely. The main benefits are that it helps us differentiate the better/worse sites, prioritise issues, and explain improvements more clearly.
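(For anyone who wants the arithmetic spelled out, here is a minimal Python sketch of how that pseudo-percentage adds up. The category names and the example figures are purely illustrative; only the 0-25 scale, the 5-point increments, the blocker rule and the four-category sum come from the description above.)

ALLOWED_SCORES = {0, 5, 10, 15, 20, 25}

def category_score(severity: int, blocks_primary_task: bool) -> int:
    """Auditor's 0-25 score for one category (higher = bigger barrier)."""
    if blocks_primary_task:
        return 25  # a blocker to a primary task is an automatic 25/25
    if severity not in ALLOWED_SCORES:
        raise ValueError("scores are given in increments of 5")
    return severity

def barrier_score(category_scores: dict[str, int]) -> int:
    """Sum the four 0-25 category scores into a pseudo-percentage out of 100."""
    assert len(category_scores) == 4
    return sum(category_scores.values())

# Illustrative example: one category contains a keyboard blocker.
example = {
    "category_a": category_score(0, blocks_primary_task=True),    # 25
    "category_b": category_score(10, blocks_primary_task=False),  # 10
    "category_c": category_score(15, blocks_primary_task=False),  # 15
    "category_d": category_score(5, blocks_primary_task=False),   # 5
}
print(barrier_score(example))  # 55 out of 100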
Typically our clients want to improve their accessibility and/or fill in an accessibility statement. The ability to compare across sites (fairly) is not a big factor.
2. Usability testing
Full usability testing, usually with around 12 participants drawn from the general population (with disabilities, obviously). We work with the client to define likely/desired tasks to set for the participants.
I'm sure there is an in-between level that would work well if you are on an internal team, but as external consultants we find those methods work well together or separately.
HTH,
-Alastair