Re: Aggregated WCAG Conformance Score

Hi Alistair,

This is just my thought, but I think we should keep "aggregation" and
"scoring" as separate items, even though they are somewhat related.

A while back we looked into scoring/metrics but could not identify a 
single widely-accepted approach. Instead, we identified some quality 
criteria for such metrics:
  - https://www.w3.org/WAI/RD/2011/metrics/

I'm not sure that this situation has changed, but I would be delighted
to discuss it separately with you and anyone else interested.

Best,
   Shadi


On 27/03/2018 14:41, Alistair Garrison wrote:
> Hi,
> 
> The Accessibility Conformance Testing (ACT) working group creates 
> guidance on how to create rules that allow you to determine content’s 
> conformance with WCAG.
> 
> The issue with web content is that there may be one to many states [of 
> the DOM], each of which needs to be tested in order to understand 
> whether a user is actually able to interact fully with that web content.
> 
> Section 9.3 “Rule Aggregation” might be useful for determining the 
> outcome for a single [DOM] state; but presumably you would need to 
> combine the results for each [DOM] state that can exist for a piece of 
> content in order to determine the overall conformance score for that 
> piece of content.
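> 
> As a rough sketch (purely illustrative TypeScript; the types and 
> function names are mine, not anything defined by the ACT Rules Format), 
> the aggregation I have in mind might look something like:
> 
> type Outcome = "passed" | "failed" | "inapplicable";
> 
> interface RuleResult {
>   ruleId: string;
>   outcome: Outcome;
> }
> 
> // Outcome for a single [DOM] state: it fails if any rule failed in it.
> function aggregateState(results: RuleResult[]): Outcome {
>   if (results.some(r => r.outcome === "failed")) return "failed";
>   if (results.some(r => r.outcome === "passed")) return "passed";
>   return "inapplicable";
> }
> 
> // Outcome for a piece of content: every state it can exist in must pass.
> function aggregateContent(statesResults: RuleResult[][]): Outcome {
>   const perState = statesResults.map(aggregateState);
>   if (perState.some(o => o === "failed")) return "failed";
>   if (perState.some(o => o === "passed")) return "passed";
>   return "inapplicable";
> }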
> 
> Continuing with that train of thought: to provide a verifiable 
> conformance score for a website (as a collection of web pages, which 
> are themselves each a collection of pieces of web content, with one to 
> many DOM states), would we have to detail each DOM state that was 
> tested, and provide instructions for how to obtain each of those DOM 
> states, very much like end-to-end tests?  If so, this concerns me a 
> little, as such a mechanism might quickly fall out of sync with the 
> website, just as content changes cause end-to-end tests to become 
> fragile over time unless they are updated.
> 
> So, I’m starting to wonder if we have a responsibility to look at 
> creating an Aggregated Conformance Score (as in 
> https://www.w3.org/TR/WCAG-EM/#step5d): something probability based, 
> which takes into account a sample drawn from the total number of 
> states, and the idea that if you currently only test DOM State 0 (for 
> example, on page load) you are really only taking a sample of all the 
> states in which your pages can exist.  Note that for content to be 
> considered accessible, really all of its states need to be accessible.
> 
> Something like:
> 
> [Estimated] Total number of states which need to be tested = [Estimated] 
> number of web pages * [Estimated / Heuristic] number of states those web 
> pages can appear in;
> 
> Sample = ???
> 
> Aggregated Conformance Score = ???; with a variance of ???
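> 
> To make the idea concrete, here is a minimal sketch (again purely 
> illustrative TypeScript, assuming a simple random sample of states and 
> a binomial model; the "???" above are exactly what would need to be 
> agreed):
> 
> interface ScoreEstimate {
>   totalStates: number; // estimated total number of states to test
>   score: number;       // estimated proportion of conforming states
>   variance: number;    // variance of that estimate
> }
> 
> function aggregatedConformanceScore(
>   estimatedPages: number,          // [Estimated] number of web pages
>   estimatedStatesPerPage: number,  // [Estimated / Heuristic] states per page
>   sampledOutcomes: boolean[]       // true = sampled state conformed
> ): ScoreEstimate {
>   const totalStates = estimatedPages * estimatedStatesPerPage;
>   const n = sampledOutcomes.length;
>   const passed = sampledOutcomes.filter(Boolean).length;
>   const score = passed / n;                      // sample proportion
>   // Binomial variance with a finite population correction, since the
>   // sample is drawn from an estimated finite set of states.
>   const fpc = 1 - n / totalStates;
>   const variance = (score * (1 - score) / n) * fpc;
>   return { totalStates, score, variance };
> }
> 
> Under a model like that, testing only DOM State 0 on each page amounts 
> to a sample of size n equal to the number of pages out of the total 
> number of states, which is why the variance matters.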
> 
> Looking at states also enables Single Page Applications to be included.
> 
> With European projects looking to benchmark on a broad scale, the above 
> may already have been considered somewhere; if not, thoughts and 
> comments are most welcome.
> 
> All the best
> 
> Alistair Garrison
> 
> Director of Accessibility Research
> 
> Level Access
> 

-- 
Shadi Abou-Zahra - http://www.w3.org/People/shadi/
Accessibility Strategy and Technology Specialist
Web Accessibility Initiative (WAI)
World Wide Web Consortium (W3C)

Received on Tuesday, 27 March 2018 15:08:17 UTC