RE: Conformance challenges

Hi, Alastair:

I would agree the Challenges doc is highly unbalanced, with so much content under Challenge #1 and very little under the remaining three challenges. It may eventually prove sufficient for #1 to be simply illustrative rather than comprehensive. However, I don’t believe we should try to cut back at this point in the process.

There are several potential benefits to taking the comprehensive approach, at least for now. Not the least of these is identifying patterns among the various SC challenges. I’m already seeing some and have had side conversations about possible implications with Mary Jo Mueller, who has contributed to the document. Her suggestion, and I agree, is that we wait until we have as full an analysis as we can obtain before reorganizing based on patterns.

Another outcome could be the identification of useful AI research, or even user research. Silver has already benefited from a research effort, but this analysis process may point to other valuable research opportunities.

I do agree with you that repetitive text under successive SCs is not the best way to finalize this document. But we’re just not at that point yet, imo.

Hth

Janina


From: Alastair Campbell <acampbell@nomensa.com>
Sent: Wednesday, November 20, 2019 5:44 AM
To: Sajka, Janina <sajkaj@amazon.com>; Korn, Peter <pkorn@lab126.com>
Cc: WCAG list <w3c-wai-gl@w3.org>
Subject: Conformance challenges

Hi Janina, Peter,

I’ve a comment & question from yesterday’s call that doesn’t really suit GitHub; it’s a bit more holistic.

The Challenges doc [1] has a number of SCs listed under the “Needing human involvement” challenge, and people have been adding to that list.

I was wondering how many you think should be covered? To answer that, would it be easier to turn the question around, i.e., which SCs do not need human involvement?

It reminded me of an analysis Karl Groves did:
https://karlgroves.com/2012/09/15/accessibility-testing-what-can-be-tested-and-how


By his analysis (which seems right to me) only 5 SCs from all levels of WCAG 2.0 were 100% automatable, and even those could be considered optimistic. E.g., a language of the page could be present and checked automatically, but on a mixed-language page how do you determine the default automatically? (Without hard-coding the answer for your particular site.)
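
To make that concrete, here is a minimal sketch (in Python, assuming the beautifulsoup4 library; the function name is mine, not from the Challenges doc) of what the automatable part of SC 3.1.1 (Language of Page) amounts to: a tool can verify that a lang attribute is present and non-empty, but nothing in the markup tells it whether that value is actually the right default for a mixed-language page.

    # Hypothetical illustration only.
    from bs4 import BeautifulSoup

    def has_page_language(html: str) -> bool:
        """Machine-checkable part of SC 3.1.1: a lang attribute exists."""
        root = BeautifulSoup(html, "html.parser").find("html")
        lang = root.get("lang") if root else None
        return bool(lang and lang.strip())

    # Passes the automated check, yet a human still has to judge
    # whether "en" is the correct default for this mixed-language page:
    print(has_page_language('<html lang="en"><body>Bonjour tout le monde</body></html>'))  # True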

I was going to say: why not categorise the SCs and report on only a sample to demonstrate the points you are trying to make?

However, something Janina said yesterday gave me a clue: you want to describe the nature of the human involvement for each one, in order to encourage comments & additions that help automate, sample, or systematise them.

Is that the idea, to have a complete list of SCs?

Depending on the answer to that, I can follow up with a more specific suggestion in github.

Kind regards,

-Alastair

[1] https://raw.githack.com/w3c/wcag/master/conformance-challenges/index.html
