- From: Alastair Campbell <acampbell@nomensa.com>
- Date: Wed, 20 Nov 2019 10:43:43 +0000
- To: "Sajka, Janina" <sajkaj@amazon.com>, "Korn, Peter" <pkorn@lab126.com>
- CC: WCAG list <w3c-wai-gl@w3.org>
- Message-ID: <3DAC9892-159D-4361-9B2C-9300737F1CC1@nomensa.com>
Hi Janina, Peter,

I've a comment & question from yesterday's call that doesn't really suit GitHub; it is a bit more holistic.

The Challenges doc [1] has a number of SCs listed under the "Needing human involvement" challenge, and people have been adding to that list. I was wondering how many you think should be covered? To answer that, would it be easier to turn the question around, i.e. which SCs do *not* need human involvement?

It reminded me of an analysis Karl Groves did:
https://karlgroves.com/2012/09/15/accessibility-testing-what-can-be-tested-and-how

By his analysis (which seems right to me), only 5 SCs from all levels of WCAG 2.0 were 100% automatable, and even those could be considered optimistic. E.g. a language of the page could be present and checked automatically, but on a mixed-language page how do you determine the default automatically? (Without hard-coding the answer for your particular site; see the sketch in the P.S. below.)

I was going to say: why not categorise the SCs and only report on a sample to demonstrate the points you are trying to make? However, something Janina said yesterday gave me a clue: you want to describe the nature of the human involvement for each one, in order to encourage comments & additions to help automate, or sample, or systematise each one.

Is that the idea, to have a complete list of SCs?

Depending on the answer to that, I can follow up with a more specific suggestion in GitHub.

Kind regards,

-Alastair

[1] https://raw.githack.com/w3c/wcag/master/conformance-challenges/index.html
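P.S. To make the language-of-page point concrete, here is a minimal sketch of what an automated check can and cannot decide. It assumes Python 3 with the third-party beautifulsoup4 package, and check_default_language is an illustrative name, not a function from any existing tool: presence and rough syntax of the lang attribute are machine-checkable, but whether the value is the correct default for mixed-language content is not.

    # A minimal sketch of an automated check for WCAG SC 3.1.1 (Language
    # of Page). Assumes Python 3 with the third-party beautifulsoup4
    # package; the function name is illustrative, not from any tool.
    from bs4 import BeautifulSoup

    def check_default_language(html: str) -> str:
        soup = BeautifulSoup(html, "html.parser")
        root = soup.find("html")
        # Presence is machine-checkable...
        if root is None or not root.get("lang"):
            return "FAIL: no lang attribute on <html>"
        lang = root["lang"]
        # ...and so is rough syntax (e.g. "en" or "en-GB")...
        if not all(part.isalnum() for part in lang.split("-")):
            return "FAIL: malformed lang value '%s'" % lang
        # ...but whether this value is the *correct* default for a
        # mixed-language page is a human judgement, so the best an
        # automated tool can report is a qualified pass.
        return "PASS (syntax only): lang='%s'; correctness needs review" % lang

    # A page that passes the automated check but is mostly French:
    print(check_default_language(
        '<html lang="en"><body><p>Bonjour tout le monde</p></body></html>'))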
Received on Wednesday, 20 November 2019 10:43:49 UTC