Re: Automated and manual testing process

Yea - we had the same problem in WCAG 2.0. It is one reason we created so many advisory techniques: to capture great ideas that we could not figure out how to make into SCs. In one case we even included an SC that is almost always automatically met, just so we had a place to attach a bunch of advisory techniques where they would be seen.

Very hard.  (hard to do and hard on us) 

Gregg

Gregg C Vanderheiden
greggvan@umd.edu



> On Jan 30, 2017, at 11:56 PM, Michael Pluke <Mike.Pluke@castle-consult.com> wrote:
> 
> I fully understand and agree with the issues you raise. I recognise that some of the COGA proposals may, in their current form, fail to meet the required evaluation-reliability criteria. I also recognise that, if we can't work around these limitations, those proposals may have to be omitted from WCAG 2.1.
>  
> What I wanted everyone to be very clear on is that, if we have to omit COGA proposals, we can be certain that very real and significant accessibility barriers will remain, and that many websites that meet the new, improved WCAG 2.1 will continue to be a real challenge for many users with cognitive and learning disabilities.
>  
> We may be limited in what we can do for these users in WCAG 2.1, but we should not be ignorant of the unresolved accessibility barriers that will remain.
>  
> Best regards
>  
> Mike
>  
> From: David MacDonald [mailto:david100@sympatico.ca] 
> Sent: 30 January 2017 22:00
> To: White, Jason J <jjwhite@ets.org>
> Cc: Milliken, Neil <neil.milliken@atos.net>; Michael Pluke <Mike.Pluke@castle-consult.com>; Wilco Fiers <wilco.fiers@deque.com>; shilpi <shilpi@barrierbreak.com>; WCAG <w3c-wai-gl@w3.org>
> Subject: Re: Automated and manual testing process
>  
> > Authors and reviewers need more concrete and specific criteria than whether they think people with a broad range of learning/cognitive abilities would understand it (not a question that one can reliably answer unless one is a specialist in cognition, I suspect).
>  
> I would add that any metrics we come up with should have high "inter-rater reliability" between specialists in cognition, which may not be easy given the range of symptoms.
> Established best practices and consensus among experts are key; it's easier for us to formulate testable statements under those conditions.
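> 
> For example, one standard statistic here is Cohen's kappa, which corrects raw agreement between two raters for chance. A minimal Python sketch (the pass/fail ratings below are purely hypothetical):
> 
> from collections import Counter
> 
> def cohens_kappa(ratings_a, ratings_b):
>     """Agreement between two raters on the same items, corrected for chance."""
>     n = len(ratings_a)
>     observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
>     counts_a, counts_b = Counter(ratings_a), Counter(ratings_b)
>     # Chance agreement: probability that both raters pick the same category.
>     expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
>     return (observed - expected) / (1 - expected)
> 
> # Hypothetical pass/fail judgments from two specialists on ten labels:
> rater1 = ["pass", "pass", "fail", "pass", "fail", "pass", "pass", "fail", "pass", "pass"]
> rater2 = ["pass", "fail", "fail", "pass", "fail", "pass", "pass", "pass", "pass", "pass"]
> print(cohens_kappa(rater1, rater2))  # ~0.47: only moderate agreement
> 
> Conventional rules of thumb treat values below roughly 0.6 as weak agreement, which would suggest a criterion is not being applied consistently.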
>  
> 
> Cheers,
> David MacDonald
>  
> CanAdapt Solutions Inc.
> Tel:  613.235.4902
> LinkedIn <http://www.linkedin.com/in/davidmacdonald100>
> twitter.com/davidmacd <http://twitter.com/davidmacd>
> GitHub <https://github.com/DavidMacDonald>
> www.Can-Adapt.com <http://www.can-adapt.com/>
> 
>   Adapting the web to all users, including those with disabilities
>  
> If you are not the intended recipient, please review our privacy policy <http://www.davidmacd.com/disclaimer.html>
>  
> On Mon, Jan 30, 2017 at 3:57 PM, White, Jason J <jjwhite@ets.org> wrote:
> 
> From: Michael Pluke [mailto:Mike.Pluke@castle-consult.com]
> Sent: Monday, January 30, 2017 1:21 PM
> 
> Wherever possible the COGA Task Force has tried to propose SCs that do not rely on subjective testing. But automatically assessing whether, for example, a label accurately and clearly describes the thing it labels, in a way that users with learning disabilities might be able to understand, is currently not easy to automate. For such cases, subjective testing will be the only practical way to assess whether a significant accessibility barrier exists.
> [Jason] Can you offer criteria for making this judgment that would help to achieve reliable results across evaluators?
>  
> I think there are two distinct issues here. The first concerns automation, which I agree is largely irrelevant in these cases, except to the extent that measures of linguistic complexity serve as useful guides (e.g., in characterizing the lower secondary education level, as in SC 3.1.5). The second concerns the reliability of informed human evaluations, and whether those evaluations distinguish adequately between content that is more accessible and content that is less accessible to people with learning and cognitive disabilities.
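> 
> For illustration, the kind of automated screen I mean might look like the following sketch (the Flesch-Kincaid grade formula is the standard published one, but the syllable counter is a crude approximation; real readability tools do better):
> 
> import re
> 
> def count_syllables(word):
>     """Crude estimate: count runs of vowels, at least one per word."""
>     return max(1, len(re.findall(r"[aeiouy]+", word.lower())))
> 
> def flesch_kincaid_grade(text):
>     """Flesch-Kincaid grade: 0.39*(words/sentence) + 11.8*(syllables/word) - 15.59."""
>     sentences = max(1, len(re.findall(r"[.!?]+", text)))
>     words = re.findall(r"[A-Za-z']+", text)
>     syllables = sum(count_syllables(w) for w in words)
>     return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59
> 
> # A score above ~9 would exceed the lower secondary education level of SC 3.1.5.
> print(flesch_kincaid_grade("Complete the form. Press the submit button when done."))  # ~4.5
> 
> A score like this can only gate obviously complex text for human review; it says nothing about whether a label is unambiguous or apt.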
>  
> To evaluate the adequacy of a label, as in your example, I can check whether I think it unambiguously identifies the thing labelled; whether the vocabulary used appears in lists of commonly understood words in the language (substituting an alternative that uses such words, if the label's purpose remains unambiguous); whether it uses vocabulary associated with the relevant discipline or subject matter, if applicable; and various other criteria that might be appropriate. The problem is to provide the right guidance to evaluators and authors as to what they should be checking for. Authors and reviewers need more concrete and specific criteria than whether they think people with a broad range of learning/cognitive abilities would understand it (not a question that one can reliably answer unless one is a specialist in cognition, I suspect). There is also the role played by the author's intended or assumed audience, which shouldn't be defined so as to exclude people with disabilities who have the right skills, background and education, for example, to participate in the activity or read the material.
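> 
> The common-vocabulary check, at least, is mechanical enough to sketch (the word list below is purely hypothetical; a real check would draw on an established core-vocabulary or word-frequency list for the language):
> 
> import re
> 
> # Hypothetical stand-in for an established common-words list.
> COMMON_WORDS = {"your", "name", "first", "last", "email", "address",
>                 "phone", "date", "birth", "send", "submit", "search"}
> 
> def uncommon_words(label):
>     """Return the words in a label that are missing from the common-words list."""
>     return [w for w in re.findall(r"[a-z']+", label.lower())
>             if w not in COMMON_WORDS]
> 
> print(uncommon_words("Your email address"))                   # []: all common
> print(uncommon_words("Electronic correspondence identifier")) # all three flagged
> 
> A non-empty result would not fail the label outright; it would simply flag it for the human judgment described above.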
>  
>  

Received on Friday, 3 February 2017 09:37:36 UTC