Re: Automated and manual testing process

Michael Pluke wrote on 30.01.2017 14:20:

> Wherever possible the COGA Task Force has tried to propose SCs that do not rely
> on subjective testing, but automatically assessing whether, for example, a label
> accurately and clearly describes the thing that it labels in a way that users with
> learning disabilities might be able to understand is currently not something that
> is easy to automate. For such cases, subjective testing will be the only practical
> way to assess whether a significant accessibility barrier exists.

I think no one expects we will have any way to *automatically* test (reliably) whether labels or headings meet SC 2.4.6 any time soon. The critical difference is not between automatic testing and what you call 'subjective testing' (other terms in this thread have been expert testing or manual testing), but between conformance-oriented testing, where a tester, often drawing on tools or automatable checks, has to decide whether an SC is met or not met, and user testing, which requires us to source appropriate users (say, with learning difficulties) to see whether something like a step-by-step process, a label, a heading, or a help text can be understood or not.
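
To make that limit concrete: an automated check can reliably tell you that a form control *has* an accessible name, but whether that name is clear remains a human judgement. A rough sketch of where the boundary sits (the function, selectors and result values are illustrative only, not taken from any existing tool):

  // Sketch of what automation can and cannot decide about a label.
  // Illustrative only -- not taken from any real checker.
  type CheckResult = 'pass' | 'fail' | 'needs-human-review';

  function checkControlLabel(control: HTMLInputElement): CheckResult {
    // Automatable: does the control have any accessible name at all?
    const name =
      control.getAttribute('aria-label') ??
      document.querySelector(`label[for="${control.id}"]`)?.textContent ??
      '';
    if (name.trim() === '') return 'fail'; // no label: a machine can say this reliably

    // Not automatable today: is the label accurate and understandable,
    // e.g. for users with learning disabilities? That has to be handed
    // to an expert (or to user testing) to judge.
    return 'needs-human-review';
  }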

So the difference is between having one expert and his/her toolbox checking SCs, and having a cohort of users with different abilities at hand, people who are pointedly *not* subject matter or technical experts. Alastair has pointed to a good page about that in his contribution to this thread.

> If we exclude new SCs related to how well people understand content, just because
> understandability is difficult to automatically test, then cognitive accessibility
> will continue to be poorly represented in WCAG. Whereas today we may have to rely
> on subjective testing to assess these softer concepts, with the advances in
> machine learning it is probable that more ways of automatically assessing these
> concepts will emerge. It would be good to avoid the situation where we have
> efficient ways of testing these concepts but have nothing in WCAG that relates to
> them.

To the extent that new SCs *require* user testing in the sense of task-oriented, non-expert testing with representative users of different abilities, whom we would watch performing actions and where we would then decide on a pass/fail rating for the given new SCs, this would be highly problematic. Would one user's lack of understanding of a particular label or heading, say, be sufficient for us to set an SC to 'fail'? I guess not. So how far would you go? Testing content for these SCs with several users with learning difficulties would most likely produce a patchwork of issues that may be related to cognitive ability OR something else (like subject matter expertise); several users would probably improve the basis for rating (e.g. ALL users did not understand something, so FAIL) but may often prove inconclusive (in terms of conformance rating). And of course, the costs for such testing would be MUCH, MUCH higher than for expert testing, which would in turn mean that clients who would have taken the effort to comply with something that can be verified by experts would shy away from something that involves user testing.
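
Put differently, any conformance rating based on user testing needs an aggregation rule that maps individual outcomes onto a single verdict, and for mixed results there is no obviously right answer. A purely hypothetical example of such a rule:

  // Hypothetical aggregation rule: map individual user-test outcomes to a
  // conformance verdict. Unanimity is just one possible rule; the point
  // is that mixed results stay inconclusive.
  type Verdict = 'PASS' | 'FAIL' | 'INCONCLUSIVE';

  function rateUserTest(understood: boolean[]): Verdict {
    if (understood.length === 0) return 'INCONCLUSIVE';
    if (understood.every(u => u)) return 'PASS';   // all users understood
    if (understood.every(u => !u)) return 'FAIL';  // no user understood
    return 'INCONCLUSIVE';                         // a patchwork of results
  }

  // e.g. rateUserTest([true, false, true]) === 'INCONCLUSIVE'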

My hope is that, in the same way we now have to determine whether alternative text is sufficiently descriptive or whether headings and labels describe topic or purpose, we can describe expert approaches (in techniques) to evaluate whether content is easy to understand, icons are clear and conventional, etc. Failing that (i.e. if an SC can in no way be tested by an expert alone), I think we are faced with a difficult trade-off: (A) testing gets MUCH harder, MUCH more complex, MUCH more expensive, or (B) SCs that cannot be tested without users end up in the AAA bracket.

Detlev
> 
> Best regards
> 
> Mike
> 
> From: Wilco Fiers [mailto:wilco.fiers@deque.com]
> Sent: 30 January 2017 12:09
> To: shilpi <shilpi@barrierbreak.com>
> Cc: WCAG <w3c-wai-gl@w3.org>
> Subject: Re: Automated and manual testing process
> 
> Hi everyone,
> 
> I don't particularly like the use of the phrase "manual testing". I much prefer "expert testing", as it gets rid of this confusion, as well as of the question: "if I use an accessibility tool, is it still manual testing?". I look at it similarly to how Alistair Garrison grouped it, although I would label it slightly differently.
> 
> 1) Conformance testing: The goal here is to see if minimal requirements are met. This involves expert testing (or manual testing if you prefer), and if that expert is in any way concerned about meeting deadlines, she will be using accessibility test tools for this.
> 
> 2) Usability testing: The goal here is to see where the best opportunities are for improving the user experience.
> 
> Usability testing won't tell you if something meets WCAG, or at least, I've never known any usability tests that could do that. It's a very different kind of animal in my opinion. So I definitely have concerns about some of the new SCs that are based on user testing.
> 
> Wilco
> 
> On Mon, Jan 30, 2017 at 1:25 AM, shilpi <shilpi@barrierbreak.com> wrote:
>> 
>> We should specify the criteria to be met but avoid being prescriptive about which testing approach is to be adopted, or with how many users, etc. As one can see, numerous organizations take different approaches and yet achieve compliance.
>> 
>> Often this is based on the scale of testing required, time, budgets, etc.
>> 
>> The aim is to get more organizations to adopt accessibility.
>> 
>> We should look at how to simplify the approaches.
>> 
>> Regards
>> 
>> Shilpi
>> 
>> Sent from my Samsung Galaxy smartphone.
>> 
>> -------- Original message --------
>> From: Alastair Campbell <acampbell@nomensa.com>
>> Date: 1/30/17 02:29 (GMT+05:30)
>> To: Andrew Kirkpatrick <akirkpat@adobe.com>, WCAG <w3c-wai-gl@w3.org>
>> Subject: Re: Automated and manual testing process
>> 
>> Andrew wrote:
>> > What if testing cannot be done by a single person and requires user testing – does that count as manual testing, or is that something different?
>> 
>> We use, and I've come across, quite a few variations, so to focus on the general ones, I tend to see the main methods as:
>> 
>> - Automated testing, good coverage across pages or integrated with your development, but can't positively pass a page.
>> 
>> - Manual review/audit, where an expert goes through a sample of pages using the guidelines. This can assess 'appropriateness' of things like alt text, headings, markup and interactions (e.g. scripted events).
>> 
>> - Panel review, where a group of people with disabilities assess pages from their point of view, with the guidelines as reference. (A couple of charity-based organisations offer that in the UK, but it's not my favoured methodology [1].)
>> 
>> - Usability testing with people with disabilities, run as a standard usability test but with allowances for different technologies etc. Tends to find the whole range of usability & accessibility issues, but coverage across a whole website/app is difficult.
>> 
>> - Usability testing with the general public; although not accessibility-oriented, it will often find an overlap in the issues.
>> 
>> I would stress that 'manual testing' must be by experts who have a wide understanding of accessibility and can balance different concerns. 
>> Whereas 'usability testing' must not be with people who test for a living. If they are expert in the domain, technology or accessibility then they are not typical users.
>> 
>> If something 'requires' multiple testers then we need to (try to) write the guideline or guidance better. (Is that the question?)
>> 
>> Usability is about the optimisation of an interface or experience, rather than barriers in the interface. I came from a Psychology & HCI background and started work as a Usability Consultant; I've done thousands of test sessions, but it is quite a different thing from testing accessibility...
>> 
>> I hope that helps, but I have a feeling there is a question behind the question!
>> 
>> -Alastair
>> 
>> [1] https://alastairc.ac/2006/07/expert-usability-participants/
>> 
> 
> --
> Wilco Fiers
> Senior Accessibility Engineer - Co-facilitator WCAG-ACT - Chair Auto-WCAG

Received on Monday, 30 January 2017 14:20:45 UTC