Re: Automated and manual testing process

As I see it, we have:

  

  1. Automated testing: Use of automated tools to test certain conformance criteria (a rough sketch of what this can look like in practice follows this list).
  2. Human expert testing (or manual testing, but I prefer the former): A human with proficiency or expertise in a set of conformance criteria, testing specifically to identify the level of conformance.
  3. User testing: A user (with or without knowledge of conformance criteria) looking to identify ease of use (regardless of conformance criteria). 
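
As an illustration of category 1, a minimal automated check might look something like the sketch below. This is purely illustrative and not from any tool discussed in this thread; it assumes Node.js with the puppeteer and @axe-core/puppeteer packages, and the URL and WCAG tag filter are placeholders:

```ts
// Illustrative sketch: run axe-core's WCAG 2.x A/AA rules against a page
// and separate clear-cut failures from checks that still need human judgement.
import puppeteer from 'puppeteer';
import { AxePuppeteer } from '@axe-core/puppeteer';

async function main(): Promise<void> {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('https://example.com'); // placeholder URL

  const results = await new AxePuppeteer(page)
    .withTags(['wcag2a', 'wcag2aa']) // restrict to WCAG 2.x Level A/AA rules
    .analyze();

  // Definite failures the tool can decide on its own.
  for (const v of results.violations) {
    console.log(`FAIL ${v.id}: ${v.help} (${v.nodes.length} instance(s))`);
  }

  // "incomplete" results are the judgement calls: the tool could not decide,
  // so a human expert (category 2 above) has to review them.
  console.log(`${results.incomplete.length} checks need expert review`);

  await browser.close();
}

main().catch((err) => { console.error(err); process.exit(1); });
```

The relevant point for this discussion is the `incomplete` bucket: those are exactly the checks the tool cannot decide on its own, which is where human expert testing comes in.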

Human expert testing is still essential here, as many of these require
judgement calls on things which might otherwise be incorrectly passed by
existing automated tools (false positives, etc.), or which are simply not yet
testable with existing tools.

  

Having said that, I think we need to make sure that SCs minimize the use of
subjective criteria as much as possible (it won't be possible all the time, but
we should continue to strive toward it). Not just for the sake of automated
testing, but also for better clarity and consistency amongst interpretations
by humans.

\--

Shwetank Dixit

Head of Accessibility Innovation and Research

BarrierBreak - www.barrierbreak.com

  

https://twitter.com/shwetank


  
On Jan 30 2017, at 6:59 pm, Smith, Jim <smithjs@atos.net> wrote:  

> +1

>

>  
>

> **From:** Michael Pluke [mailto:Mike.Pluke@castle-consult.com]  
**Sent:** Monday, January 30, 2017 1:21 PM  
**To:** Wilco Fiers <wilco.fiers@deque.com>; shilpi <shilpi@barrierbreak.com>  
**Cc:** WCAG <w3c-wai-gl@w3.org>  
**Subject:** RE: Automated and manual testing process
>

>  
>

> Hi everyone

>

>  
>

> To date it has been relatively easy to draw a clearly defined line between
testing accessibility (i.e. whether something meets the WCAG SCs) and usability
(i.e. whether people find something easy and pleasant to use). One reason it has
been relatively easy to draw such a hard line is that WCAG does not require
things to clearly convey meaning, or to be understandable and logical (to
users). Usability will be badly impacted if these issues are not properly
addressed, but users with many types of disabilities (e.g. visual, hearing,
dexterity) will not be adversely impacted, and therefore it has been convenient
to believe that there isn't an accessibility issue here.

>

>  
>

> But when one starts to consider cognitive and learning disabilities this
hard boundary cannot be justified. Whereas many users will be able to
compensate for poor clarity, understandability and logical design (i.e. they
will just experience sub-optimal usability), some users with cognitive and
learning disabilities will be unable to work around these issues and will
experience a disproportionate and potentially insurmountable barrier. Many
users with cognitive and learning disabilities will encounter a serious
accessibility barrier that they may not be able to overcome.

>

>  
>

> Wherever possible the COGA Task Force has tried to propose SCs that do not
rely on subjective testing, but automatically assessing whether, for example,
a label accurately and clearly describes the thing that it labels in a way
that users with learning disabilities might be able to understand is currently
not something that is easy to automate. For such cases, subjective testing
will be the only practical way to assess whether a significant accessibility
barrier exists.

>

>  
>

> If we exclude new SCs related to how well people understand content, just
because understandability is difficult to automatically test, then **cognitive
accessibility will continue to be poorly represented in WCAG**. Whereas today
we may have to rely on subjective testing to assess these softer concepts,
with the advances in machine learning it is probable that more ways of
automatically assessing these concepts will emerge. It would be good to avoid
the situation where we have efficient ways of testing these concepts but have
nothing in WCAG that relates to them.

>

>  
>

> Best regards

>

>  
>

> Mike

>

>  
>

>  
>

> **From:** Wilco Fiers
[[mailto:wilco.fiers@deque.com](mailto:wilco.fiers@deque.com)]  
**Sent:** 30 January 2017 12:09  
**To:** shilpi <[shilpi@barrierbreak.com](mailto:shilpi@barrierbreak.com)>  
**Cc:** WCAG <[w3c-wai-gl@w3.org](mailto:w3c-wai-gl@w3.org)>  
**Subject:** Re: Automated and manual testing process
>

>  
>

> Hi everyone,

>

> I don't particularly like the use of the phrase "manual testing". I much
prefer "expert testing", as it gets rid of this confusion, as well as of the
question: "if I use an accessibility tool, is it still manual testing?". I
look at it similarly to how Alistair Garrison grouped it, although I would
label it slightly differently.

>

>  
>

> **1) Conformance testing:** The goal here is to see if minimal requirements
are met. This involves expert testing (or manual testing if you prefer), and
if that expert is in any way concerned about meeting deadlines, she will be
using accessibility test tools for this.

>

>  
>

> **2) Usability testing:** The goal here is to see where the best
opportunities are for improving the user experience.

>

>  
>

> Usability testing won't tell you if something meets WCAG, or at least, I've
never known any usability tests that could do that. It's a very different kind
of animal in my opinion. So I definitely have concerns about some of the new
SCs that are based on user testing.

>

>  
>

> Wilco

>

>  
>

> On Mon, Jan 30, 2017 at 1:25 AM, shilpi
<[shilpi@barrierbreak.com](mailto:shilpi@barrierbreak.com)> wrote:

>

>> We should specify the criteria to be met but avoid being prescriptive on
which testing approach is to be adopted or with how many users, etc. As one
can see, numerous organizations take different approaches and yet achieve
compliance.

>>

>>  
>>

>> Often this is based on scale of test required, time, budgets, etc.

>>

>>  
>>

>> The aim is to get more organizations to adopt accessibility.

>>

>>  
>>

>> We should look at how to simplify the approaches.

>>

>>  
>>

>> Regards

>>

>> Shilpi

>>

>>  
>>

>> Sent from my Samsung Galaxy smartphone.

>>

>>  
>>

>> \-------- Original message --------

>>

>> From: Alastair Campbell
<[acampbell@nomensa.com](mailto:acampbell@nomensa.com)>

>>

>> Date: 1/30/17 02:29 (GMT+05:30)

>>

>> To: Andrew Kirkpatrick <[akirkpat@adobe.com](mailto:akirkpat@adobe.com)>,
WCAG <[w3c-wai-gl@w3.org](mailto:w3c-wai-gl@w3.org)>

>>

>> Subject: Re: Automated and manual testing process

>>

>>  
>>

>> Andrew wrote:  
> What if testing cannot be done by a single person and requires user testing
– does that count as manual testing, or is that something different?  
  
We use, and I've come across, quite a few variations, so to focus on the
general ones, I tend to see the main methods as:  
  
\- Automated testing, good coverage across pages or integrated with your
development, but can't positively pass a page.  
  
\- Manual review/audit, where an expert goes through a sample of pages using
the guidelines. This can assess 'appropriateness' of things like alt text,
headings, markup and interactions (e.g. scripted events).  
  
\- Panel review, where a group of people with disabilities assess pages from
their point of view, with the guidelines as reference. (A couple of charity-based
organisations offer that in the UK, but it's not my favoured methodology [1].)  
  
\- Usability testing with people with disabilities, run as a standard
usability test but with allowances for different technologies etc. Tends to
find the whole range of usability & accessibility issues, but coverage across
a whole website/app is difficult.  
  
\- Usability testing with the general public, which, although not accessibility
oriented, will often find an overlap in the issues found.  
  
I would stress that 'manual testing' must be done by experts who have a wide
understanding of accessibility and can balance different concerns, whereas
'usability testing' must not be done with people who test for a living. If
they are experts in the domain, the technology or accessibility, then they are
not typical users.  
  
If something 'requires' multiple testers then we need to (try to) write the
guideline or guidance better. (Is that the question?)  
  
Usability is about the optimisation of an interface or experience, rather than
barriers in the interface. I came from a Psychology & HCI background and
started work as a Usability Consultant; I've done thousands of test sessions,
but it is quite a different thing from testing accessibility...  
  
I hope that helps, but I have a feeling there is a question behind the
question!  
  
-Alastair  
  
[1] [https://alastairc.ac/2006/07/expert-usability-participants/](https://alastairc.ac/2006/07/expert-usability-participants/)

>

>  

>

>  

>

>  
>

> \--

>

> **Wilco Fiers**

>

> Senior Accessibility Engineer - Co-facilitator WCAG-ACT - Chair Auto-WCAG

Received on Monday, 30 January 2017 16:10:43 UTC