Re: Automated and manual testing process

>What I wanted everyone to be very clear on is that if we have to omit COGA
proposals we can be certain that very real and significant accessibility
barriers will remain ... We may be limited in what we can do for these
users in WCAG 2.1, but we should not be ignorant of the unresolved
accessibility barriers that will remain.

Perhaps the external AT plugin etc. is a way out of the conundrum of
"untestable SCs". I've long felt that what was missing from the cognitive
world was reliable, easy-to-use AT that simplifies, re-presents content,
etc. This would correspond to the AT available for blindness (e.g. JAWS),
the AT for dexterity (e.g. Dragon), etc. If there is software available
that does this personalization and it works on most web pages, then we
could:

1) Create an SC that says something like "don't interfere with the
software", listing the ways it can be interfered with.
2) If there is a reasonable number of attributes that this software uses,
create an SC that says "use these attributes", and list some of the ways
they can be misused as failures (a rough sketch of what a check for this
might look like is below).
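
For illustration only, here is a rough TypeScript sketch of the kind of
check (2) could imply, assuming the AT relied on an attribute like
"data-purpose" (a hypothetical name standing in for whatever attribute set
the software actually used) alongside the existing autocomplete attribute:

    // Flag form controls that carry neither the assumed personalization
    // attribute nor autocomplete, i.e. controls the hypothetical AT could
    // not re-present or simplify for the user.
    const PERSONALIZATION_ATTR = 'data-purpose'; // assumed attribute name

    function findUnidentifiedControls(root: Document): HTMLElement[] {
      const controls = Array.from(
        root.querySelectorAll<HTMLElement>('input, select, textarea')
      );
      return controls.filter(
        el =>
          !el.hasAttribute(PERSONALIZATION_ATTR) &&
          !el.hasAttribute('autocomplete')
      );
    }

    console.log(
      'Controls the AT could not identify:',
      findUnidentifiedControls(document)
    );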

Of course there are other things that authors can do to help people with
cognitive disabilities, and if those things are testable and meet the other
SC requirements (1), then we can create SCs for them. However, until there
is reliable AT for the cognitive community, I think we are all going to be
frustrated that there is not enough that can be done, or required of
authors in a formal (and, in some jurisdictions, legal) way.

(1) https://www.w3.org/WAI/GL/wiki/WCAG_2.1_Success_Criteria

Cheers,
David MacDonald



CanAdapt Solutions Inc.

Tel:  613.235.4902

LinkedIn
<http://www.linkedin.com/in/davidmacdonald100>

twitter.com/davidmacd

GitHub <https://github.com/DavidMacDonald>

www.Can-Adapt.com <http://www.can-adapt.com/>



Adapting the web to all users
Including those with disabilities


On Tue, Jan 31, 2017 at 3:23 AM, lisa.seeman <lisa.seeman@zoho.com> wrote:

> When the first draft of WCAG 2.1 comes out, I doubt anyone will be expert
> in all of it straight away. It will take time for us to learn about new
> disability types and how they use the web, and/or about disabilities and
> new technology.
> We might need to consider ourselves expert in some success criteria and
> not in others - for a while.
>
>
> All the best
>
> Lisa Seeman
>
> LinkedIn <http://il.linkedin.com/in/lisaseeman/>, Twitter
> <https://twitter.com/SeemanLisa>
>
>
>
>
> ---- On Mon, 30 Jan 2017 19:21:45 +0200 Gregg C Vanderheiden
> <greggvan@umd.edu> wrote ----
>
> +1
>
> Expert testing is a much better term. You need to understand what you are
> doing to get reliable test results.
>
> gregg
>
> Gregg C Vanderheiden
> greggvan@umd.edu
>
>
>
> On Jan 30, 2017, at 7:09 AM, Wilco Fiers <wilco.fiers@deque.com> wrote:
>
> Hi everyone,
>
> I don't particularly like the use of the phrase "manual testing". I much
> prefer "expert testing", as it gets rid of this confusion, as well as the
> question: "if I use an accessibility tool, is it still manual testing?". I
> look at it similarly to how Alistair Garrison grouped it, although I would
> label it slightly differently.
>
> *1) Conformance testing:* The goal here is to see if minimal requirements
> are met. This involves expert testing (or manual testing if you prefer),
> and if that expert is in any way concerned about meeting deadlines, she
> will be using accessibility test tools for this (an example of what that
> might look like is sketched below, after point 2).
>
> *2) Usability testing:* The goal here is to see where the best
> opportunities are for improving the user experience.
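>
> For illustration only, here is a minimal TypeScript sketch of the kind of
> tool-assisted pass mentioned under (1), assuming axe-core is available in
> the page under test (the import style is an assumption about the project
> setup; an expert still has to judge everything the tool cannot):
>
>     import axe from 'axe-core';
>
>     // Run the axe-core rule set against the current document and list the
>     // violations it can detect automatically. This only covers the subset
>     // of WCAG that is machine-checkable; the rest needs expert review.
>     axe.run(document).then(results => {
>       for (const violation of results.violations) {
>         console.log(
>           `${violation.id} (${violation.impact ?? 'n/a'}): ` +
>             `${violation.nodes.length} instance(s) - ${violation.helpUrl}`
>         );
>       }
>     });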
>
> Usability testing won't tell you if something meets WCAG, or at least,
> I've never known any usability tests that could do that. It's a very
> different kind of animal, in my opinion, so I definitely have concerns
> about some of the new SCs that are based on user testing.
>
> Wilco
>
> On Mon, Jan 30, 2017 at 1:25 AM, shilpi <shilpi@barrierbreak.com> wrote:
>
> We should specify the criteria to be met but avoid being prescriptive about
> which testing approach is to be adopted, how many users are involved, etc.
> As one can see, numerous organizations take different approaches and yet
> achieve compliance.
>
> Often this is based on the scale of the test required, time, budgets, etc.
>
> The aim is to get more organizations to adopt accessibility.
>
> We should look at how to simplify the approaches.
>
> Regards
> Shilpi
>
>
> -------- Original message --------
> From: Alastair Campbell <acampbell@nomensa.com>
> Date: 1/30/17 02:29 (GMT+05:30)
> To: Andrew Kirkpatrick <akirkpat@adobe.com>, WCAG <w3c-wai-gl@w3.org>
> Subject: Re: Automated and manual testing process
>
> Andrew wrote:
> > What if testing cannot be done by a single person and requires user
> testing – does that count as manual testing, or is that something different?
>
> We use, and I've come across, quite a few variations, so to focus on the
> general ones, I tend to see the main methods as:
>
> - Automated testing, which gives good coverage across pages or can be
> integrated with your development, but can't positively pass a page.
>
> - Manual review/audit, where an expert goes through a sample of pages
> using the guidelines. This can assess 'appropriateness' of things like alt
> text, headings, markup and interactions (e.g. scripted events).
>
> - Panel review, where a group of people with disabilities assess pages
> from their point of view, with the guidelines as reference. (A couple of
> charity-based organisations offer that in the UK, but it is not my
> favoured methodology [1].)
>
> - Usability testing with people with disabilities, run as a standard
> usability test but with allowances for different technologies etc. Tends to
> find the whole range of usability & accessibility issues, but coverage
> across a whole website/app is difficult.
>
> - Usability testing with the general public; although not
> accessibility-oriented, it will often find an overlap in the issues found.
>
> I would stress that 'manual testing' must be done by experts who have a
> wide understanding of accessibility and can balance different concerns,
> whereas 'usability testing' must not be done with people who test for a
> living. If they are expert in the domain, the technology, or accessibility,
> then they are not typical users.
>
> If something 'requires' multiple testers then we need to (try to) write
> the guideline or guidance better. (Is that the question?)
>
> Usability is about the optimisation of an interface or experience, rather
> than about barriers in the interface. I came from a Psychology & HCI
> background and started work as a Usability Consultant; I've done thousands
> of test sessions, but it is quite a different thing from testing
> accessibility...
>
> I hope that helps, but I have a feeling there is a question behind the
> question!
>
> -Alastair
>
> [1] https://alastairc.ac/2006/07/expert-usability-participants/
>
>
>
>
> --
> Wilco Fiers
> Senior Accessibility Engineer - Co-facilitator WCAG-ACT - Chair Auto-WCAG
>

Received on Tuesday, 31 January 2017 10:56:05 UTC