Re: Automated and manual testing process

As much as we promote automated testing processes and tools, I think we cannot ignore manual testing. 

Manual testing can be done by expert testers, and those experts might themselves be primarily assistive technology users. I wouldn’t want to put the emphasis on screen reader tests alone.

I agree with Gregg that user testing is a whole other thing, but we need to ensure that manual testing approaches are not ignored. I have yet to find a single tool that gets it all right; often it is the manual testing that normalizes the findings.

Thanks & Regards

Shilpi Kapoor | Managing Director

BarrierBreak

From: Gregg C Vanderheiden <greggvan@umd.edu>
Date: Sunday, 29 January 2017 at 10:03 AM
To: Andrew Kirkpatrick <akirkpat@adobe.com>
Cc: GLWAI Guidelines WG org <w3c-wai-gl@w3.org>
Subject: Re: Automated and manual testing process
Resent-From: <w3c-wai-gl@w3.org>
Resent-Date: Sun, 29 Jan 2017 04:34:15 +0000

I will speak from where we were in WCAG 2.0.

Manual testing is testing by people who know the technology and the guidelines — expert testers. It is not user testing. In order to be “testable” or “objective” (our criteria for making it into WCAG 2), a requirement had to be something where most knowledgeable testers skilled in the art would agree on the outcome — 80% or more would all agree. We strove for 95% or greater, but allowed for… well… sticklers.
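
As a rough illustration of that bar, here is a minimal sketch that scores one check by the share of testers matching the majority verdict. The function name and sample verdicts are invented for illustration; this is not anything the working group published:

```python
from collections import Counter

def agreement_rate(verdicts):
    """Share of testers whose verdict matches the majority verdict."""
    if not verdicts:
        raise ValueError("need at least one verdict")
    # most_common(1) -> [(majority_value, majority_count)]
    majority_count = Counter(verdicts).most_common(1)[0][1]
    return majority_count / len(verdicts)

# Hypothetical verdicts from 10 expert testers on one check.
clear_case = ["pass"] * 9 + ["fail"]       # one stickler
middle_case = ["pass"] * 6 + ["fail"] * 4  # a muddled middle case

print(agreement_rate(clear_case))   # 0.9 -- clears the 80% bar
print(agreement_rate(middle_case))  # 0.6 -- fails it
```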

User testing is a whole other thing — and although we GREATLY encourage user testing of any website — we did not require it for conformance.

In WCAG 2.0 we required alt text — but did not require that it be GOOD alt text, because we quickly found that there was no definition of good alt text where we could get 80% or better consistent judgement across ALL alt text samples. Easy for the very good and the very bad. But when you get in the middle — it got in a muddle. It was easy to find samples where we didn’t get 80%, so “good alt text” failed our test that in the WORST CASE at least 80% agreed.

Gregg

Gregg C Vanderheiden

greggvan@umd.edu

On Jan 28, 2017, at 5:36 PM, Andrew Kirkpatrick <akirkpat@adobe.com> wrote:

AGWGer’s,

I’d like to get the group’s thoughts on what constitutes “manual testing” (I’m more comfortable with what counts as automated testing).

Testing the presence of alternative text on an image in HTML or other formats can be done with automated testing, but testing for the presence of good alternative text requires (at least for now) human involvement in the test process (manual testing).
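
The automatable half of that split can be shown concretely. Here is a minimal sketch using Python’s standard-library HTML parser; the class name and sample markup are mine, not a reference to any particular checker:

```python
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    """Flags <img> tags with no alt attribute at all.

    An empty alt="" (a decorative image) counts as present here;
    judging whether a present alt value is any good is the part
    that still needs a human.
    """
    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for the start tag
        if tag == "img" and "alt" not in dict(attrs):
            self.missing.append(self.getpos())  # (line, offset)

checker = MissingAltChecker()
checker.feed('<p><img src="a.png" alt="logo"><img src="b.png"></p>')
print(checker.missing)  # one entry: the second <img> lacks alt
```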

What if testing cannot be done by a single person and requires user testing – does that count as manual testing, or is that something different?

Thanks,

AWK

Andrew Kirkpatrick

Group Product Manager, Standards and Accessibility

Adobe 

akirkpat@adobe.com

http://twitter.com/awkawk

Received on Sunday, 29 January 2017 11:15:07 UTC