Re: Automated and manual testing process

I will speak from where we were in WCAG 2.0.

Manual testing is testing by people who know the technology and the guidelines: expert testers. It is not user testing. In order to be "testable" or "objective" (our criteria for making it into WCAG 2), a requirement had to be something on which most knowledgeable testers skilled in the art would agree on the outcome, meaning 80% or more would agree. We strove for 95% or greater, but allowed for ... well ... sticklers.


User testing is a whole other thing, and although we GREATLY encourage user testing of any website, we did not require it for conformance.


In WCAG 2.0 we required alt text, but did not require that it be GOOD alt text, because we quickly found there was no definition of good alt text that could get 80% or better consistent judgement across ALL alt text samples. Easy for the very good and the very bad, but when you get into the middle, it got into a muddle. It was easy to find samples where we didn't get 80%, so a "good alt text" requirement failed our test that even the WORST CASE had 80% agreement.
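To make that split concrete, here is a rough sketch (my illustration, not anything from WCAG itself, assuming Python with BeautifulSoup; the function name images_missing_alt is just made up for the example) of the part a tool can do on its own: checking that an alt attribute exists at all. Judging whether the text is GOOD alt text is the part that still needs a knowledgeable human.

# Rough sketch: the automatable half of the alt-text check.
# A tool can detect a missing alt attribute; the quality of the
# text it contains still needs human judgement.
from bs4 import BeautifulSoup

def images_missing_alt(html: str) -> list:
    """Return the img tags that have no alt attribute at all."""
    soup = BeautifulSoup(html, "html.parser")
    return [img for img in soup.find_all("img") if not img.has_attr("alt")]

sample = '<img src="logo.png" alt="Company logo"><img src="chart.png">'
for img in images_missing_alt(sample):
    print("Missing alt:", img.get("src"))

Everything past that point (is "Company logo" the right description? is the chart decorative?) is where the expert tester comes in.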



Gregg





Gregg C Vanderheiden
greggvan@umd.edu



> On Jan 28, 2017, at 5:36 PM, Andrew Kirkpatrick <akirkpat@adobe.com> wrote:
> 
> AGWGer’s,
> I’d like to get the thoughts from the group on what constitutes “manual testing” (I’m more comfortable with what counts as automated testing).
> 
> Testing the presence of alternative text on an image in HTML or other formats can be done with automated testing, but testing for the presence of good alternative text requires (at least for now) human involvement in the test process (manual testing).
> 
> What if testing cannot be done by a single person and requires user testing – does that count as manual testing, or is that something different?
> 
> Thanks,
> AWK
> 
> Andrew Kirkpatrick
> Group Product Manager, Standards and Accessibility
> Adobe 
> 
> akirkpat@adobe.com
> http://twitter.com/awkawk

Received on Sunday, 29 January 2017 04:34:09 UTC