W3C home > Mailing lists > Public > public-wai-evaltf@w3.org > January 2013

RE: Aim and impact of random sampling

From: Katie Haritos-Shea EARTHLINK <ryladog@earthlink.net>
Date: Thu, 24 Jan 2013 17:02:34 -0500
To: "'Velleman, Eric'" <evelleman@bartimeus.nl>, 'Aurélien Levy' <aurelien.levy@temesis.com>, <public-wai-evaltf@w3.org>
Message-ID: <034a01cdfa7e$85985a30$90c90e90$@earthlink.net>
And, this might also make the WCAG 2 working group a bit more
satisfied...

-----Original Message-----
From: Velleman, Eric [mailto:evelleman@bartimeus.nl] 
Sent: Thursday, January 24, 2013 1:40 PM
To: Aurélien Levy; public-wai-evaltf@w3.org
Subject: RE: Aim and impact of random sampling

Hi all,

In my opinion there was another good argument in the call that we should
consider: A random sample (even if it is small) can act as a simple sort of
verification indicator of the results found with the structured sample. In
that case, a few web pages would then be sufficient and add to the
reasonable confidence of the results of the evaluation. Not sure if this
needs to be optional or very academic.
Kindest regards,

Eric


________________________________________
From: Aurélien Levy [aurelien.levy@temesis.com]
Sent: Thursday, 24 January 2013 17:36
To: public-wai-evaltf@w3.org
Subject: Re: Aim and impact of random sampling

+1, that is the sense of the comment I made on the survey. I think this
needs to be an option.

Aurélien
> The assumption has been that an additional random sample will make 
> sure that a tester's initial sampling of pages has not left out pages 
> that may expose problems not present in the initial sample.
>
> That aim in itself is laudable, but for this to work, the sampling 
> would need to be
>
> 1. Independent of individual tester choices (i.e., automatic) -
>    which would need a definition, inside the methodology, of a
>    valid approach for truly random sampling. No one has even hinted at
>    a reliable way to do that - I believe there is none.
>    A mere calculation of sample size for a desired level of confidence
>    would need to be based on the total number of a site's pages *and*
>    page states - a number that will usually be unknown.
>
> 2. Fairly represent not just pages, but also page states.
>    But crawling a site to derive a collection of URLs for
>    random sampling is not doable since many states (and their URLs or
>    DOM states) only come about as a result of human input.
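>
> [To illustrate the sample-size point in item 1: the standard way to
> compute a sample size for a desired confidence level is Cochran's
> formula with a finite-population correction. This is a minimal sketch,
> not anything defined by WCAG-EM; the function name and defaults are
> assumptions, and note it still requires knowing the total population
> of pages and page states, which is exactly the number that is usually
> unknown.]
>
> ```python
> import math
>
> def sample_size(population: int, z: float = 1.96,
>                 margin: float = 0.05, p: float = 0.5) -> int:
>     """Cochran's sample size with finite-population correction.
>
>     z      - z-score for the confidence level (1.96 for 95%)
>     margin - desired margin of error (0.05 for +/-5%)
>     p      - assumed proportion; 0.5 gives the worst case
>     """
>     n0 = (z ** 2) * p * (1 - p) / margin ** 2
>     n = n0 / (1 + (n0 - 1) / population)
>     return math.ceil(n)
>
> # A site with 10,000 known pages, 95% confidence, +/-5% margin:
> print(sample_size(10_000))  # -> 370
> ```
>
> Even for a modest site this yields hundreds of pages, which supports
> the budget argument below.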
>
> I hope I am not coming across as a pest if I say again that in my 
> opinion, we are shooting ourselves in the foot if we make random 
> sampling a mandatory part of the WCAG-EM. Academics will be happy, 
> practitioners working to a budget will just stay away from it.
>
> Detlev
>





Received on Thursday, 24 January 2013 22:03:03 GMT

This archive was generated by hypermail 2.3.1 : Friday, 8 March 2013 15:52:16 GMT