
Re: Aim and impact of random sampling

From: Aurélien Levy <aurelien.levy@temesis.com>
Date: Thu, 24 Jan 2013 11:36:04 -0500
Message-ID: <510162F4.90301@temesis.com>
To: public-wai-evaltf@w3.org
+1, that is the sense of the comment I made in the survey. I think this 
needs to be an option.

Aurélien
> The assumption has been that an additional random sample will make 
> sure that a tester's initial sampling of pages has not left out pages 
> that may expose problems not present in the initial sample.
>
> That aim in itself is laudable, but for this to work, the sampling 
> would need to:
>
> 1. Be independent of individual tester choices (i.e., automatic) -
>    which would need a definition, inside the methodology, of a
>    valid approach for truly random sampling. No one has even hinted at
>    a reliable way to do that - I believe there is none.
>    A mere calculation of sample size for a desired level of confidence
>    would need to be based on the total number of a site's pages *and*
>    page states - a number that will usually be unknown (see the
>    sketch after this list).
>
> 2. Fairly represent not just pages, but also page states.
>    But crawling a site to derive a collection of URLs for
>    random sampling is not doable, since many states (and their URLs or
>    DOM states) only come about as a result of human input.
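>
> To make the sample-size point concrete, here is a minimal sketch
> (Python, purely illustrative; the 95% confidence z-score and 5% margin
> of error are assumed values, not anything specified in WCAG-EM) of a
> standard finite-population sample-size calculation. It only works if
> you can supply the total number of pages and page states - the very
> figure that is usually unknown:
>
>    import math
>
>    def required_sample_size(population_size,
>                             z=1.96,        # z-score for 95% confidence (assumed)
>                             margin=0.05,   # 5% margin of error (assumed)
>                             proportion=0.5):
>        """Cochran's formula with finite-population correction.
>
>        population_size must cover all pages *and* page states,
>        which is exactly what is usually unknown for a real site.
>        """
>        n0 = (z ** 2) * proportion * (1 - proportion) / (margin ** 2)
>        n = n0 / (1 + (n0 - 1) / population_size)
>        return math.ceil(n)
>
>    # Even a rough guess of 10,000 pages/states calls for about 370
>    # sampled pages - far beyond a typical manual audit budget.
>    print(required_sample_size(10000))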
>
> I hope I am not coming across as a pest if I say again that in my 
> opinion, we are shooting ourselves in the foot if we make random 
> sampling a mandatory part of WCAG-EM. Academics will be happy; 
> practitioners working to a budget will just stay away from it.
>
> Detlev
>
Received on Thursday, 24 January 2013 16:36:28 GMT
