W3C home > Mailing lists > Public > public-wai-evaltf@w3.org > January 2013

RE: Aim and impact of random sampling

From: Boland Jr, Frederick E. <frederick.boland@nist.gov>
Date: Thu, 24 Jan 2013 11:34:04 -0500
To: Detlev Fischer <fischer@dias.de>, EVAL TF <public-wai-evaltf@w3.org>
Message-ID: <D7A0423E5E193F40BE6E94126930C4930BF2467FC3@MBCLUSTER.xchange.nist.gov>
I am seeking permission from the author to send out the document I mentioned in the telecon today.  The document develops a statistically-based criterion for passing and a general test method.  The statistics person also said it may be important to: (1) "scope down the problem" into "subproblems" with similar characteristics/attributes (maybe in our case certain "types" of web pages or web states), (2) write statistically-based assertions/requirements for each subproblem such that statistical methods (for example, sampling) may be used against each to assess, and (3) then combine these subproblems in some way to address the larger issue.  Just a thought...
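As a rough illustration of steps (1)-(3), here is a minimal stratified-sampling sketch; the page "types" and counts are invented for illustration, not taken from the document:

```python
import random

# Step 1 (illustrative): partition pages into "subproblems" (strata)
# by page type. The types and page lists here are made up.
pages_by_type = {
    "article": [f"article-{i}" for i in range(200)],
    "form": [f"form-{i}" for i in range(50)],
    "search": [f"search-{i}" for i in range(30)],
}

def stratified_sample(strata, per_stratum=5, seed=1):
    """Step 2: draw a random sample independently within each stratum."""
    rng = random.Random(seed)
    return {name: rng.sample(pages, min(per_stratum, len(pages)))
            for name, pages in strata.items()}

sample = stratified_sample(pages_by_type)

# Step 3: combine per-stratum results, weighting by stratum size,
# so the overall assessment reflects the whole population of pages.
total = sum(len(pages) for pages in pages_by_type.values())
weights = {name: len(pages) / total for name, pages in pages_by_type.items()}
```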
Thanks and best wishes
Tim Boland NIST

-----Original Message-----
From: Detlev Fischer [mailto:fischer@dias.de] 
Sent: Thursday, January 24, 2013 11:23 AM
Subject: Aim and impact of random sampling

The assumption has been that an additional random sample will make sure that a tester's initial sampling of pages has not left out pages that may expose problems not present in the initial sample.

That aim in itself is laudable, but for this to work, the sampling would need to

1. Be independent of individual tester choices (i.e., automatic) -
    which would require a definition, inside the methodology, of a
    valid approach for truly random sampling. No one has even hinted at
    a reliable way to do that - I believe there is none.
    A mere calculation of sample size for a desired level of confidence
    would need to be based on the total number of a site's pages *and*
    page states - a number that will usually be unknown.

2. Fairly represent not just pages, but also page states.
    But crawling a site to derive a collection of URLs for
    random sampling is not doable, since many states (and their URLs or
    DOM states) only come about as a result of human input.
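The dependence of sample size on the total population can be made concrete with the standard formula for estimating a proportion, including a finite population correction; the confidence level, margin of error and p used below are illustrative assumptions, not values from the thread:

```python
import math

def sample_size(N, z=1.96, margin=0.1, p=0.5):
    """Sample size for estimating a proportion at a given confidence
    (z=1.96 is ~95%), with a finite population correction.
    N is the total number of pages *and* states - the quantity
    that is usually unknown for a real site."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2
    return math.ceil(n0 / (1 + (n0 - 1) / N))

# The required sample depends directly on N:
print(sample_size(100), sample_size(1000), sample_size(100000))
```

Without a credible value for N, the formula cannot be applied, which is the practical obstacle described above.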

I hope I am not coming across as a pest if I say again that in my opinion, we are shooting ourselves in the foot if we make random sampling a mandatory part of the WCAG-EM. Academics will be happy, practitioners working to a budget will just stay away from it.


Detlev Fischer PhD
DIAS GmbH - Daten, Informationssysteme und Analysen im Sozialen
Management: Thomas Lilienthal, Michael Zapp

Phone: +49-40-43 18 75-25
Mobile: +49-157 7-170 73 84
Fax: +49-40-43 18 75-19
E-Mail: fischer@dias.de

Address: Schulterblatt 36, D-20357 Hamburg
Amtsgericht Hamburg HRB 58 167
Received on Thursday, 24 January 2013 16:34:30 UTC
