Re: Aim and impact of random sampling

It is important to ensure that clients make their entire site accessible because they do not know exactly which pages will be tested. But setting up the rule (once proposed by Léonie, I believe) that in any re-test after remedial action, some pages are replaced by other pages would do the same trick. No need for randomness here.

For all cases of testing where we will not find 100% conformance (the overwhelming majority of sites, in our experience), having extra random pages as a verification exercise wouldn't make much difference: they would usually just reveal further instances of success criteria that are already failed elsewhere in the sample. The verification aim Eric alluded to in his mail would mainly apply to those rare sites that are picture-perfect paragons of full compliance.

On 24 Jan 2013, at 20:55, Ramón Corominas wrote:

> Although I did not use the words "optional/mandatory", I also commented in the survey that some Euracert partners will probably dislike the idea of having to include more pages (= more time and resources), since they consider that the initial structured sampling is enough in most cases (that is, no significant change in the results would be obtained).
> 
> We at Technosite include the "random" part just because the website is evaluated over time, and thus we make clear to the clients that the sample will not always be the same, and therefore they will have to apply the accessibility criteria to the whole website. However, I agree that our "method" to select random pages is certainly not very scientific.
> 
> In any case, I assume that the "filter the sample" step should be enough to eliminate the problem of time/resources. However,
> 
> My vote: it should be an optional step.
> 
> Regards,
> Ramón.
> 
> Aurélien wrote:
> 
>> +1 - that is the sense of the comment I made on the survey. I think this needs to be an option.
>> 
>> Detlev wrote:
>> 
>>> The assumption has been that an additional random sample will make sure that a tester's initial sampling of pages has not left out pages that may expose problems not present in the initial sample.
>>> 
>>> That aim in itself is laudable, but for this to work, the sampling would need to:
>>> 
>>> 1. Be independent of individual tester choices (i.e., automatic),
>>>   which would require a definition, inside the methodology, of a
>>>   valid approach for truly random sampling. No one has even hinted at
>>>   a reliable way to do that - I believe there is none.
>>>   A mere calculation of sample size for a desired level of confidence
>>>   would need to be based on the total number of a site's pages *and*
>>>   page states - a number that will usually be unknown.
>>> 
>>> 2. Fairly represent not just pages, but also page states.
>>>   But crawling a site to derive a collection of URLs for
>>>   random sampling is not doable, since many states (and their URLs or
>>>   DOM states) only come about as a result of human input.
>>> 
>>> I hope I am not coming across as a pest if I say again that in my opinion, we are shooting ourselves in the foot if we make random sampling a mandatory part of the WCAG-EM. Academics will be happy; practitioners working to a budget will just stay away from it.
> 
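
Incidentally, on the sample-size point in (1) above, here is a minimal sketch of what such a calculation would involve - the standard Cochran formula with a finite-population correction. The population figure (total pages plus page states) is exactly the number that is usually unknown, and the 2,000 used below is purely hypothetical:

    import math

    def sample_size(population, z=1.96, margin=0.05, p=0.5):
        """Cochran's sample size with finite-population correction.

        population : total number of pages *and* page states (usually unknown)
        z          : z-score for the desired confidence level (1.96 ~ 95%)
        margin     : acceptable margin of error
        p          : assumed proportion of non-conforming pages (0.5 = worst case)
        """
        n0 = (z ** 2) * p * (1 - p) / margin ** 2
        return math.ceil(n0 / (1 + (n0 - 1) / population))

    # Even for a hypothetical site of 2,000 pages/states, a 95% confidence /
    # 5% margin sample comes to ~323 pages - far more than a typical test budget allows.
    print(sample_size(2000))   # -> 323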

-- 
Detlev Fischer
testkreis - das Accessibility-Team von feld.wald.wiese
c/o feld.wald.wiese
Thedestraße 2
22767 Hamburg

Tel   +49 (0)40 439 10 68-3
Mobil +49 (0)1577 170 73 84
Fax   +49 (0)40 439 10 68-5

http://www.testkreis.de
Beratung, Tests und Schulungen für barrierefreie Websites

Received on Thursday, 24 January 2013 20:23:40 UTC