
RE: Aim and impact of random sampling

From: Vivienne CONWAY <v.conway@ecu.edu.au>
Date: Fri, 25 Jan 2013 16:20:38 +0800
To: "ryladog@earthlink.net" <ryladog@earthlink.net>, "'Velleman, Eric'" <evelleman@bartimeus.nl>, 'Aurélien Levy' <aurelien.levy@temesis.com>, "public-wai-evaltf@w3.org" <public-wai-evaltf@w3.org>
Message-ID: <8AFA77741B11DB47B24131F1E38227A9FB77CA88A6@XCHG-MS1.ads.ecu.edu.au>
Hi all

Sorry to have missed the teleconference. I've been doing lots of reading and find the discussion on random page sampling very interesting.

One of the recent large evaluations we did included randomly sampled pages.  We went by the WCAG group's suggestion that a percentage, say 25% of the sample size, should be randomly generated.  We agreed with the client to thoroughly test 30 pages and decided that 6 of these would be random.  We then ran a crawler over the site to find the pages, exported the results into Excel, added a column for 'random', and took the first six pages that weren't already in the selected sample.  As this site is to be tested again in six months, the client was told the random portion would be changed, to verify that they had fixed the system-wide issues rather than just the ones in the sample.
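The same selection step we did in Excel could equally be scripted. A minimal Python sketch, assuming the crawler export and the hand-picked sample are simple lists of URLs (all URLs and counts below are illustrative, not the client's data):

```python
import random

# Illustrative stand-ins for the crawler export and the hand-picked
# structured sample; in the real evaluation these came from Excel.
crawled_pages = [f"https://example.com/page{i}" for i in range(1, 101)]
structured_sample = crawled_pages[:24]   # 24 hand-picked pages

N_RANDOM = 6   # the random portion agreed with the client (6 of 30)

# Exclude pages already in the structured sample, then draw at random.
# Re-running this before the six-month re-test yields a fresh portion.
candidates = [p for p in crawled_pages if p not in structured_sample]
random_portion = random.sample(candidates, N_RANDOM)
full_sample = structured_sample + random_portion
```

Re-drawing `random_portion` at each re-test is what keeps the client from fixing only the pages they know will be checked.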

I think this agrees with many of the comments I've seen on the list.  It keeps the client aware of the need to fix all errors, not just those on the sampled pages, and serves as a verification of the results.  For example, if the tester purposely tested only the most obviously inaccessible or difficult pages, the website owner might feel justified in complaining.  However, I think this is a moot point: if there is an error anywhere, it should be fixed.

I support a recommendation to include a random selection of pages as a percentage of the total sample size.  I'd be happy with 10-25% of the sample size.

Hope this all makes sense.


Vivienne L. Conway, B.IT(Hons), MACS CT, AALIA(cs)
PhD Candidate & Sessional Lecturer, Edith Cowan University, Perth, W.A.
Director, Web Key IT Pty Ltd.
Mob: 0415 383 673

This email is confidential and intended only for the use of the individual or entity named above. If you are not the intended recipient, you are notified that any dissemination, distribution or copying of this email is strictly prohibited. If you have received this email in error, please notify me immediately by return email or telephone and destroy the original message.
From: Katie Haritos-Shea EARTHLINK [ryladog@earthlink.net]
Sent: Friday, 25 January 2013 7:02 AM
To: 'Velleman, Eric'; 'Aurélien Levy'; public-wai-evaltf@w3.org
Subject: RE: Aim and impact of random sampling

And, this might also make the WCAG 2 working group a bit more

-----Original Message-----
From: Velleman, Eric [mailto:evelleman@bartimeus.nl]
Sent: Thursday, January 24, 2013 1:40 PM
To: Aurélien Levy; public-wai-evaltf@w3.org
Subject: RE: Aim and impact of random sampling

Hi all,

In my opinion there was another good argument in the call that we should
consider: A random sample (even if it is small) can act as a simple sort of
verification indicator of the results found with the structured sample. In
that case, a few web pages would then be sufficient and add to the
reasonable confidence of the results of the evaluation. Not sure if this
needs to be optional or very academic.
Kindest regards,


From: Aurélien Levy [aurelien.levy@temesis.com]
Sent: Thursday, 24 January 2013 17:36
To: public-wai-evaltf@w3.org
Subject: Re: Aim and impact of random sampling

+1, that's the sense of the comment I made on the survey. I think this needs
to be an option.

> The assumption has been that an additional random sample will make
> sure that a tester's initial sampling of pages has not left out pages
> that may expose problems not present in the initial sample.
> That aim in itself is laudable, but for this to work, the sampling
> would need to be
> 1. independent of individual tester choices (i.e., automatic) -
>    which would need a definition, inside the methodology, of a
>    valid approach for truly random sampling. No one has even hinted at
>    a reliable way to do that - I believe there is none.
>    A mere calculation of sample size for a desired level of confidence
>    would need to be based on the total number of a site's pages *and*
>    page states - a number that will usually be unknown.
> 2. Fairly represent not just pages, but also page states.
>    But crawling a site to derive a collection of URLs for
>    random sampling is not doable, since many states (and their URLs or
>    DOM states) only come about as a result of human input.
> I hope I am not coming across as a pest if I say again that in my
> opinion, we are shooting ourselves in the foot if we make random
> sampling a mandatory part of the WCAG-EM. Academics will be happy,
> practitioners working to a budget will just stay away from it.
> Detlev
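The sample-size calculation Detlev's point 1 alludes to can be sketched with the standard finite-population formula; the confidence level, margin of error, and populations below are illustrative assumptions, not part of WCAG-EM:

```python
import math

def sample_size(population, z=1.96, margin=0.10, p=0.5):
    """Classic finite-population sample-size estimate (z = 1.96 gives
    ~95% confidence; margin is the tolerated error). The result depends
    directly on the total population of pages and page states, which is
    usually unknown for a website - which is exactly the objection above.
    """
    n0 = (z ** 2) * p * (1 - p) / (margin ** 2)           # infinite-population size
    return math.ceil(n0 / (1 + (n0 - 1) / population))    # finite-population correction

# Even a modest confidence target implies far more random pages than the
# handful a typical evaluation budget allows.
```

The point the numbers make: for a site of 10,000 pages the formula asks for roughly a hundred random pages at these settings, so a random portion of half a dozen pages can only serve as a sanity check, not a statistical guarantee.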



Received on Friday, 25 January 2013 08:21:16 UTC

This archive was generated by hypermail 2.3.1 : Tuesday, 6 January 2015 20:40:23 UTC