
Re: some comments/questions on techniques instructions document for submitters

From: Denis Boudreau <dboudreau@accessibiliteweb.com>
Date: Sun, 11 Sep 2011 16:17:41 -0400
Cc: Eval TF <public-wai-evaltf@w3.org>
Message-id: <9F55DD88-85CD-4A14-8995-EF1B9120BCA1@accessibiliteweb.com>
To: WCAG WG <w3c-wai-gl@w3.org>
Hi Gregg,

Sorry for the (very) late response,

While this thread has kept going on the EvalTF mailing list, I thought I'd answer some of your questions here as well for discussion's sake.

On 2011-08-20, at 10:22 AM, Gregg Vanderheiden wrote:

> The sampling approach is also a good idea.   If you find nothing -- then one could sample more if one wanted to be thorough.  But that is usually enough to find any systematic errors or issues.  

This is also my belief, yes.

> When you talk about atomic tests - are these automated or both automated and human?  If so - approx what percent of each or both?

I haven't gone back to count them precisely, but I would estimate around 27-30% are automated. The rest are manual tests: most are done with tools like browser extensions or toolbars, but some are done simply by going through the source code or by using a screen reader.
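To give a flavour of what an automated atomic test looks like, here is a minimal sketch (not one of our actual tools) of a check for `<img>` elements missing an `alt` attribute, the kind of binary check that tooling handles well while judging the *quality* of the alt text remains a manual test:

```python
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    """Illustrative atomic check: flag <img> elements with no alt
    attribute at all (relevant to WCAG SC 1.1.1). Deciding whether an
    existing alt text is appropriate still requires a human."""
    def __init__(self):
        super().__init__()
        self.failures = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "img" and "alt" not in a:
            self.failures.append(a.get("src", "(no src)"))

checker = MissingAltChecker()
checker.feed('<p><img src="logo.png" alt="Company logo">'
             '<img src="decoration.png"></p>')
print(checker.failures)  # ['decoration.png']
```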

> Also, when the web site doesn’t do any of the techniques listed in WCAG - what do you do?  

When an SC is irrelevant (because it does not apply to the audited page), we simply do not take it into account. Our notes would say "pass", "fail" or "n/a" (not applicable).

So on some pages, we will apply WCAG in full, while on others, we might apply only some of the SC. Pages without video or audio content would not be measured against the 1.2 success criteria (time-based media), for instance.
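As a rough illustration of that bookkeeping (the structure and names here are hypothetical, not our actual reporting format), per-page results can be recorded per SC and "n/a" entries excluded from the tally:

```python
# Hypothetical per-page audit record: each SC is marked
# "pass", "fail" or "n/a" (SC does not apply to this page).
page_results = {
    "1.1.1": "fail",
    "1.2.1": "n/a",   # no time-based media on this page
    "1.3.1": "pass",
}

def summarize(results):
    """Count outcomes, ignoring SC that do not apply to the page."""
    applicable = {sc: r for sc, r in results.items() if r != "n/a"}
    passed = sum(1 for r in applicable.values() if r == "pass")
    return passed, len(applicable)

passed, total = summarize(page_results)
print(f"{passed}/{total} applicable SC passed")  # 1/2 applicable SC passed
```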

> Finally, how do you detect information that is presented only visually by page layout?     And then how would you associate that with programmatically determined text?

A combination of screen reader testing and CSS disabling usually does the trick.
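The CSS-disabling part is usually done in the browser, but the idea can be sketched in a few lines: strip the stylesheets out of a page so only the source order and markup remain, then check whether the information conveyed by layout survives. This is an illustrative sketch, not a tool we use:

```python
from html.parser import HTMLParser

class CSSStripper(HTMLParser):
    """Re-emit HTML with <style> blocks and stylesheet <link>s removed,
    so the page can be reviewed in source/reading order without layout."""
    def __init__(self):
        super().__init__()
        self.out = []
        self.in_style = False

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "style":
            self.in_style = True
            return
        if tag == "link" and (a.get("rel") or "").lower() == "stylesheet":
            return  # drop external stylesheets
        attr_str = "".join(f' {k}="{v}"' if v is not None else f" {k}"
                           for k, v in attrs)
        self.out.append(f"<{tag}{attr_str}>")

    def handle_endtag(self, tag):
        if tag == "style":
            self.in_style = False
            return
        self.out.append(f"</{tag}>")

    def handle_data(self, data):
        if not self.in_style:  # skip CSS rule text
            self.out.append(data)

s = CSSStripper()
s.feed('<head><style>p{color:red}</style></head><body><p>Hello</p></body>')
print("".join(s.out))  # <head></head><body><p>Hello</p></body>
```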

> Does the "just 12 pages" approach allow you to use humans - so that does the trick? 

We resort to user testing on most evaluations we run. Sometimes, if we're lucky and the client has the required budget, those tests are conducted by real people with disabilities, rather than by us, who only simulate their usage for testing purposes.

But yes, in all cases, this approach usually does the trick.

> I presume this 12 pages is for a rather modest (hundreds vs hundreds of thousands of web pages) or highly templated web site.   Some companies have dozens or scores of "home" pages - that are all different in format. 

Yes, it's mostly based on the templates we find on any given website (though it can also be constrained by the client's budget, of course).

Most sites we work on have between 4 and 12 templates, which explains my figures. On bigger websites with many more templates, we'd be tempted to evaluate more, but the costs increase accordingly, so we would not necessarily be allowed to do much more.

Best regards,

Received on Sunday, 11 September 2011 20:18:05 UTC
