
Re: Seeking advice on results from automated web evaluation tools

From: Charles McCathieNevile <chaals@opera.com>
Date: Mon, 04 Oct 2010 12:12:25 +0200
To: w3c-wai-ig@w3.org, "Salinee Kuakiatwong" <salinee20@gmail.com>
Message-ID: <op.vj1qmzmbwxe0ny@widsith.local>
On Mon, 04 Oct 2010 10:14:36 +0200, Salinee Kuakiatwong  
<salinee20@gmail.com> wrote:

> Dear All,
> I'm writing a research paper to investigate the inter-rater reliability of
> automated evaluation tools. I used two automated web evaluation tools to
> scan the same web pages. The findings indicate that there are large
> discrepancies in the results between the two tools, although both are
> based on the same standard (WCAG 2.0).
> I'm new to the field. Any explanation for such a case?

Yes. Automated evaluation is pretty limited - each tool will use its own
set of algorithms and heuristics, and therefore will probably not even test
exactly the same things, let alone get the same results. You should do a
manual evaluation yourself as part of the research paper, which will give
you more insight into the particular issues that have arisen with the two
automatic evaluations.
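
To make that concrete, here is a minimal sketch (not the logic of any real
checker) of two toy tools that both encode WCAG 2.0 Success Criterion 1.1.1
(non-text content) but with different heuristics, so they disagree on the
same markup:

    # Minimal sketch: two hypothetical checkers for the same WCAG 2.0
    # criterion, using different heuristics, flag different counts.
    from html.parser import HTMLParser

    class ImgCollector(HTMLParser):
        """Collect the attributes of every <img> tag on the page."""
        def __init__(self):
            super().__init__()
            self.images = []

        def handle_starttag(self, tag, attrs):
            if tag == "img":
                self.images.append(dict(attrs))

    def tool_a_violations(images):
        # Tool A: flag only images with no alt attribute at all.
        return [img for img in images if "alt" not in img]

    def tool_b_violations(images):
        # Tool B: also flag empty alt text and alt text that looks
        # like a filename.
        def suspicious(alt):
            return (alt.strip() == ""
                    or alt.lower().endswith((".jpg", ".png", ".gif")))
        return [img for img in images
                if "alt" not in img or suspicious(img["alt"])]

    page = ('<img src="logo.png" alt="logo.png">'
            '<img src="chart.png" alt="">'
            '<img src="photo.jpg">')
    collector = ImgCollector()
    collector.feed(page)

    print("Tool A flags:", len(tool_a_violations(collector.images)))  # 1
    print("Tool B flags:", len(tool_b_violations(collector.images)))  # 3

Both tools are "testing WCAG 2.0", yet on the same three images one reports
one violation and the other reports three. Real tools differ in exactly this
way, only across hundreds of heuristics.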



Charles McCathieNevile  Opera Software, Standards Group
     je parle français -- hablo español -- jeg lærer norsk
http://my.opera.com/chaals       Try Opera: http://www.opera.com
Received on Monday, 4 October 2010 10:13:34 UTC
