
Re: Seeking advice on results from automated web evaluation tools

From: David Poehlman <poehlman1@comcast.net>
Date: Mon, 4 Oct 2010 06:40:49 -0400
Cc: w3c-wai-ig@w3.org, "Salinee Kuakiatwong" <salinee20@gmail.com>
Message-Id: <92240C20-4C0E-4A00-AA2F-4037884BFA64@comcast.net>
To: "Charles McCathieNevile" <chaals@opera.com>
I'd like to add that good automation tools also have provisions for leveling the playing field, and may be configured differently, producing different results from tool to tool.
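
To illustrate that point, here is a minimal sketch in Python (the check and its "strict" option are invented for this example, not taken from any particular tool): the same check, run under two configurations, reports different outcomes for the same measurement.

def check_contrast(ratio, strict=False):
    # WCAG 2.0 SC 1.4.3 (level AA) asks for a 4.5:1 contrast
    # ratio for normal text; SC 1.4.6 (level AAA) asks for 7:1.
    # Which threshold a tool applies is a configuration choice.
    threshold = 7.0 if strict else 4.5
    return ratio >= threshold

print(check_contrast(5.2))               # True: passes at the AA threshold
print(check_contrast(5.2, strict=True))  # False: fails at the AAA threshold

So even one tool can disagree with itself across settings; two tools with different defaults will disagree all the more.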

On Oct 4, 2010, at 6:12 AM, Charles McCathieNevile wrote:

On Mon, 04 Oct 2010 10:14:36 +0200, Salinee Kuakiatwong <salinee20@gmail.com> wrote:

> Dear All,
> I'm writing a research paper to investigate the inter-rater reliability of
> automated evaluation tools. I used two automated web evaluation tools to
> scan the same web pages. The findings indicate that there are large
> discrepancies in the results between the two tools, although both are
> based on the same standard (WCAG 2.0).
> I'm new to the field. Any explanation for such a case?

Yes. Automated evaluation is pretty limited: each tool will use its own set of algorithms and heuristics, and therefore will probably not even test exactly the same things, let alone get the same results. You should do a manual evaluation yourself as part of the research paper, which will give you more insight into the particular issues that have arisen with the two automatic evaluations.
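
To make that concrete, here is a hedged sketch in Python (both checkers are invented for illustration, not any real tool's logic) of two heuristics for the same WCAG 2.0 success criterion, 1.1.1 (text alternatives), disagreeing on identical markup:

from html.parser import HTMLParser

class ImgCollector(HTMLParser):
    # Collect the attributes of every <img> tag on a page.
    def __init__(self):
        super().__init__()
        self.imgs = []
    def handle_starttag(self, tag, attrs):
        if tag == "img":
            self.imgs.append(dict(attrs))

def tool_a_failures(imgs):
    # Hypothetical Tool A: flag only images with no alt attribute at all.
    return [i for i in imgs if "alt" not in i]

def tool_b_failures(imgs):
    # Hypothetical Tool B: also flag empty alt text and alt text that
    # merely repeats the file name -- a stricter heuristic.
    def suspicious(img):
        alt = img.get("alt")
        if alt is None or alt.strip() == "":
            return True
        return alt.strip().lower() in img.get("src", "").lower()
    return [i for i in imgs if suspicious(i)]

page = '<img src="logo.png" alt=""><img src="chart.png" alt="chart.png">'
collector = ImgCollector()
collector.feed(page)
print(len(tool_a_failures(collector.imgs)))  # 0 -- Tool A reports no failures
print(len(tool_b_failures(collector.imgs)))  # 2 -- Tool B flags both images

Note that neither heuristic is wrong outright: an empty alt is the correct markup for a decorative image, so Tool B's stricter rule trades false negatives for false positives. That is exactly why two tools testing the same standard on the same page can report very different numbers.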



Charles McCathieNevile  Opera Software, Standards Group
   je parle français -- hablo español -- jeg lærer norsk
http://my.opera.com/chaals       Try Opera: http://www.opera.com

Jonnie Appleseed
with his
Hands-On Technolog(eye)s
reducing technology's disabilities
one byte at a time
Received on Monday, 4 October 2010 10:41:21 UTC
