Re: Seeking advice on results from automated web evaluation tools

I'd like to add that good automation tools also have provisions for leveling the playing field, and because those settings can be configured differently, results will vary from tool to tool.
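If you want to put a number on the disagreement for your paper, a standard inter-rater statistic such as Cohen's kappa can be applied to per-check verdicts. Here is a minimal sketch in Python, using hypothetical pass/fail verdicts rather than output from any real tool:

from collections import Counter

# Hypothetical verdicts keyed by WCAG 2.0 success criterion
# (illustrative only -- not the output format of any actual tool)
tool_a = {"1.1.1": "fail", "1.4.3": "pass", "2.4.4": "fail", "3.1.1": "pass"}
tool_b = {"1.1.1": "fail", "1.4.3": "fail", "2.4.4": "fail", "3.1.1": "pass"}

# Compare only the criteria both tools actually checked
criteria = sorted(tool_a.keys() & tool_b.keys())
pairs = [(tool_a[c], tool_b[c]) for c in criteria]
n = len(pairs)

# Observed agreement: fraction of criteria where the verdicts match
p_o = sum(a == b for a, b in pairs) / n

# Chance agreement, from each tool's marginal verdict frequencies
count_a = Counter(a for a, _ in pairs)
count_b = Counter(b for _, b in pairs)
p_e = sum(count_a[v] * count_b[v] for v in ("pass", "fail")) / (n * n)

kappa = (p_o - p_e) / (1 - p_e) if p_e < 1 else 1.0
print(f"observed agreement = {p_o:.2f}, Cohen's kappa = {kappa:.2f}")

A kappa near 1 means the tools agree well beyond chance; a value near 0 means their agreement is about what you would expect at random, which is the pattern you often see when two tools implement different subsets of the checks.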

On Oct 4, 2010, at 6:12 AM, Charles McCathieNevile wrote:

On Mon, 04 Oct 2010 10:14:36 +0200, Salinee Kuakiatwong <salinee20@gmail.com> wrote:

> Dear All,
> 
> I'm writing a research paper to investigate the inter-reliability of
> automated evaluation tools. I used two automated web evaluation tools to
> scan the same web pages. The findings indicate there are significant
> discrepancies in the results between the two tools, although both are based
> on the same standard (WCAG 2.0).
> 
> I'm new to the field. Any explanation for such a case?

Yes. Automated evaluation is pretty limited - each tool will use its own set of algorithms and heuristics, and therefore will probably not even test exactly the same things, let alone get the same results. You should do a manual evaluation yourself as part of the research paper, which will give you more insight into the particular issues that have arisen with the two automatic evaluations.

cheers

Chaals

-- 
Charles McCathieNevile  Opera Software, Standards Group
   je parle français -- hablo español -- jeg lærer norsk
http://my.opera.com/chaals       Try Opera: http://www.opera.com


-- 
Jonnie Appleseed
with his
Hands-On Technolog(eye)s
reducing technology's disabilities
one byte at a time

Received on Monday, 4 October 2010 10:41:21 UTC