- From: Charles McCathieNevile <charles@sidar.org>
- Date: Fri, 18 Feb 2005 13:41:07 +0100
- To: "Myriam Arrue" <myriam@si.ehu.es>, public-wai-ert@w3.org
Aupa Myriam! Hi folks
In fact Fundacion Sidar is working on just such a use case. We have a tool
(Hera) which uses the results of a number of different evaluations: some
automatic, done by the tool itself; some manual, done by a person; and some
automatic but done by an external tool.
So a basic use case is to combine these results.
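Since EARL reports are RDF, combining them can amount to merging the RDF
graphs. A minimal sketch using Python's rdflib (the filenames are
hypothetical; any RDF serialization of the reports would do):

  from rdflib import Graph

  # Merge several EARL reports (serialized as RDF/XML) into one graph.
  combined = Graph()
  for report in ["hera-auto.rdf", "manual-review.rdf", "external-tool.rdf"]:
      combined.parse(report, format="xml")

  # Write the combined evaluation report back out.
  combined.serialize(destination="combined-report.rdf", format="xml")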
To help us develop the tools, we also compare results, to see whether a new
tester or automated test gives the same results as a tester we trust.
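That comparison can be expressed as a query over the merged graph. A sketch
in SPARQL via rdflib, finding assertions where two assertors disagree on the
same test and subject (the earl: namespace and property names follow the
EARL schema; treat them as assumptions against whatever draft a given tool
actually emits):

  from rdflib import Graph

  g = Graph()
  g.parse("combined-report.rdf", format="xml")

  # Pairs of assertions on the same subject and test with different
  # outcomes. Each disagreeing pair appears twice, once in each order.
  q = """
  PREFIX earl: <http://www.w3.org/ns/earl#>
  SELECT ?subject ?test ?assertor1 ?assertor2 ?outcome1 ?outcome2
  WHERE {
    ?a1 a earl:Assertion ; earl:assertedBy ?assertor1 ;
        earl:subject ?subject ; earl:test ?test ;
        earl:result [ earl:outcome ?outcome1 ] .
    ?a2 a earl:Assertion ; earl:assertedBy ?assertor2 ;
        earl:subject ?subject ; earl:test ?test ;
        earl:result [ earl:outcome ?outcome2 ] .
    FILTER (?assertor1 != ?assertor2 && ?outcome1 != ?outcome2)
  }
  """
  for row in g.query(q):
      print(row)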
Cheers
Chaals
On Fri, 18 Feb 2005 12:33:28 +0100, Myriam Arrue <myriam@si.ehu.es> wrote:
> Hi everybody!
> I'm Myriam Arrue from the Laboratory of Human-Computer Interaction at
> the University of the Basque Country. I'd like to start the discussion
> about the scenarios of EARL.
> One of the main objectives of EARL is to combine different tools'
> evaluation results in order to compare them. Another important feature
> of EARL is that it can be used for exchanging data between tools.
> In my opinion, these two objectives can be integrated in a scenario that
> clearly describes to evaluation tool developers the advantages of using
> EARL as the error reporting format.
> A scenario where a tool or software application invokes different
> evaluation tools and combines the results, obtained in EARL, into one
> complete evaluation report could be useful for this purpose. This
> scenario would highlight the need for interaction between evaluation
> tools, and also the importance (as described in the Evaluation Suite,
> http://www.w3.org/WAI/eval/) of evaluating web content with at least
> two different evaluation tools.
> Waiting for your opinions,
> Myriam
--
Charles McCathieNevile - Vice Presidente - Fundacion Sidar
charles@sidar.org http://www.sidar.org
(chaals is available for consulting at the moment)
Received on Friday, 18 February 2005 12:49:39 UTC