
ERT Action Item: Use Case Scenarios for EARL

From: Johannes Koch <johannes.koch@fit.fraunhofer.de>
Date: Tue, 22 Mar 2005 17:46:37 +0100
Message-ID: <42404BED.5010300@fit.fraunhofer.de>
To: public-wai-ert@w3.org
Hi there,

see attached a first attempt to compile and write out some use case 
scenarios for EARL.

Because most of you don't know me yet, I would like to introduce myself: I'm 
an aerospace engineer now working as a developer of the evaluation tool 
imergo (aka RedDot Web Compliance Manager) at the BIKA competence center 
(<http://access.fit.fraunhofer.de/>) of the Fraunhofer Institute FIT. 
Before working for FIT, I was a web/IT developer at a German Internet 
company called Pixelpark.

Johannes Koch - Competence Center BIKA
Fraunhofer Institute for Applied Information Technology (FIT.LIFE)
Schloss Birlinghoven, D-53757 Sankt Augustin, Germany
Phone: +49-2241-142628

EARL Use Case Scenarios:

1. Evaluating a Web site using tools in different languages

A group of people speaking different languages are evaluating a web site.
a) EARL allows for detailed messages in different languages. The report can
contain messages in the languages spoken by the evaluators so that each of them
understands the messages.
b) EARL allows for language-independent "keywords" for the validity level.
So a software tool can translate the validity levels into different languages.
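
A consumer tool might select the message in the evaluator's language along
these lines. This is a minimal Python sketch using plain dictionaries in
place of real EARL/RDF parsing; the field names and messages are invented:

```python
# Hypothetical in-memory view of a single EARL test result whose
# messages carry RDF-style language tags; all names here are made up.
result = {
    "outcome": "fail",
    "messages": {
        "en": "Image is missing a text alternative",
        "de": "Dem Bild fehlt eine Textalternative",
    },
}

def message_for(result, preferred, fallback="en"):
    """Return the message in the evaluator's language, if available."""
    return result["messages"].get(preferred, result["messages"][fallback])

print(message_for(result, "de"))   # message in German
print(message_for(result, "it"))   # no Italian message, falls back to English
```

The language-independent "keyword" (here the string "fail") stays the same
in every report, so only the human-readable messages need selecting.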

2. Combining results from different evaluation tools

A web site evaluator uses different tools for evaluation. Each tool can perform
specific tests that the other tools cannot do. The evaluator's client wants a
complete evaluation report. All the evaluation tools used produce a report in
EARL format. So the evaluator can combine the separate reports into one bigger
report.
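
Because each EARL assertion is a self-contained statement, combining reports
reduces to taking the union of the assertions. A sketch, with assertions
modeled as plain tuples instead of parsed RDF (subjects and test names are
invented):

```python
# Hypothetical assertions already extracted from two tools' EARL reports:
# (subject, test, outcome) triples; real reports would be RDF statements.
report_tool_a = [
    ("http://example.org/page.html", "WCAG-1.1", "pass"),
    ("http://example.org/page.html", "WCAG-3.2", "fail"),
]
report_tool_b = [
    ("http://example.org/page.html", "WCAG-3.2", "fail"),  # overlaps with tool A
    ("http://example.org/page.html", "WCAG-6.1", "pass"),
]

# Union of both reports, with the duplicated assertion dropped
combined = sorted(set(report_tool_a) | set(report_tool_b))
print(len(combined))  # 3 distinct assertions
```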

3. Comparing results from different tools against each other

A web site evaluator uses different tools for evaluation. The tools perform the
same tests. All the evaluation tools used produce a report in EARL format. So
the evaluator can compare the results from different tools to increase the
confidence level of the test results.
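
Cross-checking two tools' outcomes per test could be sketched like this
(plain dictionaries stand in for parsed EARL; the test names are invented):

```python
# Hypothetical outcomes from two tools that ran the same tests
tool_a = {"WCAG-1.1": "pass", "WCAG-3.2": "fail", "WCAG-6.1": "pass"}
tool_b = {"WCAG-1.1": "pass", "WCAG-3.2": "pass", "WCAG-6.1": "pass"}

# Tests where both tools agree carry a higher confidence level;
# disagreements are candidates for manual review.
agree = {t for t in tool_a if tool_a[t] == tool_b.get(t)}
disagree = set(tool_a) - agree
print(sorted(agree), sorted(disagree))
```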

4. Comparing results from an evaluation tool against a test suite

For a benchmarking test, different tools perform their tests on sample documents
from a test suite. Some evaluation tools may produce false positives or false
negatives. So evaluation tools can be rated according to accuracy against the
test suite.
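
Rating a tool against the known-good outcomes of a test suite might look
like this sketch (test case IDs and outcomes are invented):

```python
# Hypothetical expected outcomes defined by the test suite versus the
# outcomes one tool reported on the same sample documents
expected = {"tc1": "fail", "tc2": "pass", "tc3": "fail", "tc4": "pass"}
reported = {"tc1": "fail", "tc2": "fail", "tc3": "fail", "tc4": "pass"}

# False positive: the tool flags an error where the suite expects a pass
false_positives = [t for t in expected
                   if expected[t] == "pass" and reported[t] == "fail"]
# False negative: the tool misses an error the suite expects it to find
false_negatives = [t for t in expected
                   if expected[t] == "fail" and reported[t] == "pass"]
accuracy = sum(expected[t] == reported[t] for t in expected) / len(expected)
print(false_positives, false_negatives, accuracy)  # ['tc2'] [] 0.75
```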

5. Monitoring a Web site over time

A Web project manager wants to track the accessibility of a Web document over
time by comparing current test results with previous ones. The reports contain
the date/time of the tests and a way to locate the parts of the document the
messages refer to. By comparing messages referring to the same locations, the
project manager can monitor possible changes.
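
Matching dated snapshots by location could be sketched as follows (the
locators here are XPath-like strings chosen for illustration; EARL itself
leaves open how locations are expressed):

```python
from datetime import date

# Hypothetical snapshots of results for one document, keyed by a
# locator so messages can be matched across test dates
old = {"date": date(2005, 2, 1),
       "results": {"/html/body/img[1]": "fail",
                   "/html/body/table[1]": "pass"}}
new = {"date": date(2005, 3, 1),
       "results": {"/html/body/img[1]": "pass",
                   "/html/body/table[1]": "pass"}}

# Locations whose outcome changed between the two test runs
changed = {loc: (out, new["results"][loc])
           for loc, out in old["results"].items()
           if new["results"].get(loc) not in (None, out)}
print(changed)  # {'/html/body/img[1]': ('fail', 'pass')}
```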

6. Exchanging data with repair tools

A repair tool uses the results of an evaluation tool to identify the parts of
the document that need to be fixed. For each instance of an error it provides a
way for the user to notice the error and fix the document.
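
A repair tool consuming such a report would keep only the failing results
and use each locator to jump to the offending markup. A minimal sketch with
invented locators and messages:

```python
# Hypothetical evaluation results as a repair tool might receive them
results = [
    {"locator": "/html/body/img[1]", "outcome": "fail",
     "message": "Image is missing a text alternative"},
    {"locator": "/html/body/p[2]", "outcome": "pass",
     "message": "Paragraph is fine"},
]

# Work list for the repair UI: one entry per error instance
todo = [(r["locator"], r["message"]) for r in results
        if r["outcome"] == "fail"]
print(todo)
```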

7. Exchanging data with search engines

A search engine uses a third-party service which publishes EARL reports of Web
documents.
a) The user interface lets the user choose between different levels of
accessibility. The list of search results contains only documents with a chosen
accessibility level.
b) The search engine uses the test results in the calculation of the ranking /
relevance, so that it affects the search results order.
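
Scenario b) could be sketched as blending an accessibility score into the
ranking. The score (0.0 to 1.0, e.g. derived from EARL pass/fail counts)
and the weight are made up for illustration:

```python
# Hypothetical ranking that mixes text relevance with an accessibility
# score; both values and the 0.2 weight are invented.
def ranked(docs, weight=0.2):
    def score(d):
        return (1 - weight) * d["relevance"] + weight * d["accessibility"]
    return sorted(docs, key=score, reverse=True)

docs = [
    {"url": "a.html", "relevance": 0.9, "accessibility": 0.2},
    {"url": "b.html", "relevance": 0.8, "accessibility": 0.9},
]
print([d["url"] for d in ranked(docs)])  # ['b.html', 'a.html']
```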

8. Extending EARL statements

A tool developer wants to have more information in the report than is defined
for standard EARL. Because she still wants to be compatible with existing EARL
consuming tools, she subclasses the EARL result types to provide more
granularity within the tool.
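
The key point is that a generic EARL consumer can still interpret the
tool-specific result types by following the subclass links back to the
standard ones. A sketch mirroring rdfs:subClassOf with a plain dictionary
(all type names here are invented):

```python
# Hypothetical subclass links a tool publishes alongside its report,
# mirroring rdfs:subClassOf between tool-specific result types and
# the standard EARL ones
subclass_of = {
    "mytool:MissingAltText": "mytool:MarkupFail",
    "mytool:MarkupFail": "earl:Fail",
}
STANDARD = {"earl:Pass", "earl:Fail", "earl:CannotTell"}

def standard_outcome(outcome):
    """Follow subclass links until a standard EARL result type is
    reached, so existing EARL-consuming tools stay compatible."""
    while outcome not in STANDARD and outcome in subclass_of:
        outcome = subclass_of[outcome]
    return outcome if outcome in STANDARD else None

print(standard_outcome("mytool:MissingAltText"))  # earl:Fail
```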

Editor: Johannes Koch
Received on Tuesday, 22 March 2005 16:48:25 UTC
