
Descriptors for accessibility tools.

From: <samuelm@dit.upm.es>
Date: Wed, 13 Feb 2013 14:25:25 +0100
Message-ID: <e712acd29e8d11ab4d628fb11fc9427c.squirrel@correo.dit.upm.es>
To: "ERT WG" <public-wai-ert@w3.org>

Dear ERT,

I have supervised two M.Sc. theses which included a survey of
accessibility evaluation tools. For that survey, a set of descriptors
was defined and then applied to the different tools. I have quickly
compiled them and provide a summary below, in case they might be
helpful as input for the Requirements Analysis for Techniques for
Automated and Semi-Automated Evaluation Tools. Note this list is
descriptive, not prescriptive: it was created simply as a framework to
describe the different tools more easily, and it does not imply that
any choice is superior to the others.

Regards,

Samuel.

Features of evaluation tools:
- Deployment:
	 online service
	 browser-triggered remote service (scriptlet, favelet, menu add-on)
	 server-side module (i.e. a web application)
	 rich-client editor module (e.g. in a CMS, possibly relying on
	 remote server support)
	 browser plug-in
	 installable desktop software
	 stand-alone (no installation) desktop software
- Platform requirements: OS, environment, dependencies, etc.
- Retrieval of evaluated contents (a minimal sketch of some of these
  routes follows this item):
	 capture the rendered presentation directly from the browser
	 access to a public URI from a remote server
	 access to a URI directly from the evaluator's equipment
	 access to the local file system: either through a file:/// URI, by
	 reading the local file system directly, or by uploading a
	 form-data encoded file to a service
	 direct user input.
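
Purely as an illustration (this is my own sketch, not taken from any of
the surveyed tools), three of these retrieval routes in Python; the URI
and file path are placeholders:

    from urllib.request import urlopen

    def fetch_uri(uri):
        # Access a URI directly from the evaluator's equipment;
        # urlopen also accepts file:/// URIs for local documents.
        with urlopen(uri) as response:
            return response.read().decode("utf-8", errors="replace")

    def read_local_file(path):
        # Direct access to the local file system.
        with open(path, encoding="utf-8") as f:
            return f.read()

    def from_user_input(markup):
        # Direct user input: the markup is evaluated exactly as supplied.
        return markup

    html = fetch_uri("http://example.org/")  # placeholder URI
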
- Analysis depth (a rough crawl sketch follows this item):
	 single document
	 follow links constrained to a depth level
	 follow links constrained to a path filter (i.e. a set of
	 directories and subdirectories)
	 follow links constrained to a domain filter (e.g. same domain,
	 subdomains).
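
To make the constrained link-following concrete, a rough Python sketch
of my own (assuming a simple recursive crawl; the start URI is a
placeholder) bounded by a depth level and a same-domain filter:

    from html.parser import HTMLParser
    from urllib.parse import urljoin, urlparse
    from urllib.request import urlopen

    class LinkCollector(HTMLParser):
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    def crawl(start, max_depth, seen=None, depth=0):
        # Follow links constrained to a depth level and a same-domain
        # filter; a path filter would compare the URL path prefix instead.
        seen = set() if seen is None else seen
        if depth > max_depth or start in seen:
            return seen
        seen.add(start)
        try:
            with urlopen(start) as response:
                html = response.read().decode("utf-8", errors="replace")
        except OSError:
            return seen
        collector = LinkCollector()
        collector.feed(html)
        for href in collector.links:
            target = urljoin(start, href)
            if urlparse(target).netloc == urlparse(start).netloc:
                crawl(target, max_depth, seen, depth + 1)
        return seen

    pages = crawl("http://example.org/", max_depth=2)  # placeholder URI
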
- Accessibility requirements tested (a small selection sketch follows
  this item):
	 guideline families (here, usually WCAG 2.0)
	 success criteria selection: one by one, or by conformance level
	 technique selection: automatic (depending on the content type) or
	 partially manual
	 user-defined techniques (using formal languages, plug-ins, etc.)
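
For instance, selecting success criteria by conformance level amounts
to filtering a table; the entries below are a handful of real WCAG 2.0
success criteria, used only as sample data:

    # Sample of WCAG 2.0 success criteria mapped to conformance levels.
    SUCCESS_CRITERIA = {
        "1.1.1 Non-text Content": "A",
        "2.1.1 Keyboard": "A",
        "1.4.3 Contrast (Minimum)": "AA",
        "2.4.7 Focus Visible": "AA",
        "1.4.6 Contrast (Enhanced)": "AAA",
    }

    def select_by_level(target):
        # "A" selects level A only; "AA" selects A and AA; "AAA" all.
        order = {"A": 1, "AA": 2, "AAA": 3}
        return [sc for sc, level in SUCCESS_CRITERIA.items()
                if order[level] <= order[target]]

    print(select_by_level("AA"))  # the level A and AA criteria
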
- Reporting:
	 summarized: scores (and the specific metric used), aggregated
	 tables, radar charts
	 detailed: table, tree-like, linear
	 grouping: by criterion, level, result...
	 visual annotation: on top of the original rendering of the
	 content, or on top of the original source code
	 output formats (e.g. HTML, PDF)
	 EARL support, including any vocabulary extensions (a minimal EARL
	 sketch follows this list).
- Manual revision: manual annotation of the report, adding the results
  of manually performed evaluation tests.
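
Since EARL support is one of the descriptors, a minimal sketch of
emitting a single EARL assertion with the rdflib Python library; the
tool, subject, and test URIs are placeholders:

    from rdflib import BNode, Graph, Literal, Namespace, URIRef
    from rdflib.namespace import RDF

    EARL = Namespace("http://www.w3.org/ns/earl#")

    g = Graph()
    g.bind("earl", EARL)

    assertion, result = BNode(), BNode()
    g.add((assertion, RDF.type, EARL.Assertion))
    g.add((assertion, EARL.assertedBy,
           URIRef("http://example.org/tool")))            # placeholder
    g.add((assertion, EARL.subject,
           URIRef("http://example.org/page.html")))       # placeholder
    g.add((assertion, EARL.test,
           URIRef("http://example.org/tests/wcag-1.1.1")))  # placeholder
    g.add((assertion, EARL.result, result))
    g.add((result, RDF.type, EARL.TestResult))
    g.add((result, EARL.outcome, EARL.failed))
    g.add((result, EARL.info,
           Literal("img element without an alt attribute")))

    print(g.serialize(format="turtle"))
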

Apart from those features, other more targeted tools were identified:
- Browser toolbars, characterized by their functionalities, which fall
  mainly into three groups: content manipulation, content
  summarization, and browser reconfiguration.
- Specific criteria: contrast analyzers, readability analyzers, formal
  validators, etc. (a contrast-ratio sketch follows this list).
- Emulators of specific user ability profiles ("disability simulators").
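
As a concrete example of the "specific criteria" group, WCAG 2.0
defines the contrast ratio formally, so a basic contrast analyzer
reduces to a few lines; this sketch follows the WCAG 2.0 definitions of
relative luminance and contrast ratio:

    def relative_luminance(rgb):
        # WCAG 2.0 relative luminance of an sRGB color, 0-255 integers.
        def channel(c):
            c = c / 255
            return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
        r, g, b = (channel(c) for c in rgb)
        return 0.2126 * r + 0.7152 * g + 0.0722 * b

    def contrast_ratio(fg, bg):
        # WCAG 2.0: (L1 + 0.05) / (L2 + 0.05), L1 the lighter luminance.
        lighter, darker = sorted(
            (relative_luminance(fg), relative_luminance(bg)), reverse=True)
        return (lighter + 0.05) / (darker + 0.05)

    # Black on white yields 21.0; SC 1.4.3 (level AA) requires at least
    # 4.5:1 for normal-size text.
    print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))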