Descriptors for accessibility tools.

Dear ERT,

I have supervised two M.Sc. theses which included a survey of
accessibility evaluation tools. For that survey, a set of descriptors
was defined and then applied to the different tools. I have compiled
them and provide a summary below, in case they are helpful as input for
the Requirements Analysis for Techniques for Automated and
Semi-Automated Evaluation Tools. Note that this list is descriptive,
not prescriptive: it was created simply as a framework for describing
the different tools more easily, and it does not imply that any choice
is superior to others.

Regards,

Samuel.

Features of evaluation tools:
- Deployment:
	· online service
	· browser-triggered remote service (scriptlet, favelet, menu add-on)
	· server-side module (i.e. web application)
	· rich-client editor module (e.g. for a CMS, possibly relying on
remote server support)
	· browser plug-in
	· installable desktop software
	· stand-alone (no installation) desktop software
- Platform requirements: OS, environment, dependencies, etc.
- Retrieval of evaluated contents (see the retrieval sketch after this list):
	· capture rendered presentation directly from the browser
	· access to public URI from a remote server
	· access to a URI directly from the evaluator's equipment
	· access to the local file system: via a file:/// URI, via direct
file-system access, or by uploading a form-data encoded file to a
service
	· direct user input.
- Analysis depth (see the crawl sketch after this list):
	· single document
	· follow-links constrained to a depth level
	· follow-links constrained to a path filter (i.e. a set of
directories and subdirectories)
	· follow-links constrained to a domain filter (e.g. same domain,
subdomains).
- Accessibility requirements tested (see the selection sketch after this list):
	· guideline families (here, usually WCAG 2.0)
	· success criteria selection: one by one, or by conformance level
	· technique selection: automatic (depending on the content type) or
partially manual.
	· user-defined techniques (using formal languages, plugins, etc.)
- Reporting:
	· summarized: scores (and the specific metric used), aggregated
tables, radar charts.
	· detailed: table, tree-like, linear
	· grouping: by criteria, level, result...
	· visual annotation: on top of the original rendering of the content, on
top of the original source code
	· output formats (e.g. HTML, PDF)
	· EARL support, including any vocabulary extensions (see the EARL
sketch after this list)
- Manual revision: manual annotation of the report, adding the results
of manually performed evaluation tests.
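
To illustrate the retrieval modes, here is a minimal Python sketch; the
service URI is hypothetical, and the upload helper assumes the
third-party "requests" library:

import requests

def fetch_public_uri(uri: str) -> str:
    """Retrieve markup from a public URI, as a remote service would."""
    response = requests.get(uri, timeout=10)
    response.raise_for_status()
    return response.text

def read_local_file(path: str) -> str:
    """Read markup directly from the evaluator's local file system."""
    with open(path, encoding="utf-8") as handle:
        return handle.read()

def upload_to_service(path: str, service_uri: str) -> str:
    """Upload a form-data encoded file to an evaluation service."""
    with open(path, "rb") as handle:
        response = requests.post(service_uri, files={"file": handle},
                                 timeout=30)
    response.raise_for_status()
    return response.text

Capturing the rendered presentation from the browser is not shown, as
it requires a browser context rather than plain HTTP access.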
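
The analysis-depth constraints can be combined in a single crawl loop.
Here is a breadth-first sketch, using only the Python standard library
(the link extraction is deliberately naive):

from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen

class LinkCollector(HTMLParser):
    """Collect href values from anchor elements."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(start, max_depth=2, path_prefix="/", same_domain=True):
    """Collect URIs reachable from start under the three constraints."""
    domain = urlparse(start).netloc
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        uri, depth = queue.popleft()
        if depth >= max_depth:  # depth-level constraint
            continue
        collector = LinkCollector()
        with urlopen(uri) as response:
            collector.feed(response.read().decode("utf-8",
                                                  errors="replace"))
        for href in collector.links:
            target = urljoin(uri, href)
            parts = urlparse(target)
            if same_domain and parts.netloc != domain:
                continue  # domain-filter constraint
            if not parts.path.startswith(path_prefix):
                continue  # path-filter constraint
            if target not in seen:
                seen.add(target)
                queue.append((target, depth + 1))
    return seen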
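
Success criteria selection by conformance level amounts to filtering a
criteria-to-level mapping. A toy sketch (the four WCAG 2.0 criteria
listed are illustrative, not exhaustive):

WCAG20_CRITERIA = {
    "1.1.1 Non-text Content": "A",
    "1.4.3 Contrast (Minimum)": "AA",
    "1.4.6 Contrast (Enhanced)": "AAA",
    "2.4.4 Link Purpose (In Context)": "A",
}

def select_by_level(target_level):
    """Return the criteria required to conform at target_level."""
    order = {"A": 1, "AA": 2, "AAA": 3}
    return [name for name, level in WCAG20_CRITERIA.items()
            if order[level] <= order[target_level]]

print(select_by_level("AA"))  # the A and AA criteria only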
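
As for EARL support, a single assertion can be produced in a few lines.
A sketch using the third-party "rdflib" library (the subject URI is a
placeholder; the test URI points at WCAG 2.0 SC 1.1.1):

from rdflib import RDF, BNode, Graph, Literal, Namespace, URIRef

EARL = Namespace("http://www.w3.org/ns/earl#")

graph = Graph()
graph.bind("earl", EARL)

assertion, result = BNode(), BNode()
graph.add((assertion, RDF.type, EARL.Assertion))
graph.add((assertion, EARL.subject,
           URIRef("http://example.org/page.html")))
graph.add((assertion, EARL.test,
           URIRef("http://www.w3.org/TR/WCAG20/#text-equiv-all")))
graph.add((assertion, EARL.result, result))
graph.add((result, RDF.type, EARL.TestResult))
graph.add((result, EARL.outcome, EARL.failed))
graph.add((result, EARL.info,
           Literal("img element lacks an alt attribute")))

print(graph.serialize(format="turtle"))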

Apart from those features, other, more targeted tools were identified:
- Browser toolbars, characterized by their functionalities. These can
mainly be grouped into: content manipulation, content summarization,
and browser reconfiguration.
- Specific criteria: contrast analyzers (see the sketch after this
list), readability analyzers, formal validators, etc.
- Emulators of specific user ability profiles ("disability simulators").
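
Since contrast analyzers are mentioned above, here is a small sketch of
the WCAG 2.0 contrast-ratio computation (relative luminance follows the
sRGB definition given in the specification):

def relative_luminance(rgb):
    """Relative luminance of an 8-bit sRGB colour, per WCAG 2.0."""
    def channel(c):
        c = c / 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """(L1 + 0.05) / (L2 + 0.05), with L1 the lighter luminance."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)),
                    reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# Black on white yields 21:1; SC 1.4.3 (AA) requires 4.5:1 for normal text.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))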
