
RE: Descriptors for accessibility tools.

From: <samuelm@dit.upm.es>
Date: Wed, 13 Feb 2013 20:31:15 +0100
Message-ID: <aff8760143976e9a07a488b32ad8c0ac.squirrel@correo.dit.upm.es>
To: "Emmanuelle Gutiérrez y Restrepo" <emmanuelle@sidar.org>
Cc: "'ERT WG'" <public-wai-ert@w3.org>

Thanks for the feedback, Emmanuelle! You are right that contextual help
may also be a distinctive feature of a tool. To be sure, the list of
features below is not meant to be exhaustive; it was just my two cents,
offered as another input for the Requirements Analysis document. Even if
we eventually follow the "feature list + profiles" approach that was
mentioned today, I am sure the final list will be much longer. I guess
we can comment on that when we return to the subject on the next
teleconference.
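
For illustration, below is a minimal sketch, in Python, of how the
"feature list + profiles" idea could be modelled: a shared, controlled
list of feature values, plus per-tool profiles that select from it. All
names and values are made up for the example, loosely following the
descriptors in my earlier message; this is not a proposed vocabulary.

# A shared, controlled feature list (illustrative values only).
FEATURES = {
    "deployment": {
        "online-service", "browser-triggered-remote",
        "server-side-module", "rich-client-editor-module",
        "browser-plugin", "installable-desktop", "stand-alone-desktop",
    },
    "retrieval": {
        "browser-rendered-capture", "public-uri-remote", "local-uri",
        "local-file-system", "direct-user-input",
    },
}

def validate_profile(profile):
    """Check that a tool profile only selects values that exist in the
    shared feature list."""
    for descriptor, chosen in profile.items():
        unknown = set(chosen) - FEATURES.get(descriptor, set())
        if unknown:
            raise ValueError("unknown values for %s: %r"
                             % (descriptor, sorted(unknown)))

# Example profile for a hypothetical online checker.
validate_profile({
    "deployment": {"online-service"},
    "retrieval": {"public-uri-remote", "direct-user-input"},
})

A real vocabulary would of course need agreed descriptor names and many
more entries, but the shape (fixed value lists plus per-tool selections)
would be the same.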

Regards,

Samuel.


> Very good job, Samuel!
>
> I miss an item: Contextual help.
>
> For HERA (http://sidar.org/hera/) there are two types of contextual help:
> - Short contextual help (for experienced reviewers, reminding them what
> they have to pay particular attention to when reviewing each checkpoint)
> - Extensive contextual help (for novice reviewers, giving further
> information on why the test is useful, who benefits from it, etc.)
>
> All the best,
> Emmanuelle
>
> -----Original Message-----
> From: samuelm@dit.upm.es [mailto:samuelm@dit.upm.es]
> Sent: Wednesday, 13 February 2013 14:25
> To: ERT WG
> Subject: Descriptors for accessibility tools.
>
> Dear ERT,
>
> I have supervised two M.Sc. theses which included a survey of
> accessibility evaluation tools. For that survey, a set of descriptors
> was defined and then applied to the different tools. I have quickly
> compiled them and provide a summary below, in case they might be
> helpful as an input for the Requirements Analysis for Techniques for
> Automated and Semi-Automated Evaluation Tools. Note this list is
> descriptive, not prescriptive: it was created simply as a framework to
> describe the different tools more easily, and it does not imply that
> any choice is superior to the others.
>
> Regards,
>
> Samuel.
>
> Features of evaluation tools:
> - Deployment:
> 	· online service
> 	· browser-triggered remote service (scriptlet, favelet, menu add-on)
> 	· server-side module (i.e. web application)
> 	· rich-client editor module (e.g. in a CMS, possibly relying on
> remote server support)
> 	· browser plug-in
> 	· installable desktop software
> 	· stand-alone (no installation) desktop software
> - Platform requirements: OS, environment, dependencies, etc.
> - Retrieval of evaluated contents:
> 	· capture rendered presentation directly from the browser
> 	· access to public URI from a remote server
> 	· access to a URI directly from the evaluator's equipment
> 	· access to local file system: either accessing a file:/// URI, or
> directly accessing the local file-system, or uploading a form-data encoded
> file to a service
> 	· direct user input.
> - Analysis depth (see the sketch after this list):
> 	· single document,
> 	· follow-links constrained to depth level
> 	· follow-links constrained to path filter (i.e. set of directories,
> subdirectories)
> 	· follow-links constrained to domain filter (e.g. same domain,
> subdomains).
> - Accessibility requirements tested:
> 	· guideline families (here, usually WCAG 2.0)
> 	· success criteria selection: one by one, by conformance level
> 	· technique selection: automatic (depending on the content type),
> partially manual.
> 	· user-defined techniques (using formal languages, plugins, etc.)
> - Reporting:
> 	· summarized: scores (and the specific metric used), aggregated
> tables, radar charts
> 	· detailed: table, tree-like, linear
> 	· grouping: by criteria, level, result...
> 	· visual annotation: on top of the original rendering of the
> content, on top of the original source code
> 	· output formats (e.g. HTML, PDF)
> 	· EARL support, including any vocabulary extensions
> - Manual review: manual annotation of the report, to add the results of
> manually performed evaluation tests.
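>
> As a rough illustration of the "Analysis depth" constraints above, here
> is a hypothetical Python sketch of how a crawler might decide whether
> to follow a link; the function and all parameter names and defaults are
> made up for the example:
>
> from urllib.parse import urlparse
>
> def should_follow(link, base, depth,
>                   max_depth=2, path_prefix="/", same_domain=True):
>     """Apply the depth, path and domain filters to a candidate link."""
>     if depth > max_depth:                  # depth-level constraint
>         return False
>     target, origin = urlparse(link), urlparse(base)
>     if same_domain and target.netloc != origin.netloc:
>         return False                       # domain filter
>     if not target.path.startswith(path_prefix):
>         return False                       # path (directory) filter
>     return True
>
> # Example: stay on www.example.org, within /docs/, two levels deep.
> should_follow("http://www.example.org/docs/page.html",
>               "http://www.example.org/", depth=1, path_prefix="/docs/")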
>
> Apart from those features, other, more targeted tools were identified:
> - Browser toolbars, characterized by their functionalities, which can
> mainly be grouped into content manipulation, content summarization,
> and browser reconfiguration.
> - Tools for specific criteria: contrast analyzers, readability
> analyzers, formal validators, etc.
> - Emulators of specific user ability profiles ("disability simulators").