RE: [for review] updated draft AERT

Dear all,

Another comment I just came across: 

2 Features of an accessibility evaluation tool; Introduction to the section
Make sure the last paragraph matches the structure of the rest of the
section: a reference to the last subsection ("tool usage") is missing.

Regards,

Samuel.

-----Original Message-----
From: Emmanuelle Gutiérrez y Restrepo [mailto:emmanuelle@sidar.org] 
Sent: Wednesday, 12 March 2014 14:21
To: 'Yod Samuel Martín'; 'ERT WG'
Subject: RE: [for review] updated draft AERT

Dear all,

Below you may find my comments to the latest draft of AERT:

2.1. Multimedia Resources
(technical)
Original text: " Multimedia resources. These are images, movies or audio
tracks"
Suggestion: Multimedia resources. These are animations, movies or audio
tracks...
Reason: An "image" is not a multimedia resource. Multimedia means multiple
media. 

2.1 Resources with other ...
(editorial)
Original text: Depending on the tool customers' needs, it may be necessary
to parse and evaluate this type of resources.
Suggestion: Depending on the customers' needs, it may be necessary to parse
and evaluate this type of resources.
Reason: It depends on the needs of the user, not on the needs of the tool.

2.3.6 Conformance
(editorial)
Suggestion: Add a sentence recalling that compliance can only be determined
after evaluation by a human. 
Reason: As it says in 2.2.2: "Since it is a known fact that automatic tests
only cover a small set of accessibility issues, full accessibility
conformance can only be ensured by supporting developers and accessibility
experts while testing in manual and semiautomatic mode."

2.5.2 Functionality...
(editorial)
Suggestion: Add a case: users who need to report an accessibility barrier to
the webmaster of the site, or to any agency defending their rights.
Reason: It is another case, and Sidar has a tool for it (in development for
a long time now).

Talk to you soon,

Emmanuelle Gutiérrez y Restrepo
Patrono y Directora General
Fundación Sidar - Acceso Universal
Email: coordina@sidar.org
Personal: Emmanuelle@sidar.org
Web: http://sidar.org


-----Original Message-----
From: Yod Samuel Martín [mailto:samuelm@dit.upm.es] 
Sent: Wednesday, 12 March 2014 12:38
To: 'ERT WG'
Subject: RE: [for review] updated draft AERT

Dear all,

Below you may find my comments to the latest draft of AERT:

Title
(great)
Explicit support for the new title: including "features" as the head of the
title, and a subtitle explaining its goal ("guidance for developers"). Maybe
"evaluation features" is redundant with "evaluation tools", and just
"Features of web accessibility evaluation tools" could suffice.

Introduction
(terminology clarification)
"Commissioner", when referring to the person or entity which entrusts the
developers with website creation (i.e. "the client") seems specific jargon
to me. I understand the concept, but I am not sure whether it might sound
awkward to non-specialized readers. If the word is kept, I would suggest
explicitly defining it.

2.1.5 Content negotiation
(editorial comment)
It seems there's something missing in the first sentence: "Content
negotiation is a characteristic (...) to allow web servers to customers the
sent resources". Maybe it was intended to read "to allow web servers to
*customize* the sent resources"? Besides, I would prefer something like "to
allow web servers to customize the representation sent of the requested
resources" (following the usual distinction between a resource and its
representation).
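To make the suggestion concrete, here is a minimal Python sketch of a tool
asking for a specific representation through negotiation headers (the
function name and URL are purely illustrative, not from the draft):

```python
import urllib.request

# Illustrative sketch: an evaluation tool customizing which representation
# of a resource it requests, via HTTP content-negotiation headers.
def build_negotiated_request(url, language="en", media_type="text/html"):
    """Return a Request whose headers ask the server for a particular
    representation (media type and language) of the requested resource."""
    req = urllib.request.Request(url)
    req.add_header("Accept", media_type)          # preferred media type
    req.add_header("Accept-Language", language)   # preferred language
    return req

# e.g. request the French representation of a page
req = build_negotiated_request("http://example.org/page", language="fr")
```

The tool would then evaluate whichever representation the server returns.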

2.1.6 Cookies / 2.1.7 Authentication
(add explanations to each)
I would suggest adding a sentence at the end that explains how the tool
deals with this feature (after the feature has been described in the first
part), like what has been done in 2.1.5 Content negotiation. 
For instance
- "A cookie is (...) Cookies contain (...) *A tool that supports cookies may
store the cookie value provided by the server in an HTTP response and reuse
it in subsequent requests. Or it may allow the user to manually set the
cookie value to be used in a request* "
- "Websites require sometimes authentication (...) to control (...). *A tool
that supports authentication allows the user to provide their credentials
beforehand, so that they are used when accessing protected resources, or it
prompts the user to enter the credentials upon the server request* ".
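The two behaviours suggested for cookies could be sketched like this in
Python (a hypothetical CookieStore; nothing here is from the draft):

```python
# Illustrative sketch: how a tool might keep the cookie value provided by
# the server and reuse it in subsequent requests, or let the user set the
# value manually, as suggested above.
class CookieStore:
    def __init__(self):
        self._cookies = {}

    def store_from_response(self, set_cookie_header):
        """Parse a Set-Cookie header such as 'session=abc123; Path=/'."""
        name_value = set_cookie_header.split(";", 1)[0]
        name, _, value = name_value.partition("=")
        self._cookies[name.strip()] = value.strip()

    def set_manually(self, name, value):
        """Let the user set a cookie value by hand before crawling."""
        self._cookies[name] = value

    def cookie_header(self):
        """Build the Cookie header to attach to the next request."""
        return "; ".join(f"{n}={v}" for n, v in self._cookies.items())

store = CookieStore()
store.store_from_response("session=abc123; Path=/; HttpOnly")
store.set_manually("lang", "en")
```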

2.1.9 Crawling
(add filter examples)
Under "Capability to define inclusion and exclusion filters" I would suggest
adding specific filter examples. For instance: stop recursively following
links after a number of levels; only crawl those resources whose URL matches
a specific template. Sorry if this has been raised before; it is something
that has come to my mind several times, but I honestly think I never
actually commented on it.
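The two filter examples could be sketched as, say (function and parameter
names are illustrative, not proposals for the draft):

```python
import re

# Illustrative sketch of the suggested inclusion/exclusion filters:
# stop following links past a depth limit, and only crawl URLs that
# match a given template (here, a regular expression).
def should_crawl(url, depth, max_depth=3, include_pattern=r".*"):
    """Decide whether a crawler should fetch this URL."""
    if depth > max_depth:  # depth filter: stop recursing after N levels
        return False
    # URL template filter: only crawl matching resources
    return re.match(include_pattern, url) is not None
```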

2.2.1  Selection of evaluation tests
It might be clarified that there are two aspects involved: 1) which
evaluation tests are supported?; 2) can they be configured (filtered)? For
instance, a color contrast analyzer only supports a specific criterion /
test. That is a defining feature of that evaluation tool, and it is
different from saying you can choose only that test.

2.2.2  Automatic, semiautomatic and manual testing
After rereading the section, I do not think the difference between manual
and semiautomatic tests is clear enough, from the point of view of the
evaluation tool features.
For instance, a tool detects all the images with an "alt" attribute, and
highlights them together with the alt value, so that the evaluator may judge
the alternative texts. Is that semiautomatic (because the tool automatically
detected all the images plus their alternatives) or is it manual (because
human input is still required to decide the outcome)? Besides, the paragraph
explaining that case encompasses both "semiautomatic or manual tests"
without distinction.

I understand this might have been clarified for EARL's TestMode long ago,
but I just don't think the current text in AERT is clear enough.
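To make the borderline case concrete, here is a hypothetical sketch (not
from AERT) of a tool that automatically collects every image together with
its alt value, while leaving the actual judgment to the human evaluator:

```python
from html.parser import HTMLParser

# Illustrative sketch of the borderline case discussed above: the tool
# automatically finds every <img> and its alt value; a human must still
# decide whether each alternative text is adequate.
class AltCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.findings = []  # alt values (None = missing alt attribute)

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            # Record the alt value, or its absence, for human review.
            self.findings.append(dict(attrs).get("alt"))

parser = AltCollector()
parser.feed('<p><img src="a.png" alt="A chart"><img src="b.png"></p>')
```

Is such a tool semiautomatic (it did the detection) or manual (it cannot
decide the outcome)? That is exactly the distinction the section leaves open.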

2.3 Reporting and monitoring
(new features suggested)
I would suggest adding these two features:
- Superimposed report. Render evaluation results superimposed on top of the
evaluated content or source code. This is a common report type which is not
covered by the current items.
- Report modification. The evaluator may manually add their own judgments to
the report, specifically when they need to perform manual tests, or when the
user's evaluation differs from that of the tool. Tools may keep provenance
information (i.e. which part of the report was automatically generated by
the tool and which was manually modified).
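The provenance idea might be sketched like this (field names are purely
illustrative, not a proposal for the draft text):

```python
from dataclasses import dataclass

# Illustrative sketch: each report entry records whether its outcome was
# produced automatically by the tool or edited by the evaluator.
@dataclass
class ReportEntry:
    test_id: str
    outcome: str            # e.g. "pass", "fail", "cannot tell"
    source: str = "tool"    # provenance: "tool" or "evaluator"

entry = ReportEntry("img-alt", "cannot tell")   # automatic result
entry.outcome = "fail"       # the evaluator overrides the tool's judgment
entry.source = "evaluator"   # provenance records the manual modification
```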

2.4 Workflow integration
(organization)
Question: is this section expected to be expanded into different items (e.g.
2.4.1 Web browsers, 2.4.2. IDEs, etc.)? If that is the case, it is ok for
me; otherwise, maybe it could be subsumed again under 2.5. tool usage (to
keep balance among sections).

Talk to you later.

Regards,

Samuel.


-----Original Message-----
From: Shadi Abou-Zahra [mailto:shadi@w3.org] 
Sent: Monday, 10 March 2014 15:41
To: ERT WG
Subject: [for review] updated draft AERT

Dear Group,

The latest draft of AERT (working title) for review is here:
  - http://www.w3.org/WAI/ER/WD-AERT/ED-AERT

Please send comments for discussion to the list!

Regards,
   Shadi

--
Shadi Abou-Zahra - http://www.w3.org/People/shadi/
Activity Lead, W3C/WAI International Program Office
Evaluation and Repair Tools Working Group (ERT WG)
Research and Development Working Group (RDWG)

Received on Wednesday, 12 March 2014 15:00:17 UTC