- From: Samuel Martín <samuelm@dit.upm.es>
- Date: Mon, 30 Sep 2013 12:23:44 +0200
- To: "Carlos A Velasco" <carlos.velasco@fit.fraunhofer.de>
- Cc: "Shadi Abou-Zahra" <shadi@w3.org>, "ERT WG" <public-wai-ert@w3.org>
Hi all,

Below you may find some comments regarding the latest draft of AERT:

- Organization: the table in section 3 seems mostly clear to me. I would go further and suggest reordering the features in section 2 and grouping them into subsections according to the same categories used in the table.

- Tool audience category: I would explicitly include tool accessibility as another (desirable) feature. I am sure most agree on the relevance of the accessibility of evaluation tools, which should abide by the general authoring tool accessibility criteria (best described in Part A of ATAG 2.0 <http://www.w3.org/WAI/ER/WD-AERT/ED-AERT20130906>). But there is a more specific rationale for this point: in many companies, people with disabilities work as accessibility-specialized consultants, and they need authoring and evaluation tools that fit their ability profile.

- Web testing APIs: I'm not sure whether they apply only to "Test customization", or to "Subject being tested" as well. Tools that offer this kind of API (e.g. Selenium) are also used to bring the web application under test to a predetermined state (to reach a specific "Point of Observation"). For instance, the API can be used to start a session on a web site, add some products to a shopping cart, and then go to the "cart summary" page, which becomes a subject under test that could not have been generated otherwise (as it does not correspond to, e.g., a predefined URI). A minimal illustrative sketch is included at the end of this message.

- Repair: I agree that automatic repair should be discouraged (basically, if user agents cannot provide an accessible representation or control, there is no reason to think that other software, such as an accessibility evaluation tool, will be smart enough to "mend" it). However, that should not preclude accessibility evaluation tools from automatically suggesting potential fixes. These fixes may still depend on input from the evaluator, but they provide some guidance nonetheless. Think, for example, of the "quick fix" functionality usually integrated into IDEs, which guides developers on how to fix a code problem while still leaving the final choice in the hands of the developer.

Regards,
Samuel.

> Hi all,
>
> Just to complement Shadi's comment, I would be very interested in
> hearing your opinion about the meta-categories of the different tool
> characteristics presented in the table of section 3. If people comment
> this week online, I could try to rework the document before the meeting
> next week.
>
> Thanks.
>
> On 23/09/13 22:20, Shadi Abou-Zahra wrote:
>> Dear Group,
>>
>> The latest draft of AERT (working title) for review is here:
>> - http://www.w3.org/WAI/ER/WD-AERT/ED-AERT
>>
>> Note in particular the slight change in title. All up for commenting.
>>
>> Regards,
>> Shadi
>>
>
> --
> Best Regards, Mit freundlichen Grüßen, Saludos,
> carlos
>
> Dr Carlos A Velasco
> Fraunhofer Institute for Applied Information Technology FIT
> Web Compliance Center: http://imergo.com/ · http://imergo.de/
> Schloss Birlinghoven, D53757 Sankt Augustin (Germany)
> Tel: +49-2241-142609 · Fax: +49-2241-1442609
>
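P.S. To make the "Point of Observation" argument above more concrete, here is a minimal sketch using the Selenium WebDriver API (Python bindings). The site URL, credentials and element locators are hypothetical placeholders, not taken from the draft; the only point is that the API drives the application into a state (an authenticated "cart summary" page) that an evaluation engine could not reach from a predefined URI alone:

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Firefox()
    try:
        # Start a session: log in to the (hypothetical) shop.
        driver.get("https://shop.example.org/login")
        driver.find_element(By.NAME, "username").send_keys("demo-user")
        driver.find_element(By.NAME, "password").send_keys("demo-pass")
        driver.find_element(By.CSS_SELECTOR, "button[type=submit]").click()

        # Add a product to the shopping cart.
        driver.get("https://shop.example.org/products/42")
        driver.find_element(By.ID, "add-to-cart").click()

        # Navigate to the "cart summary" page: this is the
        # Point of Observation that becomes the subject under test.
        driver.get("https://shop.example.org/cart/summary")
        html = driver.page_source  # hand this markup to the evaluation engine
    finally:
        driver.quit()

The same kind of script could equally be seen as test customization, which is why I suspect both categories apply.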
Received on Monday, 30 September 2013 10:46:47 UTC