comments for WCAG-EM

From: lisa.seeman <lisa.seeman@zoho.com>
Date: Fri, 21 Sep 2012 05:09:58 +0300
To: <public-wai-evaltf@w3.org>
Cc: "w3c-wai-pf" <w3c-wai-pf@w3.org>
Message-ID: <139e696ae31.-6190036865910075958.-4128711055431828810@zoho.com>
Hi,
Looking at http://www.w3.org/TR/WCAG-EM/, the methodology for evaluating and reporting the conformance of websites, I suggest stressing the context and impact of accessibility violations, and also making the resulting conformance reports more usable. For example:
 
1: In the current draft the resulting reports will be hard to read, and high impact issues will be lost in all the text. (I know of sites that commissioned an audit and never went through it, because the report simply looked too hard to address. The result was that nothing was fixed.)

We could recommend a different order for the resulting evaluation reports. Reports could start with a cover page, followed by an overview that is easy for the document's target audience to understand. The overview could summarize the results in plain terms and bring any high impact issues to the reader's attention. Possibly it could also identify high impact issues that are easy to fix.
(High impact issues could include: any accessibility violations that interfere with the user completing tasks that are important from the website or user perspective, violations that affect navigation and orientation, violations that occur very frequently, and any violation that can cause loss or harm to a user.)

A lot of the important but technical information could be put in an appendix (such as the documentation of each outcome of the steps defined in 3.1).

2: Samples could also stress context and impact. Could you clarify that samples should contain:
2.1, the main user tasks (the website probably has a list of these from the original specifications used to build the site);
2.2, the most important tasks from the website perspective, and all important tasks from a user perspective (such as getting help, knowing your rights, etc.);
2.3, content for navigation and orientation;
2.4, high traffic / main pages and tasks;
2.5, tasks where there is a noticeable loss to the user when they do not function (e.g., an inaccessible form submit button).

3: Reports could also state the context of each violation and whether it is high impact or important for completing user tasks. This will help the reader prioritize the repairs, create bug reports, and understand why these violations matter, so that (hopefully) the website will be more likely to fix them. Also, reports could identify mistakes that propagate (such as on a template) or that can be fixed once, in one place, with maximum impact.
Further, I think reports should contain all the information needed to create bug reports in whatever system the website uses. If you cannot use a report to easily make bug reports, the website is a lot less likely to do any repair.
 
4: A detailed report or in-depth analysis should explain the reasons for failure, and not just rely on referencing success and failure techniques. For example, an image may have an alternative text, but the text may not give the user the same information as the image does. Further, an in-depth report could explain what information is missing.
Reports will be much more usable if they also contain examples of how to fix errors. I think at least an "in-depth analysis" should contain repair examples. Repair examples based on the actual website will often be a lot more useful and practical than generic examples from the WCAG techniques documents.
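As a hypothetical illustration (the file name and alt text below are invented, not taken from any audited site), a repair example grounded in the actual page content is far more actionable than a generic technique reference:

```html
<!-- Flagged as "has alternative text", yet the text does not give
     the user the same information as the image (a sales chart): -->
<img src="q3-sales.png" alt="chart">

<!-- A repair example based on the actual page content: -->
<img src="q3-sales.png"
     alt="Bar chart: Q3 sales rose 12% over Q2, led by online orders.">
```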

5: We could also suggest providing guidance for the future - such as using ARIA in new scripts, etc.
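For example (a minimal sketch; the element names and handler are invented), a new script that builds a custom toggle control could use ARIA to expose its role and state to assistive technology:

```html
<!-- The aria-pressed attribute tells assistive technology that this
     is a toggle button and whether it is currently on or off: -->
<button aria-pressed="false" onclick="toggleMute(this)">Mute</button>
<script>
  function toggleMute(btn) {
    var pressed = btn.getAttribute("aria-pressed") === "true";
    btn.setAttribute("aria-pressed", String(!pressed));
  }
</script>
```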

6: The draft says: "Where failures in meeting WCAG 2.0 Success Criteria on a web page are identified, each identified occurrence of such a failure must be indicated in the report."
I would find this extremely cumbersome to do, and I question whether it is useful. For example, a single web page may be missing over a hundred alt attributes on presentational images. Perhaps a sample, with two examples of each type of failure, would suffice.
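To make the burden concrete (hypothetical markup), a hundred occurrences of the same missing-alt pattern on presentational images reduce to a single repair, so two representative examples per failure type would carry the same information as listing every occurrence:

```html
<!-- Flagged: presentational image with no alt attribute -->
<img src="divider.png">

<!-- Repaired: an empty alt marks the image as presentational,
     so assistive technology can skip it -->
<img src="divider.png" alt="">
```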

7: When providing machine-readable EARL reports, I suggest also providing instructions on how and why to use them.
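For instance (a minimal sketch; the subject URL and description below are invented, and the exact vocabulary is defined by the EARL schema), an EARL assertion in Turtle records which test failed on which resource:

```turtle
@prefix earl: <http://www.w3.org/ns/earl#> .
@prefix dct:  <http://purl.org/dc/terms/> .

<#assertion1> a earl:Assertion ;
    earl:subject <http://example.org/page.html> ;
    earl:test <http://www.w3.org/TR/WCAG20/#text-equiv-all> ;
    earl:result [
        a earl:TestResult ;
        earl:outcome earl:failed ;
        dct:description "Image is missing a text alternative."
    ] .
```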

All the best

Lisa
Received on Friday, 21 September 2012 16:09:02 GMT
