
Re: Alternative concise requirements

From: Denis Boudreau <dboudreau@accessibiliteweb.com>
Date: Thu, 06 Oct 2011 00:30:42 -0400
Message-id: <F8A6E844-3433-4AE9-B463-DB57DE6FC2C9@accessibiliteweb.com>
To: Eval TF <public-wai-evaltf@w3.org>
Hello EvalTF,

I'm all in favor of reducing the number of requirements if we can fit everything into just 10.

My comments follow.


On 2011-10-04, at 1:05 AM, Shadi Abou-Zahra wrote:


>> RQ 01 : Define methods for evaluating WCAG 2.0 conformance
>> The Methodology provides methods to measure conformance with WCAG 2.0. that can be used by the target audience (see section 2 above) for evaluating small or large websites, sections of websites or web-based applications.
> 
> Minor: "for evaluating small or large websites, sections of websites and web-based applications" (changed "or" to "and").

Minor: remove the dot (.) after the 2.0


>> RQ 02  Unambiguous Interpretation
>> The methodology is written in clear language, understandable to the target audience and capable of translation to other languages.
> 
> I think the title "Unambiguous Interpretation" does not match the description. Maybe something like "Clear, understandable, and translatable language" instead?

+1


>> RQ 03  Reliable
>> Different Web accessibility evaluators using the same methods on the same website(s) should get the same results. Evaluation process and results are documented to support independent verification.
> 
> Maybe "equivalent results" rather than "*same* results"?

Equivalent results or similar results.


>> RQ 04 - Tool and browser independent
>> The use and application of the Methodology is vendor-neutral and platform-independent. It is not restricted to solely manual or automated testing but allows for either or a combination of approaches.
> 
> I think we need to clarify "vendor-neutral" and "platform-independent". I also think that the Methodology as a whole will have to rely on a combined manual and automated approach. My suggestion is:
> 
> [[
> The use and application of the Methodology is independent of any particular evaluation tools, browsers, and assistive technology. It requires combined use of manual and automated testing approaches to carry out a full evaluation according to the Methodology.
> ]]

+1


>> RQ 05 -  QA framework specification guidelines
>> The Methodology will conform to the Quality Assurance framework specification guidelines as set in: http://www.w3.org/TR/qaframe-spec/.

+1



>> RQ 06 - Machine-readable reporting
>> The Methodology includes recommendations for harmonized (machine-readable) reporting. It provides a format for delivering machine-readable reports using Evaluation and Report Language (EARL) in addition to using the standard template as at http://www.w3.org/WAI/eval/template.html
> 
> I think that the focus on human-readable reporting is more important than on machine-readable ones. Here is my suggestion:
> 
> [[
> RQ 06 - Reporting
> The Methodology includes recommendations for reporting evaluation findings. It will be based on the [href=http://www.w3.org/WAI/eval/template.html standard template] and supplemented with machine-readable [href=http://www.w3.org/WAI/intro/earl reports using Evaluation and Report Language (EARL)].
> ]]

Sounds very good.
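For reference, here is one purely illustrative sketch of what a machine-readable EARL assertion could look like, serialized as JSON-LD and built with nothing but the standard library. The evaluator, page, success criterion, and outcome below are placeholder assumptions, not anything the Methodology would mandate:

```python
import json

# Illustrative sketch only: a single EARL assertion as JSON-LD.
# All identifiers below (evaluator, subject page, test criterion)
# are made-up placeholders.
assertion = {
    "@context": {"earl": "http://www.w3.org/ns/earl#"},
    "@type": "earl:Assertion",
    "earl:assertedBy": {"@id": "http://example.org/evaluators/someone"},
    "earl:subject": {"@id": "http://example.org/page.html"},
    "earl:test": {"@id": "http://www.w3.org/TR/WCAG20/#text-equiv"},
    "earl:result": {
        "@type": "earl:TestResult",
        "earl:outcome": {"@id": "earl:failed"},
    },
}

print(json.dumps(assertion, indent=2))
```

A human-readable report generated from the standard template could then be supplemented with a file of such assertions, one per test performed.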


>> RQ 07 -  Use of existing WCAG 2.0 techniques
>> Wherever possible the Methodology will employ existing testing procedures in the WCAG 2.0 Techniques documents rather than replicate them.

+1



>> RQ 08 -  Recommendations for scope and sampling
>> It includes recommendations for methods of sampling web pages in large websites and how to ensure that complete processes (such as for a shopping site where all the pages that are part of the steps in an ordering process) are included.  Such selections would be reflected in any conformance claim.
> 
> Minor: I stumbled over "large" -- is a website with say 50 or 100 pages considered large? It would still need sampling to evaluate...

I don't think the sampling of pages needs to differ between large and small sites. Obviously, we'd take fewer pages on a smaller site, but in either case the methodology should encourage evaluators to look for significant and representative pages.
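To make that concrete, a sampling approach along these lines could scale naturally with site size: always include the known significant pages, then top up with a proportional random sample. The function, ratios, and page names here are assumptions for illustration only, not a proposal for the Methodology:

```python
import random

def sample_pages(all_pages, key_pages, sample_fraction=0.1, minimum=5):
    """Illustrative sketch: always include known significant pages
    (home, contact, templates, complete-process steps), then add a
    random sample proportional to site size. The 10% fraction and
    minimum of 5 are arbitrary example values."""
    pool = [p for p in all_pages if p not in key_pages]
    extra = max(minimum, int(len(all_pages) * sample_fraction))
    extra = min(extra, len(pool))
    return list(key_pages) + random.sample(pool, extra)

site = ["/page-%d.html" % i for i in range(100)]
key = ["/index.html", "/contact.html"]
selection = sample_pages(site + key, key)
print(len(selection))  # 2 key pages + 10% of the 102-page site = 12
```

On a 20-page site the same rule would select the key pages plus the 5-page minimum, so smaller sites automatically get proportionally deeper coverage.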


>> RQ 09 -  Includes tolerance metrics
>> It includes calculation methods for determining nearness of conformance.  Depending on the amount of tolerance, a failure could fall within a certain tolerance level meaning that the page or website might be considered conformant even though there is a failure. Such tolerances would be reflected in any conformance claim.

While I agree conceptually, I have yet to understand how we could go about doing this without falling into subjectivity.
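To make the concern concrete: the calculation itself could be perfectly objective; it's choosing the threshold that is subjective. A purely illustrative sketch, where the 5% tolerance is an arbitrary example value and not a proposed figure:

```python
def within_tolerance(checks_passed, checks_total, tolerance=0.05):
    """Illustrative sketch of a 'nearness of conformance' metric:
    a page is treated as conformant if its failure rate stays at or
    under a tolerance threshold. The 5% default is an assumption
    for the example only."""
    failure_rate = 1 - checks_passed / checks_total
    return failure_rate <= tolerance

print(within_tolerance(98, 100))  # 2% failures -> True under a 5% tolerance
print(within_tolerance(90, 100))  # 10% failures -> False
```

The arithmetic is reproducible; deciding whether 5%, 2%, or 0% is acceptable, and whether all failures weigh equally, is where the subjectivity would creep in.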


>> RQ 10 - Support documentation
>> The document will give a short description of the knowledge necessary for using the Methodology for evaluations.

A short description, and links to said support documentation as well?

/Denis
Received on Thursday, 6 October 2011 04:31:05 GMT
