
Re: Alternative concise requirements

From: Elle <nethermind@gmail.com>
Date: Tue, 4 Oct 2011 07:53:26 -0400
Message-ID: <CAJ=fddMTUSYmBSZj7p_bi69Upanze7Q_jfsDnGmxGTHtAe3fjg@mail.gmail.com>
To: RichardWarren <richard.warren@userite.com>
Cc: Shadi Abou-Zahra <shadi@w3.org>, Eval TF <public-wai-evaltf@w3.org>

Hi, all.

It's been difficult for me to contribute so far, coming into the group later
than I hoped and trying to understand the process.  So, I've just been
reading for the most part. With that said, I wanted to thank you, Richard
and Shadi, for recommending that the 12 requirements become 10 more succinct
requirements.  This thread has made the most sense so far!


Much appreciated,
Elle



On Tue, Oct 4, 2011 at 5:38 AM, RichardWarren <richard.warren@userite.com> wrote:

> Dear Shadi,
>
> Thank you for your comments. Two heads are definitely better than one and I
> fully agree with all your suggestions.  I am particularly glad to get rid of
> the word "Unambiguous" <G>.
>
> Kind regards
> Richard
>
> -----Original Message----- From: Shadi Abou-Zahra
> Sent: Tuesday, October 04, 2011 6:05 AM
> To: Richard Warren
> Cc: Eval TF
> Subject: Re: Alternative concise requirements
>
>
> Hi Richard,
>
> Thank you for your contribution; I like the approach a lot. I agree that
> several requirements seem related and sometimes even overlap. I have not
> checked back to see if we dropped anything but have some comments on
> your suggestion:
>
>
> On 4.10.2011 02:13, RichardWarren wrote:
>
>> Hi,
>> I am a little concerned that we are making life needlessly difficult by
>> having produced eighteen requirements for a methodology to evaluate websites
>> against twelve guidelines!
>>
>> Having read through the proposed requirements carefully I notice that some
>> are very similar (such as supporting verification and supporting validity,
>> or using unambiguous language and making it translatable).  Others are
>> confusing, such as saying it will not stretch into techniques and tests
>> (R01) and yet use existing testing techniques (R06).
>>
>> So I sat down and had a go at rationalising what we had produced and came
>> up with ten requirements that cover (I hope) all the points raised in our
>> various discussions. I am sure that it is not a perfect set of requirements,
>> but I would like to share it with you just in case you think it might be the
>> basis for a more concise list of requirements.
>>
>> --------------------------------------------------------------
>>
>> RQ 01 : Define methods for evaluating WCAG 2.0 conformance
>> The Methodology provides methods to measure conformance with WCAG 2.0
>> that can be used by the target audience (see section 2 above) for evaluating
>> small or large websites, sections of websites or web-based applications.
>>
>
> Minor: "for evaluating small or large websites, sections of websites and
> web-based applications" (changed "or" to "and").
>
>
>  RQ 02  Unambiguous Interpretation
>> The methodology is written in clear language, understandable to the target
>> audience and capable of translation to other languages.
>>
>
> I think the title "Unambiguous Interpretation" does not match the
> description. Maybe something like "Clear, understandable, and
> translatable language" instead?
>
>
>  RQ 03  Reliable
>> Different Web accessibility evaluators using the same methods on the same
>> website(s) should get the same results. The evaluation process and results
>> documented to support independent verification.
>>
>
> Maybe "equivalent results" rather than "*same* results"?
>
>
>  RQ 04 - Tool and browser independent
>> The use and application of the Methodology is vendor-neutral and
>> platform-independent. It is not restricted solely to manual or automated
>> testing, but allows for either approach or a combination of the two.
>>
>
> I think we need to clarify "vendor-neutral" and "platform-independent".
> I also think that the Methodology as a whole will have to rely on a
> combined manual and automated approach. My suggestion is:
>
> [[
> The use and application of the Methodology is independent of any
> particular evaluation tools, browsers, and assistive technology. It
> requires combined use of manual and automated testing approaches to
> carry out a full evaluation according to the Methodology.
> ]]
>
>
>  RQ 05 -  QA framework specification guidelines
>> The Methodology will conform to the Quality Assurance framework
>> specification guidelines as set out in http://www.w3.org/TR/qaframe-spec/.
>>
>> RQ 06 - Machine-readable reporting
>> The Methodology includes recommendations for harmonized (machine-readable)
>> reporting. It provides a format for delivering machine-readable reports
>> using Evaluation and Report Language (EARL) in addition to using the
>> standard template as at http://www.w3.org/WAI/eval/**template.html<http://www.w3.org/WAI/eval/template.html>
>>
>
> I think that the focus on human-readable reporting is more important
> than on machine-readable ones. Here is my suggestion:
>
> [[
> RQ 06 - Reporting
> The Methodology includes recommendations for reporting evaluation
> findings. It will be based on the standard template
> (http://www.w3.org/WAI/eval/template.html) and supplemented with
> machine-readable reports using Evaluation and Report Language
> (EARL, http://www.w3.org/WAI/intro/earl).
> ]]
>
>
>  RQ 07 -  Use of existing WCAG 2.0 techniques
>> Wherever possible the Methodology will employ existing testing procedures
>> in the WCAG 2.0 Techniques documents rather than replicate them.
>>
>> RQ 08 -  Recommendations for scope and sampling
>> It includes recommendations for methods of sampling web pages in large
>> websites and for ensuring that complete processes (such as all the pages
>> that form the steps of an ordering process on a shopping site) are
>> included. Such selections would be reflected in any conformance claim.
>>
>
> Minor: I stumbled over "large" -- is a website with say 50 or 100 pages
> considered large? It would still need sampling to evaluate...
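As an aside, the kind of sampling recommendation RQ 08 describes could be sketched roughly as follows. This is a hypothetical illustration (the function, page names, and sample size are invented, not part of the proposal): every page of a complete process is always included, and the rest of the sample is drawn at random.

```python
import random

def sample_pages(all_pages, process_pages, sample_size, seed=0):
    """Hypothetical sketch: select every page that belongs to a complete
    process (e.g. an ordering process) plus a random draw of other pages."""
    rng = random.Random(seed)
    required = [p for p in all_pages if p in process_pages]
    remainder = [p for p in all_pages if p not in process_pages]
    extra = rng.sample(remainder, min(sample_size, len(remainder)))
    return required + extra

site = ["/page%d" % i for i in range(1, 51)]     # a 50-page website
checkout = {"/page3", "/page4", "/page5"}        # ordering-process steps
sample = sample_pages(site, checkout, sample_size=10)
print(len(sample))  # 13: three process pages plus ten randomly sampled
```

A conformance claim would then record which pages were selected and why, as RQ 08 suggests.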
>
>
>  RQ 09 -  Includes tolerance metrics
>> It includes calculation methods for determining nearness of conformance.
>> A failure that falls within a given tolerance level would mean that the
>> page or website might still be considered conformant despite the failure.
>> Such tolerances would be reflected in any conformance claim.
>>
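For illustration, one simple way to realise a tolerance metric of the kind RQ 09 envisages is a pass-rate threshold. The sketch below is a made-up example (the names and the 5% tolerance are assumptions), not a proposed calculation method.

```python
def conformance_score(results):
    """Fraction of applicable checks that passed ("n/a" is excluded)."""
    applicable = [r for r in results if r != "n/a"]
    return sum(r == "pass" for r in applicable) / len(applicable)

def conforms(results, tolerance=0.05):
    """Treat a page as conformant when failures stay within the
    tolerance, i.e. the pass rate is at least 1 - tolerance."""
    return conformance_score(results) >= 1.0 - tolerance

checks = ["pass"] * 38 + ["fail"] + ["n/a"] * 3   # one failure out of 39
print(conforms(checks, tolerance=0.05))  # True: 38/39 is about 0.974
```

Under a stricter tolerance (say 1%) the same single failure would make the page non-conformant, which is why the claim must state the tolerance used.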
>> RQ 10 - Support documentation
>> The document will give a short description of the knowledge necessary to
>> use the Methodology for evaluations.
>>
>
> Thanks,
>  Shadi
>
>
>  Regards
>> Richard
>>
>
> --
> Shadi Abou-Zahra - http://www.w3.org/People/shadi/
> Activity Lead, W3C/WAI International Program Office
> Evaluation and Repair Tools Working Group (ERT WG)
> Research and Development Working Group (RDWG)
>
>
>
Received on Tuesday, 4 October 2011 11:54:05 GMT

This archive was generated by hypermail 2.3.1 : Friday, 8 March 2013 15:52:12 GMT