Re: Requirements draft

Hello Richard, Eric, TF,

Some comments:

> * Requirements:
>> R01: Technical conformance to existing Web Accessibility Initiative
> (WAI) Recommendations and Techniques documents.

> Comment (RW): I do not think we need the word "technical". We should
> stick with WCAG, as agreed when we discussed *A01. The recommendations and
> techniques are not relevant here, as our priority is the Guidelines. It
> is possible for someone to comply with a particular guideline without
> using any of the recommended techniques. What we are after is methodology. I
> therefore suggest a suitable alternative as follows:
> 
> *R01: Define methods for evaluating compliance with the accessibility
> guidelines (WCAG)

Comment (KP): As I understand R01, it stresses the formal level. If the formulation were "R01: Technical conformance to existing Web Accessibility Initiative (WAI) Recommendations and Techniques", I would agree. Since WCAG comes with sub-documents such as "Understanding" and the glossary, the word "documents" is fine with me. And because of other WAI documents, e.g. ATAG, I would agree with

> R01: Technical conformance to existing Web Accessibility Initiative
> (WAI) Recommendations and Techniques documents.

As long as the formal level of the documents themselves is meant, and not the techniques contained in them.

>> R02: Tool and browser independent

> Comment (RW): The principle is good, but sometimes it may be necessary
> to use a particular tool, such as a text-only browser. So I would prefer:
> 
> *R02: Where possible, the evaluation process should be tool and browser
> independent.

Comment (KP): I partly agree with "possible". If we use "possible", we should then describe/define exactly what "possible" means.

>> R03: Unique interpretation

> Comment (RW): I think this means that it should be unambiguous, that
> is, not open to different interpretations. I am pretty sure that the
> W3C has a standard clause it uses to cover this point when building
> standards etc. Hopefully Shadi can find it <Grin>. This also implies use of
> standard terminology, which we should be looking at as soon as possible so that
> terms like “atomic testing” do not creep into our procedures without clear/agreed
> definitions.

Comment (KP): Using standard terminology is an important point for me as well. I suggest that we also consider the standard terminology used in testing theory. The advantage would be that we use established terms, which will help to avoid misunderstandings.
 
>> R04: Replicability: different Web accessibility evaluators who perform
> the same tests on the same site should get the same results within a given
> tolerance.

> Comment (RW): The first part is good, but I am not happy with
> introducing “tolerance” at this stage. I think we should be clear that we are after
> consistent, replicable tests. We should add a separate requirement
> later for such things as “partial compliance” and “tolerance”. See R14
> below.
> 
> *R04: Replicability: different Web accessibility evaluators who perform
> the same tests on the same site should get the same results.

Comment (KP): I strongly agree with Richard, except for the term "Replicability", and would suggest:

R04: Reliability: different Web accessibility evaluators who perform the same tests on the same site should get the same results.
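
To illustrate what "same results" could mean in testing-theory terms, here is a minimal sketch (hypothetical data; plain percent agreement is just one possible inter-rater measure):

    # Hypothetical pass/fail verdicts from two evaluators on the same checks.
    evaluator_a = {"SC 1.1.1": "pass", "SC 1.3.1": "fail", "SC 2.4.4": "pass"}
    evaluator_b = {"SC 1.1.1": "pass", "SC 1.3.1": "fail", "SC 2.4.4": "fail"}

    # Percent agreement: the share of checks on which both verdicts match.
    checks = evaluator_a.keys() & evaluator_b.keys()
    agreement = sum(evaluator_a[c] == evaluator_b[c] for c in checks) / len(checks)
    print(f"Inter-evaluator agreement: {agreement:.0%}")  # 67% in this example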

>> R05: Translatable

> Comment (RW): As in translatable into different languages – yes,
> agreed.

Comment (KP): I agree, and I see translatability especially in the context of using standard terminology, which would make translation easier.

>> R06: The methodology points to the existing tests in the techniques
> documents and does not reproduce them.

Comment (KP): I agree.

> Comment (RW): Yes, but I would like it to be a bit clearer that it is the WCAG
> techniques. I would also like the option to introduce a new technique
> if one becomes available. So I suggest:
>
> *R06: Where possible, the methodology should point to existing tests and
> techniques in the WCAG documentation.

>> R07: Support for both manual and automated evaluation.

> Comment (RW): Not all Guidelines can be tested automatically, and it
> is not viable to test some others manually. It needs to be clearer that the
> most appropriate methods will be used, whether manual or automatic. Where
> both options are available, they must deliver the same result.
> 
> *R07: Use the most appropriate manual or automatic evaluation. Where
> either could be used, both must deliver the same result.

Comment (KP): I read "support" as just that, support; the important point of "deliver the same result" belongs in the context of R04 "Replicability", or, as I suggest, "Reliability".
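
As a sketch of what an automated evaluation could look like (a hypothetical example, not part of the draft: it flags <img> elements without an alt attribute, one of the classic machine-testable checks):

    from html.parser import HTMLParser

    class AltTextChecker(HTMLParser):
        """Flags <img> tags that lack an alt attribute (a WCAG 1.1.1 candidate)."""
        def __init__(self):
            super().__init__()
            self.missing = []

        def handle_starttag(self, tag, attrs):
            attributes = dict(attrs)
            if tag == "img" and "alt" not in attributes:
                self.missing.append(attributes.get("src", "<unknown>"))

    checker = AltTextChecker()
    checker.feed('<p><img src="logo.png"><img src="photo.jpg" alt="A photo"></p>')
    print("Images missing alt text:", checker.missing)  # ['logo.png']

Note that even here a human must still judge whether existing alt text is meaningful; that judgement is the manual side of the same check.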
 
>> R08: Users include (see target audience)

> Comment (RW): Whilst user testing is essential for confirming
> accessibility, it is not needed/essential for checking compliance with
> WCAG. If we feel that user testing is needed, then we must specify which users,
> what skill level, what tasks etc., so that evaluators all use the same type
> of user and get the same type of result. I would prefer not to include
> users here as a requirement.

Comment (KP): A tricky requirement, especially in the context of the statement above that "It is possible for someone to comply with a particular guideline without
using any of the recommended techniques." The question would be: how can a tester find out whether an SC is met when the recommended techniques are not used? Wouldn't that mean that a tester needs deep knowledge of using, for example, screen readers as well as magnifiers and so on? We also discussed this in another mail thread. I prefer to include users here, but we have to describe which users, along the lines of Richard's consideration in the paragraph above.

>> R09: Support for different contexts (i.e. self-assessment, third-party
> evaluation of small or larger websites).

> Comment (RW): Agreed.
Comment (KP): I agree.
 
>> R10: Includes recommendations for sampling web pages and for expressing
> the scope of a conformance claim

> Comment (RW): I agree. This is probably going to be the most difficult
> issue, but it is essential if our methodology is going to be usable in
> the real world, as illustrated by discussions already taking place. Should
> it include tolerance metrics (R14)?

Comment (KP): I also think it’s the most difficult issue. Because of the ongoing discussion about different approaches, I would like to abstain for the moment.
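
Just to make the sampling part concrete, a rough sketch (hypothetical page inventory; simple random sampling is only one of the approaches under discussion):

    import random

    # Hypothetical inventory of a site's pages; in practice this would
    # come from a crawl or a site map.
    all_pages = [f"https://example.org/page{i}" for i in range(1, 201)]

    random.seed(42)  # a fixed seed keeps the sample replicable (cf. R04)
    sample = random.sample(all_pages, k=10)  # evaluate 10 of 200 pages
    print(sample)

The conformance claim would then have to state its scope explicitly, e.g. "based on a random sample of 10 of the site's 200 pages".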
 
>> R11: Describes critical path analyses,

> Comment (RW): I assume this is the CPA of the evaluation process (i.e.
> define website, test this, test that, write report, etc.), in which case
> agreed.

Comment (KP): I'm not sure what is meant by this requirement, so no vote from me for now.
 
>> R12: Covers computer assisted content selection and manual content
> selection

> Comment (RW): I do not know what this means – can Eric explain?
Comment (KP): I don't have a clear idea of what this requirement could mean either.
 
>> R13: Includes integration and aggregation of the evaluation results and
> related conformance statements.

> Comment (RW): I think this means “write a nice report”, in which case I
> agree.
Comment (KP): I agree.
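
For what "integration and aggregation" could mean in practice, a minimal sketch (hypothetical per-page results, rolled up into a site-level summary):

    from collections import Counter

    # Hypothetical per-page results: page -> verdict per success criterion.
    results = {
        "/home":    {"SC 1.1.1": "pass", "SC 2.4.4": "pass"},
        "/contact": {"SC 1.1.1": "fail", "SC 2.4.4": "pass"},
    }

    # A criterion passes site-wide only if it passes on every evaluated page.
    criteria = {sc for page in results.values() for sc in page}
    site_level = {sc: "pass" if all(results[p].get(sc) == "pass" for p in results)
                  else "fail" for sc in criteria}

    print(site_level)                    # e.g. {'SC 1.1.1': 'fail', 'SC 2.4.4': 'pass'}
    print(Counter(site_level.values()))  # summary for the conformance statement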
 
>> R14: Includes tolerance metrics.

> Comment (RW): Agreed – but maybe combine with R10.
Comment (KP): The tolerance metrics will depend on the testing procedure itself. Because of that, I am happy with it as it stands and suggest not combining it with any other requirement.
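
To make "tolerance metric" concrete, one possible reading, sketched under assumed definitions (pass rate per success criterion across the sampled pages, with a purely hypothetical threshold):

    # Hypothetical verdicts for one success criterion across sampled pages.
    verdicts = ["pass"] * 9 + ["fail"]

    pass_rate = verdicts.count("pass") / len(verdicts)
    TOLERANCE = 0.95  # hypothetical threshold; agreeing on this number is the hard part

    status = "within tolerance" if pass_rate >= TOLERANCE else "out of tolerance"
    print(f"Pass rate {pass_rate:.0%}: {status}")  # Pass rate 90%: out of tolerance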

>> R15: The Methodology includes recommendations for harmonized
> (machine-readable) reporting.

> Comment (RW): I am not sure that methodologies recommend things. Do
> you mean:
> 
> *R15: Reports must be machine readable.

Comment (KP): As I understand R15, it refers, for example, to structures in documents but also to recommendations for the content structure. If so, I agree with R15.
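
A minimal sketch of what a harmonized, machine-readable report could look like (plain JSON here purely as an illustration; all field names are hypothetical, and whether to build on an existing format such as EARL is an open question):

    import json

    report = {
        "methodology": "EvalTF draft",  # hypothetical identifier
        "site": "https://example.org/",
        "sample": ["/home", "/contact"],
        "results": [
            {"page": "/home", "criterion": "SC 1.1.1", "outcome": "pass"},
            {"page": "/contact", "criterion": "SC 1.1.1", "outcome": "fail"},
        ],
    }

    print(json.dumps(report, indent=2))  # machine-readable, yet human-inspectable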


Best

Kerstin (KP)

> 
> 
> Best wishes
> Richard (RW)

Nice idea, using initials. Thanks.


> 
> -----Original Message-----
> From: Velleman, Eric
> Sent: Wednesday, August 31, 2011 12:56 PM
> To: public-wai-evaltf@w3.org
> Subject: Appendix to the agenda: Requirements draft
> 
> Dear Eval TF,
> 
> In our call, we will discuss further on the questions that are on the
> list.
> Please also react online. As a result of our last call, below you find
> a
> first draft of the possible requirements for the methodology. We will
> discuss this further tomorrow in our call:
> 
> First Draft Section on Requirements
> 
> * Objectives:
> The main objective is an internationally harmonized methodology for
> evaluating the conformance of websites to WCAG 2.0. This methodology
> will
> support different contexts, such as for self-assessment or third-party
> evaluation of small or larger websites.
> It intends to cover recommendations for sampling web pages and for
> expressing the scope of a conformance claim, critical path analyses,
> computer assisted content selection, manual content selection, the
> evaluation of web pages, integration and aggregation of the evaluation
> results and conformance statements. The methodology will also address
> tolerance metrics.
> The Methodology also includes recommendations for harmonized
> (machine-readable) reporting.
> 
> This work is part of other related W3C/WAI activities around evaluation
> and
> testing.
> More on the EvalTF page.
> 
> * Target Audience:
> A01: All organizations evaluating one or more websites
> A02: Web accessibility benchmarking organizations
> A03: Web content producers wishing to evaluate their content
> A04: Developers of Evaluation and Repair Tools
> A05: Policy makers and Web site owners wishing to evaluate websites
> 
> The person(s) using the Methodology should be knowledgeable of the
> Guidelines and people with disabilities.
> 
> * Requirements:
> R01: Technical conformance to existing Web Accessibility Initiative
> (WAI)
> Recommendations and Techniques documents.
> R02: Tool and browser independent
> R03: Unique interpretation
> R04: Replicability: different Web accessibility evaluators who perform
> the
> same tests on the same site should get the same results within a given
> tolerance.
> R05: Translatable
> R06: The methodology points to the existing tests in the techniques
> documents and does not reproduce them.
> R07: Support for both manual and automated evaluation.
> R08: Users include (see target audience)
> R09: Support for different contexts (i.e. self-assessment, third-party
> evaluation of small or larger websites).
> R10: Includes recommendations for sampling web pages and for expressing
> the
> scope of a conformance claim
> R11: Describes critical path analyses,
> R12: Covers computer assisted content selection and manual content
> selection
> R13: Includes integration and aggregation of the evaluation results and
> related conformance statements.
> R14: Includes tolerance metrics.
> R15: The Methodology includes recommendations for harmonized
> (machine-readable) reporting.
> 
> The methodology describes the expected level of expertise for persons
> carrying out the evaluation and the possibility to conduct evaluations
> in
> teams using roles. There is also a description of the necessity to
> involve
> people with disabilities.
> 

Received on Monday, 12 September 2011 08:40:19 UTC