RE: table of contents Evaluation Methodology

I think there should be a complete worked-out example of the
methodology in action (applied to a fictitious website and stated in
generic terms), presented as an appendix to illustrate proof of
concept.

"References" is listed twice (#4 and 15) - did you want to have one section
 for normative references and another for informative references?  Should references
be placed at the end of the document?

We should also make sure that all the requirements mentioned in
http://www.w3.org/TR/qaframe-spec/

are covered, either in the table of contents or in the corresponding
section text.

Also consider the good practices in the same document.

Thanks and best wishes
Tim Boland, NIST

-----Original Message-----
From: public-wai-evaltf-request@w3.org [mailto:public-wai-evaltf-request@w3.org] On Behalf Of Velleman, Eric
Sent: Friday, October 14, 2011 5:54 PM
To: public-wai-evaltf@w3.org
Subject: table of contents Evaluation Methodology

Dear all,

Below is a very rough first version of a possible table of contents. Please add any sections you find missing, and at the same time describe what you think each section should contain.

Table of contents proposal

- Abstract
- Status of this document
- Table of Contents

1. Introduction
General introduction to the document as a sort of executive summary.
2. Scope of this document
This section describes the scope of this document. It is required for
standards documents and does not itself describe the methodology.
3. Target audience
Description of the target audience of the Methodology. We did some
work on this in the requirements document.
4. References
5. Definitions/Terminology
Terminology that is important for understanding the Methodology.
General words and terms could be placed in the glossary at the end of
the document. We already did some work on this in the requirements
document, for example for "website".
6. Expertise for evaluating accessibility
What expertise people who use this Methodology should have.
6.1 Involving People with Disabilities in the process
As agreed in the requirements discussion, the Methodology should
address the importance of involving users with disabilities.
There is text about this in the evaluation suite on the WAI pages.
7. Procedure to express the Scope of the evaluation (based on:)
How can an evaluator express the scope of a website? What is in, and
what can be left out? Below are some possible subsections covering
what looks necessary to pinpoint exactly what is inside and what can
be left outside the scope of a website (a rough sketch of a
mechanical scope check follows the table of contents):
7.1 Technologies used on the webpages
7.2 Base URI of the evaluation
7.3 Perception and function
7.4 Complete processes
7.5 Webpages behind authorization
7.6 Dividing the scope into multiple evaluations
Imagine a website is large and the owner would like to divide the
evaluation over different parts for which different people are
responsible. If all parts are within the scope of the website, the
scope could be divided into multiple parts that each have to be
evaluated.
8. Sampling of pages
An evaluator can manually evaluate all pages, but on a website with
9 million pages that is a lot of work. This section describes how to
select a sample of a website: how many pages, and how do you choose
them? (A rough sketch of one sampling approach follows the table of
contents.)
8.1 Sample selection (random and targeted sampling)
8.2 Size of evaluation samples (related to barriers)
9. Evaluation
This section describes, step by step, how to do the evaluation.
The evaluation depends on many factors, such as the technologies
used, the technologies evaluated, accessibility support, etc. Part
of the story is the barriers encountered during evaluation. When are
they a real problem? Is it possible to have an error margin, and how
do we describe it? (A rough sketch of one way to state an error
margin follows the table of contents.)
9.1 Manual and machine evaluation
9.2 Technologies
9.3 Procedure for evaluation
9.4 Barrier recognition
9.5 Error margin
10. Conformity
This section is taken largely from WCAG 2.0, with additional information.
10.1 Conformity requirements
10.2 Accessibility supported
10.3 Partial conformance
10.4 Conformance claims
10.5 Score function (barrier change)
11. Reporting
How to write both a human-readable and a machine-readable report of
the evaluation, and what should be in the report. Templates are
included in the appendices. (A rough sketch of a machine-readable
EARL result follows the table of contents.)
11.1 Text based report
11.2 Machine readable report using EARL
12. Limitations and underlying assumptions
13. Acknowledgements
14. Glossary
15. References
16. Appendix: Template for manual evaluation report
17. Appendix: Template for EARL report
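
To make a few of the sections above more concrete, some rough Python
sketches follow. These are illustrations only, not proposed
Methodology text; every name, URL and number in them is made up.

For section 7, a minimal sketch of checking whether a URL falls
inside a declared scope, assuming scope is expressed as a base URI
plus a list of excluded prefixes:

    # Illustrative only: the base URI, the exclusions and the helper
    # name are assumptions, not part of the Methodology.
    def in_scope(url, base_uri, excluded_prefixes=()):
        """True if url falls inside the declared evaluation scope."""
        if not url.startswith(base_uri):
            return False  # outside the base URI (section 7.2)
        # pages behind authorization could be listed as exclusions (7.5)
        return not any(url.startswith(p) for p in excluded_prefixes)

    base = "http://www.example.org/"
    excluded = ("http://www.example.org/intranet/",)
    print(in_scope("http://www.example.org/shop/cart", base, excluded))    # True
    print(in_scope("http://www.example.org/intranet/hr", base, excluded))  # False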
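For section 8.1, a minimal sketch combining targeted and random
sampling. Where the full page list and the targeted pages come from
(a crawl, pages known to be critical, etc.) is left open:

    # Illustrative only: sample sizes and page lists are assumed inputs.
    import random

    def select_sample(all_pages, targeted_pages, random_size):
        """Targeted pages are always included; the rest is drawn at random."""
        pool = [p for p in all_pages if p not in set(targeted_pages)]
        return list(targeted_pages) + random.sample(pool, min(random_size, len(pool)))

    pages = ["http://www.example.org/page%d" % i for i in range(10000)]
    targeted = ["http://www.example.org/", "http://www.example.org/contact"]
    print(len(select_sample(pages, targeted, random_size=25)))  # 27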
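For section 9.5, one conceivable way to state an error margin: an
approximate 95% confidence interval for the failure rate observed on
a random sample. Whether the Methodology should use such a statistic
at all is exactly the open question raised in section 9:

    # Illustrative only: the numbers are made up.
    import math

    def margin_of_error(failures, sample_size, z=1.96):
        """Half-width of an approximate 95% confidence interval for a
        failure rate observed on a random sample."""
        p = failures / float(sample_size)
        return z * math.sqrt(p * (1 - p) / sample_size)

    # say 12 of 50 sampled pages fail at least one success criterion
    p, m = 12 / 50.0, margin_of_error(12, 50)
    print("failure rate %.0f%% +/- %.0f%%" % (100 * p, 100 * m))  # 24% +/- 12%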
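For section 11.2, a minimal sketch of a single machine-readable
result in EARL, built with the rdflib library (assumed available).
The subject URL and the test URI are made up, and a real EARL report
would also identify the assertor and the evaluation mode:

    # Illustrative only: requires rdflib (pip install rdflib).
    from rdflib import Graph, Namespace, BNode, URIRef
    from rdflib.namespace import RDF

    EARL = Namespace("http://www.w3.org/ns/earl#")
    g = Graph()
    g.bind("earl", EARL)

    # one assertion: the sampled page fails one success criterion
    assertion, result = BNode(), BNode()
    g.add((assertion, RDF.type, EARL.Assertion))
    g.add((assertion, EARL.subject, URIRef("http://www.example.org/page1")))
    g.add((assertion, EARL.test, URIRef("http://www.w3.org/TR/WCAG20/#text-equiv-all")))
    g.add((assertion, EARL.result, result))
    g.add((result, RDF.type, EARL.TestResult))
    g.add((result, EARL.outcome, EARL.failed))

    print(g.serialize(format="turtle"))

Serializing the graph as Turtle yields a result block that an EARL
consumer could aggregate across the whole sample.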

Kindest regards and happy discussing :)

Eric
