Re: table of contents Evaluation Methodology

Dear Eric,
Maybe we should also include some templates for the sample selection at
the end of the methodology as an appendix.

Kostas

> Dear all,
>
> Below is a very rough first version of a possible table of contents. Please
> add any sections you find missing, and at the same time describe what you
> think each section should contain.
>
> Table of contents proposal
>
> - Abstract
> - Status of this document
> - Table of Contents
>
> 1. Introduction
> General introduction to the document as a sort of executive summary.
> 2. Scope of this document
> This section describes the scope of this document. It is required for
> standards documents and does not describe the methodology itself.
> 3. Target audience
> Description of the target audience of the Methodology. We did some
> work on this in the requirements document.
> 4. References
> 5. Definitions/Terminology
> Terminology that is important for understanding the Methodology.
> General words and terms could be placed in the glossary at the end of
> the document. We already did some work on this in the requirements
> document, for example for "website".
> 6. Expertise for evaluating accessibility
> What expertise should people who use this Methodology have?
> 6.1 Involving People with Disabilities in the process
> As discussed in the requirements discussion, we wanted the Methodology
> to address that the involvement of users with disabilities is important.
> There is text about this in the evaluation suite on the WAI pages.
> 7. Procedure to express the scope of the evaluation (based on:)
> How can an evaluator express the scope of a website? What is in, and
> what can be left out? Below are some possible sections that cover the
> things that seem necessary for pinpointing exactly what falls inside
> and what can be left outside the scope of a website (a rough sketch of
> a scope definition follows after 7.6):
> 7.1 Technologies used on the webpages
> 7.2 Base URI of the evaluation
> 7.3 Perception and function
> 7.4 Complete processes
> 7.5 Webpages behind authorization
> 7.6 Dividing the scope into multiple evaluations
> Imagine a website is large and the owner would like to divide the
> evaluation over different parts for which different people are
> responsible. If all parts are within the scope of the website, then the
> scope could be divided into multiple parts that each have to be evaluated.
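>
> To make this concrete, here is a rough sketch of what such a scope
> definition could capture, written as a plain Python dict (all field
> names are made up for illustration, not a proposed format):
>
>   scope = {
>       "base_uri": "http://www.example.org/",
>       "technologies": ["HTML", "CSS", "JavaScript"],
>       "complete_processes": ["registration", "checkout"],
>       "authorized_areas": True,  # include pages behind login
>       "excluded_branches": ["http://www.example.org/archive/"],
>   }
>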
> 8. Sampling of pages
> An evaluator can manually evaluate all pages, but on a website with
> 9M pages that is a lot of work. How to select a sample of a website
> is described in this section. How many pages, and how do you choose
> them? (A small sampling sketch follows after 8.2.)
> 8.1 Sample selection (random and targeted sampling)
> 8.2 Size of evaluation samples (related to barriers)
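>
> Just to illustrate the random part of 8.1 (targeted sampling would add
> hand-picked pages for key functionality on top of this), a minimal
> Python sketch, assuming we already have a crawled list of URLs:
>
>   import random
>
>   def sample_pages(all_urls, n, seed=None):
>       """Pick a uniform random sample of n pages from a URL list."""
>       rng = random.Random(seed)
>       return rng.sample(all_urls, min(n, len(all_urls)))
>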
> 9. Evaluation
> This is the section describing step by step how to do the evaluation.
> The evaluation depends on many factors, like the technologies used,
> the technologies evaluated, accessibility support, etc. Part of the
> story is the barriers that are encountered during evaluation. When are
> they a real problem? Is it possible to have an error margin, and how
> do we describe that? (A sketch of a possible error margin follows 9.5.)
> 9.1 Manual and machine evaluation
> 9.2 Technologies
> 9.3 Procedure for evaluation
> 9.4 Barrier recognition
> 9.5 Error margin
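>
> For 9.5, one possible way to describe an error margin, assuming the
> sample is treated as a simple random sample (a sketch, not a decided
> approach), is the standard margin of error for a proportion:
>
>   import math
>
>   def error_margin(p, n, z=1.96):
>       """Margin of error for an observed failure rate p over a sample
>       of n pages; z = 1.96 gives roughly 95% confidence."""
>       return z * math.sqrt(p * (1 - p) / n)
>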
> 10. Conformance
> This section is largely taken from WCAG 2.0, with additional information.
> 10.1 Conformance requirements
> 10.2 Accessibility supported
> 10.3 Partial conformance
> 10.4 Conformance claims
> 10.5 Score function (barrier change; a rough sketch follows below)
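>
> Purely to illustrate what a score function could look like (the actual
> function, and how barriers should change it, is still to be discussed),
> a naive sketch:
>
>   def score(passed_checks, applicable_checks):
>       """Naive score: fraction of applicable checks that pass."""
>       return passed_checks / applicable_checks if applicable_checks else 1.0
>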
> 11. Reporting
> How to write an evaluation report that is human-readable, and one that
> is machine-readable, and what should be in the report. Templates are
> included in the appendices. (A minimal EARL sketch follows 11.2.)
> 11.1 Text based report
> 11.2 Machine readable report using EARL
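>
> To give an impression of 11.2, a minimal sketch of a single EARL
> assertion built with the rdflib library (the page and test URIs are
> placeholders):
>
>   from rdflib import Graph, Namespace, URIRef, BNode
>   from rdflib.namespace import RDF
>
>   EARL = Namespace("http://www.w3.org/ns/earl#")
>   g = Graph()
>   g.bind("earl", EARL)
>
>   assertion, result = BNode(), BNode()
>   g.add((assertion, RDF.type, EARL.Assertion))
>   g.add((assertion, EARL.subject, URIRef("http://www.example.org/page.html")))
>   g.add((assertion, EARL.test, URIRef("http://www.w3.org/TR/WCAG20/#text-equiv-all")))
>   g.add((assertion, EARL.result, result))
>   g.add((result, RDF.type, EARL.TestResult))
>   g.add((result, EARL.outcome, EARL.failed))
>
>   print(g.serialize(format="turtle"))
>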
> 12. Limitations and underlying assumptions
> 13. Acknowledgements
> 14. Glossary
> 15. Appendix: Template for manual evaluation report
> 16. Appendix: Template for EARL report
>
> Kindest regards and happy discussing :)
>
> Eric
>
>

Received on Saturday, 15 October 2011 15:39:05 UTC