Review of WCAG-EM

Dear members of the WCAG 2.0 Evaluation Methodology Task Force (Eval TF).

Over the last few weeks I've been reviewing the latest draft (26 February 2013)
of the Website Accessibility Conformance Evaluation Methodology (WCAG-EM) 1.0.

In this message I provide all my detailed comments, structured according to
the table of contents of WCAG-EM. Before my detailed comments I would like
to raise several issues:

   - The TF has done an incredible amount of work to prepare the current
   draft of WCAG-EM, and I haven't had as much time as I would have liked
   for preparing my comments, so some of them may sound blunt. My apologies
   in advance.
   - You already know that I'm an observer of the Eval TF. Don't hesitate
   to ask me for clarification of any of my comments, as they may be too
   cryptic in some cases. I will do my best to reply to your questions.
   - It is highly probable that my comments will raise issues on which you
   have already reached consensus. It is not my intention to reopen debates
   or break that consensus.

My detailed comments are below. Best regards,
Loïc

*Introduction* (http://www.w3.org/TR/WCAG-EM/#introduction)

   - *Terms and definitions* (http://www.w3.org/TR/WCAG-EM/#terms)
   -
      - *Complete processes*. First, I think it should be singular
      (complete process). In addition, the current text is not a
      definition, as it just quotes text from WCAG 2.0. I suggest the
      following text as a definition: "Complete process: sequence of steps
      that need to be completed in order to accomplish an activity".
      - *Common functionality*. Given the definition (functionality that,
      if removed, *fundamentally* changes the purpose of the website), the
      term "common" is not strong enough. I believe that the term should
      emphasize the essential or fundamental character of this
      functionality, which is, by definition, more important than
      non-fundamental functionality. I suggest one of the following:
      -
         - Critical functionality (taken from note 2)
         - Fundamental functionality (taken from the definition)
         - Purpose-defining functionality (again, based on the definition)
      - *Common web pages*. I think that this definition (web pages that
      are relevant to the entire website) lacks precision. I think that the
      concept that defines these pages is that their underlying
      functionality is not specific to a given website but common to many
      websites. Maybe they could be named "general-purpose web pages" and
      then the definition could be "web pages whose purpose is general and
      not specific to the website they belong to". But I am not completely
      sure about this.
      - *(new definitions)* Given their relevance to WCAG-EM, I suggest
      adding two definitions from WCAG that are essential to the process of
      conformance evaluation: "conformance" and "satisfies a success
      criterion". The latter means that a success criterion is satisfied
      (passes) unless it fails, which to me has great implications for the
      evaluation of accessibility: a success criterion passes for a page
      unless it is demonstrated that it fails.
      -
         - *conformance*: satisfying all the requirements of a given
         standard, guideline or specification
         - *satisfies a success criterion*: the success criterion does not
         evaluate to 'false' when applied to the page

*Using this methodology* (http://www.w3.org/TR/WCAG-EM/#usage)

   - *Scope of applicability* (http://www.w3.org/TR/WCAG-EM/#applicability)
   -
      - *Particular types of websites* (
      http://www.w3.org/TR/WCAG-EM/#specialcases)
      -
         - *Website with separable areas*.
         -
            - First of all I think that it would be relevant to talk about
            "independence" between the areas. The first sentence could be
            "In some cases websites may have clearly separable areas where
            using one area does not require or depend on using another area
            of the website".
            - Then the example could be modified into something different:
            "An example could be a service company website with three
            areas: a public area for information purposes, a customer area
            for providing services to existing customers and an employee
            area for providing services to people working in the company".
            - The last sentence can be maintained.
            - The result of applying all my proposals would be: "In some
            cases websites may have clearly separable areas where using one
            area does not require or depend on using another area of the
            website. An example could be a service company website with
            three areas: a public area for information purposes, a customer
            area for providing services to existing customers and an
            employee area for providing services to people working in the
            company. Such areas can be considered as individual websites
            rather than sub-sites for the purpose of this document."
   - *Evaluation tools (Optional)* (
   http://www.w3.org/TR/WCAG-EM/#tools)
   -
      - I don't understand why this section is labelled as "optional". It
      just describes the relationship between the methodology and evaluation
      tools. I suggest removing the "(optional)".
   - *Review teams (Optional)* (http://www.w3.org/TR/WCAG-EM/#teams)
   -
      - I don't understand why this section is labelled as "optional". The
      section "Using this methodology" does not seem to me to be the place
      to define "requirements" for users of the methodology (and in fact
      the previous sections - scope of applicability, required expertise,
      evaluation tools - don't require anything). The "methodology
      requirements" really start in the next level-1 section ("Conformance
      Evaluation Procedure").
      - I suggest removing the "(Optional)".
      - In addition the wording of the paragraph could be modified to avoid
      references to "requirements" by deleting the third sentence: "The
      methodology defined by this document can be carried out by an individual
      evaluator with the skills described in section Required
Expertise. However,
      using the combined expertise of review teams provides better coverage for
      the required skills and helps identify accessibility barriers more
      effectively. Specific guidance is provided in Using Combined Expertise to
      Evaluate Web Accessibility."
   - *Involving users (Optional)* (http://www.w3.org/TR/WCAG-EM/#users).
   -
      - I don't understand why this section is labelled as "optional". I
      suggest making the same changes: removing the "(optional)" and
deleting the
      third sentence.

*Conformance evaluation procedure* (http://www.w3.org/TR/WCAG-EM/#procedure)

   - One issue with terminology: I think there is a lack of consistency in
   the use of the words "stages" and "steps". I suggest using "stage" for
   the high-level elements (1, 2, ...) and "step" for the inner levels (1a,
   1b, ...).
   - The description of the diagram (the paragraph below the diagram) seems
   to be incomplete. It seems to me that there are back arrows from each
   step to any previous step: that is, for instance, from step 5 to steps
   4, 3, 2 and 1. Am I right in interpreting the diagram this way?
   -
      - If I am right, then the description has to be changed: "The diagram
      depicts each of the five steps defined in this section, with forward
      arrows between each pair of consecutive steps: 1. Define the
      Evaluation Scope; 2. Explore the Target Website; 3. Select a
      Representative Sample; 4. Audit the Selected Sample; and 5. Report
      the Evaluation Findings. There are also arrows back from each step to
      any of its previous steps."
      - If I am wrong then the diagram should be changed to avoid this
      confusion.
   - *Step 1: Define the Evaluation Scope* (
   http://www.w3.org/TR/WCAG-EM/#step1)
   -
      - *Step 1.a: Define the Scope of the Website* (
      http://www.w3.org/TR/WCAG-EM/#step1a)
      -
         - Paragraph 1. Second sentence. The wording of this sentence is
         strange. I understand that WCAG-EM cannot use normative
language (shall,
         should), but the use of "may" doesn't work for me. What about
"This scope
         definition will follow the terms established in section Scope of
         Applicability"?
      - *Step 1.b: Define the Goal of the Evaluation* (
      http://www.w3.org/TR/WCAG-EM/#step1b)
      -
         - The terminology is confusing. The name of the step is "define
         the goal of the evaluation", whereas the content talks about "types of
         evaluation". These two should be aligned. Either the title of
the step is
         changed into "Define the type of evaluation" or the content
of the step is
         changed to describe different goals instead of evaluation types.
      - *Step 1.d: Define the Techniques and Failures to be Used (Optional)*
       (http://www.w3.org/TR/WCAG-EM/#step1d)
      -
          - I strongly believe that this step does not belong in this stage
          (1. Define evaluation scope) but in the next one (2. Explore the
          target website). More precisely, I think it should be renumbered
          as step 2e. The reason is that, to me, one can only define
          techniques and failures once the functionality, the types of web
          pages and the technologies relied upon are known.
         - If this step is moved to become 2e, then the last paragraph
         should be revisited and changed accordingly. For instance, the second
         sentence ("This definition...") refers to the exploration stage.
   - *Step 2: Explore the Target Website* (
   http://www.w3.org/TR/WCAG-EM/#step2)
   -
      - *Step 2.b: Identify Common Functionality of the Website* (
      http://www.w3.org/TR/WCAG-EM/#step2b)
      -
          - As I have suggested alternatives for "common functionality"
          (such as "critical functionality", "fundamental functionality" or
          "purpose-defining functionality"), this step should be modified
          to be consistent with the new term.
      - *Step 2.d: Identify Web Technologies Relied Upon* (
      http://www.w3.org/TR/WCAG-EM/#step2d)
      -
          - There is a problem with the use of "relied upon" in this step.
          According to the definition of technologies that are "relied
          upon", it means that the content would not conform to WCAG if
          those technologies were not supported. But the description of
          step 2.d talks about "technologies relied upon to provide the
          website", which is a different concept. My suggestion is either
          to avoid the use of "relied upon" in this step or to use it in
          relation to WCAG conformance and not to the overall website.
         - My preferred option is the first one: to avoid the use of
         "relied upon" and use the term "technologies used" instead.
         - If the second option is selected then this step would require
         two "substeps": (1) to identify all the technologies that are used to
         provide the website and (2) to identify which of those
technologies are
         relied upon for accessibility purposes.
   - *Step 3: Select a Representative Sample* (
   http://www.w3.org/TR/WCAG-EM/#step3)
   -
      - I have one general comment on the "sampling" stage. I have not seen
      any guidance related to how to measure the confidence level based on
      parameters of the sample. But this confidence level is essential when
      declaring the results of the evaluation of the website as a whole.
      Some guidance should be included in the steps below (I include a
      rough illustrative sketch of this kind of reasoning at the end of my
      comments on step 3.e).
      - *Step 3.e: Include a randomly selected sample* (
      http://www.w3.org/TR/WCAG-EM/#step3e)
      -
         - I agree that a random sample can act as a verification indicator
         of the results found in the structured sample, but I don't
really agree
         that a random sample can be used to increase the confidence
in the results
         of the evaluation of the structured sample.
          - The reason is that the structured sample and the random sample
          are two different samples, and my understanding from my (limited)
          knowledge of statistics is that these two different samples
          cannot be combined into a single sample to derive a global result
          for the website with an increased confidence level.
         - So I think that this step should be focused on using the random
         sample as a "control sample" to see whether the results of
the structured
         sample are replicated in the random sample.
         - Another possible use of the random sample would be to increase
         the probability of finding new accessibility errors that were
not present
         in the structured sample.
         - In addition I would like to know the reason for the numbers
         used. Why 5%? Why a minimum of 5 pages? What if the website
only contained
         10 pages and 6 of them were already part of the structured
sample? It would
         be great to have some reasoning explaining how the
probability of finding
         new errors could increase by increasing the percentage of
pages selected in
         the random sample.
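          - To make that last point concrete, here is a minimal sketch (my
          own illustration, not something defined in WCAG-EM) of how the
          chance of catching at least one failing page grows with the size
          of the random sample, assuming pages are drawn uniformly at
          random and independently:

            # Illustrative assumption: a fraction `failure_rate` of all web
            # pages exhibits a given failure, and sampled pages are drawn
            # uniformly at random and independently of each other.
            def detection_probability(failure_rate: float, sample_size: int) -> float:
                """Probability that the random sample contains at least one
                page exhibiting the failure."""
                return 1.0 - (1.0 - failure_rate) ** sample_size

            for n in (5, 10, 25, 50):
                print(n, round(detection_probability(0.10, n), 2))
            # prints: 5 0.41, 10 0.65, 25 0.93, 50 0.99

          Under these assumptions, going from 5 to 10 randomly selected
          pages raises the probability of detecting a failure present on
          10% of the pages from about 41% to about 65%. This is the kind of
          reasoning I would like the step to spell out when justifying the
          5% and 5-page figures.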
      - *Step 3.f: Eliminate Redundancies in the Sample (Optional)* (
      http://www.w3.org/TR/WCAG-EM/#step3f)
      -
         - I don't agree with this step being optional. I cannot think of
         any good reason to maintain duplicated pages in the sample. That would
         decrease the relevance of the sample and the confidence level of the
         result, wouldn't it? I think that this step should be mandatory.
   - *Step 4: Audit the Selected Sample* (
   http://www.w3.org/TR/WCAG-EM/#step4)
   -
      - General comment about step 4: some of the content of this step
      seems to me to be out of scope. WCAG-EM is about testing conformance
      to WCAG 2.0, that is, determining whether a website conforms to the
      technical requirements of WCAG 2.0, which are the 61 success criteria
      and the 5 conformance requirements. WCAG-EM should not deal with
      other types of accessibility evaluation, such as user testing or the
      concept of "accessibility-in-use" as defined in current research. So
      the first sub-step, 4.a (check for use cases), seems to me to be out
      of scope.
      - What I expected as content of step 4 is guidance about:
      -
         - how to evaluate each success criterion for each element of a web
         page of the sample, including guidance about the best
approach: inspection,
         testing by experts, testing by users...
         - how to combine the results obtained for each element in order to
         get a result for the page,
         - how to evaluate the 5 conformance requirements once the result
         of each SC is known for the page,
         - how to combine the results obtained for each page to get the
         global result for the site (that then will be reported in step 5).
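       - As an illustration of the second and fourth points above, here is
       a minimal sketch (my own, with invented names, not part of WCAG-EM)
       of one possible aggregation rule: a success criterion is satisfied
       for a page unless some checked element fails it, and it is satisfied
       for the website only if it is satisfied on every sampled page.

         from typing import Dict, List

         def page_result(element_results: List[bool]) -> bool:
             """A success criterion is satisfied for a page unless at least
             one evaluated element fails it."""
             return all(element_results)

         def site_result(page_results: Dict[str, bool]) -> bool:
             """A success criterion holds for the website only if it holds
             on every page of the sample."""
             return all(page_results.values())

       Whether this strict "every sampled page" rule is the right one is
       exactly the kind of question I would expect step 4 to answer.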
      -
      - Note 1.
      -
          - The first note is confusing. If there are repetitive elements
          on every web page it makes sense to evaluate them only once, and
          then record and reuse the evaluation results. It is clear that an
          evaluator should not evaluate the same repetitive elements
          several times, but it is also important to note that the result
          assigned after the first evaluation has to be used when
          evaluating each page where the repetitive elements appear!
          - My understanding of the second sentence of this note is that it
          implies that the repetitive elements are only evaluated once and
          their result is only used once... I'm probably misreading it, but
          I think that the text would benefit from some rewriting.
          - In addition, I don't see the point of referring back to step
          1.b in this note. What would be the difference in dealing with
          repetitive elements depending on the type of expected report?
         - My proposal for that second sentence would be:
         -
            - "An evaluator does not need to repeatedly identify successes
            and failures in meeting the conformance target for these repetitive
            elements on every web page. The success and failures for
such repetitive
            elements can be identified once and then be reused in all
the pages those
            elements appear."
       - Note 2.
      -
          - I think that the wording of this note is confusing. It should
          clearly explain that, according to WCAG 2.0, "no matching content
          for an SC" really means "the SC is satisfied", without question.
          It should also explain that "not applicable" could be used as a
          "description" of the result but not as a result.
         - My proposal:
         -
             - According to WCAG 2.0, Success Criteria to which there is no
             matching content are satisfied. In such cases, an evaluator
             may use text such as "Not applicable" as a description
             associated with the outcome "satisfied", to denote the
             particular situation where a success criterion was satisfied
             because no relevant content was applicable.
         -
      - *Step 4.a: Check for the Broadest Variety of Use Cases* (
      http://www.w3.org/TR/WCAG-EM/#step4a)
      -
          - The title is misleading, because requirement 4.a is about
          checking the conformance requirements, not about use cases.
          - I suggest changing the title to: "Step 4.a: Check the
          conformance requirements".
         - I find that this step has gaps in the content I expected it to
         have. In my opinion checking for the 5 conformance
requirements is not a
         straightforward task and more guidance is needed. I would like to have
         guidance outlining a process such as:
         -
            - a) Look for failures (as published)
            - b) If no failures are found, check if any SC fails
             - c) If no SC fails and an "in-depth analysis" has been
             selected in step 1.b, then determine which techniques have
             been used to conform to each SC
            - ...
          - My proposed process is based on the definition of "satisfies a
          SC" in WCAG: "the success criterion does not evaluate to
          'false'". Given that definition, it is not really necessary to
          find/determine the techniques: it should be enough to be sure
          that no SC fails. (I include a small sketch of this decision flow
          at the end of my comments on step 4.a below.)
          - All the text about scenarios, use cases, personas and user
          involvement is, to me, out of scope when dealing with testing for
          conformance with a standard. These types of things are essential
          during the development process when following a user-centred
          approach, but I believe that they don't provide added value in
          the context of checking conformance with WCAG 2.0. If this text
          about scenarios and users is to be kept, I'd suggest transforming
          it into a note.
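          - As announced above, here is a rough sketch of the decision flow
          I am proposing for each success criterion on each page. It is
          only my own illustration; the function and parameter names are
          invented and are not part of WCAG-EM or WCAG 2.0.

            # Hypothetical decision flow for one success criterion on one
            # page. `matching_failures` stands for the published WCAG 2.0
            # failures that apply to the page content; `evaluates_to_false`
            # is the result of checking the success criterion itself.
            def check_success_criterion(matching_failures, evaluates_to_false,
                                        in_depth_analysis, techniques_used):
                if matching_failures:     # a) a documented failure applies
                    return ("failed", matching_failures)
                if evaluates_to_false:    # b) the SC evaluates to 'false'
                    return ("failed", [])
                if in_depth_analysis:     # c) only for the in-depth goal (step 1.b)
                    return ("satisfied", techniques_used)
                return ("satisfied", [])  # does not evaluate to 'false'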
      - *Step 4.b: Assess Accessibility Support for Features* (
      http://www.w3.org/TR/WCAG-EM/#step4b)
      -
          - First, this should be linked to step 2.d (identify web
          technologies relied upon). The first "cut" for features that have
          accessibility support is to look for the ones using the
          technologies identified in step 2.d.
         - Second, and according to my comment for step 4.a, I think this
         could be optional as it is only needed when the evaluation report will
         contain detailed information about how each SC is met (see my outlined
         process, step c)).
       - *Step 4.c: Use Techniques and Failures Where Possible (Optional)* (
      http://www.w3.org/TR/WCAG-EM/#step4c)
      -
          - I agree that the use of techniques is optional, but I
          fundamentally disagree with the use of failures being optional.
          Even though failures are part of an informative document, it is
          100% certain that if one of those failures is found, then the
          corresponding SC fails for the page being evaluated. I cannot
          imagine any exception to that idea.
          - As I already explained above, given the definition of
          "satisfying a SC", the concept of "failures" is an essential tool
          for evaluators and should not be optional.
         - So I propose dividing this into two:
         -
            - 4.c.1 Use failures (mandatory)
            - 4.c.2 Use techniques (optional)
         - If this is accepted then the text of these two steps should be
         rewritten to fit the new structure.
      - *Step 4.d: Archive Web Pages for Reference (Optional)* (
      http://www.w3.org/TR/WCAG-EM/#step4d)
      -
         - I think that it would be good to have an explanation of the
         reasons to make this step optional. My first idea when
reading the title of
         the step was to ask for it to be mandatory, as I thought that
any evaluator
         should really archive the pages being evaluated (at least the
URI and data
         required to be able to re-evaluate the page if needed). But
then I read the
         details of this step and I tend to agree that all this information is
         optional. This is why I think that some clarification is needed.
      - *Step 4.e: Record Software Tools and Methods Used* (Optional) (
      http://www.w3.org/TR/WCAG-EM/#step4e)
      -
         - I think that this step should not be optional. I agree that it
         is not always necessary to report on this information but
evaluators should
         always record the information so it can be accessed if needed.
   - *Step 5: Report the Evaluation Findings* (
   http://www.w3.org/TR/WCAG-EM/#step5)
   -
      - *Step 5.a: Provide Documentation for Each Step* (
      http://www.w3.org/TR/WCAG-EM/#step5a)
      -
          - In the list of information documented in the report there is
          one reference missing: step 3.f (optional), which should be
          listed with all the other sub-steps of step 3.
          - I think that the approach for documenting failures in the
          detailed report could be similar to the one in the basic report,
          but on a page-by-page basis. That is, if a success criterion
          fails, then at least one example is provided on the page. My
          proposal for replacing the last sentence:
         -
            - "Where failures in meeting WCAG 2.0 Success Criteria on a web
            page are identified, at least one examples is provided for
each identified
            failire in the page."
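          - To make that proposal concrete, a detailed-report entry for a
          single page could look roughly like the following. The structure
          and field names are purely hypothetical, my own illustration
          rather than anything defined by WCAG-EM.

            # Hypothetical per-page entry in a detailed report: at least one
            # example recorded for each success criterion that fails on the
            # page. URL and examples are made up.
            entry = {
                "page": "http://www.example.org/products",
                "failed_success_criteria": {
                    "1.1.1": ["product image is missing a text alternative"],
                    "2.4.4": ["link text 'click here' does not describe its purpose"],
                },
            }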
      - *Step 5.c: Provide a Performance Score (Optional)* (
       http://www.w3.org/TR/WCAG-EM/#step5c)
      -
          - I appreciate the effort spent in providing some guidance in
          this difficult area of web accessibility reporting. The three
          options are well explained, with details on their sensitivity.
         - This area of accessibility metrics is still under research and I
         think that WCAG-EM should at least acknowledge the W3C report (
         http://www.w3.org/TR/accessibility-metrics-report/) that was
         produced after a symposium organized in December 2011 (
         http://www.w3.org/WAI/RD/2011/metrics/).
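          - Purely as an illustration of how simple (and how coarse) such
          scores can be, here is a toy per-page score averaged over the
          sample. This is my own example, not one of the options described
          in the step, and it suffers from the known limitations discussed
          in the W3C metrics report.

            # Toy metric: share of applicable success criteria satisfied
            # per page, averaged over the sampled pages. Illustrative only.
            def page_score(satisfied: int, applicable: int) -> float:
                return satisfied / applicable if applicable else 1.0

            def site_score(pages: list) -> float:
                """`pages` is a list of (satisfied, applicable) pairs."""
                return sum(page_score(s, a) for s, a in pages) / len(pages)

            print(site_score([(58, 61), (61, 61), (45, 61)]))  # ~0.90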
   - *Appendix C: Example Reports* (http://www.w3.org/TR/WCAG-EM/#reports)
   -
      - I have no comments as the review note says that this section is to
      be updated to better align with step 5.



-- 
---------------------------------------------------------------
Loïc Martínez-Normand
DLSIIS. Facultad de Informática
Universidad Politécnica de Madrid
Campus de Montegancedo
28660 Boadilla del Monte
Madrid
---------------------------------------------------------------
e-mail: loic@fi.upm.es
tfno: +34 91 336 74 11
---------------------------------------------------------------
