
WCAG-EM comments. Section 3.1 Define Scope and 3.2 Explore Website

From: Ramón Corominas <rcorominas@technosite.es>
Date: Tue, 16 Oct 2012 18:31:44 +0200
Message-ID: <507D8BF0.8030202@technosite.es>
To: Eval TF <public-wai-evaltf@w3.org>

Dear Eval TF,

I continue with my reading of the full WCAG-EM document. Here are some 
more comments for Sections 3.1 and 3.2.

3.1 Define the Scope

3.1.1 Step 1.a: Define the Scope of the Website

[WCAG-EM] "This scope definition may not contradict the terms 
established in section 2.1"

This means that no exceptions can be considered, which may not be 
possible in some cases (the website as a whole would fail because a 
single page or piece of functionality fails). I've already commented 
on this issue.

"third-party content"

Maybe it would be good to clarify here what "third-party" means. Many of 
our clients (or, specifically, their web management/development teams) 
are very confused about this and tend to consider "third-party" to mean 
"any content not created by our team", which is not always true. For 
example, in a newspaper, journalists work for the same "web owner", so 
they are not "third-party".

3.1.2 Step 1.b: Define the Goal of the Evaluation

IMO, the "reporting options" shouldn't be included in this section 
(there is a complete section about that). Therefore, I would change the 
wording and not use "report" here, but "evaluation" or "analysis" (as in 
"In-depth analysis").

Indeed, we have many different templates for reporting "Basic", 
"Detailed" and "In-depth" analyses, depending on how we provide the 
results: organised by Principles/Guidelines/Success Criteria, by 
sections/pages, by type of barrier, by technology, by disability... This 
depends heavily on the aim/target audience of the report, even if the 
"depth" of the evaluation is the same.

Basic Report [Evaluation]

Another typical use case for Technosite is in pre-sales activities or 
awareness actions. Our clients (or our sales team) want to know if their 
websites are accessible, or if they meet applicable laws or regulations. 
Then we perform a "quick check" and create a very simple report 
highlighting the most important barriers/failures. Training sessions can 
also be a use case for this.

Detailed Report [Evaluation] vs. In-Depth Analysis

Again, the wording seems to convey that "detailed" is a type of report 
that is always organised "per page", while "in-depth" is always organised 
"per issue". This would limit the way an evaluator provides the results 
according to different purposes or target audiences. I think that the 
type of report should be independent of the depth of the analysis.

For example, we rarely report page by page, or even all the possible 
instances of an issue. Instead, we provide information about the types 
of errors and the range of pages where each error occurs (or "global" if 
it is a general problem), and then give some examples of the errors. 
This allows developers to understand the issue without having to read 
500 pages of repeated information.

3.1.3 Step 1.c: Define the Conformance Target

I would extend the "target" to cover not only a conformance level, but 
also additional success criteria from levels other than the "conformance 
target". Indeed, some laws or internal QA procedures include additional 
criteria (the old Spanish regulation did this with three WCAG 1.0 AAA 
checkpoints, which had a higher priority level in the regulation).

I think this would allow further integration with QA policies. For 
example, a "composed target" could be: "Level AA plus SC 1.4.6, SC 1.4.9, 
SC 2.4.9, SC 3.1.5 and SC 3.2.5" (indeed, this is more or less our own 
internal QA policy for websites developed at Technosite).
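To illustrate (this sketch and its data structure are my own, not 
anything defined in WCAG-EM or our actual tooling), such a composed 
target could be expressed as a base level plus a list of extra success 
criteria:

```python
# Illustrative sketch only: a "composed" conformance target, i.e. a base
# WCAG 2.0 level plus additional success criteria from other levels.
# The structure and names are hypothetical, not from WCAG-EM.
COMPOSED_TARGET = {
    "base_level": "AA",
    "additional_sc": ["1.4.6", "1.4.9", "2.4.9", "3.1.5", "3.2.5"],
}

def meets_composed_target(passed_levels, passed_sc, target=COMPOSED_TARGET):
    """True only if the base level and every additional SC are satisfied."""
    return (target["base_level"] in passed_levels
            and all(sc in passed_sc for sc in target["additional_sc"]))

# A site that reaches Level AA but misses SC 1.4.6 would not meet the target:
print(meets_composed_target({"A", "AA"}, {"1.4.9", "2.4.9", "3.1.5", "3.2.5"}))
# → False
```

The point of the sketch is simply that the target is then a single, 
checkable definition, rather than "Level AA, plus some extras described 
in prose".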

3.1.4 Step 1.d: Define the Context of Website Use

Although I share this need, this is not mandatory according to WCAG 2.0 
conformance (it is an optional component of a Conformance Claim).

[WCAG-EM] "It is necessary to determine the primary context in which a 
website is used and the minimum set of web browsers and assistive 
technology that must be supported by the website."

I see a risk here. Talking about "primary context" could lead one to 
think that the website can be "conformant" if only the "primary context" 
is covered. For example, most PDF features are not fully "accessibility 
supported" outside Windows, so if we define Windows as the "primary 
context" (85% of users), we still exclude the other 15%, which IMO means 
that the website cannot claim conformance.

[WCAG-EM] "This definition of target users and tools needs to meet the 
terms defined in WCAG 2.0 Level of Assistive Technology Support Needed 
for 'Accessibility Support'"

But the referenced document says: "The WCAG Working group and the W3C do 
not specify which or how many assistive technologies must support a Web 
technology in order for it to be classified as accessibility supported. 
This is a complex topic and one that varies both by environment and by 
language. There is a need for an external and international dialogue on 
this topic."

Thus, the question remains open to interpretation, so I would strongly 
discourage the use of "primary context" unless we define some rules for 
what "primary" can and cannot be.

[WCAG-EM] "and needs to be supported throughout the website. (...) 
Accessibility support needs to be uniform throughout a single website."

Although this is the ideal situation, it is not possible or feasible in 
some cases (I gave some examples of this in my previous comments on 
Section 2).

I know that, in general, it's not good to allow exclusions, but in some 
situations prohibiting them would discourage implementing accessibility 
for the rest of the website. I've heard this idea several times: "If we 
cannot obtain a label because one page/section cannot conform, then we 
will not waste our time doing accessibility for the rest". I would try 
to explore this issue further.

3.1.5 Step 1.e: Define the Techniques to be Used (Optional)

[Ed] Since evaluators do not change the content of the pages being 
evaluated, I would not use "Techniques to be Used", but "Techniques to 
be Taken into Account", "Techniques to be Considered" or a similar 
wording.

Introducing Techniques into the evaluation process can also be a 
controversial step. Firstly, because they are only informative; but more 
importantly, because some "sufficient" techniques are not accessibility 
supported in many situations. Thus, "sufficient" does not always imply 
"conformance", so it is important to clarify this.

We usually distinguish two terms: "compliance" vs. "conformance". A web 
page can "comply" with an SC through a sufficient technique (if the 
technique is sufficient, it means that there is at least one context of 
use where the technique guarantees conformance). However, to ensure 
"conformance", the technique must pass the Conformance Requirements, so 
the "sufficient technique" also needs to be an "accessibility supported 
way of using a technology".

Conversely, a web page can conform even if it has content that fails an 
SC, provided that the failing content is complementary (inaccessible) 
content that meets CR #5 (non-interference).

Therefore, "sufficient" doesn't always mean "conformant", nor does 
"failure" necessarily mean "non-conformant".

Maybe we can add an explanation to put the techniques in context: 
"Sufficient Techniques can only be considered if they are accessibility 
supported; Failures can only be considered if they apply to content that 
is relied upon".

3.2 Explore the Target Website

According to 2.1, we cannot exclude any part of a website. However, if 
there is a private area to which we don't have access, we must assume 
that the private area is excluded, mustn't we?

For example, if we are doing a large-scale evaluation of banking 
websites without having an account in each of them, we must limit our 
evaluation to public sections, so the scope would be "The xxx bank 
website except the private area". Again, exclusions are needed.

3.2.1 Step 2.a: Identify Common Web Pages
3.2.2 Step 2.b: Identify Common Functionality

Is "common" equivalent to "typical"? Is it "essential"? Accessed from 
everywhere?

The term is especially confusing when I try to understand "common 
functionality". Is a common functionality one that is accessed from 
every "screen" of a web application? If the functionality is only in one 
part of the app, is it then not common?

Anyway, why not use "relevant"? It is the first word of the definition, 
and it seems to convey exactly what you want to look at here.

In any case, it would be good to provide examples of "common 
functionality" in web applications, and maybe a definition in Section 1.4.

3.2.3 Step 2.c: Identify the Variety of Web Page Types

In the option "Web pages that change appearance, behavior, and content 
depending on the user, device, and settings", I think "browser, 
platform, orientation" should be added.

Maybe an explicit mention of Responsive Web Design would also be good, 
since this technique can greatly influence the evaluation results.

3.2.4 Step 2.d: Identify Web Technologies Relied Upon

[Ed] "During this step the web technologies relied upon to provide the 
website are identified and documented"

Provide the website, or "provide the website's information and 
functionality"? Maybe I am too technical, but when I read this sentence 
I think about TCP/IP and HTTP (the technologies that "provide" the 
website).

In addition, HTML5 has many different features with varying 
accessibility support (for example, the new sectioning elements, <audio> 
and <video>, <canvas>). This step should identify not only 
"technologies", but also those features separately. The same argument 
applies to JS libraries such as jQuery, Dojo, etc., which have different 
plugins with varying accessibility support, and probably to other 
technologies.
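As a rough illustration of what such a feature-level inventory could 
look like (purely my own sketch; the tag list and names are hypothetical 
and not part of WCAG-EM), one could scan pages for the specific HTML5 
features in use, so that each feature can later be checked for 
accessibility support separately:

```python
# Hypothetical sketch: inventory HTML5 features on a page so each can be
# assessed for accessibility support separately, rather than recording
# only "HTML5" as a relied-upon technology.
from collections import Counter
from html.parser import HTMLParser

# Illustrative selection of features with varying accessibility support.
FEATURES = {"audio", "video", "canvas", "section", "article", "nav", "aside"}

class FeatureScanner(HTMLParser):
    def __init__(self):
        super().__init__()
        self.counts = Counter()

    def handle_starttag(self, tag, attrs):
        # Count every opening tag that matches one of the tracked features.
        if tag in FEATURES:
            self.counts[tag] += 1

def inventory(html: str) -> dict:
    """Return a mapping of detected feature tags to occurrence counts."""
    scanner = FeatureScanner()
    scanner.feed(html)
    return dict(scanner.counts)

page = "<nav>menu</nav><video src='a.mp4'></video><canvas></canvas>"
print(inventory(page))  # → {'nav': 1, 'video': 1, 'canvas': 1}
```

The same idea would apply to JS libraries: listing the specific widgets 
or plugins in use, not just "jQuery".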

To be continued...


Ramón Corominas
Accessibility specialist
Technosite - Fundación ONCE
+34 91 121 0330
Received on Tuesday, 16 October 2012 16:32:27 UTC
