[wbs] response to 'Approval for publication of WCAG-EM 1.0 as a W3C Working Group Note'

The following answers have been successfully submitted to 'Approval for
publication of WCAG-EM 1.0 as a W3C Working Group Note' (public) for David
MacDonald.

> 
> ---------------------------------
> Abstract
> ----
> 
> 
> 

 * (x) accept this section
 * ( ) accept this section with the following suggestions
 * ( ) I do not accept this section for the following reasons
 * ( ) I abstain (not vote)
 


> 
> 
> ---------------------------------
> Introduction
> ----
> 
> 
> 

 * (x) accept this section
 * ( ) accept this section with the following suggestions
 * ( ) I do not accept this section for the following reasons
 * ( ) I abstain (not vote)
 


> 
> 
> ---------------------------------
> Using This Methodology
> ----
> 
> 
> 

 * ( ) accept this section
 * ( ) accept this section with the following suggestions
 * (x) I do not accept this section for the following reasons
 * ( ) I abstain (not vote)
 
Combined Expertise (Optional)
"... using the combined expertise of different evaluators may provide
broader coverage of the required skills and help identify accessibility
barriers more effectively..."

===========
The referenced document "Using Combined Expertise to Evaluate Web
Accessibility" is 12 years old (2002)

I appreciate the disclaimer in the first sentence and the "optional"
status, but the message seems clear: more evaluators on the same content
is better. We had some discussion of this section at CSUN and at TPAC.
Several veteran evaluators who have worked in large organizations also
felt that this is not the reality of how things work. The accessibility
evaluation companies that I know of use teams of evaluators, but they
split up a site and each evaluator takes a separate section. They are not
combining expertise on the same content.

I think the current language unnecessarily gives an advantage to
organizations with teams of evaluators. Procurement departments may get
the mistaken impression that such organizations better meet this
recommendation than smaller firms, even though in reality both have a 1:1
ratio of one evaluator to any given chunk of content.

This section is not about the important recommendation of involving users
with disabilities; that is a separate section of this document. This
section is about teams of evaluators looking at the same content.

How about a friendly amendment that might better address the issue?
===============
Combined Expertise (Optional)

"This methodology can be carried out by an individual evaluator with the
skills described in the previous section (Required Expertise). Using the
combined expertise of different evaluators may provide an effective way to
evaluate content when some of the required expertise is missing from one
team member but is possessed by another on the team.  While not required
for using this methodology, the use of review teams may sometimes be
necessary and/or beneficial. Using Combined Expertise to Evaluate Web
Accessibility provides further guidance on using the combined expertise of
review teams, which is beyond the scope of this document."

======
Also, I would put the paragraphs "Involving Users (Optional)" (i.e., users
with disabilities) and "Evaluation Tools (Optional)" above this paragraph
in the section, because they are more important.



> 
> 
> ---------------------------------
> Scope of Applicability
> ----
> 
> 
> 

 * (x) accept this section
 * ( ) accept this section with the following suggestions
 * ( ) I do not accept this section for the following reasons
 * ( ) I abstain (not vote)
 
"...amount of replaced web pages in a fresh sample is typically ~50% though
this could be increased when web pages on a website mostly conform to WCAG
2.0."

I'm wondering where the 50% figure came from. I don't particularly object,
just wondering.

> 
> 
> ---------------------------------
> Step 1: Define the Evaluation Scope
> ----
> 
> 
> 

 * (x) accept this section
 * ( ) accept this section with the following suggestions
 * ( ) I do not accept this section for the following reasons
 * ( ) I abstain (not vote)
 


> 
> 
> ---------------------------------
> Step 2: Explore the Target Website
> ----
> 
> 
> 

 * ( ) accept this section
 * (x) accept this section with the following suggestions
 * ( ) I do not accept this section for the following reasons
 * ( ) I abstain (not vote)
 
"Note: Where possible, it is often also useful to identify the libraries
and components used to create the website, such as Dojo, jQuery" ... 

Perhaps add something like: "if a CMS is used, it will be helpful to
identify it and its version, along with a list of library components
added to its core framework."
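
For illustration only (nothing the methodology would require), here is a
rough Python sketch of how an evaluator might record such hints from a
page; the URL and the detection heuristics (generator meta tag, well-known
script file names) are my own assumptions:

    import re
    import urllib.request

    def identify_technologies(url):
        """Fetch a page and report hints about the CMS and JS libraries used."""
        html = urllib.request.urlopen(url).read().decode("utf-8", errors="replace")
        findings = {}

        # Many CMSs announce themselves (and their version) in a
        # <meta name="generator"> tag, e.g. "WordPress 3.9".
        generator = re.search(
            r'<meta[^>]+name=["\']generator["\'][^>]+content=["\']([^"\']+)["\']',
            html, re.IGNORECASE)
        if generator:
            findings["cms"] = generator.group(1)

        # Libraries: look for well-known script file names (jQuery, Dojo, ...).
        findings["libraries"] = sorted(set(
            name.lower() for name in re.findall(
                r'src=["\'][^"\']*(jquery|dojo|mootools)[^"\']*\.js',
                html, re.IGNORECASE)))
        return findings

    print(identify_technologies("https://example.org/"))  # placeholder URL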

> 
> 
> ---------------------------------
> Step 3: Select a Representative Sample
> ----
> 
> 
> 

 * (x) accept this section
 * ( ) accept this section with the following suggestions
 * ( ) I do not accept this section for the following reasons
 * ( ) I abstain (not vote)
 
I like the 10% idea... it's practical and easy to calculate. This entire
section really makes sense to me now.
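
The arithmetic really is trivial. A throwaway sketch, assuming (as I read
the draft) that the 10% applies to the structured sample and rounds up to
a whole page:

    import math

    def random_sample_size(structured_sample_size):
        """Random sample = 10% of the structured sample, rounded up."""
        return math.ceil(0.10 * structured_sample_size)

    # e.g. structured samples of 10, 25, 42 and 80 pages
    for n in (10, 25, 42, 80):
        print(n, "->", random_sample_size(n))  # 1, 3, 5, 8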

> 
> 
> ---------------------------------
> Step 4: Audit the Selected Sample
> ----
> 
> 
> 

 * ( ) accept this section
 * ( ) accept this section with the following suggestions
 * ( ) I do not accept this section for the following reasons
 * ( ) I abstain (not vote)
 
"For example, evaluators may utilize specific testing instructions and
protocols that may be publicly documented or only available to the
evaluators."

We may need to add a sentence saying that evaluators would need to be able
to demonstrate that the techniques they chose actually meet the Success
Criteria.

============

"Optionally, an evaluation report can specifically indicate Success
Criteria for which there is no relevant content, for example, with "not
present".

We may want to check in with Gregg on this... I personally don't have a
problem with it, but it was a pretty hot topic at one point. "Not present"
is better than "N/A", and it may be OK with him.



> 
> 
> ---------------------------------
> Step 5: Report the Evaluation Findings
> ----
> 
> 
> 

 * ( ) accept this section
 * ( ) accept this section with the following suggestions
 * ( ) I do not accept this section for the following reasons
 * ( ) I abstain (not vote)
 
Methodology Requirement 5.c:

Conformance level *evaluated*
Accessibility support *baseline*

Perhaps add something like:
"If an automated test has been conducted, provide either a list of URLs or
the number of pages crawled."


Perhaps a sentence should be added, such as
"All pages sampled in this evaluation pass WCAG 2.0";
this could help distinguish the evaluation statement from a WCAG
conformance claim.
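
To make the suggestion concrete, something along these lines is the kind
of report fragment I have in mind (the field names and values are purely
illustrative, not anything the methodology prescribes):

    # Hypothetical report fragment; keys and values are illustrative only.
    report = {
        "conformance_level_evaluated": "AA",
        "accessibility_support_baseline": ["screen reader X + browser Y"],
        "automated_crawl": {
            "pages_crawled": 1200,  # or list the URLs instead
        },
        "summary": "All pages sampled in this evaluation pass WCAG 2.0 Level AA.",
    }

    for key, value in report.items():
        print(key, ":", value)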



> 
> 
> ---------------------------------
> Remaining Comments
> ----
> Provide any remaining comments that you may have.
> 
> 
Comments: 
It's come a long way, and I think it is just about ready. It's hard to get
a bunch of evaluators to agree, and that has, for the most part, been
accomplished. Congrats all around.

> 
> These answers were last modified on 30 June 2014 at 12:54:59 U.T.C.
> by David MacDonald
> 
Answers to this questionnaire can be set and changed at
https://www.w3.org/2002/09/wbs/1/WCAG-EM-20140623/ until 2014-06-30.

 Regards,

 The Automatic WBS Mailer

Received on Monday, 30 June 2014 12:57:03 UTC