WCAG-EM Draft comments

Hello everyone,

I get a proxy error when trying to enter my comments on the WCAG-EM draft into the questionnaire, so you are getting them here by mail instead.

Regards,
Detlev


-------------------------------------------------

Priority:
medium

Location:
3.1.4 Step 1.d: Define the Context of Website Use, Requirement 1.d

Current wording:
"..noting that the definition of software support shall not conflict with the WCAG 2.0 guidance on the Level of Assistive Technology Support Needed for Accessibility Support.

"It is often not feasible for websites to support accessibility on every combination of web browser, assistive technology, and operating system that they run on, nor is it possible to test with every such combination of tools."

Suggested revision:
I cannot suggest an alternative wording right now - but a few things seem strange:
* the term 'software support' does not appear in the WCAG section referenced
* The WCAG section referenced notes all the known problems of AT and then "defers the judgment of how much, how many, or which AT must support a technology to entities closer to each situation that set requirements for an organization, purchase, community, etc".
Is the evaluator this entity, in the context of an evaluation using WCAG-EM?
So for any evaluator, what are the grounds on which he/she would select the "minimum set of web browsers and assistive technology" for public websites, for example? Does the commissioner have a say in that? Are evaluators free to define or restrict the set of technologies as much as they like? The section provides no answers.

* We should be careful with the word "tools". In the section quoted above (current wording), 'tools' is used to refer to user agents incl. assistive technology. In other sections (e.g. 3.4.4 Step 4.d: Archive Web Pages for Reference), the word 'tools' is used for evaluation tools (WAT, bookmarklets, validators, etc.).
I would suggest using 'tools' exclusively for evaluation tools (browser toolbars, bookmarklets, validators or automatic checkers, etc.) and sticking with 'user agent' and 'assistive technology' for the environment defined in 3.1.4 Step 1.d: Define the Context of Website Use.

I also suggest, as a minimum baseline for all evaluations of public internet sites, the use of the two most widely distributed web browsers that offer good accessibility support at the time of evaluation (currently Firefox and Internet Explorer).
The need to use *both* baseline browsers can be limited to those aspects of the evaluation where rendering differences tend to affect results. For SCs like 1.4.3 Contrast (Minimum), 2.4.2 Page Titled or 2.4.6 Headings and Labels there is unlikely to be any difference across UA renderings, whereas for SCs like 1.4.4 Resize Text, 2.1.1 Keyboard, 2.4.7 Focus Visible, or 4.1.2 Name, Role, Value there can be notable and significant differences in rendering behaviour (and, in turn, actual accessibility).
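To make the split concrete, here is a minimal sketch (in Python, purely illustrative - the selection of rendering-sensitive SCs and the function name are my own, not anything defined in the draft):

BASELINE_BROWSERS = ["Firefox", "Internet Explorer"]

# SCs whose results tend to differ across user-agent renderings
# (my tentative list, following the examples given above)
RENDERING_SENSITIVE_SCS = {"1.4.4", "2.1.1", "2.4.7", "4.1.2"}

def browsers_to_test(sc_number):
    """Both baseline browsers for rendering-sensitive SCs, one otherwise."""
    if sc_number in RENDERING_SENSITIVE_SCS:
        return BASELINE_BROWSERS
    return BASELINE_BROWSERS[:1]

# browsers_to_test("2.4.7") -> ["Firefox", "Internet Explorer"]
# browsers_to_test("2.4.6") -> ["Firefox"]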

The second problem I see is that any inclusion of AT (say, a particular screen reader) in the definition of the context of use would imply that the site under test is fully (and competently) tested with that screen reader in 3.4.1 Step 4.a: Check for the Broadest Variety of Use Cases - if not, nothing definitive can be said about the level of accessibility support. If checks with AT are limited to just some aspects or pages in the sample, it raises the question of why such checks were limited in that way.


---------
Priority:
medium

Location:
3.2.1 Step 2.a: Identify Key Web Pages of the Website, Requirement 2.a

Current wording:
Requirement 2.a: The common web pages of the website and templates available to the evaluator shall be identified.

During this step the common web pages of the website and the templates that are available to the evaluator and used to create the website are identified and documented. The outcome of this step is a list of all common web pages and templates available to the evaluator and used to create the website. These will be part of the sample to be selected through the steps defined in 3.3 Step 3: Select a Representative Sample.

This step also helps understand key aspects of the website, such as the navigation and overall structure of the website.

Suggested revision:
Requirement 2.a: The common web pages and page states of the website shall be identified.

During this step the common web pages and page states of the website are identified and documented. These will be part of the sample to be selected through the steps defined in 3.3 Step 3: Select a Representative Sample.

This step also helps understand key aspects of the website, such as the navigation and overall structure of the website.

## Note that this revision would also affect the wording of other places, e.g. "3.3.1 Step 3.a: Include Key Web Pages of the Website" which also mentions templates (but not page states). ##

Rationale:
I do not see why separating the template from the instance of the template would make sense. The wording "templates that are available to the evaluator" does not clearly signal that this is a scenario where the commissioner/client explicitly wants evaluators to pin down issues to templates where possible.
Most SCs would need to be evaluated not in a 'dry run' but on the instantiated web page. If it is useful for the commissioner to identify the templates and get specific feedback on them (separate from content issues, for example), this is fine, but the way step 2.a is phrased now sounds as if I had to identify or somehow extract all templates used in the site under test. This will often be difficult or impossible, e.g. where the page is the result of an aggregation of numerous portlets or CMS modules. The results (e.g., conflicts in heading hierarchies or keyboard traps) must be detected and described for the page (or page state) under test, but often cannot be reliably traced to the interactions of the templates employed without intimate knowledge of the design.
So while identifying the templates may make sense in some contexts (e.g., an in-house evaluation / quality audit during design), it doesn't make sense (or is not doable) for most evaluations of public websites.
Identifying templates may, however, be described as an additional, optional part of WCAG-EM.

I have added 'page states' as important parts of the sample.


---------
Priority:
medium

Location:
3.3.2 Step 3.b: Include Exemplar Instances of Web Pages, Requirement 3.b

Current wording:
"Requirement 3.b: At least two distinct web pages (where applicable) of each (1) key functionality, (2) content, design, and functionality, and (3) web technologies shall be part of the selected sample of web pages."

Suggested revision:
I would drop the requirement to have two distinct pages if what is meant is necessarily having two instances of pages based on the same template. Instead I would put more emphasis than is currently in evidence on exploring (and selecting, documenting) different *page states* (expanded menus, light boxes, tab panels, inserted error handling messages, etc.).

Rationale:
One page per feature may be fine if the pages are nearly identical in structure and content. I believe it must be down to the site exploration and the actual variation found whether one, two or more pages should be selected. Followed strictly, this rule would greatly increase the number of pages in the sample (and, in turn, the effort), often with only marginal benefit.


---------
Priority:
high

Location:
3.3.4 Step 3.d: Include Complete Processes in the Sample

Current wording:
Requirement 3.d: All web pages that are part of a complete process shall be included.

The selected sample must include all web pages that belong to a series of web pages presenting a complete process. Also, no web page in the selected sample may be part of a process without all other web pages that are part of that process to be also included into the selected sample.

Suggested revision:

Add a note: 

Note: Including all pages of a process in the selected sample is not necessary when process steps are repetitive and based on the same template. For example, an online questionnaire may lead the user through dozens of multiple choice questions, each containing four radio buttons and based on the same template. In such a case, including one of these pages in the selected sample would be sufficient.

Rationale:
Evaluating many near-identical process pages would be a waste of time.


---------
Priority:
high

Location:
3.4 Step 4: Audit the Selected Sample, Note

Current wording:
Note: Many web pages will have repetitive content, such as the header, navigation, and other common components that may not need to be re-evaluated on each occurrence. Depending on the level of detail for reporting defined by 3.1.2 Step 1.b: Define the Goal of the Evaluation, an evaluator may not need to continue to identify successes and failures in meeting the conformance target for each web page. Section 3.5 Step 5: Report the Evaluation Findings provides more guidance on reporting.

Suggested revision:
All SCs should be rated for all pages in the selected sample. Comments may be included just once and referenced from other places where the same issue occurs.

Rationale:
I would argue that the assessment of WCAG SCs should be carried out for *each page in the sample* (if only to calculate some comprehensive compliance score later on). For this, it is important that pass/fail/N.A. is recorded for all SCs on the chosen level of conformance for every page in the sample, even if the accessibility issue has already been rated and described on another page. This also indicates how widespread an issue is - whether it affects just one page, one section, or the entire site - and is thus a (quantitative) measure of the gravity of the problem (the other being the *impact of the individual issue* on accessibility). Next to the rating, a small reference to a comment included elsewhere would of course be sufficient.
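To illustrate, a minimal sketch in Python; the ratings matrix and the scoring formula (passes divided by applicable ratings) are my own assumptions, not anything WCAG-EM prescribes:

ratings = {
    "page1.html": {"1.1.1": "pass", "1.4.3": "fail", "2.4.2": "pass"},
    "page2.html": {"1.1.1": "pass", "1.4.3": "fail", "2.4.2": "n/a"},
}

def compliance_score(ratings):
    """Share of applicable SC ratings that are passes, across all pages."""
    applicable = passes = 0
    for page_ratings in ratings.values():
        for rating in page_ratings.values():
            if rating == "n/a":
                continue
            applicable += 1
            passes += rating == "pass"
    return passes / applicable if applicable else 1.0

def pages_failing(ratings, sc):
    """How widespread an issue is: the pages on which a given SC fails."""
    return [page for page, r in ratings.items() if r.get(sc) == "fail"]

# compliance_score(ratings) -> 0.6
# pages_failing(ratings, "1.4.3") -> ["page1.html", "page2.html"]

Recording N.A. explicitly (rather than counting it as a pass) is what makes such a score and the 'how widespread' view possible in the first place.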


---------
Priority:
high

Location:
3.4.1 Step 4.a: Check for the Broadest Variety of Use Cases, Requirement 4.a, Note 1

Current wording:
Note: According to WCAG 2.0, Success Criteria that do not apply to the content are deemed to have been met.

Suggested revision:
Note: While, according to WCAG 2.0, Success Criteria that do not apply to the content are deemed to be satisfied, evaluators are free to set those Success Criteria to 'not applicable', since this differentiation can be highly meaningful for clients and other users of the evaluation results.

Rationale:
Whether WCAG-EM should include 'not applicable' as a rating option has been discussed at length in a previous EVAL-TF teleconference, and I remember there was a sound majority in favour of it.


---------

Priority:
medium

Location:
3.4.1 Step 4.a: Check for the Broadest Variety of Use Cases, Requirement 4.a, Note 2

Current wording:
Templates are usually used to create many web pages, sometimes entire parts a website. Evaluating the templates that are identified per 3.3.1 Step 3.a: Include Key Web Pages of the Website may identify potential issues that may not be easily identified through evaluating individual instances of web pages. However, issues identified in templates alone do not necessarily imply that these issues occur on the website and need to be validated on individual instances of web pages. Also, identifying no issues in templates does not necessarily imply that no issues occur on on individual instances of web pages.

Suggested revision:
Many websites are based on templates. Evaluating one page based on a particular template can identify accessibility issues pertinent also to other pages based on the same template. When further pages based on the same template are evaluated and the same template issue is found, Success Criteria ratings and comments may simply refer to other pages in the sample where the issue has already been covered.

Rationale:
I do not see how one would evaluate a template on its own, instead of a particular instance with all content rendered as a web page. Therefore I find the whole paragraph rather confusing. The point included in my suggested revision is different: cut out repetition if an issue has already been explained on another page in the sample.

---------

Priority:
mild

Location:
3.4.2 Step 4.b

Current wording:
Use WCAG 2.0 Techniques Where Possible

Suggested revision:
*Refer* to WCAG 2.0 Techniques Where Possible

Rationale:
WCAG 2.0 Techniques are not actually 'used' in the evaluation - but other evaluation techniques are (automatic checks, toolbar checks, bookmarklet checks, etc.). I would suggest reserving the term 'use' for those techniques and speaking of *referring* to (or identifying) WCAG 2.0 Techniques.


---------
Priority:
medium

Location:
3.4.2 Step 4.b: Requirement 4.b, first bullet point

Current wording: 
Provided that the WCAG 2.0 conformance requirements are met, it can be assumed that a WCAG 2.0 Success Criterion is:
* Met where sufficient techniques are applicable;
* Not met where common failures are applicable;

Suggested revision:
Provided that the WCAG 2.0 conformance requirements are met, a WCAG 2.0 Success Criterion is:

* Satisfied when for each instance on the page to which it applies, one of the Sufficient Techniques has been used successfully AND no Failure to meet that Success Criterion has been identified
* Not satisfied when any Failure listed for that SC is in evidence

When web techniques used on the page under test can be mapped onto failures documented in WCAG Techniques and Failures, these Techniques and/or Failures should be referred to as evidence of the failure of the page to conform.

Rationale:
* 'it can be assumed' introduces vagueness without giving any advice as to whether such an assumption may be correct or not. It might be worthwhile spelling out the situation giving rise to the vagueness: e.g., where a Sufficient Technique has been used successfully and no Failure has been identified, but the SC would still not be satisfied, perhaps because a clear Failure has been discovered that has not yet been documented in WCAG.
* To 'satisfy' an SC is the more formal term (compared to 'meet')
* The somewhat more correct and shorter version "Satisfied where sufficient techniques have been applied" does not spell out that this must be true for *all* instances on the page to which the Success Criterion applies
* I can see no reason to qualify Failures as 'Common'. In 'Understanding WCAG' and the Quickref, they are just referred to as Failures.
* 'Not met where common failures are applicable' seems a slightly fuzzy wording - the point is whether a Failure is in evidence or not, i.e. whether it is 'found to apply' on a page
* The last sentence of my suggested revision makes it clear that a reference to WCAG Techniques and Failures is useful only where content under test *fails to conform*. We do not want an exhaustive enumeration of all the Techniques that have been used successfully, do we?
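For illustration, here is a minimal sketch (in Python) of the decision rule in my suggested revision; the inputs and the 'cannot tell' outcome for the vague case described in the first bullet are my own assumptions:

def sc_outcome(instances_ok, failures_found):
    """Rate one Success Criterion on one page.

    instances_ok: one boolean per instance on the page to which the SC
        applies - True if a Sufficient Technique was used successfully.
    failures_found: Failures in evidence (documented in WCAG or not).
    """
    if failures_found:
        return "not satisfied"   # any Failure in evidence
    if not instances_ok:
        return "not applicable"  # nothing on the page the SC applies to
    if all(instances_ok):
        return "satisfied"       # every instance covered, no Failure
    return "cannot tell"         # no Failure found, techniques inconclusive

# sc_outcome([True, True], []) -> "satisfied"
# sc_outcome([True], ["F65"])  -> "not satisfied"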


----------
Priority:
mild

Location:
3.4.2 Step 4.b: Requirement 4.b, end of the last but one paragraph


Current wording:
Otherwise it is good practice (for efficacy and justifiability) to use existing techniques to demonstrate successes and failures in meeting WCAG 2.0 Success Criteria.

Suggested revision:
Delete the sentence.

Rationale:
This is in substance a repetition of the initial statement that (WCAG Techniques) "..provide an effective way of demonstrating whether WCAG 2.0 Success Criteria are met or not met." 'Justifiability' seems the wrong term - what is meant is that referring to the success or failure of using a WCAG Technique (established through its test) provides evidence for the conformance judgement of the evaluator. I am also not sure that referring to the matching WCAG Technique makes anything more 'efficacious': (1) it takes more time than not doing so; (2) I guess it will usually be of little help to implementors correcting the issues identified.




--
Detlev Fischer
testkreis c/o feld.wald.wiese
Borselstraße 3-7 (im Hof), 22765 Hamburg

Mobil +49 (0)1577 170 73 84
Tel +49 (0)40 439 10 68-3
Fax +49 (0)40 439 10 68-5

http://www.testkreis.de
Beratung, Tests und Schulungen für barrierefreie Websites



----- Original Message -----
From: evelleman@bartimeus.nl
To: public-wai-evaltf@w3.org
Date: 07.08.2012 23:24:38
Subject: EvalTF Agenda Telco


> Dear Eval TF,
> 
> The next teleconference is scheduled for Thursday 9 August 2012 at:
>  * 14:00 to 15:00 UTC
>  * 15:00 to 16:00 UK Time
>  * 16:00 to 17:00 Central European Time (time we use as reference)
>  * 10:00 to 11:00 North American Eastern Time (ET)
>  * 07:00 to 08:00 North American Pacific Time (PT)
>  * 22:00 to 23:00 Western Australia Time
> 
> Please check the World Clock Meeting Planner to find out the precise date for your own time zone:
>  - <http://www.timeanddate.com/worldclock/meeting.html>
> 
> The teleconference information is: (Passcode 3825 - "EVAL")
>  * +1.617.761.6200
>  * SIP / VoIP - http://www.w3.org/2006/tools/wiki/Zakim-SIP
> 
> We also use IRC to support the meeting: (http://irc.w3.org)
>  * IRC server: irc.w3.org
>  * port: 6665
>  * channel: #eval
> 
> 
> AGENDA:
> 
> #1. Welcome, Scribe
> 
> #2. Review of new documents
> Please review by Wednesday 15 August 2012 the new editor draft and the updated disposition of comments that should address the comments received and discussed over the past weeks:
> 
> Updated Editor Draft:
>  - <http://www.w3.org/WAI/ER/conformance/ED-methodology-20120730>
> Updated disposition of comments:
>  - <http://www.w3.org/WAI/ER/conformance/comments>
> Survey for comments:
>  - <https://www.w3.org/2002/09/wbs/48225/WCAG-EM20120730/>
> 
> The major changes are highlighted with notes in the Editor Draft. A full diff-marked version to the 27 March Public Draft is linked from the survey. The disposition of comments indicates the open comments, and provides references to previous surveys and discussion.
> 
> #3. Discussion in the Telco
> Let us try to discuss the following comments that are still open during the Telco. Please also use the list:
> 
> Comment #19: Need to agree on opening an issue on "tolerance metrics" More explanation of the issue and proposed resolution:
> <http://www.w3.org/WAI/ER/conformance/comments#c19>
> 
> Comment #12: Need to agree on the update in the editor draft. There is a clarification of the concept of use-case in the updated section 3.4.1 Step 4.a (Check for the Broadest Variety of Use Cases). Does this clarify use case sufficiently for the moment, or should we open an issue on use cases and come back to that in a later draft?
> <http://www.w3.org/WAI/ER/conformance/comments#c12>
> 
> Comment #16: Need to resolve an objection. Is this objection solved by the updated section 3.5.3 Step 5.c (Provide a Performance Score (optional))? The section adds different scores. This methodology is not limited to automated evaluation or related sub-scores.
> <http://www.w3.org/WAI/ER/conformance/comments#c16>
> 
> Comment #48: Need to resolve an objection. The rationale is as indicated in the objection very much stressing the need to make it non-optional. We can decide to make no change and keep it optional (1) because we want to stay on the SC level as in WCAG2.0 and/or (2) open an issue to discuss making Step 1.e non-optional. Please note the open ended nature of the techniques.
> <http://www.w3.org/WAI/ER/conformance/comments#c48>
> 
> Comment #A1: Need to agree on the proposed resolution. The text says “..minimum set of web browsers and assistive technology to evaluate for shall be defined”. This is important because it is an important part of defining "accessibility support". Proposed resolution is therefore to make no change.
> <http://www.w3.org/WAI/ER/conformance/comments#c001>
> 
> Comment #A4: Need to agree on the proposed resolution. See:
> <http://www.w3.org/WAI/ER/conformance/comments#c004>
> 
> Comment #A6: Need to agree on the proposed resolution: Open an issue to further pursue the usage of the terms. See:
> <http://www.w3.org/WAI/ER/conformance/comments#c006>
> 
> Comment #A7: Need to agree on the proposed resolution: See:
> <http://www.w3.org/WAI/ER/conformance/comments#c007>
> 
> #5. TPAC in Lyon
> 
