- From: WBS Mailer on behalf of gv@trace.wisc.edu <webmaster@w3.org>
- Date: Fri, 13 Dec 2013 21:12:01 +0000
- To: public-wcag-em-comments@w3.org, shadi@w3.org, e.velleman@accessibility.nl
The following answers have been successfully submitted to 'Approval for draft publication of WCAG-EM' (public) for Gregg Vanderheiden.

---------------------------------
Abstract
----
 * ( ) accept this section as draft
 * (x) accept this section as draft with the following suggestions
 * ( ) I do not accept this section as draft
 * ( ) I abstain (not vote)

This document has come a long way. Congratulations. Some minor and a couple of larger concerns -- but overall a very nice job.

===========================
In the ABSTRACT -- one bug

"It does not provide instructions for evaluating web content feature by feature, which is addressed by the WCAG 2.0 techniques layer."

is not correct. The techniques are not designed to be used for evaluation. They are neither necessary nor sufficient -- so they cannot be used to evaluate. Suggest that the sentence be revised to read:

"It does not provide instructions for evaluating web content feature by feature, which is addressed by WCAG 2.0 success criteria."

(otherwise very nice)

---------------------------------
Introduction
----
 * ( ) accept this section as draft
 * ( ) accept this section as draft with the following suggestions
 * (x) I do not accept this section as draft
 * ( ) I abstain (not vote)

A few smaller things and one major one.

1) Sentence 1 of paragraph 2 ends with "..... and highlights considerations that evaluators to apply these steps in the context of a particular website." which does not parse. It seems to be missing a word or something.

2) The last sentence of paragraph 2 ends: "...though in the majority of use cases it does not directly result into conformance claims." Suggest either "result in conformance claim language" or "resolve into conformance claims" (though that is a bit ambiguous) or something. As it stands it is unclear what it is trying to say.

3) Remove "ALSO" from the third paragraph.

4) I think removing the first "WCAG 2.0" from this sentence will make it clearer that the techniques you are referring to are more than the WCAG 2.0 WG defined techniques ("WCAG 2.0 techniques" can very easily be misread as "WCAG 2.0 WG techniques"):

"The methodology relies on WCAG 2.0 techniques such as the Techniques for WCAG 2.0 ..."

================================
UNDER "Relation to WCAG 2.0 Conformance Claims"
===============================

Sentence 1 reads:

"WCAG 2.0 defines conformance requirements for individual web pages that are known to satisfy each conformance requirement, rather than for entire websites. It also defines how ...."

It doesn't read well, and it is not clear what it is saying. I THINK you might mean:

"WCAG 2.0 defines conformance requirements for individual web pages (and in some cases, sets of web pages), but does not describe how to evaluate entire websites. WCAG 2.0 also defines how ..."

====================
UNDER TERMS AND DEFINITIONS
====================

I understand the purpose of the term "core functionality" but its use bothers me very much, since it has been abused so completely in every other domain of accessibility. For the W3C to define or endorse the term is extremely troubling. I would advise talking about "High Frequency pages" and "Pages needed to complete processes", and I would really speak against the use of the term CORE. It is not needed, and it is extraordinarily dangerous -- both for web page evaluation and for accessibility overall.

<This is the only show-stopper problem with this section.>

<See the "Step 2" question below for a resolution to this issue, involving the use of a different term, for example DEPENDENT COMPONENTS.>
The definition here would be:

DEPENDENT COMPONENTS: Components of a website that, if removed, fundamentally change the use, purpose, OR FUNCTIONALITY of the website for users. This includes information that users of a website refer to and tasks that they carry out to perform this functionality.

Note: Examples of functionality include "selecting and purchasing a product from the shop area of the website", "filling and submitting the form provided on the website", and "registering for an account on the website".

Note: Other PARTS OF THE WEBSITE ARE not excluded from the scope of evaluation. The term "DEPENDENT COMPONENTS" is intended to help identify critical web pages and include them among others in an evaluation.

(One COULD use DEPENDENT FUNCTIONALITY if "components" causes problems, but DEPENDENT COMPONENTS is better for a number of reasons.) But CORE isn't really correct and is quite dangerous.

---------------------------------
Using This Methodology
----
 * (x) accept this section as draft
 * ( ) accept this section as draft with the following suggestions
 * ( ) I do not accept this section as draft
 * ( ) I abstain (not vote)

---------------------------------
Scope of Applicability
----
 * (x) accept this section as draft
 * ( ) accept this section as draft with the following suggestions
 * ( ) I do not accept this section as draft
 * ( ) I abstain (not vote)

---------------------------------
Step 1: Define the Evaluation Scope
----
 * ( ) accept this section as draft
 * (x) accept this section as draft with the following suggestions
 * ( ) I do not accept this section as draft
 * ( ) I abstain (not vote)

Step 1d says:

"W3C/WAI provides a set of publicly documented (non-normative) Techniques for WCAG 2.0 that help evaluate conformance to WCAG 2.0 Success Criteria. However, it is not necessary to use these particular techniques (see Understanding Techniques for WCAG Success Criteria). Some evaluators might use other methods (inline with the requirements for custom techniques) to evaluate conformance to WCAG 2.0."

However, techniques are not designed or provided for evaluation. They are provided as example ways to meet SC. Suggest changing this to:

"W3C/WAI provides a set of publicly documented (non-normative) Techniques for WCAG 2.0 that PROVIDE ONE WAY TO MEET THE WCAG 2.0 Success Criteria. However, it is not necessary to use these particular techniques (see Understanding Techniques for WCAG Success Criteria). Some AUTHORS might use other methods (IN LINE with the requirements for custom techniques) to CREATE conformance to WCAG 2.0, AND EVALUATORS SHOULD ACCEPT VIABLE ALTERNATIVE TECHNIQUES AS WELL."

=========================================

---------------------------------
Step 2: Explore the Target Website
----
 * ( ) accept this section as draft
 * (x) accept this section as draft with the following suggestions
 * ( ) I do not accept this section as draft
 * ( ) I abstain (not vote)

Again the concept of CORE appears. I think that what you seek can be accomplished by changing the terminology to "ESSENTIAL COMPONENTS". This would achieve your goal and avoid the CORE FUNCTIONALITY landmine.
Suggested rewrite of 2.b:

Step 2.b: Identify DEPENDENT COMPONENTS of the Website

Methodology Requirement 2.b: Identify an initial list of DEPENDENT COMPONENTS of the target website.

Explore the target website to identify its DEPENDENT COMPONENTS. While some DEPENDENT COMPONENTS will be easy to identify, others will need more deliberate discovery. For example, an online shop is expected to have a payment function, though it might be less easy to identify that it also has a currency conversion function that is ESSENTIAL to the particular context of the online shop -- AND THAT THE FULL FUNCTIONING OF THE SHOP IS DEPENDENT ON IT.

The outcome of this step is a list of DEPENDENT COMPONENTS that users MUST BE ABLE TO USE on the website, for example: selecting and purchasing products from the web shop; filling and submitting the survey forms; registering for an account on the website.

Note: The purpose of this step is not to exhaustively identify all functionality of a website but to determine those COMPONENTS that the purpose and goal of the target website ARE DEPENDENT ON. This will inform later selection of web pages and their evaluation. Other functionality will also be included in the evaluation but through other selection mechanisms.

---------------------------------
Step 3: Select a Representative Sample
----
 * ( ) accept this section as draft
 * (x) accept this section as draft with the following suggestions
 * ( ) I do not accept this section as draft
 * ( ) I abstain (not vote)

TYPO: need a space between "distinctinstance" in requirement 3c.

ALSO: replace CORE FUNCTIONALITY with DEPENDENT COMPONENTS.

In step 3d you don't actually say anywhere that ALL processes should be included in the sample -- just that if you select one page in a process, you must include the whole process. Maybe you DON'T WANT to say that all processes are included (some sites may have an almost endless number of them). Perhaps you should put this step AFTER the random selection step and then say:

"IF ANY OF THE ABOVE SELECTION STEPS HAVE PUT A PAGE INTO YOUR EVALUATION SAMPLE, AND THAT PAGE IS PART OF A PROCESS, THEN ALL OF THE PAGES INVOLVED IN THAT PROCESS MUST ALSO BE ADDED TO THE EVALUATION SAMPLE."
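To illustrate what I mean (this is just a sketch of my own, not proposed normative text -- the "processes" mapping and the URLs are invented for illustration), the rule amounts to something like this:

def complete_processes(sample, processes):
    # Any process that one of the sampled pages belongs to pulls the rest of its
    # pages into the evaluation sample; processes nobody sampled stay out.
    completed = set(sample)
    for pages_in_process in processes.values():
        if completed & set(pages_in_process):
            completed.update(pages_in_process)
    return completed

# Example: the random step happened to pick the payment page, so the cart and
# confirmation pages of the purchase process come along with it.
sample = {"/home", "/shop/payment"}
processes = {"purchase": ["/shop/cart", "/shop/payment", "/shop/confirm"]}
print(sorted(complete_processes(sample, processes)))
# ['/home', '/shop/cart', '/shop/confirm', '/shop/payment']

Note that this only completes the processes that were actually touched by the sample -- it does not force every process on the site into the evaluation.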
---------------------------------
Step 4: Audit the Selected Sample
----
 * ( ) accept this section as draft
 * (x) accept this section as draft with the following suggestions
 * ( ) I do not accept this section as draft
 * ( ) I abstain (not vote)

================
STEP 4
================

Please DO NOT use the term "Not Applicable" (or N/A). The working group went to great lengths to keep that term away from the evaluation process. ALL SC apply to a website. If the site does not have MEDIA then all of the pages meet the SC. The SC are not NA. Again, once evaluators start labelling things NA -- all sorts of other reasons are used for the designation.

Suggest changing the note in 4 from:

"In such cases, an evaluator may use an identifier such as "not applicable" to denote the particular situations where Success Criteria are satisfied because no matching content is presented."

to

"In such cases, an evaluator may use an identifier such as "not present" to denote the particular situations where Success Criteria are satisfied because no matching content is presented."

=============================
REQUIREMENT 4e
=============================

I do not understand this one at all.

"The evaluation outcomes of the structured and random sample correlate when they are sufficiently large and representative. While the individual occurences of WCAG 2.0 Success Criteria will vary between the samples, the randomly selected sample should not show new types of content (as identified in Step 2: Explore the Target Website), and the outcomes from evaluating these randomly selected sample should not show new findings to those that were already determined in the structured sample. When the correlation fails then evaluators need to select additional web pages and web page states (as per Step 3: Select a Representative Sample), to reflect the newly identified types of content and outcomes. The outcomes of Step 2: Explore the Target Website might need to be adjusted accordingly as well. This step is repeated until the structured sample is adequately representative."

If the two correlate, then there would be no reason to do them both. The whole reason for doing both structured and random samples is that they will provide different samples. Also, the level of correlation is not specified. Nothing correlates 100% except a sample and itself. And 50% is chance -- two unrelated things will correlate at .5 or so (on average). (And how many people on the WCAG WG know how to calculate a correlation of this type?)

ALSO -- if the structured sample has a particularly troublesome type of content, the random sample evaluation results will always be different. And if there is no more of that type of content on the site, you could random sample forever and never get the random sample evaluation results to be the same as the structured sample (they would always be better). Finally, the two evaluation results will never be the same unless they were the same sample, or they were both perfect, or it was one of those 1-in-10,000 chances.

If the purpose of this is just to determine whether the structured sample includes all of the page and content types -- then I would avoid the word correlation and just say:

"Methodology Requirement 4.e: Check that each web page and web page state in the randomly selected sample does not show types of content and outcomes that are not represented in the structured sample.

The purpose of this step is to ensure that the overall sample includes all of the page types. This is done by comparing the structured and random samples to see whether there are new types of content (as identified in Step 2: Explore the Target Website) found in the random sample that are not in the structured sample. If there are, then the structured sample should be expanded and a new random sample taken, until the random sample produces no new types of content. At this point it is a fair assumption that the sample is representative of the website (absent any other knowledge to the contrary)."

DOES THIS DO WHAT YOU ARE TRYING TO ACHIEVE?
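In other words (a rough sketch of my own, not draft text -- the helper names and the page structure are invented for illustration), the check is just: expand the structured sample until a fresh random sample turns up no content types it doesn't already cover.

def content_types(pages):
    # Content types (as identified in Step 2) present across a set of pages.
    # Each page is assumed to carry a list of the content types found on it.
    return {t for page in pages for t in page["content_types"]}

def ensure_representative(structured, take_random_sample, max_rounds=5):
    for _ in range(max_rounds):
        random_sample = take_random_sample()
        new_types = content_types(random_sample) - content_types(structured)
        if not new_types:
            # The random sample showed nothing new -- fair to assume the
            # structured sample is representative (absent other knowledge).
            return structured
        # Otherwise expand the structured sample with the pages that carry the
        # newly found content types, and take another random sample.
        structured = structured + [p for p in random_sample
                                   if set(p["content_types"]) & new_types]
    return structured

No correlation coefficient anywhere -- just a set comparison that any evaluator can do by hand.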
---------------------------------
Step 5: Record the Evaluation Findings
----
 * ( ) accept this section as draft
 * ( ) accept this section as draft with the following suggestions
 * (x) I do not accept this section as draft
 * ( ) I abstain (not vote)

All is good except 5.d.

Scoring has always proven to be problematic and misleading when actually tried -- and unless and until you have data showing that it actually works, based on evaluation of diverse sites, we should not have this in the document. This is always seductively attractive but I've never seen it work.

Some counterexamples:

I have a site with 1000 pages. Each page has (the same) something that makes the page completely inaccessible to people who are blind. The site scores 86 out of 87. (I only violated one SC -- but it was a show-stopper and it was on every page.)

I have a really good site (10,000 pages), but there is something on one page or another that technically violates every SC. They are minor and very widely dispersed, and all are because of something key about that particular page, and don't really affect the accessibility. The site is given awards for accessibility but scores an F (very low numerical score) because of the way the calculations are made. (90 little problems on 10,000 otherwise perfect pages.) Etc.
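To make those two counterexamples concrete (a back-of-the-envelope sketch of my own -- I am assuming a naive "share of Success Criteria with no failures anywhere" score, which may well not be the exact 5.d formula, and the page and failure counts are the hypothetical ones above):

TOTAL_SC = 38  # WCAG 2.0 Level A + AA Success Criteria

def sc_share_score(failure_counts):
    # Share of Success Criteria that have no failure anywhere on the site.
    failed = sum(1 for count in failure_counts.values() if count > 0)
    return (TOTAL_SC - failed) / TOTAL_SC

# Site A: 1000 pages, every page fails the same single SC (a show-stopper for blind users).
site_a = {"1.1.1": 1000}
print("Site A:", round(sc_share_score(site_a), 2))   # 0.97 -- looks nearly perfect

# Site B: 10,000 pages, about 90 minor failures scattered so that every SC is
# technically violated somewhere (14 SC with 3 failures, 24 with 2 = 90 failures).
site_b = {f"SC-{i}": (3 if i <= 14 else 2) for i in range(1, TOTAL_SC + 1)}
print("Site B:", round(sc_share_score(site_b), 2))   # 0.0 -- looks like a disaster

Same single number, opposite distortion in each case -- which is exactly why I don't trust a score without real-world validation.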
I don't know what the purpose of the numerical score is. What will it be used for? I think it is more dangerous than useful and, without widespread testing, unproven as having any validity (face or otherwise).

ALSO NOTE that this score calculation section also has "applicable to the evaluation" in 8 places (see previous concerns on "not applicable").

Suggestion? Drop 5d.

================

Otherwise the rest of this section is good.

PS: Sorry I can't be there for the meeting, but it is right on top of one of our weekly Cloud4all meetings (a 25-member consortium I co-lead in Europe, so I can't skip out...).

These answers were last modified on 13 December 2013 at 21:09:21 U.T.C. by Gregg Vanderheiden

Answers to this questionnaire can be set and changed at https://www.w3.org/2002/09/wbs/1/WCAG-EM-20131129/ until 2013-12-17.

Regards,
The Automatic WBS Mailer

Received on Friday, 13 December 2013 21:12:03 UTC