Section 1. Introduction
-----------------------

* It starts by mentioning "website owners, procurers, suppliers, developers, and others". Relevant examples for "others" might be provided, so that they match the target audiences mentioned in section 1.2, e.g. accessibility consultants, researchers, disability advocates, policy makers. The current wording of this paragraph is biased towards website creation, disregarding other agents who also need to evaluate accessibility conformance.

* The first paragraph in the introduction replicates that of the abstract. However, there is a sentence appearing in the abstract but missing from the introduction: "it addresses different contexts, including self-assessment and third-party evaluation". It should be added here as well for document cohesion.

Subsection 1.4 Terms and definitions
------------------------------------

* Common web pages: I would strongly recommend choosing another adjective than "common" for these web pages, as it might be misleading. I am aware the term "common" replaced "elemental" web pages in previous drafts. I also understand the explicit definition clarifies its meaning, but it might not be intuitive for readers, as it does not follow the usual understanding of "common". These pages are not "common" in that they are neither "occurring, found, or done often; not rare" (which would be the case of pages frequently appearing) nor "shared by two or more people or things" (which would be the case for template web pages) [quotations come from the definitions of 'common' in the OED]. Maybe "core web pages" or similar would fit better. Alternatively, a complete phrase could provide a refinement, such as "commonly used web pages", but only if that is the meaning intended to be conveyed.

* Website part: The current definition is not clear enough. Following the definition, any subset of pages in a website would be a "website part", as they "serve the purpose and functionality of a web site" (could there even be any set of pages which does not serve the functionality of the website?).

* Common functionality: I would suggest refining the term. Is it "commonly used" or "cross-cutting [common to many pages]"? I would consequently update the definition: "... functionality of a website including tasks (...)".

Subsection 2.1. Scope of applicability
--------------------------------------

* Strictly following the definition of "web page" in WCAG 2.0 (and in this document), this methodology would not be applicable to, e.g., packaged HTML documentation, as it is not retrieved using HTTP. The scope should be extended to include offline documents (or documents retrieved using protocols other than HTTP), in the same way as the definition of "web page" has also been refined in 2.1.1 to encompass different states of web applications.

Subsection 2.2.1. Particular Types of Websites
----------------------------------------------

* Regarding web applications, it should be noted that this methodology only applies to the accessibility of the interaction between the user and the application. If the application presents external content (e.g. a web media player), it would also need to abide by the User Agent Accessibility Guidelines, and be subject to a complementary methodology, in order to ensure the accessibility of the content being presented. Likewise, if the application generates content to be published elsewhere (e.g. a CMS editor), it would need to fulfil the Authoring Tool Accessibility Guidelines to ensure the content it generates is accessible as well.

Section 3. Conformance evaluation procedure
-------------------------------------------

* "Iterative model" in software engineering usually refers to a different process flow (1->2->3->4->1->2->...). The process model presented in the diagram (1<->2<->3<->4) is better known as "waterfall with feedback".

* Indeed, the subsequent description of the methodology does not fully abide by this diagram, as different steps also have a "lookahead": step 1 may include some exploration, step 2 may include initial cursory checks, individual pieces of reporting are gathered before step 5, etc.

Step 1.b
--------

* It should be clarified whether these are the only possible evaluation goals, or just examples from a (potentially) broader set. The methodology requirement seems to limit the evaluation goal to one of three choices, while the detailed description of the step seems more open, reading "some of the evaluation goals include".

* The goal definition would have implications on the extent of the analysis performed. Following the current wording, the more coarse-grained the goal, the earlier an evaluation could yield an overall conformance rejection. E.g. if we take a Basic Report, a yes/no result is sought, and "no" could be stated as soon as the first non-conformance appears. This should be clarified, either in this step or within Step 4. Conversely, step 5.a seems to imply this is not the expected result of applying the methodology, but rather that the whole website must be analyzed even when basic conformance has already been ruled out by errors found in the middle of the process. The wording should be clarified here and in step 5.a.
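As a minimal sketch of the two readings described above (all function and data names are hypothetical and not taken from the methodology; the check function is assumed to return the success criteria failed by a page):

    def basic_yes_no(sample, check_page, level="AA"):
        # Reading 1: a Basic Report only needs a yes/no answer, so the
        # evaluation could stop at the first non-conformance found.
        for page in sample:
            if check_page(page, level):
                return False
        return True

    def full_analysis(sample, check_page, level="AA"):
        # Reading 2 (apparently implied by step 5.a): the whole sample is
        # analyzed even after conformance has already been ruled out.
        return {page: check_page(page, level) for page in sample}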
Step 1.c
--------

* WCAG 2.0, regarding conformance levels, notes that "authors are encouraged to report (in their claim) any progress toward meeting success criteria from all levels beyond the achieved level of conformance." The methodology should reflect this. I would suggest including in step 1.c the possibility of aiming for several conformance levels at the same time as a way to accommodate that. That would also encompass the case presented in section 4.1 (Initial Conformance Assessment of a Website), where level AAA is recommended regardless of the target level aimed at by the website owner or developer.

Step 1.d
--------

* I understand two concepts are intermingled here: a) the contexts of use in which website users primarily access the site, and b) the contexts of use in which website access is actually tested. E.g. a) a public website would have a distribution of user platforms that mirrors average web usage (a distribution including Chrome, MSIE, Firefox, Safari, plus several mobile browsers, each with different assistive technologies for each type of disability, on different operating systems, etc., including different versions of each as well); b) but maybe the evaluation is only performed on a few representative combinations of technologies (e.g. Chrome on Windows with NVDA, Firefox on GNOME with Orca, MSIE with JAWS 13, Safari on Mac with VoiceOver). The methodology should differentiate more clearly between both aspects, explicitly providing for a "sampling strategy" for contexts of use, covering the existing range of the different facets of the context of use (platform, preferences, user tasks, etc.). A sketch of this distinction is included after these comments.

* An explicit definition of "context of [website] use" in the definitions section might be clarifying regarding this step.

* It should be made explicit that accessibility support should be assessed per context of use at the website level. E.g. if:
· there are two possible contexts of use (e.g. platforms + ATs);
· a website technology feature is not accessibility-supported in context of use #1;
· and another feature is not accessibility-supported in context of use #2;
then neither of these contexts is supported (rather than both).

* If the website includes features which are not accessibility-supported for a relevant context of use, this should be signalled as a problem. E.g. a feature is not accessibility-supported by MSIE, although users always have Chrome or Firefox available for free. This imposes an unnecessary burden on users (e.g. they cannot use their bookmarks, corporate restrictions may preclude installation of other browsers, they need to install them, etc.). On the other hand, I understand this might be out of the scope of this methodology, as "accessibility supported" is already defined by Understanding WCAG 2.0 (clause 2.d of the technical definition of accessibility support) as "The user agent(s) (...) are available for download or purchase in a way that: does not cost a person with a disability any more than a person without a disability and, is as easy to find and obtain for a person with a disability as it is for a person without disabilities".
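As referenced above, a minimal sketch of the distinction between (a) the contexts of use in which the website is actually used and (b) the contexts covered by the evaluation. All platform and assistive technology names below are merely illustrative, not a recommended baseline:

    # (a) contexts in which users actually access the site
    contexts_in_use = {
        ("Windows", "MSIE 9", "JAWS 13"),
        ("Windows", "Chrome", "NVDA"),
        ("GNOME", "Firefox", "Orca"),
        ("Mac OS X", "Safari", "VoiceOver"),
        ("iOS", "Safari", "VoiceOver"),
        # ... many more browsers, versions and assistive technologies
    }

    # (b) representative combinations covered by the evaluation
    tested_contexts = {
        ("Windows", "Chrome", "NVDA"),
        ("Windows", "MSIE 9", "JAWS 13"),
        ("GNOME", "Firefox", "Orca"),
        ("Mac OS X", "Safari", "VoiceOver"),
    }

    # facets of (a) left uncovered by the sampling strategy
    untested = contexts_in_use - tested_contexts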
Step 1.e
--------

* Instead of making the step optional, I would suggest making it compulsory while leaving room for the use of techniques other than WCAG's. On the one hand, in any evaluation process some techniques always need to be used (whether they are WCAG techniques or anyone else's). On the other hand, it is true that organizations can include their own techniques, but that point can indeed be explicitly recognized and integrated into the methodology. I consider that in an evaluation process it is even more important to clearly specify which non-standard techniques are going to be used, as this is information that might not be obvious.

Step 2
------

* I would add the word in brackets here: "it may be necessary to create accounts or otherwise provide [controlled] access to restricted areas of a website that are part of the evaluation." It should not appear that accessibility evaluation requires providing uncontrolled access to the evaluator. Even though access to restricted areas may be needed, the security policies of the organization would still apply to the evaluator.

* It is said that "it may be necessary to create accounts or otherwise provide access to restricted areas of a website that are part of the evaluation." Alternatively, access could be granted to replicas of the site where restricted information has been concealed, obfuscated, anonymized, replaced, stripped out, blurred out, etc., as long as the original content and the replica are equivalent from the accessibility perspective. However, this should be taken with caution, as the last condition means this method cannot always be applied. E.g. personal data can be replaced with that of fictitious persons and the evaluation would remain valid. But if the whole text is obfuscated, the understandability of the original content may be difficult to infer; or color contrast cannot be evaluated in an image that is blurred, etc. In those cases, access to the original content would be required.

* "Involvement of (...) and website developer can be helpful". It should be noted this can be difficult when there are too many website developers involved.

Step 2.a
--------

* [See comments on definitions for a suggestion to replace the phrase "common web pages"]

* It is not clear enough what a common state of a web application is. I guess it does not refer to states which are common to different executions of similar processes (which would be covered by the next step), but rather to a concept equivalent to that of common web pages. However, the wording should be clarified to avoid confusion, as it could be read either way.

Step 2.c
--------

* Web pages may come from the integration of different (even legacy) back-end systems, which may produce different results regarding accessibility. Thus I propose adding a new bullet point:
· Web pages that have been generated by varying server technologies.

Step 3.a
--------

[See comments on the definitions regarding 'common web pages' and on step 2.a regarding 'common states']

Step 3.b
--------

* What is the phrase "content, design and functionality" exactly trying to express? (A sketch contrasting both readings is included after these comments.)
a) Include two distinct pages for each content, plus two for each design, plus two for each functionality. Then it should read "(2) content, (3) design, and (4) functionality". In that case, wouldn't "(4) functionality" already include "(1) common functionality"?
b) Include two distinct pages for each implemented value of the tuple "content-design-functionality". Then it should read "(2) combination of content, design and functionality".

* Step 2.c defined different aspects to be taken into account when identifying the variety of web pages: types of content, styles, functional components, etc. The requirement instead reads "content, design and functionality", while the description only talks about "types of web pages" (referring to step 2.c for details). The requirement should be reworded to either include the 7 categories in step 2.c, or otherwise use the catch-all "types of web pages".
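As referenced above, a minimal sketch of the two readings of step 3.b, assuming `pages` is a list of page records with hypothetical `content`, `design` and `functionality` attributes:

    from itertools import groupby

    def sample_per_dimension(pages, n=2):
        # Reading a): n distinct pages for each content, each design and
        # each functionality, considered independently.
        sample = set()
        for dim in ("content", "design", "functionality"):
            keyfunc = lambda p: getattr(p, dim)
            for _, group in groupby(sorted(pages, key=keyfunc), key=keyfunc):
                sample.update(list(group)[:n])
        return sample

    def sample_per_combination(pages, n=2):
        # Reading b): n distinct pages for each implemented value of the
        # tuple (content, design, functionality).
        keyfunc = lambda p: (p.content, p.design, p.functionality)
        sample = set()
        for _, group in groupby(sorted(pages, key=keyfunc), key=keyfunc):
            sample.update(list(group)[:n])
        return sample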
Step 3.d
--------

* Two conditions are established in two sentences (xxxx. Also, yyyy). However, it is not clear how these conditions supplement each other, as they seem to be equivalent.

Step 4.a
--------

* The satisfaction of the WCAG 2.0 success criteria and of the WCAG 2.0 conformance requirements are not independent; rather, the first is part of the second (all the success criteria for a level must be satisfied in order to satisfy the first conformance requirement; a sketch of this relationship is included after these comments). So I would suggest rewriting the step requirement, reordering its contents as: "Check each web page in the sample selected per 3.3 Step 3: Select a Representative Sample for meeting each of the WCAG 2.0 conformance requirements, regarding all the WCAG 2.0 Success Criteria in the target conformance level (per 3.1.3 Step 1.c: Define the Conformance Target)". And the first sentence of the description as: "For each web page in the sample selected per 3.3 Step 3: Select a Representative Sample, check whether each of the WCAG 2.0 conformance requirements has been met, according to all the WCAG 2.0 Success Criteria in the target conformance level (per 3.1.3 Step 1.c: Define the Conformance Target)."

* I suggest adding a new point to the bullet list:
· Interactive user interface elements, and dynamically generated or rendered content.
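As referenced above, a minimal sketch of that relationship (the `satisfies` predicate and the remaining per-requirement checks are assumed helpers, not defined by the methodology):

    def meets_requirement_1(page, criteria_at_target_level, satisfies):
        # Conformance requirement 1 ("conformance level") is not independent
        # of the success criteria: it is defined in terms of them.
        return all(satisfies(page, sc) for sc in criteria_at_target_level)

    def check_page(page, criteria_at_target_level, satisfies, other_requirement_checks):
        results = {"requirement 1 (conformance level)":
                       meets_requirement_1(page, criteria_at_target_level, satisfies)}
        for name, check in other_requirement_checks.items():
            results[name] = check(page)   # conformance requirements 2-5
        return results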
Step 4.b
--------

* [See comments on step 1.e regarding the optionality of this step, and the possibility to include non-WCAG techniques]

* The sentence "Conversely, failures are documented ways of not meeting individual WCAG 2.0 Success Criteria. A WCAG 2.0 Success Criterion is not met on a web page when a failure applies to any instance of web content that is addressed by the WCAG 2.0 Success Criterion." would make more sense just after "WCAG 2.0 techniques are documented ways for meeting or for going beyond what is required by individual WCAG 2.0 Success Criteria."

Step 4.c
--------

* [See comments on step 1.d regarding the sampling of the contexts of use evaluated]

Step 5.a
--------

* [See comments on step 1.b on the range of possible evaluation goals and their meanings]

Step 5.b
--------

* I envision other score metrics could be used, which weight web pages differently (e.g. depending on their popularity, pageviews, etc.) or the success criteria differently (e.g. depending on their impact on users, etc.). Even multidimensional scores could be provided, with different values for different user ability profiles, or for different aspects of the website, etc. That is why, in any case, I would not state that "the performance score is calculated through one of the following approaches", but rather present these as suggested ways to compute it, others being equally valid.

* The metric employed to compute the score would in any case need to abide by some requirements: it should have been defined before the evaluation, to ensure objectivity; it must be monotonic with respect to the appearance or resolution of accessibility problems; etc.

* Regarding the specific metrics suggested in the document, I find it particularly useful to exclude from the computation those criteria which are not applied because there was no element to which they could apply. Strictly following WCAG 2.0, this situation means the respective success criteria have been satisfied. However, this interpretation would artificially constrain the range of the scores, compacting them towards the upper end: "bad pages" failing many SC while several other SC are not applicable tend to get "free" extra points, which renders them indistinguishable from good pages which correctly met the requirements posed by all those criteria. Thus I would propose removing "not applicable" results from score computations.

* It should be noted that the different scoring methods are more stringent the less granular the method is (conformance level > per-website score calculation > per-web-page score calculation > per instance). E.g. a single failure of a success criterion on a single web page would reduce the conformance level from AA to A; while it would reduce the per-website score only from 38/38 to 37/38; and the per-web-page score from (e.g.) 380/380 to 379/380, etc. (A sketch of these two granularities, including the "not applicable" exclusion, is included after these comments.)

* If different conformance levels are targeted in the same evaluation [see also comments on step 1.c], several scores might be provided (against the different conformance levels).
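As referenced above, a minimal sketch of the two score granularities and of the proposed "not applicable" exclusion, assuming `results[page][criterion]` holds "pass", "fail" or "n/a" (the data layout and names are illustrative only, not part of the methodology):

    def per_website_score(results, exclude_na=True):
        # One verdict per success criterion for the whole site
        # (e.g. 37/38 after a single failure at level AA).
        criteria = {sc for page in results.values() for sc in page}
        passed = applicable = 0
        for sc in criteria:
            outcomes = [page[sc] for page in results.values() if sc in page]
            if exclude_na and all(o == "n/a" for o in outcomes):
                continue                 # proposed reading: drop "free" points
            applicable += 1
            if all(o != "fail" for o in outcomes):
                passed += 1
        return passed, applicable

    def per_page_score(results, exclude_na=True):
        # One verdict per (page, criterion) pair
        # (e.g. 379/380 after the same single failure).
        passed = applicable = 0
        for page in results.values():
            for outcome in page.values():
                if outcome == "n/a":
                    if exclude_na:
                        continue         # proposed reading: exclude "n/a"
                    outcome = "pass"     # strict WCAG 2.0 reading: counts as met
                applicable += 1
                if outcome == "pass":
                    passed += 1
        return passed, applicable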
Section 4.2 Evaluating a Large Website with Separate Parts
-----------------------------------------------------------

I do not grasp what the second paragraph exactly means. My guess is that an evaluation is made for each subsite, and that an additional evaluation is then made for the whole site, where pages are selected from the subsite samples, covering at least two pages from each subsite (a sketch of this reading is included at the end of these comments). A clarification or a rationale would help to understand it more easily.

Section 4.3 Re-Running a Website Conformance Evaluation
--------------------------------------------------------

In the case where the evaluation is re-run after errors are corrected, wouldn't it be more useful to keep a superset of the original sample (so that it can be verified whether the errors were corrected)?

Section 5 Application of this Methodology
-----------------------------------------

It is said that "It is not required to carry out any of the steps and activities defined by this methodology in any particular sequence." In that case, "steps" might not be a suitable name, as steps usually involve the notion of sequence (one step before another). The diagram at the beginning of section 3 would not apply either (as it implies a specific sequential ordering). [See also comments on the introduction to section 3, on the sequence of process steps]
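As referenced in the comments on section 4.2, a minimal sketch of that reading (all names are hypothetical; `subsite_samples` is assumed to map each subsite to the sample already selected for it in step 3):

    import random

    def whole_site_sample(subsite_samples, pages_per_subsite=2):
        # Build the additional whole-site sample by reusing at least
        # `pages_per_subsite` pages from every subsite's own sample.
        combined = []
        for subsite, sample in subsite_samples.items():
            combined.extend(random.sample(sample, min(pages_per_subsite, len(sample))))
        return combined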