- From: Charles McCathieNevile <charles@w3.org>
- Date: Sun, 31 Dec 2000 10:50:36 -0500 (EST)
- To: "Leonard R. Kasday" <kasday@acm.org>
- cc: "w3c-wai-gl@w3.org" <w3c-wai-gl@w3.org>
(Admin note: this has been Bcc'ed to AU and ER so they see the discussion, but I think it is best to keep the follow-up in GL. Anyone have a better idea of how to do this so we keep the discussion moving at a reasonable pace for everyone?)

Summary: This stuff makes good techniques. It may make for checkpoints/guidelines, and it may belong under ATAG and WCAG differently.

Thought process: It seems like a content requirement, at some level, to identify the accessibility of content. (This is already in WCAG 1.) It seems that there is a lot of value for authoring tools in having that identification in a way that can be tested and understood by the tool. If I open a page in tool X, and there is information that says "fragment #Y meets the following requirements...", then editing that fragment may change that status. Ideally, tool X would be able to verify the original statement, and would know if it had changed as a result of editing (a sketch of one way a tool might record and re-check such a statement appears at the end of this message). (In fact, ATAG requires that information not be lost, but it allows for non-reversible transformations. For example, in converting a SMIL presentation to some other format there may be no way to carry audio descriptions in an optional track; they would therefore have to be included as part of the main track, although it may not be possible to separate them out again automatically.)

Clearly, using a verifiable method of ensuring some content is accessible is of value to authoring tools. (Not least to satisfy the desire of most designers for the least possible intrusion on the part of the tool...)

So, it seems simple to suggest that there are techniques which address both the principle and the practice of making things accessible in a way that is testable. Going from there to checkpoints or guidelines requires answering the question "is it always something that must be done, or should be done, or may be done to provide access to content (or tools, in the ATAG case)?"

For example, allowing configuration of prompting and alerting about accessibility problems in content is not a checkpoint in ATAG. It is a generally recommended technique, but there may be instances where someone develops a tool for a specific type of editing and doesn't allow the user to avoid being prompted to fix some problem. Although this is hardly likely to be a mass-market winner, it doesn't prevent accessibility at any level, so this alone would not be enough to make the tool fail any conformance requirements.

Cheers

Charles McCN

On Fri, 29 Dec 2000, Leonard R. Kasday wrote:

Thanks all for the comments on the first draft of "checkpoint on testability". I can see now that I have to explain this guideline better. I'll admit I don't see it spelled out in the scope of the current WCAG charter, but I think it's an important guideline that needs a home. So I'll explain the guideline first and return later to the question of whether it belongs in GL.

I'm going to rephrase it a bit: minimize the amount of human effort needed to test if content is accessible. For example, if two techniques produce equally accessible content, pick the technique that requires the least human effort to test the result.

Why? There are two reasons.

The first reason is that in the real world, there is only finite time and money available for testing. And in the long run at least, after we've automated all the testing we can, it's human effort that takes time and costs money. So if a site takes too much human effort to test, it will not get tested completely, or might not get tested at all.
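To make the distinction concrete, here is a minimal sketch of the kind of test that needs no human effort at all. Python, and the single condition checked (img elements with no alt attribute), are arbitrary choices for illustration and are not named anywhere in this thread; note that deciding whether alt text that *is* present actually conveys the image still takes human judgement, which is exactly the effort Len is proposing we minimize.

    # Illustrative sketch only: flag one machine-testable problem
    # (img elements that lack an alt attribute) in an HTML fragment.
    from html.parser import HTMLParser

    class MissingAltChecker(HTMLParser):
        def __init__(self):
            super().__init__()
            self.problems = []

        def handle_starttag(self, tag, attrs):
            # attrs is a list of (name, value) pairs; absence of "alt" is
            # detectable by a program, usefulness of "alt" is not.
            if tag == "img" and "alt" not in dict(attrs):
                line, col = self.getpos()  # where the offending tag starts
                self.problems.append(f"img without alt at line {line}, column {col}")

    checker = MissingAltChecker()
    checker.feed('<p><img src="logo.gif"><img src="photo.jpg" alt="Our office"></p>')
    for problem in checker.problems:
        print(problem)  # only the first img is reported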
Who does this affect? It affects web authors who want to test their own work. It affects system test departments in larger operations. It affects organizations doing acceptance testing of web design work they have contracted out. It affects organizations like schools who need to test whether web resources are usable by all their students.

If testing takes too much effort for any of these organizations, it doesn't get done completely or at all. So ultimately it affects the end user, because in today's world, if something isn't being tested, it probably doesn't work well or at all. So if these organizations haven't tested, what's passed on to the end user is less likely to be fully accessible.

And now for the second reason. This follows up a comment William made. The end user may want to check for him or herself whether a site is accessible before trying to make use of it. The more that testing can be automated, the more of that testing the user can do for him or herself. That's because testing that takes human judgment costs the user time at best; and at worst it requires a type of judgment that the user can't do at all (e.g. a judgment requiring visual inspection of images for someone who's blind).

Now, we can debate whether this issue is in the scope of GL, but I think it's quite important nonetheless. So: is it in the scope of GL? Well, on the one hand, I don't see it spelled out in the charter. But it is a requirement on content. So what other group would do it? As a practical matter, do we want to have two groups writing two different documents about content? Seems to me GL is the best home for this Guideline.

Len

--
Leonard R. Kasday, Ph.D.
Institute on Disabilities/UAP and Dept. of Electrical Engineering at Temple University
(215) 204-2247 (voice)
(800) 750-7428 (TTY)
http://astro.temple.edu/~kasday
mailto:kasday@acm.org

Chair, W3C Web Accessibility Initiative Evaluation and Repair Tools Group
http://www.w3.org/WAI/ER/IG/

The WAVE web page accessibility evaluation assistant:
http://www.temple.edu/inst_disabilities/piat/wave/

--
Charles McCathieNevile    mailto:charles@w3.org    phone: +61 (0) 409 134 136
W3C Web Accessibility Initiative    http://www.w3.org/WAI
Location: I-cubed, 110 Victoria Street, Carlton VIC 3053, Australia
until 6 January 2001 at:
W3C INRIA, 2004 Route des Lucioles, BP 93, 06902 Sophia Antipolis Cedex, France
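The "fragment #Y meets the following requirements" idea earlier in this message can be sketched in the same spirit. The claim format below is invented purely for illustration (it is not taken from ATAG, WCAG, or any existing tool); the point is only that if a claim carries a fingerprint of the fragment it describes, a tool can tell after an edit that the claim is stale and must be re-verified rather than silently carried forward.

    # Hypothetical claim format: a fragment identifier, the checkpoints it is
    # claimed to satisfy, and a hash of the fragment at the time of the claim.
    import hashlib

    def fingerprint(fragment_html: str) -> str:
        return hashlib.sha256(fragment_html.encode("utf-8")).hexdigest()

    def make_claim(fragment_id: str, fragment_html: str, checkpoints: list[str]) -> dict:
        return {
            "fragment": fragment_id,
            "meets": checkpoints,
            "hash": fingerprint(fragment_html),
        }

    def claim_still_applies(claim: dict, current_html: str) -> bool:
        # If the fragment has changed since the claim was recorded, the tool
        # knows the claim needs to be re-verified before it is trusted.
        return claim["hash"] == fingerprint(current_html)

    original = '<img src="chart.png" alt="Sales rose 10% in 2000">'
    claim = make_claim("#Y", original, ["WCAG 1.0 checkpoint 1.1"])

    edited = '<img src="chart.png">'             # the author removed the alt text
    print(claim_still_applies(claim, original))  # True
    print(claim_still_applies(claim, edited))    # False: claim is stale after the edit

Combined with automated checks like the one sketched above, a tool could re-establish the machine-testable parts of such a claim on its own and prompt the author only about the parts that need human judgment.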
Received on Sunday, 31 December 2000 10:50:42 UTC