- From: Doyle Burnett <dburnett@sesa.org>
- Date: Tue, 09 Dec 2003 11:51:15 -0900
- To: "Arditi, Aries (by way of Wendy A Chisholm <wendy@w3.org>)" <aarditi@lighthouse.org>, W3C Web Content <w3c-wai-gl@w3.org>
Thanks For Your Re-Post, Aries -

As I am one of a number of us working on a "re-look" at guideline 1.6 within the WCAG 2.0 draft, I would like to follow up on some of what you had to say. First, speaking only for myself, thank you for the input - I am sure others appreciated your comments as well. Given my own experiences with color deficiency issues, I agree that an algorithm is probably not going to solve the problem of helping authors develop 100% readable content. With regard to the "vastness" of the issues concerning color and contrast - brightness, hue, saturation, environmental conditions, monitor types and their differences, individual user differences and preferences, and so on - it is unlikely that a formula can easily resolve them.

Are you suggesting that 1.6 and 1.7 be combined into a single guideline?

I could be wrong, and I by no means speak for the larger group, but I believe the 20 dB (4x louder) figure came from an earlier post about what the audio company Shure (of Shure microphone fame) recommends as the speech-to-background-noise ratio. The American Speech-Language-Hearing Association also publishes standard signal-to-noise ratios that apply to classroom environments - please see:

http://www.asha.org/about/publications/leader-online/archives/2001/classroom_acoustics_update.htm

These ratios are for listeners with normal hearing. Individuals with hearing loss would first need their individual hearing thresholds determined, and would then need the speech presented at the +15 to 20 dB level to discriminate it over background noise. This is close to the 20 dB figure WCAG came up with, and it may be a good source to reference, but it does NOT speak specifically to sound from a computer. However, it seems to me personally a good place to start and something to be able to refer back to.

My thoughts -

Doyle Burnett

Doyle Burnett
Education and Training Specialist
Multiple Disabilities Program
Special Education Service Agency
dburnett@sesa.org
www.sesa.org

--
On 12/9/03 10:13 AM, "Arditi, Aries (by way of Wendy A Chisholm <wendy@w3.org>)" <aarditi@lighthouse.org> wrote:

>
> Hi all,
>
> Wendy Chisholm and Judy Brewer asked me to comment on this months ago, and I posted it to the site they asked me to post it at. Recently, someone more savvy in the functioning of the WAI lists informed me that the list is totally ignored. So here I repost it, in the hope that it might be helpful to someone, somewhere, some day....
>
> Aries Arditi
> Senior Fellow in Vision Science
> Lighthouse International
>
> Original Post:
>
> With respect to the June 24 WCAG 2.0 draft, I have a number of comments, detailed below. First, however, I would like to congratulate the authors for taking on such a difficult task. In particular, I'm delighted that the group intends to broaden the scope of the guidelines to address a broader set of technologies. To the extent that this broadening succeeds, its influence can go well beyond web content and authoring, and potentially guide the efforts of developers of assistive technology as well.
>
> Having said that, I also believe that the guidelines are in general less comprehensible (indeed, less accessible!) to the intended readership than were the version 1.0 guidelines, and that the lack of comprehensibility stems from the very abstraction of principles that was necessary for broadening to other technologies.
> I believe these problems can be addressed and alleviated through the planned technology-specific checklists, especially if the checkpoints in the WCAG are cross-referenced to technology-specific guidelines. Apart from a few exceptions, however, I don't believe that making the guidelines significantly easier to understand can be accomplished by simple rewriting. Like a mathematics text in which axioms and theorems provide the foundation, and examples and problems the practical understanding, this may be a necessarily abstract document.
>
> The conformance structure of Core and Extended is fine, both easy to understand and potentially effective, provided more objective and quantitative methods of evaluating conformance to specific checkpoints can be developed. But conformance to specific checkpoints is best evaluated in the context of specific technologies, where specific items are less open to broad interpretation. Without technology-specific checkpoints, I would imagine that squabbling and bickering over conformance will increase, and with more latitude to interpret the guidelines as they see fit, developers could claim more for less actual conformance. So I would favor keeping the general structure of the draft as a kind of umbrella document, but instantiating the conformance checkpoints with technology-specific checkpoints. Since the planned technology-specific checklists were not included in this draft, it's difficult to evaluate the conformance structure. In theory, however, the structure seems fine to me.
>
> Developers at my organization do use the 1.0 WCAG, but much of their use is indirect---via automated tools that evaluate conformance. Because web sites can be huge and dynamic, reliance on such tools is at least partially inevitable. However, to the extent that the current document is even less specific than v1.0, I would expect migration to be difficult. Without technology-specific checklists, it is easy to imagine developers of evaluation and repair tools holding widely disparate views of what constitutes conformance. Again, I would prefer to view this draft as an umbrella document, with more specifics to be incorporated as an essential component of the 2.0 WCAG recommendation.
>
> Here are some other specific comments, nearly all pertaining to Guidelines 1 and 2:
>
> 1. I didn't understand the designation of examples and benefits as "non-normative" or "informative." Are these terms lingo, or am I just being thick (or both)?
>
> 2. Overview of Design Principles, last paragraph before the Note:
>
> This would better read, "Accessible web content can benefit people with as well as without disabilities." Accessible web content does not ALWAYS benefit people other than the disabled. High levels of magnification, for example, which can make things more accessible to people with low vision, can make documents more difficult to navigate.
>
> Delete the next two sentences, which are superfluous since examples of accessible web content benefiting others are given subsequently (and giving wheelchair examples to the reader is tiresome---readers of the WCAG can, nearly 14 years after the ADA, be assumed to understand general things about disabilities without being prompted to conjure up images of wheelchairs).
>
> 3. User needs, first bullet:
>
> Change "via sound" to "visually."
>
> 4. Definitions for checkpoint 1.1: text equivalent
>
> This is not a definition.
> I would first define "text" as code representing written language, that is, a one-to-one mapping of alphabetic and numeric symbols. Then define "text equivalent" as text that serves to communicate substantially the same content as another representation, such as an image.
>
> 5. Benefits of checkpoint 1.1, first bullet:
>
> Add ", or otherwise transformed to a different presentation format (e.g. font, text size)".
>
> 6. Checkpoint 1.5
>
> I believe that the benefit of adding structure to the document is that it gives assistive technologies more information about the document, so that they can tailor its presentation to the user's capabilities. As written, however, this checkpoint seems to focus on how document elements are rendered. For example, the (sole) required success criterion refers to visual appearance or auditory characteristics. The examples, too, are about how to render. That's not a WCAG issue, in my view.
>
> Also, who (or what) could judge whether structure "has been made perceivable to more people..."? Compared to what? This is far too vague.
>
> 7. Editorial Note (22 May 2003) regarding "additional items in 4/29 draft":
>
> Yes, in my opinion, these should be moved to techniques.
>
> 8. Checkpoint 1.6
>
> This is very vaguely worded. I would propose instead the following language: "Where possible, use layered designs to allow selective presentation or blanking of visual or auditory elements, comprising, for example, foreground and background content."
>
> The sole success criterion for 1.6 is stated only in terms of visual imagery. I would also refer to layered auditory images, such as background music with a voice-over, to allow hearing-impaired users to remove the background music for greater clarity.
>
> Under best practice for 1.6 #1 and #2, what does "easily readable" mean? And to whom? This is not quantifiable.
>
> Also, who came up with the 20 dB (or 4 X louder) foreground/background audio figure? Are there any data to back this up as being sufficient?
>
> It's true that there are no effective tests for visibility or readability in terms of color or color contrast that do not rely on standard human observers (the claims of some notwithstanding). The reasons for this include the fact that display devices and the hardware used to drive them vary widely in peak luminance, color gamut, and linearity; viewing conditions vary widely (ambient room lighting, reflections from nearby windows, etc.); and readability and visual discriminability also depend critically on the kinds of stimuli used. Very low contrast text in a huge font may be as readable as, or more readable than, smaller text in very high contrast. Typography and image characteristics also go into the mix of important factors. And let us not leave out the enormous variability in visual capacity of the visually impaired population, some of whom will settle for no less than the blackest black against the whitest white. The long and short of it is that we do not have, and will not have in the foreseeable future, a good way to evaluate objectively things like the accessibility of text contrast and/or color. This is thus an issue on which, I believe, the best we can do (and thus the most we should do) is to suggest principles to follow to enhance accessibility.
>
> 9. Examples of checkpoint 1.6, first bullet:
>
> What are the "standard foreground/background contrast requirements"? If there are any, please clarify (and cite).
>
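For concreteness, the kind of formula-based contrast check at issue in items 8 and 9 looks something like the minimal sketch below. The sRGB linearization, the luminance weights, and the 0.05 flare constant are assumptions of the sketch, not figures taken from the draft guidelines; and, as argued above, such a computation knows nothing about the display's actual peak luminance, the viewing conditions, or the typography.

    # Illustrative sketch only: a formula-based contrast check of the kind
    # discussed above. The sRGB linearization, luminance weights, and the
    # 0.05 "flare" term are assumptions of this sketch, not figures from
    # the WCAG 2.0 draft.

    def relative_luminance(rgb):
        """rgb: (r, g, b), each channel 0..255, assumed sRGB-encoded."""
        def linearize(c):
            c = c / 255.0
            return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
        r, g, b = (linearize(c) for c in rgb)
        return 0.2126 * r + 0.7152 * g + 0.0722 * b

    def contrast_ratio(foreground, background):
        """Ratio of the lighter to the darker luminance, from 1:1 up to 21:1."""
        lighter, darker = sorted(
            (relative_luminance(foreground), relative_luminance(background)),
            reverse=True,
        )
        return (lighter + 0.05) / (darker + 0.05)

    # Mid-grey text (119, 119, 119) on a white background scores roughly
    # 4.5:1 by this formula, regardless of font size, monitor, or room
    # lighting - which is precisely the objection raised in item 8.
    print(round(contrast_ratio((119, 119, 119), (255, 255, 255)), 2))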
> 10. Required success criteria for checkpoint 2.2 #1:
>
> I would add the following bullet: "* or the user can control the presentation by eliciting phases sequentially via keystrokes."
>
> 11. Checkpoint 2.3
>
> I would be very surprised if anyone could devise a substantive and effective test for flicker in machine-readable form. Here are some thought experiments:
>
> Consider a single pixel flickering in the 3-49 Hz range on your screen right now. Do you think you could detect it? If so, do you think it could possibly cause a seizure in someone with photosensitive epilepsy? You do? How about if that single pixel flickered by changing its intensity by 1/256? Or by 1/(256^3)? Still think so?
>
> How many phases (i.e., alternations) of light and dark constitute flicker in the 3-49 Hz range? What if I present a black dot for 1.5 seconds, then replace it with a white dot for 1.5 seconds? Would you call that 3 Hz flicker? In other words, how many "cycles" of the pattern do you require before you call it flicker?
>
> Here's a third: mathematically, the sum of a leftward-moving and a rightward-moving sinusoidal grating is equal to a single stationary sinusoidal grating whose bars reverse in contrast. Any single point in that pattern will flicker. Do we call that flicker or moving gratings? Were we to analyze many moving patterns, we would find that we could describe them mathematically in terms of flickering components. What if we have a video taken from a moving car or train passing a picket fence? Won't there be some flicker in that video? Shall we ban all animations in the WCAG?
>
> It is likely that scientists will some day be able to better zero in on the specific visual causes of photosensitive epileptic seizures; presumably high-contrast, large-field flicker poses a greater risk than do low-contrast flickering single pixels.
>
> Until we know more, or until qualified experts on this condition weigh in, or credible scientific evidence is presented that better specifies the parameters of the flicker, I believe this checkpoint should be deleted.
>
> Thank you for the opportunity to comment on this draft.
>
> Aries Arditi, Ph.D.
> Senior Fellow in Vision Science
> Arlene R. Gordon Research Institute
> Lighthouse International
> 111 East 59th Street
> New York, NY 10022
>
> Tel: +1 212 821 9500 (direct)
> Fax: +1 212 751 9667
> http://www.lighthouse.org/research_staff_arditi.htm
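To make the difficulty raised in item 11 concrete, here is a minimal sketch of the sort of naive, machine-readable flicker measure an automated checker might attempt: count reversals in a region's frame-to-frame luminance change and convert them to a rate in Hz. The frame rate, the change threshold, and the choice of region are all assumptions of the sketch, and the thought experiments above show why none of them is principled: a one-level intensity wobble in a single pixel trips the test just as readily as hazardous high-contrast, large-field flicker does. (The third thought experiment rests on the identity sin(kx - ωt) + sin(kx + ωt) = 2 sin(kx) cos(ωt): two counter-moving gratings sum to a standing grating whose contrast reverses over time, so every point in it flickers.)

    # Illustrative sketch only: a naive flicker-rate estimate of the kind an
    # automated test might attempt. The frame rate, the luminance-change
    # threshold, and the choice of region are assumptions of this sketch.

    def naive_flicker_rate(luminances, fps, threshold=1.0 / 256.0):
        """Estimate an alternation rate (Hz) from the per-frame mean
        luminance of some screen region, each value in 0..1, sampled at
        fps frames per second."""
        reversals = 0
        last_direction = 0
        for previous, current in zip(luminances, luminances[1:]):
            delta = current - previous
            if abs(delta) < threshold:
                continue  # change too small to count as an alternation
            direction = 1 if delta > 0 else -1
            if last_direction and direction != last_direction:
                reversals += 1
            last_direction = direction
        duration = len(luminances) / fps
        return (reversals / 2) / duration if duration else 0.0

    # A region alternating between dark and light on every frame of a 30 fps
    # clip reports 14.5 Hz here (roughly the true 15 Hz; the first transition
    # only establishes a direction). A single pixel wobbling by one level out
    # of 256 would register just the same, which is the author's point.
    print(naive_flicker_rate([0.0, 1.0] * 30, fps=30))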
Received on Tuesday, 9 December 2003 15:53:38 UTC