- From: Gregory J. Rosmaita <unagi69@concentric.net>
- Date: Tue, 11 Sep 2007 13:35:30 -0400 (EDT)
- To: Lisa Seeman <lisa@ubaccess.com>, <unagi69@concentric.net>, 'Charles McCathieNevile' <chaals@opera.com>, <oedipus@hicom.net>, 'Sander Tekelenburg' <st@isoc.nl>, <public-html@w3.org>, <wai-xtech@w3.org>
aloha, lisa! thank you for providing the other half of the equation -- i have a few self-voicing apps specifically designed for persons with cognitive, learning, or language disabilities listed on http://esw.w3.org/topic/HTML/UAs in particular:

Penfriend XL: http://www.penfriend.biz/
ReadPlease: http://www.readplease.com/

if there are any other applications that fit your real-world user scenarios, please let me know, so they can be added to the HTML WG wiki in order to reinforce the point that speech isn't just for blind/VI users, but has a multiplicity of very important use cases, such as charles chen's CLiCk, Speak http://clickspeak.clcworld.net/ which, in the words of its developer:

quote
Unlike Fire Vox, which is designed for visually impaired users, CLiCk, Speak is designed for sighted users who want or need text-to-speech functionality. [...] If you're a sighted user who wants to have web pages read to you because you have cognitive issues (for example, dyslexia), because you have literacy issues (like me - I can understand spoken Mandarin Chinese just fine, but reading is difficult for me), because you want to reduce eyestrain and listen to a web page being read, etc., then you are likely to prefer CLiCk, Speak over Fire Vox.
unquote

thank you, lisa, for keeping an important user group whose lives are tangibly affected by supplemental speech on the HTML WG's radar, gregory.

---- Lisa Seeman <lisa@ubaccess.com> wrote:

> Gregory wrote: self-voicing apps have their place in the overall scheme
> of things but they are NOT substitutes for screen readers.
>
> Two places where they have an important role are for people with
> learning or language disabilities. Another use is for people who do not
> have their own computer and can use Fire Vox on a shared computer, such
> as at the library (which may not be prepared to install a bulky program
> such as JAWS but will be prepared to help someone get started).
> Also, as people develop vision problems (such as those associated with
> diabetes and aging), they may often use self-voicing apps for reading
> print. This group does not need a screen reader for selecting icons at
> start-up, but they may not want to, or even be able to, manage a screen
> reader (which takes quite good memory skills). Another huge group who
> need to be taken into account are third-world computer users, who may
> be unable to afford a screen reader or may not read well.
>
> So they are not substitutes for screen readers, but self-voicing apps
> have an important place in the world.
>
> All the best
> Lisa
>
> -----Original Message-----
> From: wai-xtech-request@w3.org [mailto:wai-xtech-request@w3.org] On
> Behalf Of Gregory J. Rosmaita
> Sent: Monday, September 10, 2007 9:29 PM
> To: Charles McCathieNevile; Sander Tekelenburg; public-html@w3.org;
> wai-xtech@w3.org
> Subject: screen-reader versus self-voicing app (was: Re: Screen-reader
> behaviour)
>
> aloha!
>
> as a screen-reader user, let me attempt to explain why there is no
> groundswell of support for "self-voicing" applications by those
> dependent upon speech output...
>
> 1) unavoidable black holes:
> self-voicing applications cannot replace a dedicated screen reader, for
> self-voicing applications often cannot interpret key parts of the
> chrome, especially if the chrome does not reuse standard control sets
> for the OS on which it is running -- download interfaces, view source
> interfaces (that open up a new browser instance or tab), the ability to
> "browse" for files to be uploaded to a web site, etc.
> this is because the self-voicing application exists solely to voice the
> application which is currently running;
>
> 2) one can put one's screen reader into "sleep" mode for a particular
> app, so that the self-voicing app doesn't conflict with the screen
> reader, but this often leads to unexpected and undesired results;
>
> for example, in order to use Fire Vox, i set JAWS to become inactive
> whenever Fire Vox is loaded -- however, since Fire Vox is an extension,
> and not a separate app, i can no longer run Firefox with a screen
> reader, because the screen reader cannot differentiate between the
> synonymic executable files when invoked, and therefore disables screen
> reader interaction whenever that particular executable is loaded;
>
> 3) self-voicing apps can still conflict with a screen reader, due to
> events from the self-voicing apps firing whilst one is in a plain text
> document checking one's credit card or banking information; which is
> also why self-voicing applications have limited appeal and why they
> CANNOT be run without a screen reader -- if i am using a self-voicing
> app, once i switch tasks, i have no way of knowing what is currently
> running -- even when doing something as "trivial" as copying the
> contents of a page to the clipboard and pasting it into an empty plain
> text document -- without a screen reader at the ready to "awaken" when
> the user switches from the self-voicing app, the speech-dependent user
> is left without a means of ensuring that previous information has not
> been overwritten, without knowledge of the directory into which the
> file is going to be saved, and without access to any system calls ("do
> you want to overwrite..."
or "error - nothing selected" > > self-voicing apps have their place in the overall scheme of things, but they > are NOT substitutes for screen readers; what we should be concentrating upon > is NOT how does current assisstive technology handle current markup, but how > to enable assisstive technology to handle markup better, by providing more > explicit association patterns and as much semantic information as possible > > THAT is the goal -- to improve bi-directional communication between > applications, in this case, between user agents and screen readers -- not to > critique the current state of support -- it must ALSO be realized that HTML > 4.01 did not proclaim that it had addressed all accessibility problems, only > those that emergency triage units identified as the most crucial problems in > the late nineteen-nineties -- it was NEVER intended to be the be all or end > all in web accessibility, but an effort to provide a means of breaching > perceptual black holes and the sort of device dependence and modality > dependence that breaks assisstive technologies... even for a self-voicing > app to work well, it must rely upon the semantics built into the markup > language it supports... > > this is why this whole thread is a red herring in my opinion -- we cannot > "break" what was done in the past to promote accessibility, useability, > internationalization and device independence, nor should we be bound to > putting old wine into new bottles -- where superior mechanisms are > available, they should be implemented, but those mechanisms implemented in > HTML 4.01 specifically for accessibility, device independence and > internationalization MUST be supported as part of the "backwards > compatibility" principle, hence my suggested verbiage for the design > principle document: > > "Browsers should retain residual markup designed for a specific > purpose, such as accessibility. internationalization or device > independence. 
Simply because new technologies and superior > mechanisms have been identified, not all of them have been > implemented. Moreover, disabled users are more likely to be > users of "legacy technology" because it is the only technology > that interacts correctly with third-party assistive technologies" > > or words to that effect... > > gregory. -- "He who lives on Hope, dies farting." -- Benjamin Franklin, Poor Richard's Almanack -- Gregory J. Rosmaita, unagi69@concentric.net Camera Obscura: http://www.hicom.net/~oedipus/
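[Editorial note: the "explicit association patterns" the quoted message appeals to can be illustrated with two mechanisms HTML 4.01 already defines; the ids and content below are invented for illustration and are not part of the original message:]

```html
<!-- explicit label/control association: a speech tool can announce
     "Credit card number, edit" rather than an unlabelled "edit" field -->
<label for="ccnum">Credit card number</label>
<input type="text" id="ccnum" name="ccnum">

<!-- explicit header/data-cell association: the headers attribute ties
     each data cell back to the header cells that describe it, so the
     relationship survives linearization into speech -->
<table summary="Account balance at the end of each month">
  <tr><th id="month">Month</th><th id="balance">Balance</th></tr>
  <tr><td headers="month">September</td><td headers="balance">$1,024</td></tr>
</table>
```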
Received on Tuesday, 11 September 2007 17:35:47 UTC