- From: jack <jhines@cdc.net>
- Date: Thu, 9 Apr 1998 16:36:24 -0400
- To: "'w3c-wai-ig@w3.org'" <w3c-wai-ig@w3.org>
Question: Can or should accessibility issues for OSes with browser-based shells be coupled to the same issues for a Web site (and its underlying protocols)?

The fundamental point here is that the distinction between what is Web and what is a local resource is much less visible to the casual user driving a browser-based shell. The underlying technologies are becoming the same. Why, then, struggle with accessibility issues for the Web and not for the OS? I have come to believe that the issues and the needs are (and should be) the same.

Consider the dissimilar environments a user (especially a casual or even computer-phobic user) will encounter when navigating between the voice command/response layers, the browser and Web-page layers, and so on. Consider that the Web, and indeed the entire Internet and corporate intranets, are becoming extensions of the OS and its resources.

The technologies required to render text to speech, speech to text, and voice command/response are largely the same in the two environments (OS and Web). The major architectural difference is where the text/speech interpretation occurs. Today, the speech entities are commonly just another application running on the OS.

To fully extend the OS with accessibility conventions, without seriously upsetting the application-development industry, requires a kernel-level module that exposes the flow of objects (including text) between applications and the OS. If this were available (it is definitely possible), there would be a common, CONSISTENT definition of "speech hooks" for the OS, consistent across ALL applications, past, present, and future. Specially speech-enabled applications would not be required. Development costs would be spread across a relatively huge domain, making accessibility features such as speech the rule rather than the exception, with a significant positive impact on the (excessively?) large costs now associated with accessibility via speech. (A rough sketch of what such hooks might look like follows this thread.)

And, best of all, the WAI and others are already wrangling with issues that those same OSes with browser-based shells will have to conform to.

'Nuff said,
Jack

----------
From: Chris Maden [SMTP:crism@ora.com]
Sent: Thursday, April 09, 1998 1:33 PM
To:
Subject: Re: [Jack Hines]

> Actually, the NT platform is a good candidate for a voice
> command/response driven OS, IF, IF Microsoft (c) would expose an API
> into the GDI module of the kernel !

I don't know if OS accessibility is on-topic for this list, but since it came up...

Jessica Perry Hekman, author of _Linux in a Nutshell_ and an RSI sufferer, is working to build interest in accessibility tools for Linux, specifically focusing on a DragonDictate port. The announcement of an interest list is below:

> You are invited to join ddlinux, a list devoted to discussing means
> of realizing the goal of speech recognition for Linux. To subscribe,
> send mail to
>
>     ddlinux-request@leb.net
>
> with the word "subscribe" in the subject.

-Chris
--
<!NOTATION SGML.Geek PUBLIC "-//Anonymous//NOTATION SGML Geek//EN">
<!ENTITY crism PUBLIC "-//O'Reilly//NONSGML Christopher R. Maden//EN"
"<URL>http://www.oreilly.com/people/staff/crism/
<TEL>+1.617.499.7487
<USMAIL>90 Sherman Street, Cambridge, MA 02140 USA" NDATA SGML.Geek>
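To make the "speech hooks" proposal above concrete, here is a minimal sketch in C of how a single OS-level text-interception point could let one screen reader serve every application. All names here (`text_event`, `register_speech_hook`, `os_draw_text`) are hypothetical illustrations of the idea, not any real kernel or GDI API:

```c
/*
 * Hypothetical sketch of the "speech hooks" idea: one OS-level
 * interception point through which all text flowing from applications
 * to the display can be observed by assistive tools. Every name here
 * is invented for illustration; no real OS exposes this interface.
 */
#include <stdio.h>

/* An event describing text an application handed to the OS to render. */
struct text_event {
    const char *app_name;   /* which application produced the text  */
    const char *text;       /* the text itself, as passed to the OS */
};

/* Assistive technologies register a callback of this shape. */
typedef void (*speech_hook)(const struct text_event *ev);

#define MAX_HOOKS 8
static speech_hook hooks[MAX_HOOKS];
static int n_hooks = 0;

/* Register a hook; returns 0 on success, -1 if the table is full. */
static int register_speech_hook(speech_hook fn)
{
    if (n_hooks >= MAX_HOOKS)
        return -1;
    hooks[n_hooks++] = fn;
    return 0;
}

/* The OS's text-output path would call this for every draw request,
 * so registered hooks see ALL text from ALL applications, past and
 * future, without any per-application speech support. */
static void os_draw_text(const char *app, const char *text)
{
    for (int i = 0; i < n_hooks; i++) {
        struct text_event ev = { app, text };
        hooks[i](&ev);
    }
    /* ...actual rendering would happen here... */
}

/* A stand-in screen reader: in reality this would feed a TTS engine. */
static void screen_reader(const struct text_event *ev)
{
    printf("[speak] %s: %s\n", ev->app_name, ev->text);
}

int main(void)
{
    register_speech_hook(screen_reader);

    /* Two unmodified "applications" draw text; both are spoken. */
    os_draw_text("browser", "Welcome to the W3C WAI home page");
    os_draw_text("file-manager", "3 items selected");
    return 0;
}
```

The point of the design is the single choke point: because `os_draw_text` is the only path text takes to the screen, a hook registered once covers every application, which is what would make speech "the rule rather than the exception" without requiring specially speech-enabled applications.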
Received on Thursday, 9 April 1998 16:37:16 UTC