

From: <pjenkins@us.ibm.com>
Date: Fri, 23 Jun 2000 11:17:47 -0400
To: "webwatch" <webwatch@telelists.com>
cc: w3c-wai-ig@w3.org
Message-ID: <85256907.00540784.00@d54mta03.raleigh.ibm.com>

> IBM is launching new fleet of products designed to allow users to surf
> Internet by talking over the phone. "Instead of using a Web browser,
> using a voice browser," says the general manager for IBM Voice Systems.

Just so we are clear: these voice browsers process VoiceXML, not the
HTML that is processed by today's popular graphical browsers.
VoiceXML is an XML-based markup language for distributed voice
applications, much as HTML is being used as a language for distributed
visual applications. VoiceXML is designed for creating audio dialogs that
feature synthesized speech, digitized audio, recognition of spoken and
telephone keypad tone (DTMF) input, recording of spoken input, telephony,
and mixed-initiative conversations. The goal is to provide voice access and
interactive voice response (e.g., by telephone, wireless or land-line PDA,
or desktop) to web-based content and applications, whether new or already
existing as visual applications. The IBM WebSphere server allows the
"content" that was going to the graphical browser to also be directed and
transformed [transcoded] to the voice browser. It is the same "content" -
the "news, stock quotes, movie listings, horoscopes and other information" -
that is being transcoded for the voice browser.
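To make the "audio dialog" idea concrete, here is a minimal sketch of what a
VoiceXML document might look like. The form name, field name, prompt text,
and URL are all hypothetical, invented for illustration; exact grammar
syntax varies by platform and is elided here:

```xml
<?xml version="1.0"?>
<vxml version="1.0">
  <!-- One form that asks the caller for a city, then
       submits the answer back to a document server. -->
  <form id="listings">
    <field name="city">
      <prompt>Say the name of your city.</prompt>
      <!-- The browser matches spoken input (or DTMF key
           presses) against a grammar and fills the field. -->
    </field>
    <block>
      <!-- The recognized value is sent to the server, which
           returns the next VoiceXML page in the dialog. -->
      <submit next="http://www.example.com/movies.vxml"
              namelist="city"/>
    </block>
  </form>
</vxml>
```

The point is that the markup describes the dialog - prompts, expected
input, and where to send results - while the voice browser handles the
speech recognition and synthesis itself.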

VoiceXML and the current voice browsers are not designed to provide
accessibility to the page that shows up in the graphical browser, but to a
different "page" - a voice-interactive view - of web-based content. The
idea is that important "content" will show up in both the graphical browser
and the voice browser, but with a differently designed user interface - one
for the graphical browser and one for the phone. Hopefully the majority of
VoiceXML application designers will ensure that users can use an
application without a smart screen phone, whose tiny screen might tempt the
designer to rely on visual feedback; many users will want to use a regular
old phone, with all the "smarts" on the server and no tiny screen in the
handset.

VoiceXML is being defined by an industry forum - founded by AT&T, IBM,
Lucent, and Motorola to promote the Voice eXtensible Markup Language
(VoiceXML) - and was submitted to the W3C's Voice Browser working group
in hopes that it would be used as the basis for a W3C recommendation.
VoiceXML frees the designer and authors of voice response type applications
from low-level programming and resource management. It enables integration
of voice services with data services using the familiar Internet and Web
paradigm, and it gives users the power to seamlessly transition between
applications. The voice browser may reside in the handset [smarter phones],
or the dialogs may be served from document servers for use with older
phone handsets.

Other accessibility concerns - from deaf and hard-of-hearing users [such as
using a TTY device] and from users with mobility impairments [ease of using
the phone keypad] - have not been discussed, as far as I have read.

Phill Jenkins
IBM Accessibility Center - Special Needs Systems
Received on Friday, 23 June 2000 12:46:55 UTC
