News Release: W3C Moves Forward on New Extensions for Voice Technologies and the Web

Based in part on its first technical Workshop in Beijing and on valuable
input from the VoiceXML Forum, the W3C is building on the Speech
Interface Framework. Efforts include extensions to the Speech Synthesis
Markup Language that bring better support for Asian and other languages,
and a standardized approach to speaker verification features in the next
version of VoiceXML. For more information, please contact Janet Daly
<janet@w3.org>, +1 617 253 5884, or the W3C Communications Team contact
in your region.

W3C Moves Forward on New Extensions for Voice Technologies and the Web

New Version of SSML to Include Internationalization Features; VoiceXML
3.0 to Incorporate Speaker Verification

Web resources

This News Release
   In English: http://www.w3.org/2005/12/ssml-pressrelease.html.en
   In French: http://www.w3.org/2005/12/ssml-pressrelease.html.fr
   In Japanese: http://www.w3.org/2005/12/ssml-pressrelease.html.ja

W3C's Voice Browser Activity: http://www.w3.org/Voice/

http://www.w3.org/ -- 6 December 2005: The World Wide Web Consortium
(W3C) announced new work on extensions to components of the Speech
Interface Framework that will both extend Speech Synthesis Markup
Language functionality to Asian and other languages and incorporate
speaker verification features into the next version of VoiceXML,
version 3.0. Addressing both areas expands the reach and the
functionality of the framework.

Working Group Internationalizing SSML

The Speech Synthesis Markup Language (SSML), a W3C Recommendation since
2004, is designed to provide a rich, XML-based markup language for
assisting the generation of synthetic speech in Web and other
applications. The essential role of the markup language is to provide
authors of synthesizable content a standard way to control aspects of
speech such as pronunciation, volume, pitch, rate, etc. across different
synthesis-capable platforms.
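
For illustration, a minimal SSML 1.0 document might exercise these
controls as follows; the text and attribute values are arbitrary
examples:

   <?xml version="1.0" encoding="UTF-8"?>
   <speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis"
          xml:lang="en-US">
     <!-- Slow the rate and raise pitch and volume for this sentence -->
     <prosody rate="slow" pitch="high" volume="loud">
       Please listen carefully to the following announcement.
     </prosody>
     <break time="500ms"/>
     <!-- Pin down one word's pronunciation with an IPA transcription -->
     The word <phoneme alphabet="ipa" ph="təˈmɑːtoʊ">tomato</phoneme>
     can be pronounced in more than one way.
   </speak>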

While these attributes are critical, additional attributes may be even
more important to specific languages. For example, Mandarin Chinese, the
most widely spoken language in the world today, also has the notion of
tones - the same written character can have multiple pronunciations and
meanings based on the tone used. Given the profusion of cellphones in
China - some estimates put the number at over one billion - the case for
extending SSML for Mandarin is clear in terms of sheer market forces.
Including extensions for Japanese, Korean and other languages will make
possible a fuller participation of the world on the Web.
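
As a sketch of the issue, the phoneme element in SSML 1.0 already lets
an author pin down the reading of a polyphonic character one occurrence
at a time; the "x-pinyin" alphabet name below is a hypothetical,
vendor-specific value, not one defined by W3C:

   <speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis"
          xml:lang="zh-CN">
     <!-- 行 reads "xíng" (to go) in some words and "háng" (a row; a
          firm) in others, such as 银行 (bank). "x-pinyin" with a tone
          number is a hypothetical vendor-specific alphabet; only "ipa"
          is W3C-defined. -->
     您好，欢迎致电银<phoneme alphabet="x-pinyin" ph="hang2">行</phoneme>服务热线。
   </speak>

Extensions of the kind announced here would let such language-specific
information, including tone, be expressed in a standard way rather than
through per-character, vendor-specific workarounds.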

Speaker Verification Extension to Be Included in VoiceXML 3.0

Another feature users are demanding of telephony services and the Web is
speaker verification.

"Identity theft, fraud, phishing, terrorism, and even the high cost of
resetting passwords have heightened interest in deploying biometric
security for all communication channels, including the telephone," said
Ken Rehor of Vocalocity, newly elected Chairman of the VoiceXML Forum
and participant in the W3C Voice Browser Working Group. "Speaker
verification and identification is not only the best biometric for
securing telephone transactions and communications, it can work
seamlessly with speech recognition and speech synthesis in VoiceXML
deployments."

Until now, most vendors have compensated for this missing feature with
custom extensions to their services. The result has been a set of
divergent technologies that do not interoperate. Thanks to requirements
divergent technologies that do not interoperate. Thanks to requirements
contributions from the VoiceXML Forum's Speaker Biometrics Committee,
the W3C Voice Browser Working Group has been able to identify the
features needed for a standardized speaker verification module. The
Working Group is now beginning to address these requirements.

Timing Perfect for New Participants

Given the depth and breadth of the newly announced work, as well as plans
for additional features for VoiceXML 3.0, this is a perfect time for new
companies, researchers and other interested parties to join W3C and
participate in the latest developments for voice technologies and the
Web. Critical contributions are expected from the research and
industrial sectors throughout Asia, particularly in the areas of Asian
languages and speaker verification, bringing the best possible expertise
to the development of standards that truly serve the needs of Web users
worldwide. More information on the W3C Voice Browser Activity and on
joining W3C is on the W3C Web site.

Contact Americas, Australia --
Janet Daly, <janet@w3.org>, +1.617.253.5884 or +1.617.253.2613
Contact Europe, Africa and the Middle East --
Marie-Claire Forgue, <mcf@w3.org>, +33.492.38.75.94
Contact Asia --
Yasuyuki Hirakawa <chibao@w3.org>, +81.466.49.1170

About the World Wide Web Consortium [W3C]

The W3C was created to lead the Web to its full potential by developing
common protocols that promote its evolution and ensure its
interoperability. It is an international industry consortium jointly run
by the MIT Computer Science and Artificial Intelligence Laboratory (MIT
CSAIL) in the USA, the European Research Consortium for Informatics and
Mathematics (ERCIM) headquartered in France and Keio University in
Japan. Services provided by the Consortium include: a repository of
information about the World Wide Web for developers and users, and
various prototype and sample applications to demonstrate use of new
technology. Over 400 organizations are Members of the Consortium. For
more information see http://www.w3.org/
