Re: QUESTIONNAIRE DEADLINE EXTENDED through Monday

Speaking as one of the authors of VoiceXML 2.0/2.1 and the Chief  
Editor of VoiceXML 3.0, I can tell you that the intended audiences  
were different.

VoiceXML 2.0/2.1 was intended primarily as a standard language for
implementing Interactive Voice Response (IVR) applications, and as
such it assumes an underlying telephony model.  There is always a
call in progress and a dialog, spoken or touch-tone (DTMF), between a
human and a machine.  Because the interaction takes place primarily
on the audio channel and must follow the social norms of spoken
conversation, critical timing parameters are baked directly into the
language, as is appropriate.
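
To make that concrete, here is a minimal sketch of a VoiceXML 2.0
form -- the grammar file name and the timeout values are just
placeholders -- showing how that timing ends up directly in the markup:

   <?xml version="1.0" encoding="UTF-8"?>
   <vxml version="2.0" xmlns="http://www.w3.org/2001/vxml">
     <!-- Dialog pacing is set through properties; values are illustrative -->
     <property name="timeout" value="5s"/>            <!-- silence allowed before a noinput event -->
     <property name="interdigittimeout" value="3s"/>  <!-- pause allowed between DTMF key presses -->
     <form id="main_menu">
       <field name="choice">
         <prompt>Say sales, or press 1.  Say support, or press 2.</prompt>
         <grammar src="menu.grxml" type="application/srgs+xml"/>  <!-- placeholder grammar -->
         <noinput>Sorry, I didn't hear anything. <reprompt/></noinput>
         <nomatch>Sorry, I didn't understand. <reprompt/></nomatch>
       </field>
     </form>
   </vxml>

Every one of those timeouts matters to the caller's experience, which
is why they belong in the language rather than being left entirely to
each application.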

The HTML Speech effort is about adding speech capabilities to a visual  
browser, where there may not be an explicit notion of time-sensitive  
dialog or of audio as the primary (or only) channel.

While you can build audio-only dialogs using HTML Speech capabilities
(whatever they end up being), and you can build non-dialog,
non-audio-exclusive applications with VoiceXML 2.0/2.1 (and even more
easily with the new VoiceXML 3.0), neither language is properly
suited to do the other's job.

It is my personal hope that, over the next few years, we will
discover how to transition smoothly from one to the other, so that an
application originally written in VoiceXML can, with some limitations
and a bit of work, be ported to HTML and vice versa.  That will only
become more relevant as mobile devices become capable of both, and as
telephony, audio media, and visual media handling all become truly
integrated on those devices.

-- dan

On Apr 6, 2011, at 12:13 PM, Cesar Castello Branco wrote:

> WHAT ABOUT http://www.w3.org/TR/voicexml20/ ?
>
> IT IS CURRENTLY IMPLEMENTED BY OPERA BROWSER.
>
> WHY REINVENT WITH A NEWER SPECIFICATION ?

Received on Thursday, 7 April 2011 12:11:36 UTC