Comments from the Multimodal Interaction Working Group on the VoiceXML 2.0 Last Call Working Draft

The Multimodal Interaction Working Group has reviewed the VoiceXML 2.0
Last Call Working Draft and would like to make the following comments.
We also welcome any additional comments and discussion on the public
mailing lists.

The group believes that VoiceXML would be more useful for multimodal
interaction with the changes discussed in items 1, 2, and 3 below,
particularly items 1 and 3.  We do not believe that these issues must
be resolved for the specification to progress, but we would like to
hear about any plans the group has for addressing them.

Item 4 is just a minor correction.

Many thanks to Gerald McCobb (IBM) for assembling these comments on
behalf of the Multimodal Interaction Working Group.

Debbie Dahl, Chair, Multimodal Interaction Working Group


1. VoiceXML Modularization
Modularizing VoiceXML would partition its constructs into separate
modules.  This would allow the constructs to be used in a multimodal
language as components that can be embedded in multimodal documents;
see the sketch below.
Priority:  High
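
Purely as an illustration of what modularization might look like (the
module names and file names below are hypothetical, not a proposal), a
modularized VoiceXML could be assembled by a schema driver in the style
of XHTML Modularization:

  <!-- Hypothetical schema driver composing VoiceXML from separate
       modules, after the pattern of XHTML Modularization.  All module
       file names are illustrative only. -->
  <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
             targetNamespace="http://www.w3.org/2001/vxml">
    <xs:include schemaLocation="vxml-form-module.xsd"/>    <!-- form, field, block -->
    <xs:include schemaLocation="vxml-prompt-module.xsd"/>  <!-- prompt, audio -->
    <xs:include schemaLocation="vxml-grammar-module.xsd"/> <!-- grammar -->
  </xs:schema>

A host multimodal language could then include only the modules it
needs.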

2. NLSML
It would be useful to understand how results formatted in NLSML (the
Natural Language Semantics Markup Language) can be used to populate
VoiceXML field items.  The VoiceXML specification includes a
comprehensive discussion of mapping ASR results, in the form of
ECMAScript objects, onto VoiceXML forms, but says very little about the
NLSML format; see the sketch below.
Priority:  Medium High
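
As a concrete (and purely illustrative) case, consider an NLSML result
for a flight query.  The sketch follows the general shape of the NLSML
Working Draft; the element names inside <instance>, the confidence
value, and the grammar name are all hypothetical:

  <result grammar="flight">
    <interpretation confidence="0.85">
      <input mode="speech">fly to boston</input>
      <instance>
        <flight>
          <destination>boston</destination>
        </flight>
      </instance>
    </interpretation>
  </result>

and a VoiceXML form that might consume it:

  <form id="flight">
    <field name="destination">
      <prompt>Where do you want to fly?</prompt>
      <grammar src="flight.grxml"/>
    </field>
  </form>

What the specification leaves unstated is exactly how the content of
<instance> (here, the <destination> value) would be assigned to the
field item variable "destination" and to its shadow variables such as
confidence.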

3. XML Events
A modularized VoiceXML should support XML Events.  VoiceXML components
embedded in multimodal XML documents would then share the host
document's DOM and DOM events; see the sketch below.
Priority:  High
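
To make the idea concrete, here is a rough sketch (not a proposal for
particular syntax) of an XHTML page that embeds a VoiceXML form and
binds it to a DOM event using XML Events attributes; the id and the
page content are invented for the example:

  <html xmlns="http://www.w3.org/1999/xhtml"
        xmlns:vxml="http://www.w3.org/2001/vxml"
        xmlns:ev="http://www.w3.org/2001/xml-events">
    <head>
      <title>Multimodal greeting</title>
      <!-- An embedded VoiceXML dialog, addressable by id -->
      <vxml:form id="sayHello">
        <vxml:block>Hello, world!</vxml:block>
      </vxml:form>
    </head>
    <body>
      <!-- XML Events: a DOM "click" on this paragraph activates the
           embedded VoiceXML form as its handler -->
      <p ev:event="click" ev:handler="#sayHello">
        Click here for a spoken greeting.
      </p>
    </body>
  </html>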

4. Grammar tag's content model
Section 3.1.1 should state explicitly that the SRGS <grammar> element
is extended to allow PCDATA content for inline grammar formats other
than SRGS.  It currently says that the SRGS elements, including
<grammar>, have not been redefined; see the example below.
Priority:  High
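
For example, an inline grammar in a non-SRGS format such as JSGF
depends on PCDATA content being allowed in <grammar> (the media type
shown is the conventional, unregistered one for JSGF):

  <field name="color">
    <prompt>Say a color.</prompt>
    <!-- Inline non-SRGS grammar: the PCDATA content below is what
         item 4 asks section 3.1.1 to permit explicitly -->
    <grammar type="application/x-jsgf" mode="voice">
      red | green | blue
    </grammar>
  </field>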
