VoiceXML 2.0 clarification

We have a minor question about whether our interpreter behaves correctly in
one case.  The question is whether DTMF should be buffered during a
transition with fetchaudio.  Here are the two arguments:


The "yes" argument from section 4.1.8:

A VoiceXML interpreter is at all times in one of two states:

*	waiting for input in an input item (such as <field>, <record>, or
<transfer>), or 
*	transitioning between input items in response to an input (including
spoken utterances, dtmf key presses, and input-related events such as a
noinput or nomatch event) received while in the waiting state. While in the
transitioning state no speech input is collected, accepted or interpreted.
Consequently root and document level speech grammars (such as defined in
<link>s) may not be active at all times. However, DTMF input (including
timing information) should be collected and buffered in the transition
state. Similarly, asynchronously generated events not related directly to
execution of the transition should also be buffered until the waiting state
(e.g. connection.disconnect.hangup). 

The "no" argument (later in 4.1.8):

*	when the interpreter begins fetching a resource (such as a document)
for which fetchaudio was specified. In this case the prompts queued before
the fetchaudio are played to completion, and then, if the resource actually
needs to be fetched (i.e. it is not unexpired in the cache), the fetchaudio
is played until the fetch completes. The interpreter remains in the
transitioning state and no input is accepted during the fetch. 
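For concreteness, the case in question is a transition like the following
hypothetical fragment (the document and file names are illustrative, not
from any real application).  The question is whether a key pressed while
hold.wav is playing should be buffered and delivered to next.vxml:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<vxml version="2.0" xmlns="http://www.w3.org/2001/vxml">
  <form>
    <field name="choice">
      <prompt>Press 1 for sales or 2 for support.</prompt>
      <grammar mode="dtmf" src="menu.grxml"/>
      <filled>
        <!-- Transition with fetchaudio: the interpreter plays hold.wav
             while next.vxml is fetched.  Should a DTMF key pressed during
             that fetch be buffered for the next document's grammars, or
             discarded because "no input is accepted during the fetch"? -->
        <submit next="next.vxml" fetchaudio="hold.wav"/>
      </filled>
    </field>
  </form>
</vxml>
```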

This does not say that "no speech input is accepted"; it says "no input is
accepted", which seems to include DTMF.  Any clarification on intent from
the authors, or on what other implementations do, is appreciated. Thanks,

Ken 

Received on Saturday, 17 September 2005 00:36:49 UTC