W3C home > Mailing lists > Public > www-voice@w3.org > July to September 2005

Re: VoiceXML 2.0 clarification

From: Dave Burke <david.burke@voxpilot.com>
Date: Sat, 17 Sep 2005 20:20:18 +0100
Message-ID: <001001c5bbbc$d7a5a4d0$0a01a8c0@db01.voxpilot.com>
To: <ken.waln@edify.com>, <www-voice@w3.org>
Hi Ken,

My understanding of the intent is that DTMF should be collected and buffered during a fetch of a resource with fetchaudio (assuming the flushed prompts do not have bargein set to false). If this were not the case, the ability for power users to type ahead with DTMF would be lost across certain page transitions. Our implementation reflects this interpretation.
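For concreteness, here is a minimal sketch of the scenario (the audio and document filenames are placeholders, not from the original thread): a caller who knows the menus presses 1 and then, while the fetchaudio plays during the fetch of the next document, immediately presses the digit for the next menu. Under the reading above, that second digit is buffered during the transition and applied once the new dialog reaches the waiting state.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<vxml version="2.0" xmlns="http://www.w3.org/2001/vxml">
  <menu id="main" dtmf="true">
    <prompt>For sales, press 1. For support, press 2.</prompt>
    <!-- hold.wav plays while sales.vxml is fetched; a power user's
         type-ahead DTMF during that fetch should be buffered, not dropped -->
    <choice dtmf="1" next="sales.vxml" fetchaudio="hold.wav"/>
    <choice dtmf="2" next="support.vxml" fetchaudio="hold.wav"/>
  </menu>
</vxml>
```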

  ----- Original Message ----- 
  From: ken.waln@edify.com 
  To: www-voice@w3.org 
  Sent: Saturday, September 17, 2005 1:24 AM
  Subject: VoiceXML 2.0 clarification

  We have a minor question about whether our interpreter is working correctly in one case: whether DTMF should be buffered during a transition with fetchaudio. Here are the two arguments:


  The "yes" argument from section 4.1.8:

  A VoiceXML interpreter is at all times in one of two states:

    a. waiting for input in an input item (such as <field>, <record>, or <transfer>), or
    b. transitioning between input items in response to an input (including spoken utterances, dtmf key presses, and input-related events such as a noinput or nomatch event) received while in the waiting state. While in the transitioning state no speech input is collected, accepted or interpreted. Consequently root and document level speech grammars (such as defined in <link>s) may not be active at all times. However, DTMF input (including timing information) should be collected and buffered in the transition state. Similarly, asynchronously generated events not related directly to execution of the transition should also be buffered until the waiting state (e.g. connection.disconnect.hangup).
  The "no" argument (later in 4.1.8):

    - when the interpreter begins fetching a resource (such as a document) for which fetchaudio was specified. In this case the prompts queued before the fetchaudio are played to completion, and then, if the resource actually needs to be fetched (i.e. it is not unexpired in the cache), the fetchaudio is played until the fetch completes. The interpreter remains in the transitioning state and no input is accepted during the fetch.
  This does not say that "no speech input is accepted"; it says "no input is accepted", which seems to include DTMF. Any clarification on intent from the authors or other implementers is appreciated. Thanks,

Received on Saturday, 17 September 2005 19:20:35 UTC
