
VoiceXML comments from Elvira development team - part II

From: Pavel Cenek <pavel.cenek@itek.norut.no>
Date: Fri, 19 Apr 2002 09:43:18 +0200
Message-ID: <3CBFCA96.2070102@itek.norut.no>
To: www-voice@w3.org

    I am one of the developers of a VoiceXML interpreter called Elvira. During
the development and use of Elvira, our team encountered some problems with
VoiceXML and also collected a number of remarks. We would appreciate any
comments concerning the following issues.

Here is the second part of the VoiceXML comments from the Elvira development
team. This part contains proposals for some changes to VoiceXML.

4 - Grammars in <initial>
We think it would be useful to allow grammars in <initial>. These grammars
would be active only while <initial> is being executed.

In a mixed-initiative dialog, the user can typically specify his needs freely
at first, which fills some field items. The remaining items should then be
collected one piece at a time, so the <form>-scoped grammars should be
deactivated during that phase. Currently, the only solution is to make all
<field>s modal, but that also deactivates document-scoped grammars, which is
often undesired. Grammars in <initial> would solve this problem gracefully.
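A hypothetical sketch of the proposed extension follows. Note that a <grammar>
child of <initial> is not part of current VoiceXML; the field names and grammar
URIs below are invented for illustration:

```xml
<form id="travel">
  <!-- Proposed extension: this grammar would be active only
       while <initial> is being executed -->
  <initial name="start">
    <grammar src="full-request.grxml"/>
    <prompt>Where do you want to travel, and when?</prompt>
  </initial>

  <!-- After <initial> completes, the remaining items are collected
       one at a time; the full-request grammar is no longer active,
       but document-scoped grammars (e.g. a global "help" grammar)
       still are -->
  <field name="destination">
    <grammar src="city.grxml"/>
    <prompt>Which city do you want to travel to?</prompt>
  </field>
  <field name="day">
    <grammar src="day.grxml"/>
    <prompt>On which day?</prompt>
  </field>
</form>
```

With this extension, none of the <field>s need to be modal, so document-scoped
grammars remain available throughout the form.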

5 - External events
In Elvira, it is possible to add an external event to the input queue. There
are several proposals for external event processing in VoiceXML. We propose
the following approach:
1. events always take precedence over the user's input. If the FIA is
    collecting the user's input and an (external) event occurs, the input is
    discarded and the event is processed
2. event processing must be performed more often - at least
    - before the select phase
    - in the collect phase of _every_ form item
    - at the beginning of the process phase
    - before leaving the <form>
This approach also solves the problem of what should be done if an expression
evaluation (typically the evaluation of an attribute value) during the select
or process phase causes a runtime error.
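As an illustration of how such an external event could surface in a document:
the event name external.operator.request below is invented, and only the
<catch> element itself is standard VoiceXML; how the event is placed on the
queue is platform-specific (in Elvira, via its external input queue):

```xml
<form id="order">
  <!-- If an external event arrives while the user is speaking,
       the collected input is discarded and this handler runs;
       under the proposal, the event is noticed no later than the
       collect phase of the next form item -->
  <catch event="external.operator.request">
    <prompt>Transferring you to an operator.</prompt>
    <goto next="#operator"/>
  </catch>
  <field name="item">
    <grammar src="items.grxml"/>
    <prompt>What would you like to order?</prompt>
  </field>
</form>
```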

6 - Telephony
We understand VoiceXML as a language for describing dialog flow. Ideally, it
should have nothing to do with telephony; the phone should be considered just
one of many possible input devices.

The telephony support in VoiceXML is not sufficient anyway; that is why CCXML
is being designed. Dialog termination or transfer to another phone number
should be handled purely by means of external events.

If VoiceXML is considered to be a _general_ language for describing dialog
flow, rather than one tailored for telephony, then a PC keyboard (or better, a
keyboard in general) should also be supported as an input device, and
<grammar mode="dtmf"> should be replaced by <grammar mode="keyboard">. DTMF
grammars would then be just a special case of keyboard grammars.
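A sketch of the proposed generalization; mode="keyboard" is not part of
VoiceXML, and the rules shown are invented for illustration:

```xml
<!-- Today: telephone keypad input only -->
<grammar mode="dtmf" version="1.0" root="digit">
  <rule id="digit">
    <one-of> <item>1</item> <item>2</item> </one-of>
  </rule>
</grammar>

<!-- Proposed: general keyboard input; a DTMF grammar becomes simply
     a keyboard grammar restricted to the characters 0-9, * and # -->
<grammar mode="keyboard" version="1.0" root="answer">
  <rule id="answer">
    <one-of> <item>yes</item> <item>no</item> </one-of>
  </rule>
</grammar>
```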

    We would be very grateful for any relevant comments.

            Best regards

                Pavel Cenek
                member of Elvira development team

  Pavel Cenek   Ph.D. student           email: xcenek@fi.muni.cz
  Laboratory of Speech and Dialogue  homepage: http://www.fi.muni.cz/~xcenek
  Faculty of Informatics, MU Brno    lab page: http://www.fi.muni.cz/lsd

  Currently affiliated with Norut IT           http://www.itek.norut.no/itek/
  Tromsoe, Norway as guest researcher
  Elvira - LSD VoiceXML interpreter         http://gin2.itek.norut.no/elvira/
  Engine for building dialogue applications
Received on Friday, 19 April 2002 03:43:21 UTC
