- From: Jim Barnett <1jhbarnett@gmail.com>
- Date: Wed, 24 Aug 2016 10:28:08 -0400
- To: public-voiceinteraction@w3.org
- Message-ID: <b315f5ae-2570-d2be-1463-d02613542192@gmail.com>
One question that comes to mind is whether there is a relatively static domain model, separate from the more dynamic data model (in a language like SCXML). The data model is used to keep track of the state of the current interaction (the number of items in the shopping cart, etc.), while the domain/task model would represent the basic features of all interactions.

Another question is whether we think of the virtual agents as having scripts that they use to interact with users. (Contact centers often assign scripts to actual human agents to guide their interactions with callers.) Scripts would be like interaction templates. Again thinking in terms of a language like SCXML, scripts could be state machines. If the virtual agent has multiple scripts to choose from, we can either represent that as one big state machine (with branching logic at the top) or we can break it down so that one component would select the script while another would execute it. (A rough SCXML sketch of the first approach follows the quoted message below.)

- Jim

On 8/24/2016 9:02 AM, Deborah Dahl wrote:
> I’d like to talk about putting together a framework for virtual
> assistants along the lines of the Speech Interface Framework
> (https://www.w3.org/TR/voice-intro/ see Fig. 1) or the Multimodal
> Framework (https://www.w3.org/TR/mmi-framework/ see Figs. 2 and 3). It
> would list the components of a virtual agent/community of virtual
> agents and point out where existing and/or future standards can be
> used. I’ll try to draft a diagram that we can talk about. If you have
> any thoughts about this, please post to the list.
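To make the "one big state machine with branching logic at the top" idea concrete, here is a minimal, hypothetical SCXML sketch. The state names (selectScript, orderScript, supportScript), the events (user.intent, item.added, etc.), and the cartItems variable are all invented for illustration, not taken from any existing spec or application. The <datamodel> holds the dynamic interaction state (the cart count), while each compound state plays the role of a script/interaction template:

<?xml version="1.0" encoding="UTF-8"?>
<!-- Hypothetical sketch: one top-level machine whose initial state
     branches to a script; each script is itself a compound state
     (a sub-state-machine). All names below are illustrative. -->
<scxml xmlns="http://www.w3.org/2005/07/scxml" version="1.0"
       datamodel="ecmascript" initial="selectScript">

  <!-- Dynamic data model: tracks the state of the current
       interaction, e.g. the number of items in the cart. -->
  <datamodel>
    <data id="cartItems" expr="0"/>
  </datamodel>

  <!-- Branching logic at the top: route to a script based on
       the recognized user intent carried in the event payload. -->
  <state id="selectScript">
    <transition event="user.intent" cond="_event.data.name == 'order'"
                target="orderScript"/>
    <transition event="user.intent" cond="_event.data.name == 'support'"
                target="supportScript"/>
  </state>

  <!-- One script, modeled as a compound state. -->
  <state id="orderScript" initial="gatherItems">
    <state id="gatherItems">
      <!-- Self-transition: each added item updates the data model. -->
      <transition event="item.added" target="gatherItems">
        <assign location="cartItems" expr="cartItems + 1"/>
      </transition>
      <transition event="checkout" cond="cartItems &gt; 0" target="confirm"/>
    </state>
    <state id="confirm">
      <transition event="confirmed" target="done"/>
    </state>
  </state>

  <!-- A second script, elided here. -->
  <state id="supportScript">
    <transition event="resolved" target="done"/>
  </state>

  <final id="done"/>
</scxml>

The alternative decomposition (one component selects the script, another executes it) would map naturally onto SCXML's <invoke> element: a small selector machine could invoke the chosen script as a separate child SCXML session rather than nesting it as a compound state.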
Received on Wednesday, 24 August 2016 14:28:47 UTC