- From: Schnelle-Walka, Dirk <dirk.schnelle@jvoicexml.org>
- Date: Wed, 7 Oct 2020 15:36:50 +0200
- To: public-voiceinteraction@w3.org
- Message-ID: <f428c1c4-3c43-d07a-868f-ae77db2eb783@jvoicexml.org>
Dear all,
unfortunately, I will have to skip the meeting today. However, I made
some changes to the document that you may want to have a look at:
* section 3: new graphic for the minimal architecture
  o the new component to track context requires an update to the other
    architecture drawings
  o plan to use this as the basis for a detailed subcomponent drawing
    in each component section
  o end with a general overview before the walkthrough (now right
    below the simplified drawing)
* section 3.2.2.1: table for dialog strategies (was a list)
* section 3.2.5: new table for core dialogs
* section 6.1 (new): added a section for abbreviations
Thank you
Dirk
On 06.10.2020 at 23:55, Deborah Dahl wrote:
>
> During our last call we talked about generic capabilities that could
> be included in any Intelligent Personal Assistant platform, and it
> seemed like we could get some ideas from the built-in events of
> VoiceXML (https://www.w3.org/TR/voicexml20/).
>
> I took an action to look them up.
>
> They are:
>
> * help
> * nomatch
> * noinput
> * error
>
> These could be used in any VoiceXML document, and a VoiceXML-compliant
> platform was required to provide some kind of default handler for each
> of them, that is, to not crash or hang if they occurred. It was
> considered poor design on the developer's part to let the platform's
> default handlers actually handle these events (by not providing
> overrides), but the defaults had to be there.
>
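> As a minimal sketch of what overriding these handlers looks like in a
> VoiceXML 2.0 document (the field, prompts, and wording here are
> invented for illustration):
>
>     <?xml version="1.0" encoding="UTF-8"?>
>     <vxml version="2.0" xmlns="http://www.w3.org/2001/vxml">
>       <form>
>         <!-- field using the built-in digits grammar -->
>         <field name="pin" type="digits">
>           <prompt>Please say your PIN.</prompt>
>           <!-- developer-supplied overrides for the built-in events -->
>           <help>Your PIN is the four-digit code on your card.</help>
>           <noinput>I did not hear anything. <reprompt/></noinput>
>           <nomatch>I did not understand that. <reprompt/></nomatch>
>           <catch event="error">Sorry, something went wrong. Goodbye.</catch>
>         </field>
>       </form>
>     </vxml>
>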
> In our architecture, the analogy could be that these capabilities are
> always part of the IPA Service.
>
Received on Wednesday, 7 October 2020 13:37:08 UTC