- From: dirk.schnelle <dirk.schnelle@jvoicexml.org>
- Date: Tue, 04 Feb 2020 05:31:13 +0100
- To: Jim Larson <jim42@larson-tech.com>, public-voiceinteraction@w3.org
- Cc: Deborah Dahl <dahl@conversational-technologies.com>
- Message-ID: <30acc5w144VCXUJ.RZmta@mo4-p00-ob.smtp.rzone.de>
Hey Jim,

thank you for your questions. I always appreciate getting some additional thoughts on this, to learn what we may have missed and what might be added to the document to make things clearer. Unfortunately, there have been only a few comments so far, and I hope that we will get some more. Trying to come up with some answers.

Ad 1

This depends on what you understand by those IPA implementations. Actually, they try to provide the entire chain. However, in a more standardized way, it would be possible to extract some backend portion of that and make it accessible in other environments as well. This also adds more potential to be independent of the vendor of your smartphone, thus easily extending the user base.

Ad 2

IPA providers may have their own naming for the same slots, and they should be able to do so. However, the meaning for the dialog is the same, regardless of what naming the IPA providers are using exactly. The dialog-related intent sets are meant to abstract from that. I will review how to make this clearer in the document.

Ad 3

Good point that we should not forget about roles. In the case of dialogs, this can be done either by the providers or by a dialog author who simply wants to enable certain functionality using third-party components. Maybe this is comparable to providing new skills to Alexa?

Ad 4

This is also something I thought of, but I was not sure if it should be mentioned explicitly, as it may also complicate the overall picture. Maybe this is wanted, e.g., in case you want to make a service more recognizable by using a branded voice. On the other hand, the dialog would then interface with the user with different voices, which might be wanted, as just described. Here, I would be interested in some other opinions.

Ad 5

Almost correct. The dialogs may know which service they want to access and what they want to have filled out, e.g. travel. However, there might be several providers offering that travel service.
In that case, the IPA selection service works as a registry, knowing all the IPA service providers that are suitable to fulfil the request. It may select one depending on the preference of the dialog or the user profile. So, yes, the dialogs express their demand for the remaining slots to be filled out. I should make this clearer in the document.

Hth
Dirk

-------- Original Message --------
From: Jim Larson <jim42@larson-tech.com>
Date: 02.02.20 01:02 (GMT+01:00)
To: public-voiceinteraction@w3.org
Cc: Deborah Dahl <dahl@conversational-technologies.com>
Subject: Some questions about the W3C Intelligent Personal Assistant Architecture

Some questions about the W3C Intelligent Personal Assistant Architecture

1. As I understand, Google Assistant, Alexa, Microsoft Cortana, Bixby, and other IPA providers reside in the blue rectangle to the right of the architecture picture.

2. What is the relationship between Dialog n/Intent Set n (in the orange box) and IPA Provider n (in the blue box)?

3. Who do you envision will build and maintain the dialog (the orange rectangle)? This looks like a giant job. IPA providers supply their own user interface.

4. Will they allow giving up control to the Client (green box) and Dialog (orange box), especially their own TTSs and ASRs?

5. It appears to me that the orange dialog serves two purposes: (a) answer common questions and (b) route remaining questions to the appropriate IPA provider. Do I understand this correctly?

Regards,
Jim Larson
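To make the registry idea from Ad 5 above a bit more concrete, here is a minimal sketch of how an IPA selection service might match a dialog's requested intent against registered providers and honour a preference from the dialog or the user profile. All names here (`IPAProvider`, `SelectionService`, `select`, the provider names) are illustrative assumptions, not part of the W3C document or any actual implementation.

```python
# Hypothetical sketch of the "IPA selection service as a registry" idea.
# Not taken from the W3C architecture document; names are invented.
from dataclasses import dataclass, field


@dataclass
class IPAProvider:
    """A registered IPA service provider and the intents it can fulfil."""
    name: str
    intents: set = field(default_factory=set)


class SelectionService:
    """Registry knowing all providers suitable to fulfil a request."""

    def __init__(self):
        self._providers = []

    def register(self, provider):
        # Providers announce which intents (e.g. travel booking) they handle.
        self._providers.append(provider)

    def select(self, intent, preferred=None):
        """Return a provider able to fulfil the intent, preferring the one
        named by the dialog or the user profile if it is a candidate."""
        candidates = [p for p in self._providers if intent in p.intents]
        if not candidates:
            return None
        if preferred is not None:
            for p in candidates:
                if p.name == preferred:
                    return p
        # Fall back to the first suitable provider.
        return candidates[0]


registry = SelectionService()
registry.register(IPAProvider("provider-a", {"book-travel", "weather"}))
registry.register(IPAProvider("provider-b", {"book-travel"}))

# The dialog expresses its demand ("book-travel"); the registry picks a
# provider, here steered by a user-profile preference.
chosen = registry.select("book-travel", preferred="provider-b")
```

In this sketch the dialog only names the intent it wants fulfilled; which concrete provider answers is decided entirely inside the selection service, which matches the vendor-independence point made under Ad 1.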
Received on Tuesday, 4 February 2020 04:31:27 UTC