- From: Deborah Dahl <Dahl@conversational-Technologies.com>
- Date: Tue, 21 Sep 2021 15:05:24 -0400
- To: <public-voiceinteraction@w3.org>
1. Publicity

   Article in online Speech Technology Magazine:
   https://www.speechtechmag.com/Articles/News/Speech-Technology-News/W3C-Looks-to-Implement-Standards-for-Personal-Assistant-Interoperability-148856.aspx

   See the marketing plan at:
   https://github.com/w3c/voiceinteraction/edit/master/marketing-plan.md

2. W3C Technical Plenary

   See schedule and registration information at:
   https://lists.w3.org/Archives/Public/public-voiceinteraction/2021Sep/0004.html

   a. Voice agent interoperability breakout session proposal:
      https://www.w3.org/wiki/TPAC/2021/SessionIdeas#Voice_Agent_Interoperability

      The exact date and time are TBD, but the session will take place during
      the week of October 18-22 and will be open to the public. During
      tomorrow's call, let's discuss which specific topics we want to focus
      on. Given that this will be a very web-savvy audience, we may want to
      choose topics that can tap into their expertise.

   b. Regular group call, at the same time but with a different URL (TBD) and
      the possibility of outside observers:
      https://www.w3.org/wiki/TPAC/2021/GroupMeetings

3. Interfaces discussion

   Let's look at whether the Multimodal Architecture + EMMA is suitable for
   describing the information in input path 4 of Figure 3:
   https://w3c.github.io/voiceinteraction/voice%20interaction%20drafts/paArchitecture-1-2.htm#walkthrough

   Input path 4 description: "The audio is sent along with all augmenting
   data to the IPA Service."

   EMMA spec: https://w3c.github.io/emma/emma2_0/emma_2_0_editor_draft.html
   MMI Architecture spec: https://www.w3.org/TR/mmi-arch/

   I'll send an example shortly.
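Ahead of that example, here is a minimal sketch of what an EMMA annotation for input path 4 might look like, generated with Python's standard xml.etree module. The emma:medium, emma:mode, emma:signal, emma:tokens, and emma:confidence attributes come from the EMMA specification; the specific values and the clientContext payload standing in for "augmenting data" are illustrative assumptions, not taken from the draft architecture document.

```python
# Sketch of an EMMA document describing an audio input plus augmenting
# data, as might be sent along input path 4 to the IPA Service.
# Attribute values and the clientContext element are hypothetical.
import xml.etree.ElementTree as ET

EMMA_NS = "http://www.w3.org/2003/04/emma"
ET.register_namespace("emma", EMMA_NS)

def emma_audio_input(signal_uri: str, tokens: str,
                     confidence: float) -> ET.Element:
    """Build a minimal emma:emma document with one interpretation."""
    root = ET.Element(f"{{{EMMA_NS}}}emma", {"version": "2.0"})
    interp = ET.SubElement(root, f"{{{EMMA_NS}}}interpretation", {
        f"{{{EMMA_NS}}}medium": "acoustic",
        f"{{{EMMA_NS}}}mode": "voice",
        f"{{{EMMA_NS}}}signal": signal_uri,      # URI of the audio itself
        f"{{{EMMA_NS}}}tokens": tokens,          # recognized words, if any
        f"{{{EMMA_NS}}}confidence": str(confidence),
    })
    # Augmenting data: a hypothetical application-specific payload.
    ctx = ET.SubElement(interp, "clientContext")
    ET.SubElement(ctx, "location").text = "kitchen"
    return root

doc = emma_audio_input("http://example.com/audio/utt42.wav",
                       "turn on the light", 0.87)
print(ET.tostring(doc, encoding="unicode"))
```

The point of the sketch is that the audio reference and the augmenting data travel together in one interpretation, which is the property input path 4 seems to require; whether application payloads belong inside emma:interpretation or in a sibling element is exactly the kind of question to settle against the EMMA 2.0 draft.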
Received on Tuesday, 21 September 2021 19:05:58 UTC