- From: Deborah Dahl <dahl@conversational-technologies.com>
- Date: Wed, 1 Aug 2007 10:25:42 -0400
- To: <www-multimodal@w3.org>
Summary of the W3C Multimodal Interaction F2F meeting, June 18-20, 2007

Thanks to H-care for hosting the MMI Working Group face-to-face meeting June 18-20, 2007 in Roncade, in the beautiful Veneto region of Italy, and for the activity in Venice on Wednesday afternoon and evening. Thanks also for an informative and interesting visit to H-care and related businesses on Thursday afternoon. Here is a summary of our discussions.

Multimodal Architecture:[1]
The Multimodal Architecture describes a loosely coupled architecture for multimodal user interfaces, which allows for co-resident and distributed implementations and focuses on the role of markup and scripting and the use of well-defined interfaces between its constituents. We reviewed the current version of the MMI Architecture and planned for the next version. The next Working Draft will include a definition of the format schema for life cycle events and more details on protocols. We expect the next Working Draft to be published in October.

Multimodal Authoring:
We are developing authoring examples, contributed by WG members, that use SCXML, EMMA, and XHTML, and we are also addressing higher-level issues such as focus synchronization. We plan to have a draft ready to review at the next face-to-face meeting in November. This is not expected to be a Recommendation-track document but a Working Group Note, or we may fold it into an informative appendix of the Multimodal Architecture and Interfaces specification.

EMMA:[2]
EMMA is an XML markup language for containing and annotating the interpretation of user input. We reviewed comments on the second Last Call Working Draft of EMMA and finalized our responses. Since the face-to-face we have sent responses to the commenters and have received agreement with our responses from nearly all of them. We are now targeting August 15 for Candidate Recommendation and November 15 for Proposed Recommendation.

InkML:[3]
The Ink Markup Language serves as the data format for representing ink entered with an electronic pen or stylus. We would like to get more participants involved in this work by recruiting additional members from companies in the ink marketplace, perhaps by holding a workshop. We will also start planning work on defining an ink modality.

Joint MMI and Voice Browser Meeting:
We had a joint meeting with the Voice Browser Working Group, where we heard presentations from several outside speakers. Umberto Basso, CEO of H-care, presented their Human Digital Assistant technology, which includes 3D real-time rendering and text-to-speech. Prof. Piero Cosi discussed his work in 3D Facial Animation, and Prof. Giuseppe Riccardi discussed his work in third-generation conversational interfaces.

Our next face-to-face will be held in Boston in conjunction with the W3C Technical Plenary meeting, Nov 5-9, 2007. The Multimodal Interaction F2F meeting will be held on the Thursday and Friday of that week, with a joint meeting with the Voice Browser WG on Tuesday afternoon.

Best regards,
Debbie Dahl, MMIWG Chair

[1] Architecture: http://www.w3.org/TR/mmi-arch/
[2] EMMA: http://www.w3.org/TR/emma/
[3] InkML: http://www.w3.org/TR/InkML/
Received on Wednesday, 1 August 2007 14:26:18 UTC