- From: Deborah Dahl <dahl@conversational-technologies.com>
- Date: Fri, 2 Jul 2004 06:55:22 -0400
- To: <www-multimodal@w3.org>
The W3C Multimodal Interaction (MMI) Working Group [1] held a face-to-face meeting in Detroit, Michigan, June 10-11, 2004, hosted by EDS and OnStar. There were 24 attendees from 21 organizations. This note summarizes the results of the meeting.

The MMI meeting was co-located with a meeting of the Voice Browser Working Group [2] (see [3] for a summary of the Voice Browser meeting). We took advantage of this to engage in several discussions about the evolving Voice Browser V3 architecture and its relationship to multimodal architectures.

We also spent some time reviewing the MMI use case document [4]. Because we have made considerable progress on the MMI framework [5] and multimodal integration [6] since the original use cases were developed, it was useful to revisit the use cases in order to understand their implications for MMI architectures. We plan to publish an MMI architecture document that will include the results of the updated use case discussion as well as a description of the MMI architectures. This document is targeted for publication in 4Q 2004.

Other ongoing group activities were reviewed, including:

1. EMMA (Extensible MultiModal Annotation) [7], for representing and annotating user input. We are aiming to publish the next Working Draft at the end of July. We anticipate that the Working Draft after that one will be a Last Call, currently planned for December 2004.

2. InkML (Ink Markup Language) [8], for representing digital ink. An update to the Working Draft is planned for late July, with the Last Call currently planned for December 2004.

3. Work on making system and environment information available to multimodal applications. Because the system and environment work is closely related to Device Independence [9], the MMI group is working closely with the Device Independence Working Group in this area. The group is targeting July 2004 for a Working Draft based on this work.

4. Approaches to handling composite input, that is, coordinated input from multiple modalities, such as speech combined with a pointing gesture. The group plans to publish a Note on the results of this study in 4Q 2004. Some of the insights gained during the study will also be considered for incorporation into the EMMA work.

5. An ongoing study of approaches to interaction management. We plan to publish a Note on the results of this study in 4Q 2004.

IBM and V-Enable also presented demonstrations of some of their multimodal efforts.

Many thanks to EDS and OnStar for providing excellent meeting facilities and team-building activities. The next face-to-face meeting will take place during the week of September 20, 2004, in Hawthorne, New York, hosted by IBM.

References:

[1] Multimodal Interaction Working Group: http://www.w3.org/2002/mmi/
[2] Voice Browser Working Group: http://www.w3.org/Voice/
[3] Voice Browser face-to-face summary: http://lists.w3.org/Archives/Public/www-voice/2004AprJun/0079.html
[4] MMI Use Cases: http://www.w3.org/TR/mmi-use-cases/
[5] MMI Framework: http://www.w3.org/TR/mmi-framework/
[6] Multimodal Integration: http://www.w3.org/TR/modality-interface/
[7] EMMA: http://www.w3.org/TR/emma/
[8] InkML: http://www.w3.org/TR/InkML/
[9] Device Independence: http://www.w3.org/2001/di/

Best regards,

Debbie Dahl, MMI Working Group Chair
Received on Friday, 2 July 2004 06:56:03 UTC