- From: Deborah Dahl <dahl@conversational-technologies.com>
- Date: Mon, 07 Apr 2003 13:21:14 -0400
- To: www-multimodal@w3.org
- Cc: W3C Multimodal group <w3c-mmi-wg@w3.org>
This is a summary of the most recent face-to-face meeting of the W3C Multimodal Interaction Working Group, held March 6-7, 2003 in Cambridge, MA, USA, hosted by the W3C in conjunction with the Third Technical Plenary Meeting of the W3C [1]. This was the fifth face-to-face meeting of the Multimodal Interaction Working Group. There were 47 attendees from 30 organizations.

We took advantage of the presence of many other W3C Working Groups at the Technical Plenary to hold joint meetings with:

- Device Independence [2]
- Scalable Vector Graphics [3]
- Timed Text [4]
- DOM (overview of DOM-3) [5]

These meetings largely focused on understanding the other groups' activities and how those activities relate to the MMI group's work.

Ten demonstrations of multimodal applications were also presented:

- Gerry McCobb (IBM): XHTML+Voice on the PC and a small device, and a preview of a multimodal authoring tool
- Michael Johnston (AT&T): Multimodal Access To City Help
- Giovanni Seni (Motorola): form-filling on a WebPad computer using grammar-assisted handwriting recognition
- Kuansan Wang (Microsoft): MapPoint demo
- Jean-Daniel Fekete (INRIA): experimental system for managing multimodality
- Sunil Kumar (V-Enable): multimodality on thin clients
- Roberto Pieraccini (SpeechWorks): multimodal conversational system for the Ford Concept Car
- Tsuneo Nitta (Toyohashi University of Technology): rapid prototyping, MMI generator, and the same service accessed by mobile phone, PDA, and kiosk
- Jurgen Sienel (Alcatel): multimodal browser
- Michael Bodell (TellMe)

The demonstration session was opened up to other attendees from the Technical Plenary meeting and was well attended by members of other groups.

In addition to the joint meetings, a number of topics pertaining to internal group activities were addressed:

1. EMMA: We continued to move forward on the EMMA specification [6] by reviewing change requests. The first EMMA Working Draft is expected to be published at the end of May.

2. Ink: We reviewed recent work on the ink specification [7], which is also expected to be published at the end of May.

3. Input/Output Objects: We worked on refining the definition of input/output objects in breakout groups. The groups focused both on modalities such as speech and ink and on functions such as capture, recognition, classification, and semantic extraction.

4. Interaction Management: We raised issues and refined ideas on interaction management in a multimodal context.

5. System, Session and Environment Objects: We reviewed recent work within the group on system, session and environment objects.

Note that the work on input/output objects, interaction management, and system, session and environment objects is expected to be incorporated into the first Working Draft of the Framework Specification, which we currently plan to publish at the end of June.

In addition to the group meeting, there was also a panel session on multimodality during the Technical Plenary Meeting on March 5 [8].

The next face-to-face meeting will take place June 2-4, 2003, in Redmond, Washington, hosted by Microsoft.
References:
[1] Technical Plenary page: http://www.w3.org/2002/10/allgroupoverview/
[2] Device Independence: http://www.w3.org/2001/di/
[3] Scalable Vector Graphics: http://www.w3.org/Graphics/SVG/Overview.htm8
[4] Timed Text: http://www.w3.org/AudioVideo/TT/
[5] DOM: http://www.w3.org/DOM/
[6] EMMA requirements: http://www.w3.org/TR/EMMAreqs/
[7] Ink requirements: http://www.w3.org/TR/inkreqs/
[8] Technical Plenary panel (session 2): http://www.w3.org/2003/03/TechPlenAgenda.html

Deborah Dahl, MMI Working Group Chair