- From: Deborah Dahl <dahl@conversational-technologies.com>
- Date: Mon, 11 Dec 2006 16:03:18 -0500
- To: <www-multimodal@w3.org>
The W3C Multimodal Interaction Working Group is pleased to announce the publication of the third Working Draft of Multimodal Architecture and Interfaces [1].

In multimodal interaction, users choose the way, or "mode," of access that suits their current needs. With this framework, developers can provide user interfaces that allow multiple ways of interacting with the Web, with output for each mode, including displays, tactile mechanisms, speech, and audio.

The Multimodal Architecture and Interfaces document describes a loosely coupled architecture for multimodal user interfaces that allows for both co-resident and distributed implementations. It focuses on the role of markup and scripting and on the use of well-defined interfaces between its constituents.

The main difference from the second draft is a more detailed specification of the events sent between the Runtime Framework and the Modality Components. Future versions of this document will further refine the event definitions, while related documents will address markup for multimodal applications. In particular, those related documents will address markup for the Interaction Manager, either adopting and adapting existing languages or defining new ones for the purpose.

Comments on this specification are welcome and should have a subject line starting with the prefix '[ARCH]'. Please send them to <www-multimodal@w3.org>, the public email list for issues related to multimodal interaction.

Best Regards,

Debbie Dahl
Chair, Multimodal Interaction Working Group

[1] http://www.w3.org/TR/2006/WD-mmi-arch-20061211/
Received on Monday, 11 December 2006 21:03:42 UTC