
Multimodal Architecture Last Call Working Draft

From: Deborah Dahl <dahl@conversational-technologies.com>
Date: Tue, 15 Feb 2011 14:15:56 -0500
To: <public-web-and-tv@w3.org>
Message-ID: <00cf01cbcd44$c641c3a0$52c54ae0$@conversational-technologies.com>

I thought that some of you might be interested in the recent publication of
the "Multimodal Architecture and Interfaces" specification
(http://www.w3.org/TR/mmi-arch/), which was published by the Multimodal
Interaction Working Group a few weeks ago. Kazuyuki may have mentioned it at
the workshop. I think it's very relevant to anything having to do with TV
and the different ways of interacting with it, such as voice- or
gesture-based interaction, as well as biometrics.

The specification describes a loosely coupled architecture for multimodal
user interfaces, which allows for co-resident and distributed
implementations, and focuses on the role of markup and scripting, and the
use of well-defined interfaces between its constituents. It takes a big step
toward making multimodal components interoperable by specifying a common
means of communication between different modalities. It also very naturally
supports distributed applications, where different types of modalities are
processed in different places, whether on one or more local devices, in the
cloud, or on other servers. Finally, we believe it will also provide a
good basis for a style of interaction called 'nomadic interfaces,' where the
user interface can move from device to device as the user moves around.
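As a concrete illustration of that common means of communication, the
specification defines XML "life-cycle events" that an Interaction Manager
exchanges with modality components. A sketch of a StartRequest asking a voice
component to begin running a dialog (the Source/Target identifiers and the
dialog URL here are made-up placeholders, not values from the spec):

```xml
<mmi:mmi xmlns:mmi="http://www.w3.org/2008/04/mmi-arch" version="1.0">
  <!-- Interaction Manager ("im-1") asks the voice component ("voice-1")
       to start, within interaction context "ctx-1" -->
  <mmi:StartRequest Source="im-1" Target="voice-1"
                    Context="ctx-1" RequestID="req-1">
    <!-- Point the component at the content it should run -->
    <mmi:ContentURL href="http://example.com/dialog.vxml"/>
  </mmi:StartRequest>
</mmi:mmi>
```

The component answers with a StartResponse that echoes the same Context and
RequestID and reports a Status of "success" or "failure", which is what lets
components from different vendors interoperate without knowing each other's
internals.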

We're very interested in getting comments on the specification. Comments
can be sent to the MMI public mailing list at www-multimodal@w3.org. 

Best regards,
Debbie Dahl
Multimodal Interaction Working Group Chair
Received on Tuesday, 15 February 2011 19:16:33 UTC
