Multimodal Architecture Last Call Working Draft

I thought that some of you might be interested in the recent publication of
the "Multimodal Architecture and Interfaces" specification
(http://www.w3.org/TR/mmi-arch/), which was published by the Multimodal
Interaction Working Group a few weeks ago. Kazuyuki may have mentioned it at
the workshop. I think it's very relevant to anything having to do with the
TV and different ways of interacting with it, such as voice or gesture-based
interaction as well as biometrics.

The specification describes a loosely coupled architecture for multimodal
user interfaces that allows for both co-resident and distributed
implementations. It focuses on the role of markup and scripting and on the
use of well-defined interfaces between its constituents. It takes a big
step toward making multimodal components interoperable by specifying a
common means of communication between different modalities. It also very
naturally supports distributed applications, where different types of
modalities are processed in different places, whether on one or more local
devices, in the cloud, or on other servers. Finally, we believe it will
also provide a good basis for a style of interaction called 'nomadic
interfaces,' where the user interface can move from device to device as
the user moves around.
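
To give a flavor of that common communication mechanism, here is a rough
sketch of one of the specification's life-cycle events, a StartRequest
sent from an interaction manager to a modality component. The context,
source, target, and request identifiers below are illustrative
placeholders, and the specification itself is the authoritative reference
for the exact elements and attributes:

  <mmi:mmi xmlns:mmi="http://www.w3.org/2008/04/mmi-arch" version="1.0">
    <!-- all identifier values here are made up for illustration -->
    <mmi:StartRequest Context="ctx-1" Source="IM-1"
                      Target="VoiceModality-1" RequestID="req-1">
      <!-- points the modality component at the content it should run -->
      <mmi:ContentURL href="http://example.com/dialog.vxml"/>
    </mmi:StartRequest>
  </mmi:mmi>

The modality component answers with a matching StartResponse carrying the
same Context and RequestID, which is what lets an interaction manager
coordinate several modalities, local or remote, through one uniform event
vocabulary.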

We're very interested in getting comments on the specification. Comments
can be sent to the MMI public mailing list at www-multimodal@w3.org. 

Best regards,
Debbie Dahl
Multimodal Interaction Working Group Chair
