transition announcement: "Registration & Discovery of Multimodal Modality Components" as First Public WG Note

From: Kazuyuki Ashimura <ashimura@w3.org>
Date: Fri, 06 Jul 2012 17:18:08 +0900
Message-ID: <4FF69F40.6070804@w3.org>
To: www-multimodal@w3.org

I am pleased to announce that the Multimodal Interaction Working Group
has published the First Public Working Group Note of "Registration &
Discovery of Multimodal Modality Components in Multimodal Systems:
Use Cases and Requirements" as follows.

Document title
---------------

Registration & Discovery of Multimodal Modality Components in
Multimodal Systems: Use Cases and Requirements

Document URI
-------------

This version:
   http://www.w3.org/TR/2012/NOTE-mmi-discovery-20120705/

Latest published version:
   http://www.w3.org/TR/mmi-discovery/

Previous version:
   none

Instructions for providing feedback
------------------------------------

Comments on this specification are welcome and should have a subject
line starting with the prefix '[dis]'.  Please send them to
<www-multimodal@w3.org>, the public email list for Multimodal
Interaction issues.


Note:
-----

As the W3C top page news [1] describes, the background and objective
of this WG Note is as follows:

- The users of mobile phones, personal computers, tablets and other
   electronic devices are increasingly interacting with their devices
   in a variety of ways: touch screen, voice, stylus, keypads,
   etc.

- Today, users, vendors, operators and broadcasters can produce and
   use many different kinds of media and devices that are capable of
   supporting multiple modes of input or output.  Tools for authoring,
   editing and distributing media are well documented for application
   developers, but there is a lack of powerful tools or practices for
   richer integration and semantic synchronization of all these
   media.

- To the best of our knowledge, there is no standardized way to build
   a Web Application that can dynamically combine and control
   discovered modalities by querying a registry based on
   user-experience data and modality states.  This document describes
   design requirements that the Multimodal Architecture and Interfaces
   specification [2] needs to cover in order to address this
   problem.
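
To give a feel for the kind of registry query the last point refers
to, here is a minimal, purely illustrative sketch (not part of the
Note or the MMI Architecture specification; all class and field names
are hypothetical) of an application discovering modality components by
supported mode and current state:

```python
# Hypothetical sketch: an in-memory registry that a multimodal
# application might query to discover modality components by
# capability and current state.  Names are illustrative only.

from dataclasses import dataclass, field


@dataclass
class ModalityComponent:
    name: str
    modes: set            # e.g. {"voice", "touch"}
    state: str = "idle"   # e.g. "idle", "busy", "unavailable"


@dataclass
class ModalityRegistry:
    components: list = field(default_factory=list)

    def register(self, component: ModalityComponent) -> None:
        # A component announces itself to the registry.
        self.components.append(component)

    def query(self, mode: str, state: str = "idle") -> list:
        # Return components that support `mode` and are in `state`.
        return [c for c in self.components
                if mode in c.modes and c.state == state]


registry = ModalityRegistry()
registry.register(ModalityComponent("asr-engine", {"voice"}))
registry.register(ModalityComponent("touch-panel", {"touch"}, state="busy"))

print([c.name for c in registry.query("voice")])  # ['asr-engine']
print([c.name for c in registry.query("touch")])  # [] (component is busy)
```

A standardized mechanism would of course have to cover much more
(dynamic registration, state change notification, user-experience
data), which is exactly what the use cases and requirements in the
Note are meant to capture.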

[1] http://www.w3.org/News/2012#entry-9491
[2] http://www.w3.org/TR/2012/CR-mmi-arch-20120112/

Best regards,

Kazuyuki Ashimura (on behalf of the Multimodal Interaction WG chair)
Multimodal Interaction Activity Lead

-- 
Kaz Ashimura, W3C Staff Contact for Web&TV, MMI and Voice
Tel: +81 466 49 1170
Received on Friday, 6 July 2012 08:20:25 GMT