Summary of the December 2004 W3C Multimodal Interaction Working Group face to face meeting

The W3C Multimodal Interaction (MMI) Working Group [1] held a face to
face meeting in Turin, Italy, in December 2004, hosted by Loquendo,
following a meeting of the Voice Browser Working Group [2].  Thanks to
Loquendo for their excellent logistical arrangements and support.

There were 22 attendees from 20 organizations. This note summarizes
the results of the meeting.

The MMI meeting focused on MMI architecture and authoring
approaches. 

In the architecture area we continued work on our internal
architecture document, with the goal of holding a final internal review
at the next face to face meeting. Publication is expected to
follow shortly thereafter. The MMI architecture provides a general means
for modality-specific components to communicate with each
other, plus basic infrastructure for application control and platform
services.

In the authoring area we discussed possible authoring approaches. Our
plan is to review our existing requirements, adding new
requirements as needed, and then to review implemented proposals against the
updated requirements. Current candidates for review include SALT, X+V,
and the Harel State Table-based language being developed in the Voice
Browser Working Group.

We are currently planning to publish the MMI architecture document in
March 2005, and to publish a Working Draft on authoring toward the
end of 2005.

Opera, HP, and Loquendo also demonstrated some of their work in
speech and multimodal technology.

The next Multimodal Interaction Working Group meeting will be held in
Boston, Massachusetts on February 28-March 1, 2005, hosted by the W3C
in conjunction with the annual W3C Technical Plenary Meeting.

Best regards,

Debbie Dahl, MMI WG Chair

References:

[1] Multimodal Interaction Working Group:
    http://www.w3.org/2002/mmi/
[2] Voice Browser: http://www.w3.org/Voice/

Received on Wednesday, 19 January 2005 14:01:30 UTC