Summary of the Multimodal Interaction Working Group's Face to Face Meeting, June 20-21

Here's a summary of the W3C Multimodal Interaction Working Group's
second face to face meeting, which was held June 20-21. Please
feel free to send any questions or comments to this list.

Debbie Dahl, Unisys, Chair

The W3C Multimodal Interaction Working Group had its second face to
face meeting in Chelmsford, MA, on June 20 and 21, 2002, hosted by
Snowshore Networks. There were 39 attendees from 31 organizations.

The main tasks on the agenda were to: 

1. complete a second pass over the group's requirements and agree on a
   schedule for publication

2. take the next steps toward a multimodal specification

3. take the next steps toward a natural language specification

4. understand considerations that will help the group decide on the
   best way to accommodate ink input in multimodal applications

5. get an update on other related activities, specifically,

   a. ETSI's DSR activity (http://portal.etsi.org/stq/kta/DSR/dsr.asp),
   presented by Tasos Anastasakos (Motorola)

   b. SALT (www.saltforum.org), presented by Stephen Potter and
   Kuansan Wang (Microsoft)

   c. the DARPA Communicator project (http://fofoca.mitre.org/),
   presented by Roberto Pieraccini (SpeechWorks)

   d. various industrial activities in multimodal interaction

Requirements:

We completed a review of the current working version of our
requirements document and expect that we will be able to officially
publish it around September 1. Stephane Maes of IBM is taking the lead
on preparing the requirements document. The requirements document is
based on the original MMI requirements document published by the Voice
Browser Group (http://www.w3.org/TR/multimodal-reqs) with significant
updates developed in MMI group discussions. It includes sections on
general topics, input, output, and architecture. It will also include
an in-depth analysis of several important use cases (see below under
"multimodal specification") as well as a glossary, which is being
assembled by Jim Larson (Intel).

Multimodal specification: 

The group decided that the top priority for the next steps toward a
multimodal specification would be to get an overall consensus on the
architecture and events that need to be supported to handle our use
cases. We decided to divide into groups and do an in-depth analysis of
four important use cases in order to tease out the most significant events.
The groups were:

1. name dialing, led by Dave Raggett of W3C/OpenWave
2. form-filling (specifically, airline reservations), led by Giovanni Seni
   of Motorola
3. multi-device (specifically, dating), led by Scott McGlashen of PipeBeach
4. driving directions, led by Emily Candell of Comverse

We began the analysis during the face to face meeting, and are
continuing it during telecons and via email. Each analysis will
include both a general description of the use case as well as a
definition of the events that are required to support it. We expect to
complete the use case analysis by the end of July and add the
discussion to the published requirements document. The events that
emerge from the use case analysis will form the basis of the events
discussion in the MMI specification.

Jim Larson (Intel) and T.V. Raman (IBM) also prepared an overall
architectural/framework document that we'll be using as a basis for
further architectural discussions.

Once we have achieved consensus on the overall framework and events to
be supported, more specific issues of input, output, and container
documents will be addressed. Our goal is to publish the first Working
Draft in December of 2002.

Natural Language specification:

A subgroup was formed to work on a specification for natural language
utterances, based on earlier work done by the Voice Browser Working
Group (http://www.w3.org/TR/nl-spec/). Wu Chao of Avaya and Roberto
Pieraccini of SpeechWorks will be taking over editing this document
from Debbie Dahl (Unisys). We set a goal of publishing the second
Working Draft (the first since this work was taken over by the MMI
group) at the end of September.  We are also seeking a new name for
the NL specification, now called NLSML (Natural Language Semantic
Markup Language). We want to avoid the use of the term "semantics"
since "semantics" has different meanings in different contexts and may
imply a deeper analysis than the spec is intended to support.
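
As a point of reference, here is a rough sketch of the kind of markup
the current NLSML draft defines for an interpreted utterance. The
element names follow the earlier Voice Browser draft cited above; the
grammar URI and the application-specific fields inside <instance> are
invented purely for illustration.

   <result grammar="http://example.com/flight-query-grammar">
     <interpretation>
       <instance>
         <airline>United</airline>
         <destination>Boston</destination>
       </instance>
       <input mode="speech">fly united to boston</input>
     </interpretation>
   </result>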

Ink: 

Giovanni Seni of Motorola presented an overview of existing ink
representations and discussed why no single existing ink representation
is sufficient to support our use cases. In the next few weeks, the
group will discuss how we can support these use cases, some of which
are very compelling. Some options are to develop an ink markup
specification within the MMI group, either now or at some future
point, or to try to work with another organization to modify an
existing specification.
