Summary of Multi-modal Interaction Face-to-Face Meeting, 28 Feb. - 1 Mar. 2002

Multi-modal mailing list subscribers,

As you know, the W3C has recently started an activity and working group to
define standards for multi-modal interaction. In order to make information
about the working group's progress and activities generally available, the
group will periodically post relevant information to this list, including
summaries of our face-to-face meetings, to supplement the information that's
available on the web page (http://www.w3.org/mmi). This message includes a
summary of the first face-to-face meeting. I hope you will find this
information useful; please don't hesitate to follow up if you have any
specific questions or comments. This includes comments about the multi-modal
activity in general as well as comments on this meeting summary. Suggestions
for use cases that involve multi-modal interaction are especially welcome,
since compiling use cases is the group's current focus. 

Deborah A. Dahl, Unisys, Working Group Chair


Summary of February 28-March 1 Face-to-Face Meeting

The W3C Multi-modal Interaction Working Group held its first face-to-face
meeting on February 28 and March 1, 2002, during the W3C Technical Plenary in
Cannes, France.

Forty-six people from 29 organizations attended. The goals of the meeting
were to:

1. understand the group's charter and the W3C process;
2. start identifying the areas we should focus on by compiling use cases
and potential requirements for multi-modal standards;
3. begin learning about current work in multi-modal interaction by hearing
presentations on multi-modal industry efforts, including SALT
(http://www.saltforum.org) and XHTML+Voice
(http://www.w3.org/TR/xhtml+voice/);
4. begin learning about related activities, both inside and outside the W3C,
that will be relevant to our work, including SMIL
(http://www.w3.org/AudioVideo/), XForms (http://www.w3.org/MarkUp/Forms/),
ink, and Voice Browser (http://www.w3.org/Voice/); and
5. plan for future activities.

The group will begin its work by compiling and prioritizing use cases for
multi-modal applications, then analyzing them to determine the requirements
needed to support our highest-priority use cases. We will also compile a
glossary of relevant terminology. Subgroups were formed to look in detail at
issues regarding events and ink. In parallel, we will pursue an educational
program to continue learning about related activities, with experts in
specific related areas giving presentations during teleconferences.
Suggestions so far include: related IETF standards, the Natural Language
Semantics Markup Language (NLSML), identifying device capabilities (CC/PP),
distributed speech recognition (DSR), and 3GPP.
