Multimodal Interaction Working Group rechartered

We are pleased to announce that the W3C Multimodal Interaction Working
Group has been rechartered to continue its work on developing
standards to support multimodal interaction on the World Wide Web.
The mission of the Multimodal Interaction Working Group, part of the 
Multimodal Interaction Activity, is to develop open standards that
enable the following vision:

 * Extending the Web to allow multiple modes of interaction:
     GUI, Speech, Vision, Pen, Gestures, Haptic interfaces, ...
 * Anyone, Anywhere, Any device, Any time: accessible through the
     user's preferred modes of interaction, with services that adapt
     to the device, user, and environmental conditions

The primary goal of the Multimodal Interaction Working Group is to
develop W3C Recommendations that enable multimodal interaction with
the web, including applications based on mobile phones and other
devices with limited resources.

The standards being developed by this Working Group will enable users
to provide input via speech, handwriting, or keystrokes, with output
presented via displays, pre-recorded and synthetic speech, audio, and
tactile mechanisms such as mobile phone vibrators and Braille
strips. Application developers will be able to provide an effective
user interface for whichever modes the user selects.

More details about the group and its work can be found in the charter
and on the group's web page.

Membership in the Multimodal Interaction Working Group is open to
representatives of W3C Member companies. For further information about
joining the Working Group, please contact the Chair,
Deborah Dahl (Invited Expert), at <>,
the Team Contact, Kazuyuki Ashimura, at <>, or your company's
W3C Advisory Committee Representative.

Best regards,

 Deborah Dahl, MMI WG Chair
 Kazuyuki Ashimura, Team Contact

Received on Tuesday, 17 April 2007 19:51:04 UTC