[multimodal interaction] questionnaire on multimodal authoring topics

Dear public-ddwg@w3.org subscribers,

The W3C Multimodal Interaction Working Group [1] seeks to extend the
Web to allow users to dynamically select the most appropriate mode of
interaction for their current needs, including any disabilities they
may have, while enabling developers to provide an effective user
interface for whichever modes the user selects.  Depending upon the
device, users will be able to provide input via speech, handwriting,
and keystrokes, with output presented via displays, pre-recorded and
synthetic speech, audio, and tactile mechanisms such as mobile phone
vibrators and Braille strips.

The Multimodal Architecture [2] describes a loosely coupled
architecture for multimodal user interfaces, which allows for
co-resident and distributed implementations and focuses on the role
of markup and scripting and on the use of well-defined interfaces
between its constituents.
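
To give a rough sense of what such an interface looks like, the
architecture's constituents communicate by exchanging standard
life-cycle events. The sketch below is not taken verbatim from [2];
the element names, attribute names, namespace URI, and identifiers
are assumptions modeled on the event vocabulary described there, and
show how an Interaction Manager might ask a voice modality component
to start running a dialog:

  <mmi:mmi version="1.0"
           xmlns:mmi="http://www.example.org/mmi-arch">
    <!-- hypothetical namespace URI above; the normative one is
         defined in [2] -->
    <!-- the Interaction Manager asks a voice modality component
         to start running a dialog -->
    <mmi:StartRequest Source="im-1"
                      Target="voice-mc-1"
                      Context="context-1"
                      RequestID="request-1">
      <mmi:ContentURL href="dialog.vxml"/>
    </mmi:StartRequest>
  </mmi:mmi>

The modality component would answer with a corresponding start
response event and later report completion, so the Interaction
Manager never depends on how the component is implemented
internally.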

To make the Multimodal Architecture more useful in current and
emerging markets, the Multimodal Interaction Working Group is
identifying and prioritizing topics to include in future multimodal
standards. We solicit your input so that we can concentrate on the
topics that will lead to standards providing the greatest benefit to
the industry. In particular, we seek your answers to the following
questions:

1. What client device(s) should multimodal application specification
   languages target?

2. For the applications you wish to develop, what input and output methods
   should be supported by multimodal application specification languages?

3. What existing languages should continue to be supported for
   developing multimodal applications?

4. Should new languages be developed to author multimodal applications?

5. If you already have multimodal applications or solutions, how
   important is the ability to plug in and reuse the modality
   components you have already developed?

We have put together an online questionnaire on these issues, which
you can find at:
http://www.w3.org/2002/09/wbs/1/MmiAuthoringQuestions/

We would be extremely grateful for your input. Responses are welcome
at any time, but for the greatest impact it would be most helpful to
receive your answers by June 15.

[1] Multimodal Interaction Working Group: http://www.w3.org/2002/mmi/
[2] Multimodal Architecture: http://www.w3.org/TR/mmi-arch/

Sincerely,

For Debbie Dahl, Chair, W3C Multimodal Interaction Working Group;
Kazuyuki Ashimura, W3C Multimodal Interaction Activity Lead

-- 
Kazuyuki Ashimura / W3C MMI & Voice Activity Lead
mailto: ashimura@w3.org
voice: +81.466.49.1170 / fax: +81.466.49.1171

Received on Tuesday, 29 May 2007 07:50:01 UTC