
SVG as a multimodal modality component

From: Deborah Dahl <dahl@conversational-technologies.com>
Date: Tue, 4 Nov 2008 11:31:54 -0500
To: <www-svg@w3.org>
Cc: <www-multimodal@w3.org>
Message-ID: <008d01c93e9a$d9894ef0$6801a8c0@chimaera>

Dear SVG,
During the TPAC, the Multimodal Interaction Working Group and the SVG
Working Group had a joint meeting to discuss possible points of
collaboration. One of the most interesting ideas was to explore how SVG
could be used as a Modality Component in the Multimodal Architecture [1].
A Modality Component encapsulates modality-specific capabilities (for
example, speech recognition, graphical display, or handwriting
recognition) in a multimodal application. It exchanges information with
an overall controller, the Interaction Manager, through a well-defined
set of asynchronous events, the life cycle events.
In the most recent version of the Multimodal Architecture, we've included
a set of rules and guidelines for defining Modality Components, using
face recognition as an example (Appendix F). It seemed worth looking at
how SVG could function as a Modality Component in an MMI application.
This could lead to interesting applications such as voice control of SVG
graphics, using commands like "make the text bigger" or "pause the
animation", or combined voice and pointing control of graphics, with
commands like "put a red square here" accompanied by a mouse click.
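To make the idea concrete, here is a minimal sketch, in Python, of how an
SVG Modality Component might handle a life cycle event from an Interaction
Manager. The event names (StartRequest, StartResponse) come from the MMI
Architecture; everything else here (the class names, the event fields, and
the "pause the animation" command handling) is purely illustrative and not
part of either specification.

```python
# Hypothetical sketch of an SVG Modality Component receiving an MMI life
# cycle event. Only the event names StartRequest/StartResponse follow the
# MMI Architecture; the classes and command strings are illustrative.

from dataclasses import dataclass, field


@dataclass
class LifeCycleEvent:
    name: str                # e.g. "StartRequest", "StartResponse"
    context: str             # identifies the interaction context
    source: str              # sender (Interaction Manager or component)
    target: str              # receiver
    data: dict = field(default_factory=dict)


class SvgModalityComponent:
    """Toy SVG modality component: pauses or resumes an animation."""

    def __init__(self):
        self.animation_running = True

    def handle(self, event: LifeCycleEvent) -> LifeCycleEvent:
        if event.name == "StartRequest":
            command = event.data.get("command")
            if command == "pause the animation":
                self.animation_running = False
            elif command == "resume the animation":
                self.animation_running = True
            # Acknowledge with a StartResponse back to the sender.
            return LifeCycleEvent(
                name="StartResponse",
                context=event.context,
                source=event.target,
                target=event.source,
                data={"status": "success"},
            )
        raise ValueError(f"unhandled event: {event.name}")


# Usage: an Interaction Manager relays a recognized voice command to SVG.
svg = SvgModalityComponent()
request = LifeCycleEvent(
    name="StartRequest", context="ctx-1",
    source="im", target="svg",
    data={"command": "pause the animation"},
)
response = svg.handle(request)
print(response.name, response.data["status"], svg.animation_running)
```

In a real MMI deployment the events would be carried asynchronously over
a transport between the two components rather than by a direct method
call; the sketch only shows the request/response pairing.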
The MMI Working Group would very much welcome the SVG Working Group's
feedback on the suitability of the rules in Appendix F for defining an
SVG-based Modality Component. Please send any feedback to the MMI public
list, www-multimodal@w3.org.

best regards,

Debbie Dahl
MMIWG Chair

[1] MMI Architecture and Interfaces:
http://www.w3.org/TR/2008/WD-mmi-arch-20081016/
