SVG as a multimodal modality component

Dear SVG,
At TPAC, the Multimodal Interaction Working Group and the SVG Working Group had a joint meeting to discuss possible points of collaboration. One of the most interesting ideas was to explore how SVG could be used as a Modality Component in the Multimodal Architecture [1]. A Modality Component encapsulates modality-specific capabilities (for example, speech recognition, graphical display, or handwriting recognition) in a multimodal application. It exchanges information with an overall controller, the Interaction Manager, through a well-defined set of asynchronous events, the life cycle events.
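
To make the event flow concrete, here is a minimal sketch (in TypeScript, running in the page that hosts the SVG) of how an SVG modality component might react to a few life cycle events. The event names come from the architecture document; the JSON-over-postMessage transport, the message shape, and the handler structure are assumptions made purely for illustration, since the specification defines its own wire format.

  // Hypothetical JSON encoding of a few MMI life cycle events; the
  // architecture defines XML-based events, so this shape is
  // illustrative only.
  interface LifeCycleEvent {
    name: "StartRequest" | "PauseRequest" | "ResumeRequest";
    context: string;   // identifies the interaction the event belongs to
    requestId: string; // echoed back in the matching response
  }

  const svg = document.querySelector("svg") as SVGSVGElement;

  window.addEventListener("message", (msg: MessageEvent) => {
    const event = msg.data as LifeCycleEvent;
    // Each request is acknowledged asynchronously with a response
    // carrying the same context and request identifier.
    const respond = (name: string) =>
      (msg.source as Window | null)?.postMessage(
        { name, context: event.context, requestId: event.requestId }, "*");

    switch (event.name) {
      case "StartRequest":
        svg.unpauseAnimations(); // begin (or continue) the presentation
        respond("StartResponse");
        break;
      case "PauseRequest":
        svg.pauseAnimations();   // freeze declarative SMIL animation
        respond("PauseResponse");
        break;
      case "ResumeRequest":
        svg.unpauseAnimations();
        respond("ResumeResponse");
        break;
    }
  });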
In the most recent version of the Multimodal Architecture, we've included a set of rules and guidelines for defining Modality Components, using face recognition as an example (Appendix F). It seemed worth looking at how SVG could function as a Modality Component in an MMI application. This could lead to interesting applications such as voice control of SVG graphics with commands like "make the text bigger" or "pause the animation", or combined voice and pointing control of graphics with commands like "put a red square here", accompanied by a mouse click.
The MMI Working Group would very much welcome the SVG Working Group's feedback on the suitability of the rules in Appendix F for defining an SVG-based Modality Component. Please send any feedback to the MMI public list, www-multimodal@w3.org.

best regards,

Debbie Dahl
MMIWG Chair

[1] MMI Architecture and Interfaces:
http://www.w3.org/TR/2008/WD-mmi-arch-20081016/
