
SVG as a multimodal modality component

From: Deborah Dahl <dahl@conversational-technologies.com>
Date: Tue, 4 Nov 2008 11:31:54 -0500
To: <www-svg@w3.org>
Cc: <www-multimodal@w3.org>
Message-ID: <008d01c93e9a$d9894ef0$6801a8c0@chimaera>

Dear SVG,
During the TPAC, the Multimodal Interaction Working Group and the SVG
Working Group had a joint meeting to discuss possible points of collaboration.
One of the most interesting ideas was to explore how SVG could be used as a
Modality Component in the Multimodal Architecture [1]. A Modality Component
provides modality-specific capabilities (for example, speech recognition,
graphical display, or handwriting recognition) in a multimodal application.
It communicates information back and forth with an overall controller, the
Interaction Manager, through a well-defined set of asynchronous events, the
life-cycle events.
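To make that exchange concrete, here is a minimal sketch in Python of the kind of life-cycle event pair described above: an Interaction Manager's StartRequest carrying SVG content to the component, and the component's StartResponse. The mmi namespace URI and the Context/RequestID/Source/Target fields follow the MMI Architecture draft, but the specific addresses (IM-1, svg-mc-1) and the inline-content packaging are illustrative assumptions, not part of the proposal.

```python
import xml.etree.ElementTree as ET

# Namespace of the MMI Architecture life-cycle events.
MMI = "http://www.w3.org/2008/04/mmi-arch"

def start_request(context, request_id, svg_markup):
    """Build an mmi:StartRequest whose Content carries inline SVG."""
    env = ET.Element(f"{{{MMI}}}mmi")
    req = ET.SubElement(env, f"{{{MMI}}}StartRequest", {
        "Context": context,
        "RequestID": request_id,
        "Source": "IM-1",       # hypothetical Interaction Manager address
        "Target": "svg-mc-1",   # hypothetical SVG Modality Component address
    })
    content = ET.SubElement(req, f"{{{MMI}}}Content")
    content.append(ET.fromstring(svg_markup))  # ship the graphics inline
    return env

def start_response(context, request_id, status="success"):
    """Build the matching mmi:StartResponse from the SVG component."""
    env = ET.Element(f"{{{MMI}}}mmi")
    ET.SubElement(env, f"{{{MMI}}}StartResponse", {
        "Context": context,
        "RequestID": request_id,
        "Source": "svg-mc-1",
        "Target": "IM-1",
        "Status": status,
    })
    return env

svg = '<svg xmlns="http://www.w3.org/2000/svg"><text font-size="16">hello</text></svg>'
request = ET.tostring(start_request("ctx-1", "req-1", svg), encoding="unicode")
response = ET.tostring(start_response("ctx-1", "req-1"), encoding="unicode")
```

Pausing an animation would use the analogous PauseRequest/PauseResponse pair within the same context.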
In the most recent version of the Multimodal Architecture, we've included a 
set of rules and guidelines for defining Modality Components, using face
recognition as an example (Appendix F). It seemed worth looking at how 
SVG could function as a Modality Component in an MMI application. This could
lead to interesting applications like voice control of SVG graphics using
commands like "make the text bigger", or "pause the animation", or combined
voice and pointing control of graphics with commands like "put a red square
here", accompanied by a mouse click. 
The MMI Working Group would very much welcome the SVG Working Group's
feedback on the suitability of the rules in Appendix F for defining an
SVG-based modality component. Please send any feedback to the MMI public
list, www-multimodal@w3.org.

best regards,

Debbie Dahl

[1] MMI Architecture and Interfaces:
Received on Tuesday, 4 November 2008 16:32:43 UTC
