
Multimodality, Grammars and Event Models

From: Adam Sobieski <adamsobieski@hotmail.com>
Date: Fri, 20 Apr 2012 14:12:47 +0000
Message-ID: <SNT138-W87724F5D5B04E7E9931D5C5220@phx.gbl>
To: <www-voice@w3.org>

Voice Browser Working Group,

Greetings. I would also like to broach multimodal input and grammars. One idea for extending SRGS is to allow JavaScript events and DOM elements to be specified in grammars, as a step toward the multimodal use cases described in Section 4 of the HTML Speech Incubator Group Final Report (http://www.w3.org/2005/Incubator/htmlspeech/XGR-htmlspeech-20111206/#use-cases). That is, something like:

<item event="html5:click" target="...#element" before="0.5s" after="0.5s" />

or

<event type="html5:click" target="...#element" before="0.5s" after="0.5s" />

where a JavaScript event type can be specified for any element, or for a particular element, and where a time interval around the spoken utterance can also be specified. Another approach is that SpeechRecognitionEvent can return interim results ("hypothesis" events), which could also be used for multimodal interaction.

Kind regards,
Adam Sobieski
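A rough sketch of how the before/after interval in the proposed attributes might behave: a click is correlated with a recognized utterance when it falls within the window around the utterance's timestamp. The function names and logic below are purely illustrative assumptions, not part of SRGS or any existing API:

```javascript
// Illustrative sketch of the proposed before/after window semantics,
// as in <item event="html5:click" before="0.5s" after="0.5s" />.
// All names here (parseSeconds, clickMatchesUtterance) are hypothetical.

// Parse a duration such as "0.5s" into milliseconds.
function parseSeconds(s) {
  return parseFloat(s) * 1000;
}

// True if a click at clickTime (ms) falls within the interval
// [speechTime - before, speechTime + after] around an utterance.
function clickMatchesUtterance(clickTime, speechTime, before, after) {
  return clickTime >= speechTime - parseSeconds(before) &&
         clickTime <= speechTime + parseSeconds(after);
}
```

For example, under a "0.5s"/"0.5s" window, a click 200 ms before the recognized utterance would match, while a click 1.1 s before it would not.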

Received on Friday, 20 April 2012 14:13:24 GMT

This archive was generated by hypermail 2.2.0+W3C-0.50 : Friday, 20 April 2012 14:13:29 GMT