- From: JOHNSTON, MICHAEL J (MICHAEL J) <johnston@research.att.com>
- Date: Thu, 15 Sep 2011 13:51:01 -0400
- To: HTML Speech XG <public-xg-htmlspeech@w3.org>
One thing I see missing from the API draft is support for INFO messages, which send metadata to the recognizer during recognition. In the html+speech protocol we have a generic capability to send metadata to the recognizer; the relevant reco-method is INFO (see below). These messages can be sent during the transmission of audio. This covers multimodal use cases where metadata (e.g. GUI actions, button clicks, etc.) arises while the user is speaking and is relevant for processing the user's audio.

To support this at the API level we need some kind of method on SpeechInputRequest that will cause the INFO message to be sent over the protocol, e.g.:

interface SpeechInputRequest {
    .....
    void sendinfo(in DOMString info);
    .....
};

Michael

reco-method = "LISTEN"             ; Transitions Idle -> Listening
            | "START-INPUT-TIMERS" ; Starts the timer for the various input timeout conditions
            | "STOP"               ; Transitions Listening -> Idle
            | "DEFINE-GRAMMAR"     ; Pre-loads & compiles a grammar, assigns a temporary URI for reference in other methods
            | "CLEAR-GRAMMARS"     ; Unloads all grammars, whether active or inactive
            | "INTERPRET"          ; Interprets input text as though it was spoken
            | "INFO"               ; Sends metadata to the recognizer

INFO

In multimodal applications, some recognizers will benefit from additional context. Clients can use the INFO request to send this context. The Content-Type header should specify the type of data, and the data itself is contained in the message body.
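As a rough sketch of how an application might use the proposed method: in the snippet below, the SpeechInputRequest stub and the recorded-message shape are illustrative assumptions of mine, not the actual html+speech wire format; only the sendinfo(info) call itself comes from the proposal above.

```javascript
// Hypothetical stub standing in for the proposed SpeechInputRequest object.
// Only sendinfo(info) is taken from the proposal; the message record shape
// is a mock for illustration, not the real protocol framing.
class SpeechInputRequest {
  constructor() {
    this.sentMessages = []; // records what would go over the protocol
  }
  // Proposed method: causes an INFO message carrying `info` to be sent
  // to the recognizer, even while audio transmission is in progress.
  sendinfo(info) {
    this.sentMessages.push({ method: "INFO", body: info });
  }
}

// Usage: a GUI event fires while the user is still speaking, so the click
// metadata is forwarded to the recognizer as extra recognition context.
const request = new SpeechInputRequest();
request.sendinfo(JSON.stringify({ event: "button-click", target: "confirm" }));

console.log(request.sentMessages[0].method); // "INFO"
```

The point is simply that the API call maps one-to-one onto an INFO protocol message, so GUI events can be interleaved with the ongoing audio stream.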
Received on Thursday, 15 September 2011 17:51:07 UTC