- From: Satish S <satish@google.com>
- Date: Fri, 15 Jun 2012 10:06:57 +0100
- To: Deborah Dahl <dahl@conversational-technologies.com>
- Cc: Jerry Carter <jerry@jerrycarter.org>, public-speech-api@w3.org
> > Is this roughly what you had in mind?

I understood what Jerry wrote as:
 - There is a local recognizer, probably with a device-specific grammar
   such as contacts and apps
 - There is a remote recognizer that caters to a much wider scope
The UA would send audio to both and combine the results to deliver to
JavaScript.

Jerry, could you clarify which use case you meant?

The language I proposed was aimed towards a recognizer sitting outside
the UA and generating EMMA data, in which case it seemed appropriate that
the UA would pass it through unmodified. If the UA is indeed generating
EMMA data (whether combining results from multiple recognizers, or because
the recognizer doesn't produce EMMA data), it should be allowed to do so.

> Milan and Satish, could you elaborate on what you had in mind when you
> raised concerns about the UA modifying the speech recognizer's EMMA?

The primary reason I added that clause was to preserve EMMA attributes
(emma:process, ...) from the recognizer through to JS without calling out
specific attributes. Since we agreed to add use cases as examples instead
of calling out specific attributes, there is less reason for the clause
now, and I agree that dropping it enables use cases like the one I
mentioned above. So I'm fine dropping that clause if there are no other
strong reasons to keep it in.

--
Cheers
Satish
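For concreteness, a minimal sketch of what "preserving the recognizer's
EMMA through to JS" could look like from a web page. It assumes the
unprefixed SpeechRecognition constructor and the `emma` attribute on
SpeechRecognitionEvent that appeared in the editor's draft; the handler
itself is illustrative, not from this thread.

    // Illustrative sketch (assumptions noted above): reading the
    // recognizer's EMMA document in page script. The namespace URI is
    // the one defined by the EMMA 1.0 specification.
    var EMMA_NS = 'http://www.w3.org/2003/04/emma';
    var recognition = new SpeechRecognition();

    recognition.onresult = function (event) {
      var emmaDoc = event.emma;  // an XML Document, or null if unavailable
      if (!emmaDoc) return;

      // If the UA passes the recognizer's EMMA through unmodified,
      // annotations such as emma:process are still visible here.
      var interp =
          emmaDoc.getElementsByTagNameNS(EMMA_NS, 'interpretation')[0];
      if (interp) {
        console.log('emma:process =',
                    interp.getAttributeNS(EMMA_NS, 'process'));
      }
    };

    recognition.start();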
Received on Friday, 15 June 2012 09:07:28 UTC