
Re: Review of EMMA usage in the Speech API (first editor's draft)

From: Satish S <satish@google.com>
Date: Fri, 15 Jun 2012 10:06:57 +0100
Message-ID: <CAHZf7Rk93keWo3ncHq8duDM1z0mnrxrHhwuCf=G83hEz1-s6ug@mail.gmail.com>
To: Deborah Dahl <dahl@conversational-technologies.com>
Cc: Jerry Carter <jerry@jerrycarter.org>, public-speech-api@w3.org
> Is this roughly what you had in mind?

I understood what Jerry wrote as
- There is a local recognizer, probably with a device-specific grammar such
as contacts and apps
- There is a remote recognizer that caters to a much wider scope
The UA would send audio to both and combine the results to deliver to JS.
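The two-recognizer scenario above could be sketched roughly as follows. This is only an illustration of the combining step, not anything from the draft: the result tuples, confidence values, and the simple confidence-based ranking are all invented for the example.

```python
# Hypothetical n-best lists from a local recognizer (device-specific
# grammar: contacts, apps) and a remote recognizer (wider scope).
# Each entry is (transcript, confidence); all values are invented.
local_results = [("call bob", 0.95)]
remote_results = [("call bobby tables", 0.60),
                  ("cold bob", 0.40)]

# One naive way a UA might combine them: merge both lists and rank by
# confidence before delivering a single n-best list to JS. A real UA
# would likely do something more sophisticated (and, per this thread,
# might generate its own EMMA for the combined result).
combined = sorted(local_results + remote_results,
                  key=lambda r: r[1], reverse=True)
print(combined[0][0])  # → "call bob"
```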

Jerry, could you clarify which use case you meant? The language I proposed
was aimed at a recognizer sitting outside the UA and generating EMMA data,
in which case it seemed appropriate for the UA to pass it through
unmodified. If the UA is indeed generating EMMA data (whether combining
results from multiple recognizers or because the recognizer doesn't provide
EMMA data), it should be allowed to do so.

> Milan and Satish, could you elaborate on what you had in mind when you
> raised concerns about the UA modifying the speech recognizer's EMMA?

The primary reason I added that clause was to preserve the EMMA
attributes (emma:process, ...) from the recognizer through to JS without
calling out specific attributes. Since we agreed that instead of calling
out attributes we'll add use cases as examples, there is less reason for
this clause now, and I agree that removing it would enable use cases like
the one I mentioned above. So I'm fine with dropping that clause if there
are no other strong reasons to keep it.
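To make concrete what the pass-through clause was preserving: the annotations below (emma:process, emma:confidence) are real EMMA 1.0 attributes, but the document itself is a made-up example of what a recognizer might return, and the parsing code is just one way a consumer could read those attributes once they reach JS unmodified.

```python
import xml.etree.ElementTree as ET

EMMA_NS = "http://www.w3.org/2003/04/emma"

# Invented sample of a recognizer's EMMA output; attribute values are
# illustrative only.
emma_xml = """\
<emma:emma version="1.0" xmlns:emma="http://www.w3.org/2003/04/emma">
  <emma:interpretation id="int1"
      emma:confidence="0.92"
      emma:process="smm:type=asr&amp;version=asr_eng2.4"
      emma:tokens="call bob">
    call bob
  </emma:interpretation>
</emma:emma>
"""

root = ET.fromstring(emma_xml)
interp = root.find(f"{{{EMMA_NS}}}interpretation")

# Namespaced attributes are keyed in Clark notation ({uri}local).
# If the UA passes the document through unmodified, these annotations
# from the recognizer survive all the way to the consumer.
print(interp.get(f"{{{EMMA_NS}}}process"))     # → smm:type=asr&version=asr_eng2.4
print(interp.get(f"{{{EMMA_NS}}}confidence"))  # → 0.92
```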

Received on Friday, 15 June 2012 09:07:28 UTC
