RE: EMMA in Speech API (was RE: Speech API: first editor's draft posted)

Hi, a couple of comments.

 

From: Satish S [mailto:satish@google.com] 
Sent: Monday, May 21, 2012 5:35 PM
To: Deborah Dahl
Cc: Bjorn Bringert; Young, Milan; Glen Shires; Hans Wennborg;
public-speech-api@w3.org
Subject: Re: EMMA in Speech API (was RE: Speech API: first editor's draft
posted)

 

I agree that having a uniform representation of results and semantic
interpretation is necessary. The only question I have is why XML formatted
according to EMMA is preferred over native JS objects. To clarify, I'm
suggesting that semantic information, if received as EMMA from the
recognizer, be converted by the UA to native JS objects so accessing them is
far simpler.

 

With EMMA XML:

  var doc = alternative.emmaXML;

  var interpretation = doc.getElementsByTagName("emma:interpretation")[0];

  var origin = interpretation.getElementsByTagName("origin")[0].childNodes[0].nodeValue;

  var destination = interpretation.getElementsByTagName("destination")[0].childNodes[0].nodeValue;

 

Instead, with native JS object:

  var origin = alternative.interpretation.origin;

  var destination = alternative.interpretation.destination;

 

I prefer the latter as it does away with the boilerplate that every single
web app has to go through.

 

I don't disagree with making the JS object available as well as the EMMA -
both could be available. 

There are at least two use cases where the web app doesn't have to do
anything directly with the EMMA - (1) passing the EMMA along to a dialog
manager, and (2) saving the EMMA result for later logging and analysis. For
those use cases the web app doesn't have to unpack the EMMA. 
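
For example, a logging page could ship the result off without ever parsing it. A rough sketch, assuming the emmaXML attribute proposed earlier in this thread (the /log URL and content type are just illustrative):

  var xhr = new XMLHttpRequest();
  xhr.open("POST", "/log");
  xhr.setRequestHeader("Content-Type", "application/emma+xml");
  // Serialize the EMMA document as-is; the web app never looks inside it.
  xhr.send(new XMLSerializer().serializeToString(alternative.emmaXML));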

 

Yes, SISR is a standard for representing the semantic result, but it doesn't
provide a way to represent any metadata.

 

Could you explain what you mean by metadata in this context, with a use
case? It should be possible to fit that into the above proposal as well.

 

Here are some examples.

Use case 1: I'm testing different speech recognition services. I would like
to know which service processed the speech associated with a particular
result, so that I can compare the services for accuracy. I can use the
emma:process annotation for that.

Use case 2: I want the system to dynamically slow down its TTS for users who
speak more slowly. The EMMA timestamp, duration, and tokens annotations can
be used to determine the speech rate of a particular utterance.

Use case 3: I'm testing several different grammars to compare their
accuracy. I use the emma:grammar annotation to record which grammar was used
for each result.
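
To illustrate, here is a rough sketch of pulling those annotations out of the
emmaXML attribute proposed earlier in this thread. The attribute names come
from EMMA 1.0; a particular recognizer may annotate a different element of the
document, so take this as illustrative only:

  var EMMA_NS = "http://www.w3.org/2003/04/emma";
  var interp = alternative.emmaXML.getElementsByTagNameNS(EMMA_NS, "interpretation")[0];

  // Use case 1: which recognition service produced this result.
  var processor = interp.getAttributeNS(EMMA_NS, "process");

  // Use case 2: rough speech rate in tokens per second, from emma:tokens
  // and emma:duration (milliseconds).
  var tokenCount = interp.getAttributeNS(EMMA_NS, "tokens").split(/\s+/).length;
  var tokensPerSecond = tokenCount / (interp.getAttributeNS(EMMA_NS, "duration") / 1000);

  // Use case 3: which grammar matched (emma:grammar-ref points to an
  // emma:grammar element declared elsewhere in the document).
  var grammarRef = interp.getAttributeNS(EMMA_NS, "grammar-ref");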

 

Obviously you could write JavaScript or server-side processing to record all
this information, but it would have to be done repeatedly for every
application, and it's much more convenient to have it all available in the
EMMA result. 

I also think it would be a waste of time for this group to go through the
exercise of figuring out how to represent all the EMMA metadata attributes
in a native JS fashion. We would inevitably have to spend time agreeing on
which EMMA metadata attributes are important enough to work on, and I think
it would just be less work to make the EMMA result available for those
applications that need it.

 

Cheers
Satish



On Mon, May 21, 2012 at 6:36 PM, Deborah Dahl
<dahl@conversational-technologies.com> wrote:

Many applications will have a dialog manager that uses the speech
recognition result to conduct a spoken dialog with the user. In that case it
is extremely useful for the dialog manager to have a uniform representation
for speech recognition results, so that the dialog manager can be somewhat
independent of the recognizer. In fact, there are existing applications that
I know of that do expect EMMA-formatted results. It would be very
inconvenient for these dialog managers to have to be modified to accommodate
different formats depending on the recognition service. Similarly, another
type of consumer of speech recognition results is likely to be logging and
analysis applications, which again could benefit from uniform EMMA results.
I believe it's also undesirable for the application developer to have to
look at the result and then manually create an EMMA wrapper for it. 

Yes, SISR is a standard for representing the semantic result, but it doesn't
provide a way to represent any metadata. In addition, it won't help if the
language model is an SLM rather than a grammar. 

Also, just a general comment about APIs and novice developers. I think
developers in general are very good at ignoring aspects of an API that they
don't plan to use, as long as they have a simple way to get started. I think
developer problems mainly arise with APIs where there's a huge learning
curve just to do hello world.

 

From: Satish S [mailto:satish@google.com] 
Sent: Monday, May 21, 2012 12:17 PM
To: Bjorn Bringert
Cc: Young, Milan; Deborah Dahl; Glen Shires; Hans Wennborg;
public-speech-api@w3.org
Subject: Re: EMMA in Speech API (was RE: Speech API: first editor's draft
posted)

 

I would prefer having an easy solution for the majority of apps which just
want the interpretation, which is either just a string or a JS object (when
using SISR). Boilerplate code sucks. Having EMMA available sounds ok too,
but that seems like a minority feature to me.

 

Seems like the current type "any" is suited for that. Since SISR represents
the results of semantic interpretation as ECMAScript that is interoperable
and non-proprietary, the goal of a cross-browser semantic interpretation
format seems satisfied. Are there other reasons to add EMMA support?
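
For instance (a hypothetical sketch; the route fields and values are invented
for illustration):

  // An SRGS rule with an SISR tag such as
  //   <tag>out.origin = "BOS"; out.destination = "SFO";</tag>
  // already yields a plain ECMAScript object, so with type "any" the app can read:
  var origin = alternative.interpretation.origin;           // "BOS"
  var destination = alternative.interpretation.destination; // "SFO"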

 

Received on Monday, 21 May 2012 23:10:11 UTC