I'm suggesting that if the UA doesn't integrate with a speech engine that supports EMMA, it must provide a wrapper so that basic interoperability can be achieved. In use case #1 (comparing speech engines), that means injecting an emma:process annotation that names the underlying speech engine.
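For concreteness, here is a minimal sketch (not normative) of the kind of wrapper I have in mind, assuming the UA can parse an EMMA 1.0 string into a DOM Document; the engine URI and utterance below are placeholders:

    // Sketch only: the wrapper a UA might synthesize when the underlying
    // engine does not emit EMMA itself. Engine name and utterance are
    // placeholders; real code would also escape XML special characters.
    function buildEmmaWrapper(engineUri: string, utterance: string): Document {
      const xml =
        `<emma:emma version="1.0" xmlns:emma="http://www.w3.org/2003/04/emma">
           <emma:interpretation id="interp1"
               emma:medium="acoustic" emma:mode="voice"
               emma:process="${engineUri}">
             <emma:literal>${utterance}</emma:literal>
           </emma:interpretation>
         </emma:emma>`;
      // Parse into a DOM Document so the web app sees the same type it
      // would get from an engine that returns EMMA natively.
      return new DOMParser().parseFromString(xml, "application/xml");
    }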
I agree that use case #3 could not be achieved without a tight coupling with the engine. If Deborah is OK with dropping this, so am I.
I don't understand your point about use case #4. Earlier you were arguing for a null/undefined value if the speech engine didn't natively support EMMA. Obviously this would prevent the suggested use case.
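For reference, use case #4 amounts to something like the following sketch, assuming the recognition event exposes the EMMA result as a DOM Document; the event shape and endpoint URL are illustrative, not settled API:

    // Illustrative sketch of use case #4: posting the EMMA result to an
    // application server (or an MMI controller) before continuing the
    // dialog. The `emma` attribute and the URL are assumptions.
    async function postResult(event: { emma: Document }): Promise<void> {
      if (!event.emma) {
        // With a null/undefined EMMA value this use case cannot proceed.
        throw new Error("No EMMA result available");
      }
      const serialized = new XMLSerializer().serializeToString(event.emma);
      await fetch("https://example.com/mmi/controller", {
        method: "POST",
        headers: { "Content-Type": "application/emma+xml" },
        body: serialized,
      });
    }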
From: Satish S [mailto:satish@google.com]
Sent: Wednesday, May 30, 2012 8:19 AM
To: Young, Milan
Cc: Bjorn Bringert; Deborah Dahl; Glen Shires; Hans Wennborg; public-speech-api@w3.org
Subject: Re: EMMA in Speech API (was RE: Speech API: first editor's draft posted)
Satish, please take a look at the use cases below. Items #1 and #3 cannot be achieved unless EMMA is always present.
To clarify, are you suggesting that speech recognizers must always return EMMA to the UA, or that, if they don't, the UA should create a wrapper EMMA object containing just the utterance(s) and hand that to the web page? If it is the latter, then #1 and #3 can't be achieved anyway, because the UA doesn't have enough information to build an EMMA wrapper with all the data the web app may want (specifically, it wouldn't know what to put in the emma:process and emma:fields annotations given in those use cases). And if it is the former, that seems out of scope for this CG.
I'd like to add another use case, #4: the application needs to post the recognition result to a server before proceeding in the dialog. The server might be a traditional application server, or it could be the controller in an MMI architecture. EMMA is a standard serialized representation.
If the server supports EMMA, then my proposal should work, because the web app would receive the EMMA Document as-is.
--
Cheers
Satish