Re: EMMA in Speech API (was RE: Speech API: first editor's draft posted)

From: Satish S <satish@google.com>
Date: Mon, 21 May 2012 17:16:32 +0100
Message-ID: <CAHZf7R=0C+VE6J=1TtVH6Mja4BT1W8j__YacGBeH1cxpSNBtuw@mail.gmail.com>
To: Bjorn Bringert <bringert@google.com>
Cc: "Young, Milan" <Milan.Young@nuance.com>, Deborah Dahl <dahl@conversational-technologies.com>, Glen Shires <gshires@google.com>, Hans Wennborg <hwennborg@google.com>, "public-speech-api@w3.org" <public-speech-api@w3.org>
> I would prefer having an easy solution for the majority of apps which
> just want the interpretation, which is either just a string or a JS
> object (when using SISR). Boilerplate code sucks. Having EMMA
> available sounds ok too, but that seems like a minority feature to me.

Seems like the current type "any" is suited for that. Since SISR represents
the results of semantic interpretation as ECMAScript that is interoperable
and non-proprietary, the goal of a cross-browser semantic interpretation
format seems satisfied. Are there other reasons to add EMMA support?
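
A minimal sketch of what "just want the interpretation" looks like for an app, assuming the draft's `interpretation` attribute of type `any`. The helper name `getInterpretation` and the mocked result object are illustrative, not part of the spec; with SISR, `interpretation` holds whatever ECMAScript value the grammar's semantic tags produced (a string, number, or plain object):

```javascript
// Hypothetical helper: read the semantic interpretation off the top
// alternative, falling back to the raw transcript when the grammar
// attached no SISR result.
function getInterpretation(result) {
  const alt = result[0]; // top-ranked alternative
  return alt.interpretation !== undefined
    ? alt.interpretation
    : alt.transcript;
}

// Mock standing in for a SpeechRecognitionResult whose grammar
// carried a SISR tag such as: out = {action: "call", who: "Bob"};
const mockResult = [{
  transcript: "call Bob",
  confidence: 0.92,
  interpretation: { action: "call", who: "Bob" }
}];

console.log(getInterpretation(mockResult).action); // logs "call"
```

No boilerplate beyond the property read itself, which is the point of keeping the type as `any` rather than wrapping the value in an EMMA document.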
Received on Monday, 21 May 2012 16:17:25 UTC