Re: Speech API: first editor's draft posted

For this initial specification, we believe that a simplified API will
accelerate implementation, interoperability testing, standardization,
and ultimately developer adoption. Rapid adoption among many user
agents and many speech recognition services is a primary goal.

Many speech recognition services do not currently support EMMA, and
EMMA is not required for the majority of use cases; therefore, I
believe EMMA is something we should consider adding in a future
iteration of this specification.
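
To illustrate that EMMA-free path, here is a rough sketch of how a page
might read just the top alternative. The attribute and event names below
(results, transcript, confidence) are placeholders and may not match the
draft text exactly:

  // Sketch only: names are assumptions based on the draft's interfaces.
  var recognition = new SpeechRecognition();
  recognition.onresult = function (event) {
    // Take the top SpeechRecognitionAlternative of the first result.
    var alternative = event.results[0][0];
    console.log(alternative.transcript, alternative.confidence);
  };
  recognition.start();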

/Glen Shires


On Mon, Apr 23, 2012 at 11:44 AM, Deborah Dahl <dahl@conversational-technologies.com> wrote:

> Thanks for preparing this draft.
> I'd like to advocate including the EMMAText and EMMAXML attributes in
> SpeechRecognitionResult. One argument is that at least some existing
> consumers of speech recognition results (for example, dialog managers and
> log analysis tools) currently expect EMMA as input. It would be very
> desirable not to have to modify them to process multiple different
> recognizer result formats. A web developer who's new to speech
> recognition can simply ignore the EMMA attributes: if all they need is
> tokens, confidence, or semantics, those are available from the
> SpeechRecognitionAlternative objects.
>
> > -----Original Message-----
> > From: Hans Wennborg [mailto:hwennborg@google.com]
> > Sent: Thursday, April 12, 2012 10:36 AM
> > To: public-speech-api@w3.org
> > Cc: Satish S; Glen Shires
> > Subject: Speech API: first editor's draft posted
> >
> > In December, Google proposed [1] to public-webapps a Speech JavaScript
> > API; this subset supports the majority of the use cases in the Speech
> > Incubator Group's Final Report. This proposal provides a programmatic
> > API that enables web pages to synthesize speech output and to use
> > speech recognition as an input for forms, continuous dictation and
> > control.
> >
> > We have now posted a slightly updated proposal [2] in the Speech-API
> > Community Group's repository. The differences include:
> >
> >  - Document is now self-contained, rather than having multiple
> > references to the XG Final Report.
> >  - Renamed SpeechReco interface to SpeechRecognition
> >  - Renamed interfaces and attributes beginning SpeechInput* to
> > SpeechRecognition*
> >  - Moved EventTarget to constructor of SpeechRecognition
> >  - Clarified that grammars and lang are attributes of SpeechRecognition
> >  - Clarified that if index is greater than or equal to length, returns null
> >
> > We welcome discussion and feedback on this editor's draft. Please send
> > your comments to the public-speech-api@w3.org mailing list.
> >
> > Glen Shires
> > Hans Wennborg
> >
> > [1] http://lists.w3.org/Archives/Public/public-webapps/2011OctDec/1696.html
> > [2] http://dvcs.w3.org/hg/speech-api/raw-file/tip/speechapi.html
>
>
>


-- 
Thanks!
Glen Shires
