
[whatwg] Web API for speech recognition and synthesis

From: Fergus Henderson <fergus@google.com>
Date: Thu, 3 Dec 2009 12:30:12 -0500
Message-ID: <5a76647a0912030930y141ce7cdp1b0619c60b781322@mail.gmail.com>
On Thu, Dec 3, 2009 at 7:32 AM, Diogo Resende <dresende at thinkdigital.pt> wrote:

> I agree 100%. Still, I think the access to the mic and the speech
> recognition could be separated.

While it would be possible to separate access to the microphone and speech
recognition, combining them allows the API to abstract away details of the
implementation that would otherwise have to be exposed, in particular the
audio encoding(s) used, and whether the audio is streamed to the recognizer
or sent in a single chunk.  If we don't provide general access to the
microphone, the speech recognition API can be simpler, implementors will
have more flexibility, and implementations can be simpler and smaller
because they won't have to deal with conversions between different audio
encodings.

So I'm in favour of not separating out access to the microphone, at least in
v1 of the API.
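To illustrate the design being argued for, here is a rough sketch of what a combined API might look like (all names here are hypothetical, invented for illustration; nothing in this thread specifies them). The point is that capture, encoding, and streaming-vs-chunking all live inside the implementation, and the caller only ever sees text results:

```typescript
// Hypothetical sketch: a recognition API that owns microphone capture
// internally, so callers never see audio encodings or whether audio is
// streamed to the recognizer or sent in a single chunk.

interface RecognitionResult {
  transcript: string;
  confidence: number;
}

interface SpeechRecognizer {
  // Starts capture internally; the implementation chooses the codec and
  // transport strategy. The caller only receives recognized text.
  listen(onResult: (r: RecognitionResult) => void): void;
}

// A stand-in implementation for illustration; a browser would wire this
// to its own audio stack and recognition backend.
class MockRecognizer implements SpeechRecognizer {
  listen(onResult: (r: RecognitionResult) => void): void {
    // Encoding, chunking, and network transport are all hidden in here.
    onResult({ transcript: "hello world", confidence: 0.9 });
  }
}

const recognizer: SpeechRecognizer = new MockRecognizer();
recognizer.listen(r => console.log(r.transcript, r.confidence));
```

Because no audio bytes ever cross the API boundary, an implementor can switch codecs or move from chunked upload to streaming without breaking any caller, which is the flexibility argued for above.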

Fergus Henderson <fergus at google.com>
Received on Thursday, 3 December 2009 09:30:12 UTC
