[whatwg] Speech input element

On Thu, May 20, 2010 at 1:32 PM, Anne van Kesteren <annevk at opera.com> wrote:
> On Thu, 20 May 2010 14:29:16 +0200, Bjorn Bringert <bringert at google.com>
> wrote:
>>
>> It should be possible to drive <input type="speech"> with keyboard
>> input, if the user agent chooses to implement that. Nothing in the API
>> should require the user to actually speak. I think this is a strong
>> argument for why <input type="speech"> should not be replaced by a
>> microphone API and a separate speech recognizer, since the latter
>> would be very hard to make accessible. (I still think that there
>> should be a microphone API for applications like audio chat, but
>> that's a separate discussion).
>
> So why not implement speech support on top of the existing input types?

Speech-driven keyboards certainly get you some of the benefits of
<input type="speech">, but they give the application developer less
control and less information than a speech-specific API. Some
advantages of a dedicated speech input type:

- Application-defined grammars. This is important for getting high
recognition accuracy in limited domains.

- Allows continuous speech recognition, where the app gets events at
speech endpoints.

- Multiple recognition hypotheses. This lets applications implement
intelligent input disambiguation.

- Doesn't require the input element to have keyboard focus while speaking.

- Doesn't require a visible text input field.
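To make the grammar and n-best points concrete, here is a small sketch
of how an application could combine multiple recognition hypotheses
with an application-defined grammar to disambiguate input. This is
purely illustrative (not part of any proposed API); the function name,
the hypothesis list, and the sample grammar are all invented:

```python
# Hypothetical sketch: disambiguating speech input using an n-best
# hypothesis list plus an application-defined grammar.

def disambiguate(hypotheses, grammar):
    """Pick the most confident hypothesis that the app's grammar accepts.

    hypotheses: list of (text, confidence) pairs, e.g. a recognizer's
                n-best list.
    grammar:    set of utterances the application understands.
    """
    valid = [(text, conf) for text, conf in hypotheses if text in grammar]
    if not valid:
        return None  # app could fall back to asking the user to repeat
    return max(valid, key=lambda pair: pair[1])[0]

# A recognizer might rank the out-of-grammar "my pizza" above the
# intended command "buy pizza"; filtering by the grammar recovers it.
hypotheses = [("my pizza", 0.61), ("buy pizza", 0.58), ("bye pizza", 0.20)]
grammar = {"buy pizza", "show menu", "checkout"}
print(disambiguate(hypotheses, grammar))  # -> buy pizza
```

With only a speech-driven keyboard, the application would see just the
single top hypothesis and could not do this kind of recovery.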

-- 
Bjorn Bringert
Google UK Limited, Registered Office: Belgrave House, 76 Buckingham
Palace Road, London, SW1W 9TQ
Registered in England Number: 3977902

Received on Thursday, 20 May 2010 06:18:56 UTC