
[whatwg] Speech input element

From: Bjorn Bringert <bringert@google.com>
Date: Thu, 20 May 2010 13:29:16 +0100
Message-ID: <AANLkTin3w7CSrWO2gMzEssIl_1-VCfy2Gg9lfjJ5sICj@mail.gmail.com>
On Wed, May 19, 2010 at 10:38 PM, David Singer <singer at apple.com> wrote:
> I am a little concerned that we are increasingly breaking down a metaphor, a 'virtual interface', without realizing what that abstraction buys us. At the moment, we have the concept of a hypothetical pointer and hypothetical keyboard (with some abstract states, such as focus) that you can actually drive using a whole bunch of physical modalities. If we develop UIs that are specific to people actually speaking, we have 'torn the veil' of that abstract interface. What happens to people who cannot speak, for example? Or who cannot say the language needed well enough to be recognized?

It should be possible to drive <input type="speech"> with keyboard
input, if the user agent chooses to implement that. Nothing in the API
should require the user to actually speak. I think this is a strong
argument for why <input type="speech"> should not be replaced by a
microphone API and a separate speech recognizer, since the latter
would be very hard to make accessible. (I still think that there
should be a microphone API for applications like audio chat, but
that's a separate discussion).
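To make the point concrete, a page using the proposed element could look like the sketch below. This is purely illustrative: <input type="speech"> was a proposal under discussion at the time, and the fallback behaviour shown (the UA offering typed input instead of speech) is the accessibility property argued for above, not anything specified.

```html
<!-- Hypothetical markup for the proposed speech input element.
     The "speech" type value is the proposal being discussed; it is
     not part of any shipped specification. A user agent could let
     the user type the value instead of speaking it, preserving the
     abstract keyboard/pointer metaphor David describes. -->
<form action="/search">
  <label for="q">Search:</label>
  <input type="speech" id="q" name="q">
  <!-- UAs without speech support would fall back to type="text",
       per the usual handling of unknown input types. -->
</form>
```

The key design consequence is that the recognizer lives behind the form control, so the page only ever sees a text value; contrast this with a raw microphone API plus a separate recognizer, where each page would have to build (and make accessible) its own speech UI.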

-- 
Bjorn Bringert
Google UK Limited, Registered Office: Belgrave House, 76 Buckingham
Palace Road, London, SW1W 9TQ
Registered in England Number: 3977902
Received on Thursday, 20 May 2010 05:29:16 UTC
