[whatwg] Speech input element

From: timeless <timeless@gmail.com>
Date: Thu, 20 May 2010 00:53:04 +0300
Message-ID: <AANLkTik-CQgzP2qCEOdtIIjajJ_hGIFkIlDhoAsm_t10@mail.gmail.com>
On Thu, May 20, 2010 at 12:38 AM, David Singer <singer at apple.com> wrote:
> I am a little concerned that we are increasingly breaking down a metaphor,
> a 'virtual interface', without realizing what that abstraction buys us.

I'm more than a little concerned about this, and I hope we tread much
more carefully than some parties seem willing to. I'm glad I'm not
alone.

> At the moment, we have the concept of a hypothetical pointer and hypothetical
> keyboard (with some abstract states, such as focus) that you can actually drive
> using a whole bunch of physical modalities.

> If we develop UIs that are specific to people actually speaking, we have
> 'torn the veil' of that abstract interface. What happens to people who cannot
> speak, for example? Or who cannot speak the language needed well enough
> to be recognized?

Received on Wednesday, 19 May 2010 14:53:04 UTC
