W3C home > Mailing lists > Public > whatwg@whatwg.org > May 2010

[whatwg] Speech input element

From: David Singer <singer@apple.com>
Date: Wed, 19 May 2010 14:38:38 -0700
Message-ID: <4C191C19-FC75-40BA-B2FB-7188E7354695@apple.com>
I am a little concerned that we are increasingly breaking down a metaphor, a 'virtual interface', without realizing what that abstraction buys us.  At the moment, we have the concept of a hypothetical pointer and hypothetical keyboard (with some abstract states, such as focus) that you can drive using a whole bunch of physical modalities.  If we develop UIs that are specific to people actually speaking, we have 'torn the veil' of that abstract interface.  What happens to people who cannot speak, for example?  Or who cannot speak the needed language well enough to be recognized?


David Singer
Multimedia and Software Standards, Apple Inc.
Received on Wednesday, 19 May 2010 14:38:38 UTC
