- From: Eric S. Johansson <esj@harvee.org>
- Date: Thu, 09 Sep 2010 13:44:46 -0400
- To: "JOHNSTON, MICHAEL J (MICHAEL J)" <johnston@research.att.com>
- CC: Satish Sampath <satish@google.com>, "public-xg-htmlspeech@w3.org" <public-xg-htmlspeech@w3.org>
On 9/9/2010 1:16 PM, JOHNSTON, MICHAEL J (MICHAEL J) wrote:
> One of the central goals of the web (and W3C) is to strive for
> consistency of experience across different browsers. A developer
> creating a (multimodal) interface combining speech input with
> graphical output needs to have the ability to provide a consistent
> user experience not just for graphical elements but also for voice.

In my experience, the graphical user interface is completely independent of the vocal user interface. Graphical interfaces tend to be narrow and deep, whereas speech interfaces tend to be wide and shallow (as a rule). The user interface developer needs to build three user interfaces: the graphical interface, the aural interface, and the speech interface. Because most developers only have time for the graphical interface, I advocate making the tools for building the other interfaces accessible to the end user. Yes, my focus is on the disabled, because if you make it work for them, it will work for the temporarily able-bodied.

An experiment I would love to try someday is building a speech-driven user interface for e-mail with zero graphical elements to start with. Then, by using a grammar that enables discovery, displaying the available options whenever the user pauses or hesitates, I would teach users what's possible.
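To make the idea concrete, here is a rough sketch in TypeScript of how pause-triggered discovery might work. The Recognizer interface, the grammar shape, and the showHints helper are all hypothetical placeholders I've invented for illustration, not any real or proposed API:

    // A minimal sketch of pause-triggered discovery. The recognizer is
    // assumed to emit a callback per recognized utterance; everything
    // else here is illustrative plumbing.
    interface Recognizer {
      onSpeech(handler: (text: string) => void): void;
    }

    // Hypothetical grammar: each recognized command maps to the
    // follow-up options it makes available. The "" key holds the
    // top-level options offered before anything has been said.
    const grammar: Record<string, string[]> = {
      "": ["read message", "compose message", "delete message"],
      "compose message": ["to <name>", "subject <text>", "send"],
    };

    const PAUSE_MS = 2000; // how long a hesitation lasts before we help
    let pauseTimer: ReturnType<typeof setTimeout> | undefined;
    let lastUtterance = "";

    function showHints(options: string[]): void {
      // A real UI would render or speak these; here we just log them.
      console.log("You can say:", options.join(", "));
    }

    function armPauseTimer(): void {
      if (pauseTimer !== undefined) clearTimeout(pauseTimer);
      pauseTimer = setTimeout(() => {
        showHints(grammar[lastUtterance] ?? grammar[""]);
      }, PAUSE_MS);
    }

    function attachDiscovery(recognizer: Recognizer): void {
      recognizer.onSpeech((text) => {
        // Only commands the grammar knows about change the context.
        lastUtterance = text in grammar ? text : lastUtterance;
        armPauseTimer(); // every utterance resets the hesitation clock
      });
      armPauseTimer(); // offer top-level options before the first utterance
    }

The point of the sketch is that the same grammar drives both recognition and discovery: the user never has to consult documentation, because hesitating is itself the query.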