- From: Satish Sampath <satish@google.com>
- Date: Tue, 26 Oct 2010 23:17:36 +0100
- To: Dave Burke <daveburke@google.com>
- Cc: Bjorn Bringert <bringert@google.com>, Robert Brown <Robert.Brown@microsoft.com>, Michael Bodell <mbodell@microsoft.com>, Deborah Dahl <dahl@conversational-technologies.com>, Dan Burnett <dburnett@voxeo.com>, "public-xg-htmlspeech@w3.org" <public-xg-htmlspeech@w3.org>
> I agree that this requirement is problematic for hands-free and other
> usage scenarios.
> ....
> A different "how" might be that the user agent, for instance, prompts
> the user when a page wants to do speech for the first time and gives them a
> set of choices such as, for example:
> ...

I don't see prompting the user on first use as being friendly for hands-free usage either, since they still have to answer the prompt.

Users who rely on some form of input, whether it is the keyboard/mouse or a voice command interface, will continue to use it for their normal browsing and for controlling the computer. They will be able to activate speech input through the same means as they would select a regular button in any web page.

The use case I can think of for a totally hands-free scenario is a speech shell-like page set as the browser home page, so that every time the device is switched on the browser starts up and lands in this speech shell. Examples might include netbooks and mobile devices; however, these again require user input on startup to authenticate or unlock, whether by touch, keyboard or voice, and by similar means the user could activate speech input once they land in the home page.
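
To illustrate the "activate speech input like a regular button" point, here is a minimal sketch of what such a speech shell page could look like. The interface names (SpeechRecognition, onresult, start) are placeholders in the style being discussed, not part of any agreed proposal:

<!-- Hypothetical speech shell page: speech input is started through the
     same activation means as any ordinary button (click, keyboard, or an
     existing voice-command interface). API names are assumptions only. -->
<button id="speak">Speak</button>
<script>
  var SR = window.SpeechRecognition || window.webkitSpeechRecognition;
  var recognition = new SR();
  recognition.onresult = function (event) {
    // Use the top recognition hypothesis as the spoken command.
    var command = event.results[0][0].transcript;
    console.log('User said: ' + command);
  };
  document.getElementById('speak').onclick = function () {
    recognition.start();
  };
</script>

The point is that no separate first-use prompt is needed: whatever input method the user already relies on to reach and operate the page is enough to start speech input.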
Received on Tuesday, 26 October 2010 22:18:06 UTC