Re: R29. Web application may only listen in response to user action

> I agree that this requirement is problematic for hands-free and other
> usage scenarios.
> ....
> A different "how" might be that the user agent, for instance, prompts
> the user when a page wants to do speech for the first time and gives them a
> set of choices such as, for example:
> ...

I don't see prompting the user on first use as friendly for
hands-free usage either, since the user still has to answer the prompt.

Users who rely on some form of input, whether keyboard/mouse or a
voice-command interface, will continue to use it for their normal
browsing and for controlling the computer. They will be able to
activate speech input by the same means they would use to select a
regular button on any web page.

The one use case I can think of for a totally hands-free scenario is a
speech shell-like page set as the browser homepage: every time the
device is switched on, the browser starts up and lands in this speech
shell. Netbooks and mobile devices are possible examples, but these
again require user input on startup to authenticate or unlock, whether
by touch, keyboard, or voice, and the user could activate speech input
by similar means once they land on the home page.

Received on Tuesday, 26 October 2010 22:18:06 UTC