Re: Organized first draft of Use Case and Requirements Document

Hi Eric,

My replies are inline:

> I think the concern about capturing audio without the user's knowledge is
> an important issue, especially on mobile devices, where the NaturallySpeaking
> application for the iPhone ships audio to an upstream server and Nuance
> then harvests the data for their own use.  I think it's not a resolvable
> issue except by making it a local policy decision as to whether or not to
> enable microphones remotely.  People will still do it for convenience, but
> at least you gave them the decision.

As an XG, I think we should be extra cautious in terms of security and
privacy. Even if our proposal specifies that a user gesture is
required to start speech input, a UA designed with accessibility in
mind could offer a built-in option to start speech input automatically
on web pages that have speech input elements (similar to how UAs
today automatically move input focus to the first text box on the page
for keyboard input). The key point, in my mind, is that this shouldn't
be enabled by default for all end users, but only for those who
specifically enable it in their UA.

> Yes, this is far afield from the "in the cloud" application but it is the
> kind of functionality needed for a more general speech recognition
> environment. Not sure if it fits in this context so, feel free to toss my
> comments if necessary.

In my opinion, these use cases target specific, small segments of users,
and I think they are best served by dedicated applications designed for
such purposes rather than a web browser/UA. We have a better chance of
success if the most common use cases are addressed first and web apps
get to use speech input sooner rather than later, which will provide a
stepping stone for future enhancements such as these.

--
Cheers
Satish

Received on Thursday, 7 October 2010 16:54:52 UTC