Re: Organized first draft of Use Case and Requirements Document

On Fri, Oct 8, 2010 at 10:24 AM, Satish Sampath <satish@google.com> wrote:
>> That said, I’m not sure I agree with having the recognizer/speech-synthesis
>> be a browser setting. A browser setting is a user control, and users aren’t
>> going to care about the recognizer/speech-synthesis they use, they’re just
>> going to expect speech features to work. However, certain types of
>> developers are going to care about the recognizer/speech-synthesis they use,
>> and as such it makes sense for this to be an (optional) facet of the
>> markup/language.
>
> I think users will care about speech recognition the same way they
> care about having a good input device/keyboard, installing the right
> voice recognition software or selecting the right operating system
> based on their needs. And users need consistency in recognition
> across websites; it would be weird to have one website understand my
> voice perfectly fine and another have problems with the same text.

I can definitely see that developers with the resources to do so would
like to use their own speech services, and I don't think that we
should rule it out. However, I think that we should focus on solving
the browser-specified case first, because:

- We don't want to force developers to run their own speech services.

- The basic API can be the same for both models, but site-specific
speech services also require a lot of additional spec work on the
communication between the browser and the speech service.

I think that we should keep the possibility of adding site-specific
speech services open, maybe by allowing some optional
properties/arguments on whatever elements or methods we end up adding,
but I would prefer to leave the specification of this until after the
browser-specified case is solved.

-- 
Bjorn Bringert
Google UK Limited, Registered Office: Belgrave House, 76 Buckingham
Palace Road, London, SW1W 9TQ
Registered in England Number: 3977902

Received on Friday, 8 October 2010 10:12:15 UTC