
RE: Default value of SpeechRecognition.grammars

From: Young, Milan <Milan.Young@nuance.com>
Date: Wed, 20 Jun 2012 16:24:51 +0000
To: Satish S <satish@google.com>, Jerry Carter <jerry@jerrycarter.org>
CC: Hans Wennborg <hwennborg@google.com>, "public-speech-api@w3.org" <public-speech-api@w3.org>
Message-ID: <B236B24082A4094A85003E8FFB8DDC3C1A4749B4@SOM-EXCH04.nuance.com>
I'm starting to rethink the whole idea of a fuzzy default.  Having grammars move in and out of scope without some notification to the application layer would not provide a good UX.

I believe a better solution would be to provide a set of "builtin" grammars that developers can use to activate concepts like apps, contact lists, or dictation.  Not specifying a grammar would mean that the engine either uses builtin:dictation or generates an error event if that grammar cannot be loaded.  The app can manually back off if it chooses.
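A minimal sketch of the flow proposed above, i.e. request a builtin dictation grammar, receive an error event if the engine cannot load it, and back off manually at the application layer. The builtin: URI scheme, the error shape, and the stub recognizer are all hypothetical illustrations (a real UA would provide SpeechRecognition / SpeechGrammarList), not anything the spec defines:

```javascript
// Stand-in for a recognition engine so the flow can run anywhere.
// `availableBuiltins` lists the hypothetical builtin: grammars the
// engine is able to load.
function createRecognizer(availableBuiltins) {
  return {
    grammars: [],
    start(onresult, onerror) {
      for (const uri of this.grammars) {
        if (!availableBuiltins.includes(uri)) {
          // Engine cannot load the grammar: surface an error event
          // rather than silently narrowing the recognition scope.
          onerror({ error: "grammar-load-failed", grammar: uri });
          return;
        }
      }
      onresult({ transcript: "(recognized speech)" });
    },
  };
}

// Application-level backoff: try the preferred builtin grammar first;
// if the engine reports it cannot load it, retry with a narrower one.
function recognizeWithBackoff(recognizer, preferred, fallback) {
  const attempts = [];
  function tryGrammar(uri, next) {
    recognizer.grammars = [uri];
    recognizer.start(
      () => attempts.push({ grammar: uri, ok: true }),
      () => {
        attempts.push({ grammar: uri, ok: false });
        if (next) tryGrammar(next, null);
      }
    );
  }
  tryGrammar(preferred, fallback);
  return attempts;
}
```

For example, on a device whose engine only has builtin:contacts available, `recognizeWithBackoff(recognizer, "builtin:dictation", "builtin:contacts")` would record a failed attempt for dictation followed by a successful one for contacts; the key point is that the app is notified of the narrowing rather than having grammars drop out of scope silently.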

From: Satish S [mailto:satish@google.com]
Sent: Wednesday, June 20, 2012 9:14 AM
To: Jerry Carter
Cc: Young, Milan; Hans Wennborg; public-speech-api@w3.org
Subject: Re: Default value of SpeechRecognition.grammars

Shouldn't that be up to the UA to decide? One use case: if the device does not have access to a recognizer capable of dictation-lite (e.g. the recognizer is remote and the device has no network access at that moment), the UA can decide to use only a local recognizer capable of recognizing names from the contact list or installed apps and nothing else.


On Wed, Jun 20, 2012 at 5:06 PM, Jerry Carter <jerry@jerrycarter.org> wrote:
I concur that web search is inappropriate, but the specification should provide some expectation as to what the default grammar might be.

If you want the default grammar to be of any general use, it would need to support common words & phrases for the current locale.  It need not be as rich as a dedicated dictation grammar or support utterances as long as those for dictation tasks (though it could be).  But I would expect a 'dictation-lite'.

-=- Jerry

On Jun 20, 2012, at 11:53 AM, Satish S wrote:

The vast majority of web apps using the speech API wouldn't be doing web search with the result, so it would be good not to mention it in the spec.


On Wed, Jun 20, 2012 at 4:45 PM, Young, Milan <Milan.Young@nuance.com> wrote:
I also support the idea of the engine choosing behavior when no grammars are present.  But it would be nice to put in the spec a few examples of what that default might be.  Dictation and web search seem like good hints.

-----Original Message-----
From: Hans Wennborg [mailto:hwennborg@google.com]
Sent: Wednesday, June 20, 2012 8:27 AM
To: Jerry Carter
Cc: public-speech-api@w3.org
Subject: Re: Default value of SpeechRecognition.grammars

On Wed, Jun 20, 2012 at 2:15 PM, Jerry Carter <jerry@jerrycarter.org> wrote:
> Makes sense. I assume you are thinking that the default grammar should be fairly broad, e.g. a dictation grammar.

Yes, but I don't think we should specify what the default grammar should be; it should be decided by the speech recognition engine.

Received on Wednesday, 20 June 2012 16:25:25 UTC
