Re: Default value of SpeechRecognition.grammars

I think we both agree it is useful to fire events when user-specified
grammars can't be loaded, and in similar cases (e.g. the BAD_GRAMMAR error
code in the onerror event). So my proposal is that a web application that
really needs to know when grammars come in and out of scope should use
custom grammars and select an alternate grammar if one fails.

Re: the default grammar's URI - Is there agreement on using
"builtin:default"?

Cheers
Satish


On Fri, Jun 22, 2012 at 1:30 AM, Young, Milan <Milan.Young@nuance.com> wrote:

>  I can’t support an API that doesn’t allow the application layer to know
> when the underlying grammars come in and out of scope.  I put forward a
> solution that you didn’t like.  Do you have an alternative to suggest?
>
> *From:* Satish S [mailto:satish@google.com]
> *Sent:* Thursday, June 21, 2012 4:27 PM
> *To:* Young, Milan
> *Cc:* Jerry Carter; Hans Wennborg; public-speech-api@w3.org
> *Subject:* Re: Default value of SpeechRecognition.grammars
>
>
> · The application layer is given control over which recognizers are running
>
> Sounds like this is the same as the serviceURI attribute.
>
> [Milan] The current serviceURI attribute only permits a single
> recognizer.  That recognizer could *internally* be a compound
> recognizer, but for this solution to work, the application would need to
> invoke the recognizers explicitly.  For example, invoke the local
> recognizer for contacts/appointments/apps, and a remote recognizer for
> dictation or web search.
>
> This seems unrelated to the grammars attribute we are discussing here. I'm
> also not sure if you are asking for support for multiple recognizers in one
> SpeechRecognition session or if you are specifically interested in local
> recognizers. The former would complicate the API and I prefer we continue
> addressing the single-recognizer case; the latter seems out of scope, as
> it should be transparently handled by the UA.
>
> Perhaps this can be spun off into a separate thread if it needs more
> discussion.
>
> · Each recognizer publishes a single default grammar
>
> · If the default grammar is not available in its entirety, the
> recognizer/UA must generate an error.
>
> I expect the recognizer/UA to return an error if it can't recognize for
> any reason, including network issues or lack of access to other resources.
> SpeechRecognition.onerror
> <http://dvcs.w3.org/hg/speech-api/raw-file/tip/speechapi.html#dfn-onerror>
> is the event raised in that case. Perhaps what you are looking for is to
> add an error code here, such as RECOGNIZER_ERROR, indicating that for some
> reason the recognizer can't function as expected.
>
> [Milan] Glad we are in agreement.  I just want to make clear in the spec
> that a recognizer cannot automatically change its default grammar in
> response to an error condition (e.g. remove dictation).  If you are OK with
> this, I will suggest text.
>
> I think it is out of scope for the Web Speech API to place conditions on
> how a recognizer should and should not select grammars, or on other
> internal details. In many cases the recognizer may be provided by the
> platform and can't be changed by the UA.
>
> Please don’t forget about my other request:
>
> · The default grammar must be addressable in URI format should the
> developer want to explicitly invoke the default.  For example:
> builtin:dictation and builtin:contacts.
>
> A URI like builtin:default seems appropriate here. If the web app needs
> specific capabilities, it should set one of the other builtin URIs, such as
> the ones mentioned above (assuming we define a set of builtin grammars).
>

Received on Friday, 22 June 2012 08:45:44 UTC