Re: Default value of SpeechRecognition.grammars

>
> ·       The application layer is given control over which recognizers are
> running
>
> Sounds like this is the same as the serviceURI attribute.
>
>
> [Milan] The current service URI attribute only permits a single
> recognizer.  That recognizer could **internally** be a compound
> recognizer, but for this solution to work, the application would need to
> invoke the recognizers explicitly.  For example, invoke the local
> recognizer for contacts/appointments/apps, and a remote recognizer for
> dictation or websearch.
>

This seems unrelated to the grammars attribute we are discussing here. I'm
also not sure whether you are asking for support for multiple recognizers in
one SpeechRecognition session or are specifically interested in local
recognizers. The former would complicate the API, and I prefer we continue
addressing the single-recognizer case; the latter seems out of scope, as it
should be handled transparently by the UA.

Perhaps this can be spun off into a separate thread if it needs more
discussion.
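To make the distinction concrete: under the current single-serviceURI design, app-level coordination of a local and a remote recognizer would mean running separate SpeechRecognition sessions, one per recognizer. A rough sketch (the URIs and the two-session arrangement are illustrative assumptions, not spec behavior):

```javascript
// Sketch only: shows why app-level recognizer selection implies multiple
// SpeechRecognition sessions under the current single-serviceURI design.
// The URIs below are illustrative, not defined by the draft spec.
function createRecognizers(SpeechRecognitionCtor) {
  const local = new SpeechRecognitionCtor();
  local.serviceURI = 'builtin:contacts';         // hypothetical local recognizer

  const remote = new SpeechRecognitionCtor();
  remote.serviceURI = 'https://example.com/asr'; // hypothetical remote recognizer

  return { local, remote };
}
```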

>
> ·       Each recognizer publishes a single default grammar
>
> ·       If the default grammar is not available in its entirety, the
> recognizer/UA must generate an error.
>
> I expect the recognizer/UA to return an error if it can't recognize for
> any reason, including network issues or lack of access to other resources.
> SpeechRecognition.onerror<http://dvcs.w3.org/hg/speech-api/raw-file/tip/speechapi.html#dfn-onerror> is
> the event raised in that case. Perhaps what you are looking for is to add
> an error code here, such as RECOGNIZER_ERROR, indicating that for some
> reason the recognizer can't function as expected.
>
>
> [Milan] Glad we are in agreement.  I just want to make clear in the spec
> that a recognizer cannot automatically change its default grammar in
> response to an error condition (e.g. removing dictation).  If you are OK
> with this, I will suggest text.
>

I think it is out of scope for the Web Speech API to place conditions on
how a recognizer selects grammars, or on other internal details. In many
cases the recognizer is provided by the platform and can't be changed by
the UA.
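On the error-reporting point above: from the page's perspective, any such failure would surface through the existing onerror event, and a new code like the proposed RECOGNIZER_ERROR would just be one more value to dispatch on. A sketch of the handler side (the 'recognizer-error' string is hypothetical, standing in for the proposed code; the other strings are illustrative):

```javascript
// Sketch: dispatching on a speech recognition error event.
// 'recognizer-error' is a hypothetical code standing in for the proposed
// RECOGNIZER_ERROR; it is not in the current draft.
function describeRecognitionError(event) {
  switch (event.error) {
    case 'network':
      return 'network failure while contacting the recognizer';
    case 'recognizer-error':            // hypothetical new code
      return 'recognizer cannot function as expected';
    default:
      return 'unrecognized error: ' + event.error;
  }
}
```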


>
> Please don’t forget about my other request:
>
> ·       The default grammar must be addressable in URI format should
> the developer want to explicitly invoke the default.  For example:
> builtin:dictation and builtin:contacts.
>
A URI like builtin:default seems appropriate here. If the web app needs
specific capabilities, it should set one of the other builtin URIs, such as
the ones mentioned above (assuming we define a set of builtin grammars).
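If such builtin URIs were defined, a page could ask for a specific builtin grammar and fall back to the default explicitly. A sketch (every builtin: URI here, including builtin:default, is part of the proposal, not the current draft):

```javascript
// Sketch: picking a builtin grammar URI, falling back to the proposed
// builtin:default when no specific capability is requested.
// None of these URIs are defined in the current draft spec.
function selectBuiltinGrammar(capability) {
  const known = ['dictation', 'contacts'];   // assumed set of builtin grammars
  return known.includes(capability)
    ? 'builtin:' + capability
    : 'builtin:default';
}
```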

Received on Thursday, 21 June 2012 23:27:41 UTC