
Multiple recognition engines in parallel

From: Young, Milan <Milan.Young@nuance.com>
Date: Fri, 22 Jun 2012 00:52:53 +0000
To: "public-speech-api@w3.org" <public-speech-api@w3.org>
Message-ID: <B236B24082A4094A85003E8FFB8DDC3C1A47540C@SOM-EXCH04.nuance.com>
The subject of running multiple recognition engines in parallel has recently come up for discussion, first on the EMMA thread (Jerry has a good summary at [1]) and later on the default grammar thread [2].

I suggest that the main use case for running multiple recognizers is to support local and remote recognition in the same session.  This is because: A) several remote engines can usually be proxied under a single remote engine if necessary, and B) local engines can already access remote resources if needed.
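To make the use case concrete, here is a minimal sketch of what a page running a local and a remote recognizer side by side might look like. This is purely illustrative: the `serviceURI` attribute reflects the draft spec's engine-selection mechanism, but the stub class, the URL, and the confidence-based arbitration are assumptions of mine, and the stub stands in for the browser's SpeechRecognition implementation so the merging logic can run anywhere.

```javascript
// Stand-in for the browser's SpeechRecognition; results here are canned
// so the arbitration logic below is runnable outside a browser.
class StubRecognition {
  constructor(label) {
    this.label = label;
    this.onresult = null;
  }
  start() {
    // A real engine would deliver results asynchronously from audio.
    if (this.onresult) {
      this.onresult({
        transcript: "hello",
        confidence: this.label === "remote" ? 0.9 : 0.6,
      });
    }
  }
}

// One local (default) engine and one remote engine in the same session.
const local = new StubRecognition("local");
const remote = new StubRecognition("remote");
// In the draft spec, serviceURI selects the engine; URL is hypothetical:
// remote.serviceURI = "https://example.com/recognize";

// Simple arbitration: keep whichever engine reports higher confidence.
let best = null;
function takeBest(result) {
  if (best === null || result.confidence > best.confidence) {
    best = result;
  }
}
local.onresult = takeBest;
remote.onresult = takeBest;

local.start();
remote.start();
console.log(best.transcript, best.confidence); // remote wins in this stubbed run
```

The point of the sketch is that once two engines can be active at once, the application (or the spec) has to define how their result streams are reconciled, which is part of what makes parallel engines a spec-level question.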

Satish has suggested that specifying multiple engines in parallel would complicate the spec [3].  While I agree with his statement, I believe this is one of the most important areas we need to sort out before entering a WG.  How do the other folks in the community feel about this?


[1] http://lists.w3.org/Archives/Public/public-speech-api/2012Jun/0142.html
[2] http://lists.w3.org/Archives/Public/public-speech-api/2012Jun/0172.html
[3] http://lists.w3.org/Archives/Public/public-speech-api/2012Jun/0175.html
Received on Friday, 22 June 2012 00:53:23 UTC
