RE: Confidence property

From: Young, Milan <Milan.Young@nuance.com>
Date: Sat, 2 Jun 2012 00:20:14 +0000
To: Glen Shires <gshires@google.com>
CC: Satish S <satish@google.com>, "public-speech-api@w3.org" <public-speech-api@w3.org>
Message-ID: <B236B24082A4094A85003E8FFB8DDC3C1A45EF88@SOM-EXCH04.nuance.com>
Glen, it's clear that you put a lot of thought into trying to come up with a compromise.  I appreciate the effort.

My contention, however, is that this new mechanism for manipulating confidence is just as recognizer-dependent as the much simpler mechanism of just setting the value.  All you have done is precisely define a new term using existing terminology that has no precise definition.  An "adjustment" of 0.3 doesn't have any more of a grounded or recognizer-independent meaning than a "threshold" of 0.3.

Furthermore, you've introduced yet another parameter to jiggle, and this will cause all sorts of headaches during the tuning phase.  That's because the engine, logged results, and training tools will all be based on absolute confidence thresholds, and the user will need to figure out how to map those absolute thresholds onto the relative scale.  And they still need to perform this exercise independently for each engine.

One of the things I do like about your proposal is that it circumvents the need to read the confidence threshold before setting it in incremental mode.  But this could just as easily be accomplished with syntax such as recognizer.confidence = "+.1".  If I added such a plus/minus prefix to my previous proposal would you be satisfied?
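To make the suggestion concrete, here is a minimal sketch of how such a plus/minus prefix might behave; the function name and clamping are hypothetical, not part of any proposal in this thread:

```javascript
// Hypothetical sketch: a string beginning with "+" or "-" adjusts the
// current threshold relatively; anything else sets it absolutely.
// The 0.0 - 1.0 clamp assumes the absolute confidence scale.
function applyConfidence(current, value) {
  if (typeof value === "string" && (value[0] === "+" || value[0] === "-")) {
    // Relative adjustment: the developer never needs to read the
    // threshold first; the delta is applied to whatever it currently is.
    return Math.min(1, Math.max(0, current + parseFloat(value)));
  }
  return parseFloat(value); // absolute set
}
```

With this, recognizer.confidence = "+.1" could be handled without any round-trip to read the current value first.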


From: Glen Shires [mailto:gshires@google.com]
Sent: Friday, June 01, 2012 9:01 AM
To: Young, Milan
Cc: Satish S; public-speech-api@w3.org
Subject: Re: Confidence property

I propose the following definition:

attribute float confidenceThresholdAdjustment;

- confidenceThresholdAdjustment attribute - This attribute defines a relative threshold for rejecting recognition results based on the estimated confidence score that they are correct.  The value of confidenceThresholdAdjustment ranges from -1.0 (least confidence) to 1.0 (most confidence), with 0.0 mapping to the default confidence threshold as defined by the recognizer. confidenceThresholdAdjustment is monotonically increasing, such that larger values will return an equal or smaller number of results than lower values.  (Note that the confidence scores reported within the SpeechRecognitionResult and within the EMMA results use a 0.0 - 1.0 scale, and the correspondence between these scores and confidenceThresholdAdjustment may vary across UAs, recognition engines, and even from task to task.) Unlike maxNBest, there is no defined mapping between the value of the threshold and how many results will be returned.

This definition has these advantages:

For web developers, it provides flexibility and simplicity in a recognizer-independent manner. It covers the vast majority of the ways in which developers use confidence values:

- Developers can easily adjust the threshold for certain tasks. For example, to confirm a transaction, the developer may increase the threshold to be more stringent than the recognizer's default, e.g. confidenceThresholdAdjustment = 0.3

- Developers can adjust the threshold based on prior usage. For example, if not getting enough (or any) results, the developer may bump down the threshold to be more lenient, e.g.: confidenceThresholdAdjustment -= 0.1 (Developers should ensure they don't underflow/overflow the -1.0 - 1.0 scale.)

- Developers can perform their own processing of the results by comparing confidence scores in the normal manner.  (The confidence scores in the results use the recognizer's native scale, so they are not mapped or skewed and so relative comparisons are not affected by "inflated" or "deflated" ranges.)
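The three usage patterns above can be sketched as follows; the recognition object here is a stand-in for a SpeechRecognition instance, assuming only the proposed confidenceThresholdAdjustment attribute:

```javascript
// Stand-in for a SpeechRecognition object with the proposed attribute.
const recognition = { confidenceThresholdAdjustment: 0.0 };

// 1) Stricter matching for a confirmation task:
recognition.confidenceThresholdAdjustment = 0.3;

// 2) Too few results? Bump the threshold down, clamping to -1.0..1.0:
recognition.confidenceThresholdAdjustment =
    Math.max(-1.0, recognition.confidenceThresholdAdjustment - 0.1);

// 3) Post-process results on the recognizer's native confidence scale
//    (these sample results are made up for illustration):
const results = [{ transcript: "yes", confidence: 0.92 },
                 { transcript: "chess", confidence: 0.41 }];
const best = results.reduce((a, b) => (a.confidence >= b.confidence ? a : b));
```

Note that pattern 3 works unchanged regardless of how the engine maps the adjustment internally, since the result scores are never skewed.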

It provides clear semantics that are recognizer-independent:

- It avoids all latency and asynchrony issues. The UA does not have to query the [potentially remote] recognizer for its default threshold value before returning a value when this JavaScript attribute is read. Instead, the UA maintains the value of this attribute, and simply sends it to the recognizer along with the recognition request.

- It avoids all issues of threshold values changing due to changes in the selected recognizer, task, or grammar.

- It allows recognition engines the freedom to define any mapping that is appropriate, and use any internal default threshold value they choose (which may vary from engine to engine and/or from task to task).
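The UA-side behavior described in the first point can be sketched as below; the class and request shape are hypothetical illustrations, not spec text:

```javascript
// Sketch: the UA holds the attribute locally, so reads are synchronous
// and never block on the network; the value only travels to the engine
// when recognition actually starts.
class UARecognizer {
  constructor() {
    this.confidenceThresholdAdjustment = 0.0; // kept in the UA, no round-trip
  }
  start() {
    // The adjustment rides along with the recognition request; the engine
    // maps it onto its own internal threshold scale however it likes.
    return { request: "recognize",
             adjustment: this.confidenceThresholdAdjustment };
  }
}
```

Because the value is purely relative, the UA never needs to know the engine's internal default.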

The one drawback is that the confidenceThresholdAdjustment mapping may "require significant skewing of the range" and "squeeze" and "inflate". However, I see this as a minimal disadvantage, particularly when weighed against all the advantages above.

Earlier in this thread we looked at four different options [1]. This solution is a variation of option 1 in that list. All the other options in that list have significant drawbacks:

Option 2) Let speech recognizers define the default: has these disadvantages:

- If a new recognizer is selected, its default threshold needs to be retrieved, an operation that may have latency. If the developer then reads the confidenceThreshold attribute, the read can't stall until the threshold is retrieved. Fixing this requires defining an asynchronous event to indicate that the confidenceThreshold value is now available to be read. All very messy for both the web developer and the UA implementer.

- The semantics are unclear and recognizer-dependent. If the developer sets confidenceThreshold = 0.4, then selects a new recognizer (or perhaps a new task or grammar), does the confidenceThreshold change? If so, when, and how does the developer know to what value - does it get reset to the recognizer's default? If not, what does 0.4 now mean in this new context?
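A toy model of the read problem Option 2 creates; all names here are invented for illustration:

```javascript
// Toy model: under Option 2 the threshold is only known after an
// asynchronous round-trip to the engine, so a synchronous read must
// either stall or return a placeholder value.
function makeRecognizer() {
  let threshold = null; // unknown until the remote engine responds
  return {
    get confidenceThreshold() { return threshold; }, // may be null!
    // Simulates the engine's default arriving over the network:
    _receiveDefaultFromEngine(value) { threshold = value; },
  };
}

const rec = makeRecognizer();
const before = rec.confidenceThreshold;   // null: nothing to increment yet
rec._receiveDefaultFromEngine(0.5);       // simulated network response
const after = rec.confidenceThreshold;    // only now readable
```

Until the simulated response arrives, any attempt to increment or compare the threshold operates on nothing, which is exactly the messiness described above.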

Option 3) Make it write-only (not readable): has these disadvantages:

- A developer must write recognizer-dependent code. Since he can't read the value, he can't increment/decrement it, so he must blindly set it. He must know what setting confidenceThreshold = 0.4 means for the current recognizer.

Thus I propose the solution above, with its many advantages and only a minor drawback.

[1] http://lists.w3.org/Archives/Public/public-speech-api/2012Apr/0051.html

On Wed, May 23, 2012 at 3:56 PM, Young, Milan <Milan.Young@nuance.com> wrote:
>> The benefit of minimizing deaf periods is therefore again recognizer specific

Most (all?) of the recognition engines that can be embedded within an HTML browser currently operate over a network.  In fact, if you study the use cases, you'd find that the majority of those transactions are over a 3G network, which is notoriously high-latency.

It's possible that this may begin to change over the next few years, but it surely won't happen within the lifetime of our 1.0 spec (at least I hope we can come to agreement before then :)).  Thus the problem can hardly be called engine-specific.

Yes, the semantics are unclear, but that wouldn't be any different from the quasi-standard that would undoubtedly emerge in the absence of a specification.

From: Satish S [mailto:satish@google.com]
Sent: Wednesday, May 23, 2012 6:28 AM
To: Young, Milan
Cc: public-speech-api@w3.org
Subject: Re: Confidence property

Hi Milan,

Summarizing previous discussion, we have:
  Pros:  1) Aids efficient application design, 2) minimizes deaf periods, 3) avoids a proliferation of semi-standard custom parameters.
  Cons: 1) Semantics of the value are not precisely defined, and 2) Novice users may not understand how confidence differs from maxNBest.

My responses to the cons are: 1) Precedent from the speech industry, and 2) Thousands of VoiceXML developers do understand the difference and will balk at an API that does not accommodate their needs.

This was well debated in the earlier thread, and it is clear that confidence threshold semantics are tied to the recognizer (not portable). The benefit of minimizing deaf periods is therefore again recognizer-specific and not portable. This is a well-suited use case for custom parameters, and I'd suggest we start with that.

Thousands of VoiceXML developers do understand the difference and will balk at an API that does not accommodate their needs.

I hope we aren't trying to replicate VoiceXML in the browser. If it is indeed a must-have feature for web developers, we'll be receiving requests for it from them very soon, so it would be easy to discuss and add it in the future.
Received on Saturday, 2 June 2012 00:20:47 GMT