Re: Agreed recognition API?

From: Eric S. Johansson <esj@harvee.org>
Date: Thu, 19 May 2011 21:12:15 -0400
Message-ID: <4DD5BFEF.3060409@harvee.org>
To: public-xg-htmlspeech@w3.org
On 5/19/2011 11:39 AM, Olli Pettay wrote:
> On 05/19/2011 05:59 PM, Bjorn Bringert wrote:
>> By now the draft final report
>> (http://www.w3.org/2005/Incubator/htmlspeech/live/NOTE-htmlspeech.html)
>> contains a number of design agreements for the JavaScript API for
>> speech recognition. I thought it would be a useful exercise to
>> translate those agreements into a concrete API.
>> The below IDL describes my interpretation of the parts of the API that
>> we have agreed on so far. Many of the interface/function/attribute
>> names are not yet agreed, so I mixed and matched from the Microsoft,
>> Mozilla and Google proposals.
>> interface SpeechInputRequest {
>>     // URL (http: or data:) for an SRGS XML document, with or without SISR tags,
>>     // or a URI for one of the predefined grammars
>>     attribute DOMString grammar;
> I think we need to support either multiple simultaneous grammars or
> SIRs. MS has GrammarCollection, so it supports multiple grammars; the
> SpeechRequest API supports multiple active recognition objects.

I also suggest giving the user the ability to inject their own grammar, code, and 
CSS into the application. I believe this capability is critical to 
accessibility because, in my opinion, accessibility is defined by what the user 
needs, not by what a vendor is willing to give them. What the user needs is 
something no vendor can afford to create, which explains a lot about the current 
state of accessibility.
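One hedged sketch of how user injection might work: the user agent merges user-supplied grammar URIs (say, from browser settings or an extension) with the page's own, with the user's grammars taking precedence. The function and file names here are hypothetical:

```javascript
// Illustrative only: merge user-injected grammar URIs with the page's
// grammars, user grammars first so they win during recognition.
function mergeGrammars(pageGrammars, userGrammars) {
  return [...userGrammars, ...pageGrammars];
}

const pageGrammars = ["http://example.com/app-commands.grxml"];
const userGrammars = ["file:///home/user/my-commands.grxml"];
console.log(mergeGrammars(pageGrammars, userGrammars));
```

Whether such a merge belongs in the API or purely in the user agent is exactly the kind of question the group would need to settle.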

--- eric
Received on Friday, 20 May 2011 01:13:36 UTC