Re: Agreed recognition API?

On 5/19/2011 11:39 AM, Olli Pettay wrote:
> On 05/19/2011 05:59 PM, Bjorn Bringert wrote:
>> By now the draft final report
>> (http://www.w3.org/2005/Incubator/htmlspeech/live/NOTE-htmlspeech.html)
>> contains a number of design agreements for the JavaScript API for
>> speech recognition. I thought it would be a useful exercise to
>> translate those agreements into a concrete API.
>>
>> The below IDL describes my interpretation of the parts of the API that
>> we have agreed on so far. Many of the interface/function/attribute
>> names are not yet agreed, so I mixed and matched from the Microsoft,
>> Mozilla and Google proposals.
>>
>> interface SpeechInputRequest {
>>     // URL (http: or data:) for an SRGS XML document, with or without SISR tags,
>>     // or a URI for one of the predefined grammars
>>     attribute DOMString grammar;
>
> I think we need to support either multiple simultaneous grammars or
> multiple SpeechInputRequests (SIRs). MS has GrammarCollection, so it
> supports multiple grammars; the SpeechRequest API supports multiple
> active recognition objects.
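
To make those two options concrete, here is a rough sketch of what a
multi-grammar variant of the interface above could look like. None of the
names below (SpeechGrammar, src, weight, grammars) are agreed; they are just
placeholders mixing the MS GrammarCollection idea with the single-attribute
style of Bjorn's IDL:

   // Sketch only: one request holding several weighted grammars,
   // roughly in the spirit of the MS GrammarCollection proposal.
   interface SpeechGrammar {
       // URL (http: or data:) for an SRGS XML document, with or without
       // SISR tags, or a URI for one of the predefined grammars
       attribute DOMString src;
       // relative weight of this grammar among the active ones
       attribute float weight;
   };

   interface SpeechInputRequest {
       // replaces the single "grammar" attribute above
       attribute SpeechGrammar[] grammars;
       ...
   };

The other direction, several active SpeechInputRequest objects each carrying
one grammar, would not need an IDL change so much as a guarantee that more
than one request can be started at a time.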

I also suggest giving the user the ability to inject their own grammar, code, 
and CSS into the application. I believe this capability is critical to 
accessibility because, in my opinion, accessibility is defined by what the user 
needs, not by what a vendor is willing to give them. What the user needs is 
something no vendor can afford to create, which explains a lot about the 
current state of accessibility.

--- eric

Received on Friday, 20 May 2011 01:13:36 UTC