- From: Bjorn Bringert <bringert@google.com>
- Date: Tue, 1 Feb 2011 11:06:14 -0800
- To: "Young, Milan" <Milan.Young@nuance.com>
- Cc: Olli@pettay.fi, public-xg-htmlspeech@w3.org
I think that the default speech services need a direct API, with an
implementation-specific interface between the browser and speech service
(e.g. Microsoft SAPI, some vendor-specific network protocol, or embedding
the speech service directly into the browser). Requiring default services
to implement some protocol would add a lot of complexity and constraints
to the design. In the case of local default speech services, it would
also increase the attack surface by exposing the speech services to the
network unnecessarily.

/Bjorn

On Tue, Feb 1, 2011 at 10:55 AM, Young, Milan <Milan.Young@nuance.com> wrote:
> I agree that it is important to keep the default and network APIs as
> consistent as possible. Consider the scenario where the application
> requests the remote service, but it is unavailable. Ideally, the code
> simply modifies a couple of variables and proceeds with the default
> services.
>
> I also agree with the somewhat conflicting goal of using existing web
> technologies (like WebSockets) to connect with the remote speech
> services. I don't think it's a good use of our time to re-invent the
> proverbial wheel.
>
> What do folks think about exposing local services via protocol? This
> would meet both of the above considerations.
>
> Thanks
>
>
> -----Original Message-----
> From: public-xg-htmlspeech-request@w3.org
> [mailto:public-xg-htmlspeech-request@w3.org] On Behalf Of Olli Pettay
> Sent: Tuesday, February 01, 2011 4:40 AM
> To: Bjorn Bringert
> Cc: public-xg-htmlspeech@w3.org
> Subject: Re: Proposal categories
>
> On 01/31/2011 11:13 PM, Bjorn Bringert wrote:
>> Here are the things that I would personally like to see proposals for,
>> in my priority order (high to low):
>>
>> 1. Specify simple APIs for speech recognition and speech synthesis
>> using speech service implementations provided by the browser or
>> platform ("default speech services" in our requirements terminology).
>>
>> 2. Work with other groups (e.g. RTC-Web) to add a general mechanism
>> for audio streaming with the features needed for speech recognition.
>>
>> 3. Enhance existing and proposed audio playback APIs (such as HTML
>> <audio> and the proposed JS audio APIs) to work for TTS from web-app
>> specified network speech synthesizers.
>>
>> What do you think of this division?
>
> In general, I like it.
>
>> Who is planning to submit
>> proposals in what categories?
>
> What I currently have in mind is close to category (1), though with an
> addition to enable remote speech services reasonably easily (from an
> API point of view), possibly in V2. But indeed, I should think more
> about how remote speech services could be supported using common web
> technologies while still keeping the API sane and consistent with the
> local-speech-resources API.
>
>
> -Olli

--
Bjorn Bringert
Google UK Limited
Registered Office: Belgrave House, 76 Buckingham Palace Road, London, SW1W 9TQ
Registered in England Number: 3977902
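[Editor's note: Milan's fallback scenario — "the code simply modifies a couple of variables and proceeds with the default services" — can be sketched in plain JavaScript. No concrete API existed at the time of this thread, so the `createRecognizer` function, the `serviceURI` property, and the use of `null` to select the default service are all hypothetical illustrations, not part of any proposal in the thread:]

```javascript
// Hypothetical sketch: fall back from a web-app-specified remote speech
// service to the browser's default service when the remote one is
// unavailable. All names here are illustrative assumptions.
function createRecognizer(remoteURI, remoteAvailable) {
  const recognizer = { serviceURI: remoteURI, usingDefault: false };
  if (!remoteAvailable) {
    // Remote service unreachable: "modify a couple of variables and
    // proceed with the default services" (in this sketch, a null
    // serviceURI selects the browser/platform default).
    recognizer.serviceURI = null;
    recognizer.usingDefault = true;
  }
  return recognizer;
}
```

[The point of the sketch is that a consistent API surface lets the fallback be a data change rather than a code-path change, which is the consistency property Milan argues for.]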
Received on Tuesday, 1 February 2011 19:06:46 UTC