W3C

HTML Speech Web API

Non-standards-track internal editor's draft of the Web API proposal, 8 September 2011

This Version:
insert link
Previous Version:
here
Editors:
Michael Bodell, Microsoft
Deborah Dahl, Invited Expert
Dan Druta, AT&T
Charles Hemphill, Everspeech
Olli Pettay, Mozilla
Björn Bringert, Google
Note

I added everyone who sent in information to the mailing list from the webapi subgroup division of tasks. If people didn't send in information, I didn't add them as editors, although everyone in the group is mentioned in the acknowledgements section.


Abstract

This proposed API represents the web API for doing speech in HTML. This proposal is the HTML bindings and JS functions that sit on top of the protocol work that is also being proposed by the HTML Speech Incubator Group. This includes:

The section on Design Decisions [DESIGN] covers the design decisions the group agreed to that helped direct this API proposal.

The section on Requirements and Use Cases [REQ] covers the motivation behind this proposal.

This API is designed to be used in conjunction with other APIs and elements on the web platform, including APIs to capture input and APIs to do bidirectional communications with a server (WebSockets).

Status of this Document

This section describes the status of this document at the time of its publication. Other documents may supersede this document. A list of current W3C publications and the latest revision of this technical report can be found in the W3C technical reports index at http://www.w3.org/TR/.

This document is the 8 September 2011 Editor's Draft of the HTML Speech Web API proposal. It is not a web standards track document and does not define a web standard. This proposal, or one similar to it, is likely to be included in the Incubator Group's final report, along with the Requirements, Design Decisions, and the Protocol proposal. The hope is that an official web standards group will develop a web standard based on all of these inputs.

This document is produced by the HTML Speech Incubator Group.

This document being an Editor's Draft does not imply endorsement by the W3C Membership, nor necessarily by the membership of the HTML Speech Incubator Group. It is intended to reflect and collate previous discussions and proposals that have taken place on the public email alias and in group teleconferences. This is a draft document and may be updated, replaced or obsoleted by other documents at any time. It is inappropriate to cite this document as other than work in progress.

Table of Contents

1. Introduction

Web applications should have the ability to use speech to interact with users. That speech could be output through synthesized speech, or input through the user speaking to fill in form items, to control page navigation, or for many of the other collected use cases. A web application author should be able to add speech to a web application using methods familiar to web developers, without requiring extensive specialized speech expertise. The web application should build on existing W3C web standards and support a wide variety of use cases. The web application author should have the flexibility to control the recognition service the web application uses, but should not be obligated to support any particular service. This proposal defines the basic representations for how to use grammars, parameters, and recognition results and how to process them. The interfaces and API defined in this proposal can be used with other interfaces and APIs exposed to the web platform.

Note that privacy and security concerns exist around allowing web applications to do speech recognition. User agents should make sure that end users are aware that speech recognition is occurring, and that the end users have given informed consent for this to occur. The exact mechanism of consent is user agent specific, but the privacy and security concerns have shaped many aspects of the proposal.

Example

In the example below, the speech API is used to perform a basic speech web search.

Speech Web Search
To do

insert examples
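The following is a minimal illustrative sketch of what such an example might look like, using the reco element and SpeechInputRequest interface defined later in this proposal. The search form, element IDs, and search endpoint are placeholders, and the submit-on-result behavior relies on the default action described in section 5.

    <form action="http://example.com/search">
        <input type="text" id="q" name="q">
        <reco for="q">Speak your search</reco>
    </form>
    <script>
        var reco = document.getElementsByTagName("reco")[0];
        var request = reco.request;          // default request set up by the user agent
        request.setmaxnbest(1);
        request.setlanguage("en-US");
        request.onresult = function (event) {
            // The default action fills the associated control ("q");
            // once a final result arrives, submit the search form.
            document.forms[0].submit();
        };
        request.onerror = function (event) {
            // event.error is a SpeechInputError (see section 7.4)
            console.log("Speech input error, code " + event.error.code);
        };
    </script>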

2. Conformance

Everything in this proposal is informative since this is not a standards track document. However, RFC2119 normative language is used where appropriate to aid future work should this proposal move onto a standards track.

The keywords "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "RECOMMENDED", "MAY" and "OPTIONAL" in this document are to be interpreted as described in Key words for use in RFCs to Indicate Requirement Levels [RFC2119].

3. The Speech Service Interface

The Speech Service interface represents the API to query and bind to the underlying speech service.

Reference

This section is based on Dan Druta's proposal and the meeting discussion.

IDL

    interface SpeechService {
        // attributes
        attribute unsigned short serviceType;
        attribute DOMString serviceURI;
        attribute DOMString serviceName;


        // types
        const unsigned short TTS = 1;
        const unsigned short ASR = 2;

        // methods for creating requests
        SpeechInputRequest createSpeechInputRequest();
        SpeechOutputRequest createSpeechOutputRequest();
    };
    

Web applications get SpeechServices by using the SpeechServiceQuery interface. There can be multiple SpeechServices present in a given web application. The lifecycle of a SpeechService follows normal JS garbage collection models.

3.1. Attributes

serviceType
A bitfield that represents what sort of speech service capabilities are desired and supported.

The serviceType attribute can take one of three values: TTS alone, ASR alone, or both flags combined. On getting, the serviceType attribute MUST return a bitfield representing the types that the service supports, with the individual flags defined as follows:

TTS (numeric value 1)
The service need only support speech synthesis.
ASR (numeric value 2)
The service need only support speech recognition.
serviceURI
Represents the URI of the speech engine.
serviceName
Represents the name of the service as it can be presented to the user.

3.2. Methods

The createSpeechInputRequest method
Method that creates an object with the speech input request interface.
The createSpeechOutputRequest method
Method that creates an object with the speech output request interface.
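For illustration only, a short sketch of how a web application might inspect a SpeechService delivered by a speech service query and create a recognition request from it; the onService helper name is an assumption, not part of this proposal.

    function onService(service) {                    // called with a SpeechService
        // serviceType is a bitfield of the TTS and ASR flags
        if (service.serviceType & service.ASR) {
            var request = service.createSpeechInputRequest();
            request.setlanguage("en-US");
            // ... add grammars and event handlers (see section 7) ...
        }
        console.log("Using " + service.serviceName + " at " + service.serviceURI);
    }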
Question:
We talked about an isAuthorized method on either the service or the request. What about a ternary readonly authorized attribute that is one of NOT_AUTHORIZED | UNKNOWN | AUTHORIZED? I think we need UNKNOWN for the case where the UA wants to prompt the user for permission at request time. Is it separate for reco and TTS? Or should it be on the requests?

4. Speech Service Query Interface

The Speech Service Query Interface provides the developer with a runtime capability to query the service, obtain specific information about the features it implements, and allow the developer to implement a deterministic and satisfying user experience. This lets a web application find out if speech recognition is supported before displaying a microphone button to do recognition. It also allows the web application author to query about specific pieces of information (such as whether a certain language or a certain service is supported). This is used in the same way, to ensure the UI doesn't suggest a mode of input that is going to fail.

Reference

This section is based on Dan Druta's proposal and the meeting discussion.

IDL

    interface SpeechServiceQuery {
        // methods
        void speechServiceQuery(in speechServiceCallback serviceCB,
                                optional speechServiceErrorCallback errorCB,
                                optional QueryOptions options);
    };

    interface Criteria {

Question:
Nothing yet specified for how these filters or criteria are specified. The criteria we want to support are services by URI, restrictions on languages, grammars, service type (ASR, TTS, etc.), audio codecs, and possibly something about authorization. We may want a separate authorization CB in the Query as well.

    };

    interface QueryOptions {
        // attributes
        attribute unsigned short timeout;    // 0 means forever
        optional Criteria filter;

Question:
Not sure what other options we need; if it were just timeout we wouldn't need a class for it.

    };

    [Callback=FunctionOnly, NoInterfaceObject]
    interface speechServiceCallback {
        void handleEvent(in SpeechService speechService);
    };

    [Callback=FunctionOnly, NoInterfaceObject]
    interface speechServiceErrorCallback {
        void handleEvent(in SpeechInputError error);
    };
Window implements SpeechServiceQuery;

4.1. Attributes

timeout
The timeout attribute of QueryOptions defines how long the query should take before timing out. A value of 0 means no timeout.
Question:
What is the units? milliseconds? Seconds?

4.2. Methods

The speechServiceQuery method
This is the method that may be called to see if a speech service satisfies a set of filter criteria. If it does, the speechServiceCallback will be called and passed a SpeechService object. If there is an issue, the speechServiceErrorCallback will be called.
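A minimal sketch of invoking the query from a page. Passing QueryOptions as a simple object literal is an assumption, the (still unspecified) filter criteria are omitted, the timeout units are an open question above, and startRecognition is a hypothetical application function.

    window.speechServiceQuery(
        function (service) {                         // speechServiceCallback
            startRecognition(service);               // hypothetical application function
        },
        function (error) {                           // speechServiceErrorCallback
            console.log("No suitable speech service: " + error.message);
        },
        { timeout: 5000 }                            // QueryOptions; timeout units still open
    );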
Questions:
What happens if a service doesn't meet the criteria? Is that an error? Can you pass a URI of the service in the criteria? Somewhere here we need the bit about how there is a possible security issue if you ask about the supported languages, and how we wanted "maybe" to be a possible answer from the query. And again, how does this play with the reco element? Can you query the reco element, or do you query the window and then assume the reco element is the same (or set the reco element's serviceURI to the one you get from the window)? Also, is the error callback the correct one here? That error class is our only one currently, but it is based off the Request interface.

5. Reco Element

The reco element is the way to do speech recognition using markup bindings. The reco element is legal wherever phrasing content is expected, and can contain any phrasing content, but with no descendant recoable elements unless that descendant is the element's reco control, and no descendant reco elements.

Reference

This section is based on Michael Bodell's proposal and the meeting discussion.

IDL

  [NamedConstructor=Reco(),
  NamedConstructor=Reco(in DOMString for)]
    interface HTMLRecoElement : HTMLElement {
        // Attributes
        readonly attribute HTMLFormElement? form;
        attribute DOMString htmlFor;
        readonly attribute HTMLElement? control;
        attribute SpeechInputRequest request;
        attribute DOMString serviceURI;
    };

          

The reco element represents a speech input in a user interface. The speech input can be associated with a specific form control, known as the reco element's reco control, either by using the for attribute or by putting the form control inside the reco element itself.

Except where otherwise specified by the following rules, a reco element has no reco control.

Some elements are categorized as recoable elements; these are the elements that can be associated with a reco element.

The reco element's exact default presentation and behavior, in particular what its activation behavior might be and what implicit grammars might be defined, if any, are unspecified and user agent specific. The activation behavior of a reco element for events targeted at interactive content descendants of a reco element, and any descendants of those interactive content descendants, MUST be to do nothing. When a reco element with a reco control is activated and gets a reco result, the default action of the recognition event SHOULD be to set the value of the reco control to the top n-best interpretation of the recognition (in the case of single recognition) or to the existing value with the latest top n-best interpretation appended (in the case of dictation mode with multiple inputs).

Warning:
Not all implementors see value in linking the recognition behavior to markup, versus an all-scripting API. Some user agents like the possibility of good defaults based on the associations. Some user agents like the idea of different consent bars based on the user clicking a markup button, rather than just relying on scripting. User agents are cautioned to guard against click-jacking and SHOULD NOT automatically assume that when a reco element is activated it means the user meant to start recognition in all situations.

5.1. Attributes

form

The form attribute is used to explicitly associate the reco element with its form owner.

The form IDL attribute is part of the element's forms API.

htmlFor

The htmlFor IDL attribute MUST reflect the for content attribute.

The for attribute MAY be specified to indicate a form control with which a speech input is to be associated. If the attribute is specified, the attribute's value MUST be the ID of a recoable element in the same Document as the reco element. If the attribute is specified and there is an element in the Document whose ID is equal to the value of the for attribute, and the first such element is a recoable element, then that element is the reco element's reco control.

If the for attribute is not specified, but the reco element has a recoable element descendant, then the first such descendant in tree order is the reco element's reco control.

control

The control attribute returns the form control that is associated with this element. The control IDL attribute MUST return the reco element's reco control, if any, or null if there isn't one.

control . recos returns a NodeList of all the reco elements that the form control is associated with.

Recoable elements have a NodeList object associated with them that represents the list of reco elements, in tree order, whose reco control is the element in question. The recos IDL attribute of recoable elements, on getting, MUST return that NodeList object.

request

The request attribute represents the SpeechInputRequest associated with this reco element. By default the User Agent sets up a default speech input request associated with this reco element, using the speech service specified by serviceURI. The author MAY set this attribute to associate a markup reco element with an author-created speech input request. In this way the author has control over the recognition involved.

serviceURI

The serviceURI attribute specifies the speech service to use in the constructed default request. If the serviceURI is unset then the User Agent MUST use the User Agent default service.

5.2. Constructors

Two constructors are provided for creating HTMLRecoElement objects (in addition to the factory methods from DOM Core such as createElement()): Reco() and Reco(for). When invoked as constructors, these MUST return a new HTMLRecoElement object (a new reco element). If the for argument is present, the object created MUST have its for content attribute set to the provided value. The element's document MUST be the active document of the browsing context of the Window object on which the interface object of the invoked constructor is found.
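A brief illustrative sketch of the two ways of associating a reco element with its reco control described in section 5.1 (the for attribute and nesting), together with the named constructors. The element IDs, names, and serviceURI value are placeholders.

    <input type="text" id="city" name="city">
    <reco for="city">Say a city name</reco>

    <reco>
        Say a street name
        <input type="text" name="street">
    </reco>

    <script>
        var r = new Reco("city");                    // for content attribute set to "city"
        r.serviceURI = "http://example.org/asr";     // optional; User Agent default otherwise
        document.body.appendChild(r);
    </script>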

Questions:
Is the HTMLRecoElement having a SpeechInputRequest enough to hook this section to the events and parameters section that Bjorn and Debbie defined? Should the events from the associated SpeechInputRequest be raised on the reco element itself instead of, or in addition to, on the SpeechInputRequest? Can that work?

6. TTS Element

The TTS element is the way to do speech synthesis using markup bindings. The TTS element is legal where embedded content is expected. If the TTS element has a src attribute, then its content model is zero or more track elements, then transparent, but with no media element descendants. If the element does not have a src attribute, then its content model is one or more source elements, then zero or more track elements, then transparent, but with no media element descendants.

Reference

This section is based on Michael Bodell's proposal and the meeting discussion.

IDL

  [NamedConstructor=TTS(),
  NamedConstructor=TTS(in DOMString src)]
    interface HTMLTTSElement : HTMLMediaElement {
        attribute SpeechOutputRequest request;
        attribute DOMString serviceURI;
    };

          
HTMLTTSElement implements SpeechOutputRequest;

A TTS element represents a synthesized audio stream. A TTS element is a media element whose media data is ostensibly synthesized audio data.

When a TTS element is potentially playing, it must have its TTS data played synchronized with the current playback position, at the element's effective media volume.

When a TTS element is not potentially playing, TTS must not play for the element.

Content MAY be provided inside the TTS element. User agents SHOULD NOT show this content to the user; it is intended for older Web browsers which do not support TTS.

In particular, this content is not intended to address accessibility concerns. To make TTS content accessible to those with physical or cognitive disabilities, authors are expected to provide alternative media streams and/or to embed accessibility aids (such as transcriptions) into their media streams.

6.1. Attributes

The src, preload, autoplay, mediagroup, loop, muted, and controls attributes are the attributes common to all media elements.

request

The request attribute represents the SpeechOutputRequest associated with this TTS element. By default the User Agent sets up a default speech output request associated with this TTS element, using the speech service specified by serviceURI. The author MAY set this attribute to associate a markup TTS element with an author-created speech output request. In this way the author has control over the synthesis involved.

serviceURI

The serviceURI attribute specifies the speech service to use in the constructed default request. If the serviceURI is unset then the User Agent MUST use the User Agent default service.

6.2. Constructors

Two constructors are provided for creating HTMLTTSElement objects (in addition to the factory methods from DOM Core such as createElement()): TTS() and TTS(src). When invoked as constructors, these MUST return a new HTMLTTSElement object (a new tts element). The element MUST have its preload attribute set to the literal value "auto". If the src argument is present, the object created MUST have its src content attribute set to the provided value, and the user agent MUST invoke the object's resource selection algorithm before returning. The element's document MUST be the active document of the browsing context of the Window object on which the interface object of the invoked constructor is found.
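A brief illustrative sketch of the TTS element in markup and from script. The src values, their format, and the serviceURI are placeholders, and the fallback content follows the guidance above for older browsers.

    <tts id="greeting" src="hello-world.ssml" autoplay>
        Your browser does not support the tts element.
    </tts>

    <script>
        var t = new TTS("goodbye.ssml");             // preload is set to "auto"
        t.serviceURI = "http://example.org/tts";     // optional; User Agent default otherwise
        document.body.appendChild(t);
        t.play();                                    // inherited from HTMLMediaElement
    </script>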

Question:
Is the connection to Charles's details with implements SpeechOutputRequest sufficient? Will that object specify all the bits we've talked about at the F2F with respect to marks and timing information, as well as bargein for a service that supports both TTS and reco?

7. The Speech Input Request Interface

The speech input request interface is the scripted web API for controlling a given recognition.

IDL

    interface SpeechInputRequest {
        // recognition property methods
        // grammar methods
        void resetGrammars();
        void addGrammar(in DOMString src,
                        optional float weight,
                        optional boolean modal);
        void addGrammarName(in DOMString name,
                        optional float weight,
                        optional boolean modal);
        void disableGrammar(in DOMString src);

        // misc parameter methods
        void setmaxnbest(in integer maxnbest);
        void setlanguage(in DOMString language);
        void setsaveforrereco(in boolean saveforrereco);
        void setendpointdetection(in boolean endpointdetection);
        void setfinalizebeforeend(in boolean finalizebeforeend);
        void setinterimresults(in boolean interimresults);
        void setinterimresultsfreq(in integer interimresultsfreq);
        void setconfidencethreshold(in float confidencethreshold);
        void setsensitivity(in float sensitivity);
        void setspeedversusaccuracy(in float speedvsaccuracy);
        void setcompletetimeout(in integer completetimeout);
        void setincompletetimeout(in integer incompletetimeout);
        void setmaxspeechtimeout(in integer maxspeechtimeout);

        // the generic set parameter
        void setparameter(in DOMString name, in DOMString value);

        // waveform methods
        void setsavewaveformURI(in DOMString savewaveformURI);
        void setinputwaveformURI(in DOMString inputwaveformURI);

        
Question:
The properties proposal from Debbie had all of these as methods (modulo some renaming I did); is this all we want? I feel like usually the attribute representation of this data would be reflected in the API, and someone could set the attributes directly or use these helper functions. I.e., if there is a maxresults attribute it could be set directly or through a call of the setmaxresults method. But right now we just have the methods.
        // attributes
        attribute MediaStream input;

        // event methods
        attribute Function onaudiostart;
        attribute Function onsoundstart;
        attribute Function onspeechstart;
        attribute Function onspeechend;
        attribute Function onsoundend;
        attribute Function onaudioend;
        attribute Function onresult;
        attribute Function onnomatch;
        attribute Function onerror;
    };

    SpeechInputRequest implements EventTarget;

    interface SpeechInputResultEvent : Event {
        readonly attribute SpeechInputResult result;
    };

    interface SpeechInputNomatchEvent : Event {
        readonly attribute SpeechInputResult result;
    };

    interface SpeechInputErrorEvent : Event {
        readonly attribute SpeechInputError error;
    };

    interface SpeechInputError {
        const unsigned short SPEECH_INPUT_ERR_OTHER = 0;
        const unsigned short SPEECH_INPUT_ERR_NO_SPEECH = 1;
        const unsigned short SPEECH_INPUT_ERR_ABORTED = 2;
        const unsigned short SPEECH_INPUT_ERR_AUDIO_CAPTURE = 3;
        const unsigned short SPEECH_INPUT_ERR_NETWORK = 4;
        const unsigned short SPEECH_INPUT_ERR_NOT_ALLOWED = 5;
        const unsigned short SPEECH_INPUT_ERR_SERVICE_NOT_ALLOWED = 6;
        const unsigned short SPEECH_INPUT_ERR_BAD_GRAMMAR = 7;
        const unsigned short SPEECH_INPUT_ERR_LANGUAGE_NOT_SUPPORTED = 8;

        readonly attribute unsigned short code;
        readonly attribute DOMString message;
    };

    interface SpeechInputResult {
To do
Need to fill in this set of inputs, including how interim results work.
};
Question:
Should there be some sort of endpointing event that describes the timing information for the onsoundstart type events? Give some sort of offset or other information? We say that the DOM 2 timestamp can be used, but does that capture all the bits we talked about: that these events should be in the user agent's clock but represent the audio stream position, not the wall clock time?

7.1. Speech Input Request Recognition Property Methods

The resetGrammars method
This means remove all explicitly set grammars and just "use the default language model" of the implementation.
The addGrammar method
This method adds a grammar to the set of active grammars. The URI for the grammar is specified by the src parameter. If the weight parameter is present it represents this grammar's weight relative to the other grammars. If the weight parameter is not present, the default value of 1.0 is used. If the modal parameter is set to true, then all other already active grammars are disabled. If the modal parameter is not present, the default value is false.
The addGrammarName method
This method adds a grammar to the set of active grammars. The builtin grammar is specified by the name parameter, which gives the name of the grammar. If the weight parameter is present it represents this grammar's weight relative to the other grammars. If the weight parameter is not present, the default value of 1.0 is used. If the modal parameter is set to true, then all other already active grammars are disabled. If the modal parameter is not present, the default value is false.
The disableGrammar method
This method disables a grammar with the URI matching the src parameter.
The setmaxnbest method
This method will set the maximum number of recognition results that should be returned to the value of the maxnbest parameter. The default value is 1.
The setlanguage method
This method will set the language of the recognition to the language parameter, using the ISO language codes.
Default?
The setsaveforrereco method
This method will save the utterance for later use in a rerecognition depending on the value of the saveforrereco boolean value (true means save). The default value is false.
The setendpointdetection method
This method will determine if the user agent should do low-latency endpoint detection, depending on the value of the endpointdetection parameter (true means do endpointing).
Default?
The setfinalizebeforeend method
This method sets whether final results can be returned before the user is done talking, based on the value of the finalizebeforeend parameter (true means yes).
Default?
The setinterimresults method
This method sets whether interim results should be sent or whether the recognizer should wait for final results only, based on the value of the interimresults parameter (true means send interim results).
Default?
The setinterimresultsfreq method
This method sets the frequency with which interim results are desired from the recognition service. The recognition service may not be able to meet exactly this frequency, since how often an interim result is likely to occur or change depends on the details of the grammars and utterances being used. The value of interimresultsfreq is the number of milliseconds desired between successive interim results.
The setconfidencethreshold method
This method sets the confidence threshold to the value of the parameter confidencethreshold, which represents some value between 0.0 (least confidence needed) and 1.0 (most confidence needed) with 0.5 as the default.
The setsensitivity method
This method sets the sensitivity to the value of the parameter sensitivity, which represents some value between 0.0 (least sensitive) and 1.0 (most sensitive) with 0.5 as the default.
The setspeedversusaccuracy method
This method sets the desired trade-off between speed (low latency) and accuracy to the value of the parameter speedvsaccuracy, which represents some value between 0.0 (fastest, least accurate) and 1.0 (most accurate) with 0.5 as the default.
The setcompletetimeout method
This method sets the completetimeout to the value of the completetimeout parameter. This represents the amount of silence needed to match a grammar when a hypothesis is at a complete match of the grammar (that is the hypothesis matches a grammar, and no larger input can possibly match a grammar).
The setincompletetimeout method
This method sets the incompletetimeout to the value of the incompletetimeout parameter. This represents the amount of silence needed to match a grammar when a hypothesis is not at a complete match of the grammar (that is the hypothesis does not match a grammar, or it does match a grammar but so could a larger input).
The setmaxspeechtimeout method
This method sets the maxspeechtimeout to the value of the maxspeechtimeout parameter. This represents the maximum amount of speech allowed before the recognizer ends speech input or returns an error.
The setparameter method
This method allows arbitrary recognition service parameters to be set. The name of the parameter is given by the name parameter and the value by the value parameter.
The setsavewaveformURI method
The setsavewaveformURI method specifies where the web application would like the utterance to be stored, if it is to be stored. The value of the parameter savewaveformURI is this URI.
The setinputwaveformURI method
The method says to get the input waveform URI from the URI specified in the inputwaveformURI parameter.
Question:
When do we start recognizing in this case? There is no reco() function, as we assume the capture does that.
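For illustration, a sketch of configuring a previously obtained SpeechInputRequest with these methods. The grammar URI and builtin grammar name are assumptions, and the timeout units are not fixed by this proposal.

    request.resetGrammars();
    request.addGrammar("http://example.com/grammars/pizza.grxml", 1.0);
    request.addGrammarName("websearch", 0.5);        // assumed builtin grammar name
    request.setmaxnbest(3);
    request.setlanguage("en-US");
    request.setconfidencethreshold(0.5);
    request.setcompletetimeout(500);                 // trailing-silence timeouts; units
    request.setincompletetimeout(1500);              // are not specified by this proposal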

7.2. Speech Input Request Attributes

input
The input attribute is the MediaStream that we are recognizing against. If input is not set, the Speech Input Request uses the default UA-provided capture (which MAY be nothing), in which case the value of input will be null.
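A one-line sketch, assuming the application has obtained a MediaStream from a separate capture API (which is outside the scope of this proposal):

    request.input = capturedStream;    // capturedStream: a MediaStream from a capture API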

7.3. Speech Input Request Events

The DOM Level 2 Event Model is used for speech recognition events. The methods in the EventTarget interface should be used for registering event listeners. The SpeechInputRequest interface also contains convenience attributes for registering a single event handler for each event type.

For all these events, the timeStamp attribute defined in the DOM Level 2 Event interface must be set to the best possible estimate of when the real-world event which the event object represents occurred.

Unless specified below, the ordering of the different events is undefined. For example, some implementations may fire audioend before speechstart or speechend if the audio detector is client-side and the speech detector is server-side.

audiostart event
Fired when the user agent has started to capture audio.
soundstart event
Some sound, possibly speech, has been detected. This MUST be fired with low latency, e.g. by using a client-side energy detector.
speechstart event
The speech that will be used for speech recognition has started.
speechend event
The speech that will be used for speech recognition has ended. speechstart MUST always have been fired before speechend.
soundend event
Some sound is no longer detected. This MUST be fired with low latency, e.g. by using a client-side energy detector. soundstart MUST always have been fired before soundend.
audioend event
Fired when the user agent has finished capturing audio. audiostart MUST always have been fired before audioend.
result event
Fired when the speech recognizer returns a final result with at least one recognition hypothesis that meets or exceeds the confidence threshold. The result field in the event MUST contain the speech recognition result. All the following events MUST have been fired before result is fired: audiostart, soundstart, speechstart, speechend, soundend, audioend.
Question:
Is it really true that all these events must have been fired? What if this is an interim result? How are interim results returned if not through this event?
nomatch event
Fired when the speech recognizer returns a final result with no recognition hypothesis that meet or exceed the confidence threshold. The result field in the event MAY contain speech recognition results that are below the confidence threshold or MAY be null.
error event
Fired when a speech recognition error occurs. The error attribute MUST be set to a SpeechInputError object.
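For illustration, a sketch of registering handlers for these events, using both the convenience attributes and addEventListener (SpeechInputRequest implements EventTarget). The handleResult helper is a hypothetical application function.

    request.onspeechstart = function (e) {
        console.log("speech detected at " + e.timeStamp);
    };
    request.addEventListener("result", function (e) {
        handleResult(e.result);                      // e.result is a SpeechInputResult (section 7.5)
    }, false);
    request.onnomatch = function (e) {
        console.log("no hypothesis met the confidence threshold");
    };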

7.4. Speech Input Error

The speech input error object has two attributes code and message.

code
The code is a numeric error code for what has gone wrong. The values are:
SPEECH_INPUT_ERR_OTHER (numeric code 0)
This is the catch all error code.
SPEECH_INPUT_ERR_NO_SPEECH (numeric code 1)
No speech was detected.
SPEECH_INPUT_ERR_ABORTED (numeric code 2)
Speech input was aborted somehow, maybe by some UA-specific behavior such as UI that lets the user cancel speech input.
SPEECH_INPUT_ERR_AUDIO_CAPTURE (numeric code 3)
Audio capture failed.
SPEECH_INPUT_ERR_NETWORK (numeric code 4)
Some network communication that was required to complete the recognition failed.
SPEECH_INPUT_ERR_NOT_ALLOWED (numeric code 5)
The user agent is not allowing any speech input to occur for reasons of security, privacy or user preference.
SPEECH_INPUT_ERR_SERVICE_NOT_ALLOWED (numeric code 6)
The user agent is not allowing the speech service requested by the web application to be used (though it would allow some speech service), either because the user agent doesn't support the selected one or for reasons of security, privacy or user preference.
SPEECH_INPUT_ERR_BAD_GRAMMAR (numeric code 7)
There was an error in the speech recognition grammar.
SPEECH_INPUT_ERR_LANGUAGE_NOT_SUPPORTED (numeric code 8)
The language was not supported.
message
The message content is implementation specific. This attribute is primarily intended for debugging and developers should not use it directly in their application user interface.
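For illustration, a sketch of inspecting the error delivered through the error event; the recovery actions shown are placeholders.

    request.onerror = function (e) {
        switch (e.error.code) {
            case SpeechInputError.SPEECH_INPUT_ERR_NOT_ALLOWED:
                // speech input denied; hide any microphone UI
                break;
            case SpeechInputError.SPEECH_INPUT_ERR_NETWORK:
                // network failure; fall back to typed input
                break;
            default:
                console.log(e.error.message);        // implementation specific, debugging only
        }
    };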

7.5. Speech Input Result

To do
Need to fill this in

8. The Speech Output Request Interface

The speech output request interface carries all of the TTS-specific information (similar to the Speech Input Request Interface, but for synthesis).

Reference

This section is based on Charles Hemphill's proposal.

Questions:
Is there a more complete version of this? I wasn't able to easily incorporate Charles's proposal. I think the only things I could see were the event handlers onspeechstart, onspeechend, onerror, and some discussion about a handler for the recognition result (does that have to do with bargein?).
Questions:
Are there any other major content sections missing? What about anything that the protocol requires of us? Do the above sections fit together with each other and with the protocol work?

9. Design Decisions

Here are the design decisions from the XG that are relevant to the Web API proposal:

To do

insert other design decisions as we receive them and review them

10. Requirements and Use Cases

This section covers some of the requirements for this API, as well as illustrating some use cases. Note that more extensive information can be found in HTML Speech XG Use Cases and Requirements, as well as in the final XG note, which includes requirements and use cases.

11. Acknowledgements

This proposal was developed by the HTML Speech XG.

This work builds on the existing work including:

Special thanks to the members of the XG: Andrei Popescu, Andy Mauro, Björn Bringert, Chaitanya Gharpure, Charles Chen, Dan Druta, Daniel Burnett, Dave Burke, David Bolter, Deborah Dahl, Fabio Paternò, Glen Shires, Ingmar Kliche, Jerry Carter, Jim Larson, Kazuyuki Ashimura, Marc Schröder, Markus Gylling, Masahiro Araki, Matt Womer, Michael Bodell, Michael Johnston, Milan Young, Olli Pettay, Paolo Baggia, Patrick Ehlen, Raj Tumuluri, Rania Elnaggar, Ravi Reddy, Robert Brown, Satish Kumar Sampath, Somnath Chandra, and T.V. Raman.

12. References

RFC2119
Key words for use in RFCs to Indicate Requirement Levels, S. Bradner. IETF.
HTML5
HTML 5: A vocabulary and associated APIs for HTML and XHTML (work in progress), I. Hickson. W3C.