W3C

HTML Speech Web API

Non-standards track internal editor's draft of webapi proposal, 20 October 2011

This Version:
insert link
Previous Version:
here
Editors:
Michael Bodell, Microsoft
Deborah Dahl, Invited Expert
Dan Druta, AT&T
Charles Hemphill, Everspeech
Olli Pettay, Mozilla
Björn Bringert, Google

Abstract

This proposed API represents the web API for doing speech in HTML. This proposal consists of the HTML bindings and JS functions that sit on top of the protocol work that is also being proposed by the HTML Speech Incubator Group. This includes:

The section on Design Decisions [DESIGN] covers the design decisions the group agreed to that helped direct this API proposal.

The section on Requirements and Use Cases [REQ] covers the motivation behind this proposal.

This API is designed to be used in conjunction with other APIs and elements on the web platform, including APIs to capture input and APIs to do bidirectional communications with a server (WebSockets).

Status of this Document

This section describes the status of this document at the time of its publication. Other documents may supersede this document. A list of current W3C publications and the latest revision of this technical report can be found in the W3C technical reports index at http://www.w3.org/TR/.

This document is the 20 October 2011 Editor's Draft of the HTML Speech Web API proposal. It is not a web standards track document and does not define a web standard. This proposal, or one similar to it, is likely to be included in the Incubator Group's final report, along with Requirements, Design Decisions, and the Protocol proposal. The hope is that an official web standards group will develop a web standard based on all of these inputs.

This document is produced by the HTML Speech Incubator Group.

This document being an Editor's Draft does not imply endorsement by the W3C Membership, nor necessarily by the membership of the HTML Speech Incubator Group. It is intended to reflect and collate previous discussions and proposals that have taken place on the public email alias and in group teleconferences. This is a draft document and may be updated, replaced or obsoleted by other documents at any time. It is inappropriate to cite this document as other than work in progress.

Table of Contents

1. Introduction

Web applications should have the ability to use speech to interact with users. That speech could be for output through synthesized speech, or could be for input through the user speaking to fill form items, the user speaking to control page navigation, or many other collected use cases. A web application author should be able to add speech to a web application using methods familiar to web developers and should not need extensive specialized speech expertise. The web application should build on existing W3C web standards and support a wide variety of use cases. The web application author should have the flexibility to control the recognition service the web application uses, but should not be obligated to support any particular service. This proposal defines the basic representations for how to use grammars, parameters, and recognition results and how to process them. The interfaces and API defined in this proposal can be used with other interfaces and APIs exposed to the web platform.

Note that privacy and security concerns exist around allowing web applications to do speech recognition. User agents should make sure that end users are aware that speech recognition is occurring, and that the end users have given informed consent for this to occur. The exact mechanism of consent is user agent specific, but the privacy and security concerns have shaped many aspects of the proposal.

Example

In the example below the speech API is used to do basic speech web search.

Speech Web Search
To do

insert examples
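
A minimal sketch of what such a page might look like, using the reco element defined in section 5, follows; the form action, field name and page text are assumptions of this sketch rather than part of the proposal.

    <!-- Hypothetical speech web search page -->
    <form action="http://example.com/search">
      <reco for="q">Speak your search</reco>
      <input type="text" name="q" id="q">
      <input type="submit" value="Search">
    </form>

When the reco element is activated and a recognition result is returned, its reco control (the text field with id "q") is filled with the top interpretation, which the user can then submit.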

2. Conformance

Everything in this proposal is informative since this is not a standards track document. However, RFC2119 normative language is used where appropriate to aid in the future should this proposal be moved into a standards track process.

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY" and "OPTIONAL" in this document are to be interpreted as described in Key words for use in RFCs to Indicate Requirement Levels [RFC2119].

5. Reco Element

The reco element is the way to do speech recognition using markup bindings. The reco element is legal wherever phrasing content is expected, and can contain any phrasing content, except with no descendant recoable elements unless it is the element's reco control, and no descendant reco elements.

Reference

This section is based on Michael Bodell's proposal and the meeting discussion.

IDL

  [NamedConstructor=Reco(),
  NamedConstructor=Reco(in DOMString for)]
    interface HTMLRecoElement : HTMLElement {
        // Attributes
        readonly attribute HTMLFormElement? form;
        attribute DOMString htmlFor;
        readonly attribute HTMLElement? control;
        attribute SpeechInputRequest request;
        attribute DOMString serviceURI;
    };

          

The reco element represents a speech input in a user interface. The speech input can be associated with a specific form control, known as the reco element's reco control, either using the for attribute, or by putting the form control inside the reco element itself.

Except where otherwise specified by the following rules, a reco element has no reco control.

Some elements are categorized as recoable elements. These are elements that can be associated with a reco element:

The reco element's exact default presentation and behavior, in particular what its activation behavior might be and what implicit grammars might be defined, if any, is unspecified and user agent specific. The activation behavior of a reco element for events targeted at interactive content descendants of a reco element, and any descendants of those interactive content descendants, MUST be to do nothing. When a reco element with a reco control is activated and gets a reco result, the default action of the recognition event MUST be to set the value of the reco control to the top n-best interpretation of the recognition (in the case of single recognition) or to append the latest top n-best interpretation (in the case of dictation mode with multiple inputs). In addition, for inputs of type checkbox and radio button, the checked property MUST be set.
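
For example, both association styles might look like the following sketch; the ids, names and prompt text are illustrative only.

    <!-- Association through the for attribute -->
    <reco for="city">Say a city</reco>
    <input type="text" id="city" name="city">

    <!-- Association through a recoable descendant -->
    <reco>
      Say a state
      <input type="text" name="state">
    </reco>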

Warning:
Not all implementors see value in linking the recognition behavior to markup, versus an all-scripting API. Some user agents like the possibility of good defaults based on the associations. Some user agents like the idea of different consent bars based on the user clicking a markup button, rather than just relying on scripting. User agents are cautioned to remember click-jacking and SHOULD NOT automatically assume that an activated reco element means the user meant to start recognition in all situations.

5.1. Attributes

form

The form attribute is used to explicitly associate the reco element with its form owner.

The form IDL attribute is part of the element's forms API.

htmlFor

The htmlFor IDL attribute MUST reflect the for content attribute.

The for attribute MAY be specified to indicate a form control with which a speech input is to be associated. If the attribute is specified, the attribute's value MUST be the ID of a recoable element in the same Document as the reco element. If the attribute is specified and there is an element in the Document whose ID is equal to the value of the for attribute, and the first such element is a recoable element, then that element is the reco element's reco control.

If the for attribute is not specified, but the reco element has a recoable element descendant, then the first such descendant in tree order is the reco element's reco control.

control

The control attribute returns the form control that is associated with this element. The control IDL attribute MUST return the reco element's reco control, if any, or null if there isn't one.

control . recos returns a NodeList of all the reco elements that the form control is associated with.

Recoable elements have a NodeList object associated with them that represents the list of reco elements, in tree order, whose reco control is the element in question. The recos IDL attribute of recoable elements, on getting, MUST return that NodeList object.

request

The request attribute represents the SpeechInputRequest associated with this reco element. By default the User Agent sets up the speech service specified by serviceURI and the default speech input request associated with this reco. The author MAY set this attribute to associate a markup reco element with an author-created speech input request, as shown in the sketch at the end of this section. In this way the author has full control over the recognition request involved.

serviceURI

The serviceURI attribute specifies the speech service to use in the constructed default request. If the serviceURI is unset then the User Agent MUST use the User Agent default service.
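
A sketch of overriding these defaults from script follows; the ids, prompt text and service location are illustrative assumptions of this sketch.

    <reco id="cmd" for="dest">Say a destination</reco>
    <input type="text" id="dest" name="dest">
    <script>
      var reco = document.getElementById("cmd");
      // Point the default request at a specific recognition service...
      reco.serviceURI = "https://example.com/speechservice";
      // ...or replace the default request entirely with an author-created one.
      var request = new SpeechInputRequest();
      request.uri = "https://example.com/speechservice";
      request.maxnbest = 3;
      reco.request = request;
    </script>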

5.2. Constructors

Two constructors are provided for creating HTMLRecoElement objects (in addition to the factory methods from DOM Core such as createElement()): Reco() and Reco(for). When invoked as constructors, these MUST return a new HTMLRecoElement object (a new reco element). If the for argument is present, the object created MUST have its for content attribute set to the provided value. The element's document MUST be the active document of the browsing context of the Window object on which the interface object of the invoked constructor is found.
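
For instance, a reco element can be created and associated with a recoable element entirely from script; the id "dest" is assumed to identify a recoable element already in the document.

    var reco = new Reco("dest");   // equivalent to setting for="dest"
    reco.textContent = "Say a destination";
    document.body.appendChild(reco);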

6. TTS Element

The TTS element is the way to do speech synthesis using markup bindings. The TTS element is legal where embedded content is expected. If the TTS element has a src attribute, then its content model is zero or more track elements, then transparent, but with no media element descendants. If the element does not have a src attribute, then its content model is one or more source elements, then zero or more track elements, then transparent, but with no media element descendants.

Reference

This section is based on Michael Bodell's proposal and the meeting discussion.

IDL

  [NamedConstructor=TTS(),
  NamedConstructor=TTS(in DOMString src)]
    interface HTMLTTSElement : HTMLMediaElement {
        attribute DOMString serviceURI;
        attribute DOMString lastMark;
    };

          

A TTS element represents a synthesized audio stream. A TTS element is a media element whose media data is ostensibly synthesized audio data.

When a TTS element is potentially playing, it must have its TTS data played synchronized with the current playback position, at the element's effective media volume.

When a TTS element is not potentially playing, TTS must not play for the element.

Content MAY be provided inside the TTS element. User agents SHOULD NOT show this content to the user; it is intended for older Web browsers which do not support TTS.

In particular, this content is not intended to address accessibility concerns. To make TTS content accessible to those with physical or cognitive disabilities, authors are expected to provide alternative media streams and/or to embed accessibility aids (such as transcriptions) into their media streams.

Implementations SHOULD support at least UTF-8 encoded text/plain and application/ssml+xml (both SSML 1.0 and 1.1 SHOULD be supported).

The existing timeupdate event is dispatched to report progress through the synthesized speech. If the synthesis is of type application/ssml+xml, timeupdate events should be fired for each mark element that is encountered.
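
A markup sketch of a TTS element offering both of these types follows; the file names, media attributes and fallback text are assumptions of this sketch.

    <tts autoplay controls>
      <source src="welcome.ssml" type="application/ssml+xml">
      <source src="welcome.txt" type="text/plain">
      Your browser does not support the tts element.
    </tts>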

6.1. Attributes

The src, preload, autoplay, mediagroup, loop, muted, and controls attributes are the attributes common to all media elements.

serviceURI

The serviceURI attribute specifies the speech service to use in the constructed default request. If the serviceURI is unset then the User Agent MUST use the User Agent default service.

lastMark

The new lastMark attribute must, on getting, return the name of the last SSML mark element that was encountered during playback. If no mark has been encountered yet, the attribute must return null.

6.2. Constructors

Two constructors are provided for creating HTMLTTSElement objects (in addition to the factory methods from DOM Core such as createElement()): TTS() and TTS(src). When invoked as constructors, these MUST return a new HTMLTTSElement object (a new tts element). The element MUST have its preload attribute set to the literal value "auto". If the src argument is present, the object created MUST have its src content attribute set to the provided value, and the user agent MUST invoke the object's resource selection algorithm before returning. The element's document MUST be the active document of the browsing context of the Window object on which the interface object of the invoked constructor is found.
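
A script-driven sketch combining the constructor, serviceURI and lastMark might look as follows; the SSML resource, service location and mark names are hypothetical.

    var tts = new TTS("https://example.com/prompts/welcome.ssml");
    tts.serviceURI = "https://example.com/ttsservice";
    tts.addEventListener("timeupdate", function() {
      // lastMark holds the name of the most recent SSML mark passed, or null.
      console.log("last mark reached: " + tts.lastMark);
    });
    document.body.appendChild(tts);
    tts.play();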

7. The Speech Input Request Interface

The speech input request interface is the scripted web API for controlling a given recognition.

IDL

    [Constructor]
    interface SpeechInputRequest {
        // recognition property methods
        // grammar methods
        void resetGrammars();
        void addGrammar(in DOMString src,
                        optional float weight,
                        optional boolean modal);
        void disableGrammar(in DOMString src);

        // misc parameter attributes
        integer maxnbest;
        DOMString language;
        boolean saveforrereco;
        boolean endpointdetection;
        boolean finalizebeforeend;
        integer interimresults;
        float confidencethreshold;
        float sensitivity;
        float speedvsaccuracy;
        integer completetimeout;
        integer incompletetimeout;
        integer maxspeechtimeout;
        DOMString inputwaveformURI;

        // the generic set parameter
        void setcustomparameter(in DOMString name, in DOMString value);


        // the generic send info method
        void sendInfo(in DOMString type, in DOMString value);

        // methods to drive the speech interaction
        void open();
        void start();
        void stop();
        void abort();
        void interpret(in DOMString text);

        // attributes
        attribute DOMString uri;
        attribute MediaStream input;
        const unsigned short SPEECH_AUTHORIZATION_UNKNOWN = 0;
        const unsigned short SPEECH_AUTHORIZATION_AUTHORIZED = 1;
        const unsigned short SPEECH_AUTHORIZATION_NOT_AUTHORIZED = 2;
        readonly attribute unsigned short authorizationState;
        attribute boolean continuous;

        // event methods
        attribute Function onaudiostart;
        attribute Function onsoundstart;
        attribute Function onspeechstart;
        attribute Function onspeechend;
        attribute Function onsoundend;
        attribute Function onaudioend;
        attribute Function onresult;
        attribute Function onnomatch;
        attribute Function onerror;
        attribute Function onauthorizationchange;
        attribute Function onopen;
        attribute Function onstart;
        attribute Function onend;
    };
    SpeechInputRequest implements EventTarget;

    interface SpeechInputNomatchEvent : Event {
        readonly attribute SpeechInputResult result;
    };

    interface SpeechInputErrorEvent : Event {
        readonly attribute SpeechInputError error;
    };

    interface SpeechInputError {
        const unsigned short SPEECH_INPUT_ERR_OTHER = 0;
        const unsigned short SPEECH_INPUT_ERR_NO_SPEECH = 1;
        const unsigned short SPEECH_INPUT_ERR_ABORTED = 2;
        const unsigned short SPEECH_INPUT_ERR_AUDIO_CAPTURE = 3;
        const unsigned short SPEECH_INPUT_ERR_NETWORK = 4;
        const unsigned short SPEECH_INPUT_ERR_NOT_ALLOWED = 5;
        const unsigned short SPEECH_INPUT_ERR_SERVICE_NOT_ALLOWED = 6;
        const unsigned short SPEECH_INPUT_ERR_BAD_GRAMMAR = 7;
        const unsigned short SPEECH_INPUT_ERR_LANGUAGE_NOT_SUPPORTED = 8;

        readonly attribute unsigned short code;
        readonly attribute DOMString message;
    };

    // Item in N-best list
    interface SpeechInputAlternative {
        readonly attribute DOMString utterance;
        readonly attribute float confidence;
        readonly attribute any interpretation;
    };

    // A complete one-shot simple response
    interface SpeechInputResult {
        readonly attribute Document resultEMMAXML;
        readonly attribute DOMString resultEMMAText;
        readonly attribute unsigned long length;
        getter SpeechInputAlternative item(in unsigned long index);
        readonly attribute boolean final;
    };

    // A full response, which could be interim or final, part of a continuous response or not
    interface SpeechInputResultEvent : Event {
        readonly attribute SpeechInputResult result;
        readonly attribute short resultIndex;
        readonly attribute SpeechInputResult[] results;
        readonly attribute DOMString sessionId;
    };



          

7.1. Speech Input Request Recognition Properties

The resetGrammars method
This means remove all explicitly set grammars and just "use the default language model" of the implementation.
The addGrammar method
This method adds a grammar to the set of active grammars. The grammar is specified by the src parameter, which is the URI of the grammar. Note that some services may support builtin grammars that can be specified by URI. If the weight parameter is present it represents this grammar's weight relative to the other grammars. If the weight parameter is not present, the default value of 1.0 is used. If the modal parameter is set to true, then all other already active grammars are disabled. If the modal parameter is not present, the default value is false. (A sketch combining the grammar, parameter, and lifecycle methods appears at the end of this section.)
The disableGrammar method
This method disables a grammar with the URI matching the src parameter.
maxnbest attribute
This attribute will set the maximum number of recognition results that should be returned. The default value is 1.
language attribute
This attribute will set the language of the recognition for the request, using a valid BCP 47 language tag. If unset it remains unset for getting in script, but will default to the lang of its recoable element, if tied to an HTML element; when the SpeechInputRequest is not associated with a recoable element, the lang of the HTML document root element and associated hierarchy is used. This default value is computed and used when the input request opens a connection to the recognition service.
saveforrereco attribute
This attribute instructs the speech recognition service whether the utterance should be saved for later use in a rerecognition (true means save). The default value is false.
endpointdetection attribute
This attribute instructs the user agent whether it should do low latency endpoint detection (true means do endpointing). The user agent default SHOULD be true.
finalizebeforeend attribute
This attribute instructs the recognition service whether it should send final results when it gets them, even if the user is not done talking (true means yes, it should send the results early). The user agent default SHOULD be true.
interimresults attribute
If interimresults is set to 0, the recognition service MUST NOT send any interim results. Other values represent a hint to the service that the web application would like interim results approximately every that many milliseconds. The service MAY not follow the hint, as the exact interval between interim results depends on a combination of the recognition service, the grammars in use, and the utterance being recognized. The user agent default value SHOULD be 0.
confidencethreshold attribute
This attribute represents the degree of confidence the recognition system needs in order to return a recognition match instead of a nomatch. The confidence threshold is a value between 0.0 (least confidence needed) and 1.0 (most confidence) with 0.5 as the default.
sensitivity attribute
This attribute represents the sensitivity to quiet input. The sensitivity is a value between 0.0 (least sensitive) and 1.0 (most sensitive) with 0.5 as the default.
speedvsaccuracy attribute
This attribute instructs the recognition service on the desired trade-off between low latency and high accuracy. The speedvsaccuracy is a value between 0.0 (least accurate) and 1.0 (most accurate) with 0.5 as the default.
completetimeout attribute
This attribute represents the amount of silence, in milliseconds, needed to match a grammar when a hypothesis is at a complete match of the grammar (that is, the hypothesis matches a grammar, and no larger input can possibly match a grammar).
incompletetimeout attribute
This attribute represents the amount of silence, in milliseconds, needed to match a grammar when a hypothesis is not at a complete match of the grammar (that is, the hypothesis does not match a grammar, or it does match a grammar but so could a larger input).
maxspeechtimeout attribute
This attribute represents how much speech, in milliseconds, the recognition service should process before an end of speech or an error occurs.
inputwaveformURI attribute
This attribute, if set, instructs the speech recognition service to recognize from this URI instead of from the input MediaStream attribute.
The setcustomparameter method
This method allows arbitrary recognition service parameters to be set. The name of the parameter is given by the name parameter and the value by the value parameter. This arbitrary parameter mechanism allows services that want to have extensions or to set user specific information (such as profile, gender, or age information) to accomplish the task.
The sendInfo method
This method allows one to pass arbitrary information to the recognition service, even while recognition is ongoing. Each sendInfo call is transmitted immediately to the recognition service. The type parameter specifies the content-type of the info message and the value parameter specifies the payload of the info message.
The open method
When the open method is called the user agent MUST connect to the speech service. All of the attributes and parameters of the SpeechInputRequest (i.e., languages, grammars, service uri, etc.) MUST be set before this method is called, because they will be fixed with the values they have at the time open is called, at least until open is called again. Note that the user agent MAY need to show a permissions dialog at this point to ensure that the end user has given informed consent for the web application to listen to the user and recognize. Errors MAY be raised at this point for a variety of reasons including: not authorized to do recognition, failure to connect to the service, the service cannot handle the languages or grammars needed for this turn, etc. When the service has successfully completed the open with no errors, the user agent MUST raise an open event.
The start method
When the start method is called it represents the moment in time the web application wishes to begin recognition. When the speech input is streaming live through the input media stream, then this start call represents the moment in time that the service MUST begin to listen and try to match the grammars associated with this request. If the SpeechInputRequest has not yet called open before the start call is made, a call to open is made by the start call (complete with the open event being raised). Once the system is successfully listening, the user agent MUST raise a start event.
The stop method
The stop method represents an instruction to the recognition service to stop listening to more audio, and to try to return a result using just the audio that it has received to date. A typical use of the stop method might be for a web application where the end user is doing the end pointing, similar to a walkie-talkie. The end user might press and hold the space bar to talk to the system: on the space bar press the start call would occur, and when the space bar is released the stop method is called to ensure that the system is no longer listening to the user. Once the stop method is called the speech service MUST NOT collect additional audio and MUST NOT continue to listen to the user. The speech service MUST attempt to return a recognition result (or a nomatch) based on the audio that it has collected to date.
The abort method
The abort method is a request to immediately stop listening and stop recognizing and to return no information other than that the system is done. When the abort method is called the speech service MUST stop recognizing. The user agent MUST raise an end event once the speech service is no longer connected.
The interpret method
The interpret method provides a mechanism to request recognition using text, rather than audio. The text parameter is the string of text to recognize against. When bypassing audio recognition a number of the normal parameters MAY be ignored and the sound and audio events SHOULD NOT be generated. Other normal SpeechInputRequest events SHOULD be generated.
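
Pulling these methods and parameters together, a single one-shot recognition turn might be driven as in the following sketch; the grammar URIs and service location are hypothetical and error handling is omitted.

    var request = new SpeechInputRequest();
    request.uri = "https://example.com/speechservice";
    request.addGrammar("https://example.com/grammars/pizza.grxml", 1.0);
    request.addGrammar("https://example.com/grammars/toppings.grxml", 0.5);
    request.maxnbest = 3;
    request.language = "en-US";
    request.confidencethreshold = 0.5;
    request.onresult = function(event) {
      var top = event.result.item(0);
      console.log(top.utterance + " (confidence " + top.confidence + ")");
    };
    request.open();   // connect to the service; may trigger a consent prompt
    request.start();  // begin listening to the input stream
    // Later, for push-to-talk style endpointing, the application calls:
    request.stop();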

7.2. Speech Input Request Attributes

uri
The uri attribute specifies the location of the speech service the web application wishes to connect to. If this attribute is unset at the time of the open call, then the user agent MUST use the user agent default speech service.
input
The input attribute is the MediaStream that we are recognizing against. If input is not set, the Speech Input Request uses the default UA provided capture (which MAY be nothing), in which case the value of input will be null. In cases where the MediaStream is set but the SpeechInputRequest hasn't yet called start, the User Agent SHOULD NOT buffer the audio; the semantics are that the web application wants to start listening to the MediaStream at the moment it calls start, and not earlier than that.
authorizationState
The authorizationState attribute tracks whether the web application is authorized to do speech recognition. The UA SHOULD start in SPEECH_AUTHORIZATION_UNKNOWN if the user agent cannot determine whether the web application is able to be authorized. The state may change in response to policies of the user agent and possibly security interactions with the end user. If the web application is authorized then the user agent MUST set this attribute to SPEECH_AUTHORIZATION_AUTHORIZED. If the web application is not authorized then the user agent MUST set this attribute to SPEECH_AUTHORIZATION_NOT_AUTHORIZED. Any time this state changes in value the user agent MUST raise an authorizationchange event.
continuous
When the continuous attribute is set to false the service MUST only return a single simple recognition response as a result of starting recognition. This represents a request/response single turn pattern of interaction. When the continuous attribute is set to true the service MUST return a set of recognitions in response to a single starting of recognition, more like a dictation of multiple recognitions. The user agent default value SHOULD be false.
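
The sketch below shows an explicit capture stream, continuous mode and the authorization state together; obtaining the MediaStream is outside this proposal, so micStream is assumed to come from whatever capture API the page uses.

    // micStream is assumed to be a MediaStream obtained elsewhere.
    var request = new SpeechInputRequest();
    request.input = micStream;     // recognize against this capture stream
    request.continuous = true;     // dictation-style, multiple results
    request.onauthorizationchange = function() {
      if (request.authorizationState ===
          SpeechInputRequest.SPEECH_AUTHORIZATION_NOT_AUTHORIZED) {
        console.log("speech recognition was not authorized");
      }
    };
    request.start();               // audio is only consumed from this point on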

7.3. Speech Input Request Events

The DOM Level 2 Event Model is used for speech recognition events. The methods in the EventTarget interface should be used for registering event listeners. The SpeechInputRequest interface also contains convenience attributes for registering a single event handler for each event type.

For all these events, the timeStamp attribute defined in the DOM Level 2 Event interface must be set to the best possible estimate of when the real-world event which the event object represents occurred.

Unless specified below, the ordering of the different events is undefined. For example, some implementations may fire audioend before speechstart or speechend if the audio detector is client-side and the speech detector is server-side.

audiostart event
Fired when the user agent has started to capture audio.
soundstart event
Some sound, possibly speech, has been detected. This MUST be fired with low latency, e.g. by using a client-side energy detector.
speechstart event
The speech that will be used for speech recognition has started.
speechend event
The speech that will be used for speech recognition has ended. speechstart MUST always have been fired before speechend.
soundend event
Some sound is no longer detected. This MUST be fired with low latency, e.g. by using a client-side energy detector. soundstart MUST always have been fired before soundend.
audioend event
Fired when the user agent has finished capturing audio. audiostart MUST always have been fired before audioend.
result event
Fired when the speech recognizer returns a result. See here for more information.
nomatch event
Fired when the speech recognizer returns a final result with no recognition hypotheses that meet or exceed the confidence threshold. The result field in the event MAY contain speech recognition results that are below the confidence threshold or MAY be null.
error event
Fired when a speech recognition error occurs. The error attribute MUST be set to a SpeechInputError object.
authorizationchange event
Fired whenever the state variable tracking if the web application is authorized to listen to the user and do speech recognition changes its value.
open event
Fired whenever the SpeechInputRequest has successfully connected to the speech service and the various parameters of the request can be satisfied with the service.
start event
Fired when the recognition service has begun to listen to the audio with the intention of recognizing.
end event
Fired when the service has disconnected. The event MUST always be generated when the session ends no matter the reason for the end.
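
Either registration style mentioned above can be used, as in the following sketch:

    var request = new SpeechInputRequest();
    // Convenience attribute...
    request.onresult = function(event) { /* handle a recognition result */ };
    // ...or EventTarget registration.
    request.addEventListener("audiostart", function(event) {
      console.log("audio capture started at " + event.timeStamp);
    });
    request.addEventListener("error", function(event) {
      console.log("recognition error, code " + event.error.code);
    });
    request.addEventListener("end", function(event) {
      console.log("session ended");
    });
    request.start();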

7.4. Speech Input Error

The speech input error object has two attributes: code and message.

code
The code is a numeric error code for what has gone wrong. The values are:
SPEECH_INPUT_ERR_OTHER (numeric code 0)
This is the catch-all error code.
SPEECH_INPUT_ERR_NO_SPEECH (numeric code 1)
No speech was detected.
SPEECH_INPUT_ERR_ABORTED (numeric code 2)
Speech input was aborted somehow, maybe by some UA-specific behavior such as UI that lets the user cancel speech input.
SPEECH_INPUT_ERR_AUDIO_CAPTURE (numeric code 3)
Audio capture failed.
SPEECH_INPUT_ERR_NETWORK (numeric code 4)
Some network communication that was required to complete the recognition failed.
SPEECH_INPUT_ERR_NOT_ALLOWED (numeric code 5)
The user agent is not allowing any speech input to occur for reasons of security, privacy or user preference.
SPEECH_INPUT_ERR_SERVICE_NOT_ALLOWED (numeric code 6)
The user agent is not allowing the speech service requested by the web application to be used (but would allow some speech service), either because the user agent doesn't support the selected one or for reasons of security, privacy or user preference.
SPEECH_INPUT_ERR_BAD_GRAMMAR (numeric code 7)
There was an error in the speech recognition grammar.
SPEECH_INPUT_ERR_LANGUAGE_NOT_SUPPORTED (numeric code 8)
The language was not supported.
message
The message content is implementation specific. This attribute is primarily intended for debugging and developers should not use it directly in their application user interface.
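
An error handler might branch on these codes as in the following sketch; the two application helper functions are hypothetical.

    request.onerror = function(event) {
      switch (event.error.code) {
        case SpeechInputError.SPEECH_INPUT_ERR_NO_SPEECH:
          promptUserToSpeakAgain();        // hypothetical application helper
          break;
        case SpeechInputError.SPEECH_INPUT_ERR_NOT_ALLOWED:
        case SpeechInputError.SPEECH_INPUT_ERR_SERVICE_NOT_ALLOWED:
          fallBackToTypedInput();          // hypothetical application helper
          break;
        default:
          // The message is for debugging only, not for display to end users.
          console.log(event.error.message);
      }
    };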

7.5. Speech Input Alternative

The SpeechInputAlternative represents a simple view of the response that gets used in an n-best list.

utterance
The utterance string represents the raw words that the user spoke.
confidence
The confidence represents a numeric estimate between 0 and 1 of how confident the recognition system is that the recognition is correct. A higher number means the system is more confident.
interpretation
The interpretation represents the semantic meaning from what the user said. This might be determined, for instance, through the SISR specification of semantics in a grammar.

7.6. Speech Input Result

The SpeechInputResult object represents a single one-shot recognition match, either as one small part of a continuous recognition or as the complete return result of a non-continuous recognition.

resultEMMAXML
The resultEMMAXML is a Document that contains the complete EMMA document the recognition service returned from the recognition. The Document has all the normal XML DOM processing to inspect the content.
resultEMMAText
The resultEMMAText is a text representation of the resultEMMAXML.
length
The length attribute represents how many n-best alternatives are represented in the item array. The user agent MUST NOT return more SpeechInputAlternatives than the value of the maxnbest attribute on the recognition request.
item
The item getter returns a SpeechInputAlternative from the index into an array of n-best values. The user agent MUST ensure that there are not more elements in the array than the value of the maxnbest attribute. The user agent MUST ensure that the length attribute is set to the number of elements in the array. The user agent MUST ensure that the n-best list is sorted in non-increasing confidence order (each element's confidence must be less than or equal to that of the preceding elements).
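
A result handler that walks the n-best list might look like the following sketch:

    request.onresult = function(event) {
      var result = event.result;
      for (var i = 0; i < result.length; i++) {
        var alternative = result.item(i);
        console.log(alternative.utterance + " => " +
                    JSON.stringify(alternative.interpretation) +
                    " (confidence " + alternative.confidence + ")");
      }
      // The full EMMA result is also available for richer inspection.
      var emmaDocument = result.resultEMMAXML;
    };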

7.7. Speech Input Result Event

The Speech Input Result event is the event that is raised each time there is an interim or final result. The event contains both the current most recent recognized bit (in the result object) as well as a history of the complete recognition session so far (in the results object).

result
The result element is the one single SpeechInputResult that is new as of this event.
resultIndex
The resultIndex MUST be set to the place in the results array that this particular new result goes. The resultIndex MAY refer to a previously occupied array index from a previous SpeechInputResultEvent. When this is the case this new result overwrites the earlier result and is a more accurate result; however, when this is the case the previous value MUST NOT have been a final result. When continuous is false, the resultIndex MUST always be 0.
results
The array of all of the recognition results that have been returned as part of this session. This array MUST be identical to the array that was present when the last SpeechInputResultEvent was raised, with the exception of the new result value.
sessionId
The sessionId is a unique identifier of this SpeechInputRequest object that identifies the session. This id MAY be used to correlate logging and also as part of rerecognition.
final
The final boolean MUST be set to true if this is the final time the speech service will return this particular index value. If the value is false, then this represents an interim result that could still be changed.
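
For continuous recognition, a handler can use resultIndex to maintain a transcript across interim and final results, as in this sketch; updateDisplay is a hypothetical application helper.

    var transcript = [];
    request.continuous = true;
    request.onresult = function(event) {
      // Place the new result at its index; a non-final entry at that index
      // may later be replaced by a more accurate result.
      transcript[event.resultIndex] = event.result;
      var text = transcript.map(function(r) {
        return r.item(0).utterance;
      }).join(" ");
      updateDisplay(text);
    };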
Questions:
Are there any other major content sections missing? What about anything that the protocol requires of us? Do the above sections fit together with each other and with the protocol work?

9. Design Decisions

Here are the design decisions from the XG that are relevant to the Web API proposal:

To do

insert other design decisions as we receive them and review them

10. Requirements and Use Cases

This section covers what some of the requirements were for this API, as well as illustrates some use cases. Note that more extensive information can be found at HTML Speech XG Use Cases and Requirements as well as in the final XG note including requirements and use cases.

11. Acknowledgements

This proposal was developed by the HTML Speech XG.

This work builds on the existing work including:

Special thanks to the members of the XG: Andrei Popescu, Andy Mauro, Björn Bringert, Chaitanya Gharpure, Charles Chen, Dan Druta, Daniel Burnett, Dave Burke, David Bolter, Deborah Dahl, Fabio Paternò, Glen Shires, Ingmar Kliche, Jerry Carter, Jim Larson, Kazuyuki Ashimura, Marc Schröder, Markus Gylling, Masahiro Araki, Matt Womer, Michael Bodell, Michael Johnston, Milan Young, Olli Pettay, Paolo Baggia, Patrick Ehlen, Raj Tumuluri, Rania Elnaggar, Ravi Reddy, Robert Brown, Satish Kumar Sampath, Somnath Chandra, and T.V. Raman.

12. References

RFC2119
Key words for use in RFCs to Indicate Requirement Levels, S. Bradner. IETF.
HTML5
HTML 5: A vocabulary and associated APIs for HTML and XHTML (work in progress), I. Hickson. W3C.