Copyright © 2011 W3C® (MIT, ERCIM, Keio), All Rights Reserved. W3C liability, trademark and document use rules apply.
This proposed API represents the web API for doing speech in HTML. This proposal covers the HTML bindings and JS functions that sit on top of the protocol work that is also being proposed by the HTML Speech Incubator Group. This includes:
The section on Design Decisions [DESIGN] covers the design decisions the group agreed to that helped direct this API proposal.
The section on Requirements and Use Cases [REQ] covers the motivation behind this proposal.
This API is designed to be used in conjunction with other APIs and elements on the web platform, including APIs to capture input and APIs to do bidirectional communications with a server (WebSockets).
This section describes the status of this document at the time of its publication. Other documents may supersede this document. A list of current W3C publications and the latest revision of this technical report can be found in the W3C technical reports index at http://www.w3.org/TR/.
This document is the 29 October 2011 Editor's Draft of the HTML Speech Web API proposal. It is not a web standards track document and does not define a web standard. This proposal, or one similar to it, is likely to be included in the Incubator Group's final report, along with Requirements, Design Decisions, and the Protocol proposal. The hope is that an official web standards group will develop a web standard based on all of these inputs.
This document is produced by the HTML Speech Incubator Group.
This document being an Editor's Draft does not imply endorsement by the W3C Membership nor necessarily by the membership of the HTML Speech incubator group. It is intended to reflect and collate previous discussions and proposals that have taken place on the public email alias and in group teleconferences. This is a draft document and may be updated, replaced or obsoleted by other documents at any time. It is inappropriate to cite this document as other than work in progress.
Web applications should have the ability to use speech to interact with users. That speech could be for output through synthesized speech, or could be for input through the user speaking to fill form items, the user speaking to control page navigation, or many other collected use cases. A web application author should be able to add speech to a web application using methods familiar to web developers and should not require extensive specialized speech expertise. The web application should build on existing W3C web standards and support a wide variety of use cases. The web application author should have the flexibility to control the recognition service the web application uses, but should not be obligated to support a particular service. This proposal defines the basic representations for how to use grammars, parameters, and recognition results and how to process them. The interfaces and API defined in this proposal can be used with other interfaces and APIs exposed to the web platform.
Note that privacy and security concerns exist around allowing web applications to do speech recognition. User agents should make sure that end users are aware that speech recognition is occurring, and that the end users have given informed consent for this to occur. The exact mechanism of consent is user agent specific, but the privacy and security concerns have shaped many aspects of the proposal.
In the examples below, the various speech APIs are used to do a basic speech web search.
<!DOCTYPE html>
<html>
<head>
<title>Example Speech Web Search Markup Only</title>
</head>
<body>
<form id="f" action="/search" method="GET">
<label for="q">Search</label>
<reco for="q"/>
<input id="q" name="q" type="text"/>
<input type="submit" value="Example Search"/>
</form>
</body>
</html>
<!DOCTYPE html>
<html>
<head>
<title>Example Speech Web Search JS API and Bindings</title>
</head>
<body>
<script type="text/javascript">
function speechClick() {
var q = document.getElementById('q');
var sir = new SpeechInputRequest();
sir.addGrammarFrom(q);
sir.outputToElement(q);
// Set whatever other parameters you want on the sir
sir.serviceURI = "https://example.org/recoService";
sir.speedVsAccuracy = 0.75;
sir.start();
}
</script>
<form id="f" action="/search" method="GET">
<label for="q">Search</label>
<input id="q" name="q" type="text" />
<button name="mic" onclick="speechClick()">
<img src="http://www.openclipart.org/image/15px/svg_to_png/audio-input-microphone.png" alt="microphone picture" />
</button>
<br />
<input type="submit" value="Example Search" />
</form>
</body>
</html>
<!DOCTYPE html>
<html>
<head>
<title>Example Speech Web Search</title>
</head>
<body>
<script type="text/javascript">
function speechClick() {
var sir = new SpeechInputRequest();
// Build grammars from scratch
var g = new SpeechGrammar();
g.src = "http://example.org/topChoices.srgs";
g.weight = 1.5;
var g1 = new SpeechGrammar();
g1.src = "builtin:input?type=text";
g1.weight = 0.5;
var g2 = new SpeechGrammar();
g2.src = "builtin:websearch";
g2.weight = 1.1;
// This 3rd grammar is an inline version of the http://www.example.com/places.grxml grammar from Appendix J.2 of the SRGS document (without the comments and xml and doctype)
var g3 = new SpeechGrammar();
g3.src = "data:application/srgs+xml;base64,PGdyYW1tYXIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDEvMDYvZ3JhbW1hciINCiAgICAgICAgIHhtbG5zOnhzaT0iaHR0cDovL3d3dy53My5vcmcvMjAwMS9YTUxTY2" +
"hlbWEtaW5zdGFuY2UiIA0KICAgICAgICAgeHNpOnNjaGVtYUxvY2F0aW9uPSJodHRwOi8vd3d3LnczLm9yZy8yMDAxLzA2L2dyYW1tYXIgDQogICAgICAgICAgICAgICAgICAgICAgICAgICAgIGh0dHA6Ly93d3cudzMub" +
"3JnL1RSL3NwZWVjaC1ncmFtbWFyL2dyYW1tYXIueHNkIg0KICAgICAgICAgeG1sOmxhbmc9ImVuIiB2ZXJzaW9uPSIxLjAiIHJvb3Q9ImNpdHlfc3RhdGUiIG1vZGU9InZvaWNlIj4NCg0KICAgPHJ1bGUgaWQ9ImNpdHki" +
"IHNjb3BlPSJwdWJsaWMiPg0KICAgICA8b25lLW9mPg0KICAgICAgIDxpdGVtPkJvc3RvbjwvaXRlbT4NCiAgICAgICA8aXRlbT5QaGlsYWRlbHBoaWE8L2l0ZW0+DQogICAgICAgPGl0ZW0+RmFyZ288L2l0ZW0+DQogICA" +
"gIDwvb25lLW9mPg0KICAgPC9ydWxlPg0KDQogICA8cnVsZSBpZD0ic3RhdGUiIHNjb3BlPSJwdWJsaWMiPg0KICAgICA8b25lLW9mPg0KICAgICAgIDxpdGVtPkZsb3JpZGE8L2l0ZW0+DQogICAgICAgPGl0ZW0+Tm9ydG" +
"ggRGFrb3RhPC9pdGVtPg0KICAgICAgIDxpdGVtPk5ldyBZb3JrPC9pdGVtPg0KICAgICA8L29uZS1vZj4NCiAgIDwvcnVsZT4NCg0KICAgPHJ1bGUgaWQ9ImNpdHlfc3RhdGUiIHNjb3BlPSJwdWJsaWMiPg0KICAgICA8c" +
"nVsZXJlZiB1cmk9IiNjaXR5Ii8+IDxydWxlcmVmIHVyaT0iI3N0YXRlIi8+DQogICA8L3J1bGU+DQo8L2dyYW1tYXI+";
g3.weight = 0.01;
sir.grammars[0] = g;
sir.grammars[1] = g1;
sir.grammars[2] = g2;
sir.grammars[3] = g3;
// Say what happens on a match
sir.onresult = function(event) {
var q = document.getElementById('q');
q.value = event.result.item(0).interpretation;
var f = document.getElementById('f');
f.submit();
};
// Also do something on a nomatch
sir.onnomatch = function(event) {
// even though it is a no match we might have a result
alert("no match: " + event.result.item(0).interpretation);
};
// Set whatever other parameters you want on the sir
sir.serviceURI = "https://example.org/recoService";
sir.speedVsAccuracy = 0.75;
// Start will call open for us; if we wanted to do initial permission checking when the page loads, we could call open() on the sir explicitly
sir.start();
}
</script>
<form id="f" action="/search" method="GET">
<label for="q">Search</label>
<input id="q" name="q" type="text" />
<button name="mic" onclick="speechClick()">
<img src="http://www.openclipart.org/image/15px/svg_to_png/audio-input-microphone.png" alt="microphone picture" />
</button>
<br />
<input type="submit" value="Example Search" />
</form>
</body>
</html>
Everything in this proposal is informative since this is not a standards track document. However, RFC2119 normative language is used where appropriate to aid future work should this proposal be moved into a standards track process.
The keywords "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "RECOMMENDED", "MAY" and "OPTIONAL" in this document are to be interpreted as described in Key words for use in RFCs to Indicate Requirement Levels [RFC2119].
The reco element is the way to do speech recognition using markup bindings. The reco element is legal wherever phrasing content is expected, and can contain any phrasing content, except with no descendant recoable elements unless it is the element's reco control, and no descendant reco elements.
This section is based on Michael Bodell's proposal and the meeting discussion.
[NamedConstructor=Reco(),
NamedConstructor=Reco(in DOMString for)]
interface HTMLRecoElement : HTMLElement {
// Attributes
readonly attribute HTMLFormElement? form;
attribute DOMString htmlFor;
readonly attribute HTMLElement? control;
attribute SpeechInputRequest request;
attribute DOMString grammar;
// From the SpeechInputRequest
integer maxNBest;
DOMString language;
boolean saveForRereco;
boolean endpointDetection;
boolean finalizeBeforeEnd;
integer interimResults;
float confidenceThreshold;
float sensitivity;
float speedVsAccuracy;
integer completeTimeout;
integer incompleteTimeout;
integer maxSpeechTimeout;
DOMString inputWaveformURI;
attribute DOMString serviceURI;
attribute boolean continuous;
// event handlers
attribute Function onaudiostart;
attribute Function onsoundstart;
attribute Function onspeechstart;
attribute Function onspeechend;
attribute Function onsoundend;
attribute Function onaudioend;
attribute Function onresult;
attribute Function onnomatch;
attribute Function onerror;
attribute Function onauthorizationchange;
attribute Function onopen;
attribute Function onstart;
};
The reco element represents a speech input in a user interface. The speech input can be associated with a specific form control, known as the reco element's reco control, either using the for attribute, or by putting the form control inside the reco element itself.
Except where otherwise specified by the following rules, a reco element has no reco control.
Some elements are categorized as recoable elements. These are elements that can be associated with a reco element:
The reco element's exact default presentation and behavior, in particular what its activation behavior might be, is unspecified and user agent specific. When the reco element is bound to a recoable element and no grammar attribute is specified, the default builtin uri is used. The activation behavior of a reco element for events targeted at interactive content descendants of a reco element, and any descendants of those interactive content descendants, MUST be to do nothing. When a reco element with a reco control is activated and gets a reco result, the default action of the recognition event MUST be to use the value of the top n-best interpretation of the current result event. The exact binding depends on the recoable element in question and is covered in the binding results section.
The form attribute is used to explicitly associate the reco element with its form owner.
The form IDL attribute is part of the element's forms API.
The htmlFor IDL attribute MUST reflect the for content attribute.
The for attribute MAY be specified to indicate a form control with which a speech input is to be associated. If the attribute is specified, the attribute's value MUST be the ID of a recoable element in the same Document as the reco element. If the attribute is specified and there is an element in the Document whose ID is equal to the value of the for attribute, and the first such element is a recoable element, then that element is the reco element's reco control.
If the for attribute is not specified, but the reco element has a recoable element descendant, then the first such descendant in tree order is the reco element's reco control.
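For example (illustrative markup only), the following two fragments associate a reco element with a text input in equivalent ways, first via the for attribute and then by nesting the reco control:

<reco for="city"></reco>
<input id="city" name="city" type="text"/>

<reco>
  <input id="state" name="state" type="text"/>
</reco>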
The control attribute returns the form control that is associated with this element. The control IDL attribute MUST return the reco element's reco control, if any, or null if there isn't one.
control . recos returns a NodeList of all the reco elements that the form control is associated with.
Recoable elements have a NodeList object associated with them that represents the list of reco elements, in tree order, whose reco control is the element in question. The recos IDL attribute of recoable elements, on getting, MUST return that NodeList object.
The request attribute represents the SpeechInputRequest associated with this reco element. By default the User Agent sets up the speech service specified by serviceURI and the default speech input request associated with this reco. The author MAY set this attribute to associate a markup reco element with an author-created speech input request. In this way the author has full control over the recognition involved. When the request is set, the request's speech parameters take priority over the corresponding parameters on the reco attributes.
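As a rough sketch (the element id is hypothetical), an author-created request can be attached to a markup reco element like this:

var reco = document.getElementById('cityReco'); // a reco element in the page
var sir = new SpeechInputRequest();
sir.serviceURI = "https://example.org/recoService"; // author-chosen service
sir.maxNBest = 3;
// The request's parameters now take priority over the reco element's attributes
reco.request = sir;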
The grammar attribute contains the uri of a grammar associated with this reco. If unset, it defaults to the default builtin uri. Note that to use multiple grammars or different weights the author must use the scripted SpeechInputRequest API.
The other attributes are all defined identically to how they appear in the SpeechInputRequest section.
Two constructors are provided for creating HTMLRecoElement objects (in addition to the factory methods from DOM Core such as createElement()): Reco() and Reco(for). When invoked as constructors, these MUST return a new HTMLRecoElement object (a new reco element). If the for argument is present, the object created MUST have its for content attribute set to the provided value. The element's document MUST be the active document of the browsing context of the Window object on which the interface object of the invoked constructor is found.
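For instance, a reco element could be created and bound to an existing control from script (a minimal sketch; the ids reuse those from the earlier examples):

var r = new Reco("q"); // equivalent to creating a reco element and setting for="q"
document.getElementById("f").appendChild(r);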
When the user agent needs to create a default grammar from a recoable element it builds a uri using the builtin scheme. The format of the uri is of the form: builtin:<tag-name>?<tag-attributes> where the tag-name is just the name of the recoable element, and the tag-attributes are the content attributes in the form name=value&name=value. Since this is a uri, both name and value must be properly uri escaped. Note the ? character may be omitted when there are no tag-attributes. For example:
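(The following uris are illustrative of the construction rule; the attribute values are hypothetical.)

<input type="text">                   builtin:input?type=text
<input type="date" min="2011-01-01">  builtin:input?type=date&min=2011-01-01
<textarea>                            builtin:textarea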
Speech services may define other builtin grammars as well. It is recommended that speech services support builtin:dictation to represent a "say anything" grammar and builtin:websearch to represent a speech web search.
Builtin uris for grammars can be used even when the reco element is not bound to any particular element, and may also be used by the SpeechInputRequest object and as a rule reference in an SRGS grammar.
In addition to the content attributes, other parameters may be specified. It is recommended that speech services support a filter parameter that can be set to the value noOffensiveWords to represent a desire to not recognize offensive words. Speech services may define other extension parameters.
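For instance, a service that supports the recommended filter parameter might be handed a uri such as the following (illustrative only):

builtin:dictation?filter=noOffensiveWords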
Note the exact grammar that is generated from any builtin uri is specific to the recognition service and the content attributes are best thought of as hints for the service.
When a reco element is bound to a recoable element and does not have an onresult attribute set then the default binding is used. The default binding can also be used if the SpeechInputRequest's outputToElement method is called. In both cases the exact binding depends on the recoable element in question. In general, the binding will use the value associated with the interpretation of the top n-best element.
When the recoable element is a button then if the button is not disabled, then the result of a speech recognition is to activate the button.
When the recoable element is an input element then the exact binding depends on the control type. For basic text fields (input elements with a type attribute of text, search, tel, url, email, password, and number) the value of the result should be assigned to the input (inserted at the cursor and replacing any selected text). For button controls (submit, image, reset, button) the act of recognition just activates the input. For type checkbox, the input should be set to a checkedness of true. For type radio, the input should be set to a checkedness of true, and all other inputs in the radio button group must be set to a checkedness of false. For date and time types (datetime, date, month, week, time, datetime-local) the value should be assigned unless the value represents a non-empty string that is not valid for that type as described here. For type of color the value should be assigned unless the value does not represent a valid lowercase simple color. For type of range the assignment is only allowed if it is a valid floating point number, and before being assigned, it must undergo the value sanitization algorithm as described here.
When the recoable element is a keygen element then the element should regenerate the key.
When the recoable element is a meter element then the value of the meter should be set to the best representation of the value as a floating point number.
When the recoable element is an output element it assigns the recognized value to the output's value (which also must set the value mode flag to value).
When the recoable element is a progress element then the value of the progress bar should be set to the best representation of the value as a floating point number.
When the recoable element is a select element then the recognition result will be used to select any options that are named the same as the interpretation's value (that is, any that are returned by namedItem(value)).
When the recoable element is a textarea element then the recognized value is inserted into the textarea at the text cursor, if the cursor is in the textarea. If text in the textarea is selected then the new value replaces the highlighted text. If the text cursor is not in the textarea then the value is appended to the end of the textarea.
The TTS element is the way to do speech synthesis using markup bindings. The TTS element is legal where embedded content is expected. If the TTS element has a src attribute, then its content model is zero or more track elements, then transparent, but with no media element descendants. If the element does not have a src attribute, then its content model is one or more source elements, then zero or more track elements, then transparent, but with no media element descendants.
This section is based on Michael Bodell's proposal and the meeting discussion.
[NamedConstructor=TTS(),
NamedConstructor=TTS(in DOMString src)]
interface HTMLTTSElement : HTMLMediaElement {
attribute DOMString serviceURI;
attribute DOMString lastMark;
};
A TTS element represents a synthesized audio stream. A TTS element is a media element whose media data is ostensibly synthesized audio data.
When a TTS element is potentially playing, it must have its TTS data played synchronized with the current playback position, at the element's effective media volume.
When a TTS element is not potentially playing, TTS must not play for the element.
Content MAY be provided inside the TTS element. User agents SHOULD NOT show this content to the user; it is intended for older Web browsers which do not support TTS.
In particular, this content is not intended to address accessibility concerns. To make TTS content accessible to those with physical or cognitive disabilities, authors are expected to provide alternative media streams and/or to embed accessibility aids (such as transcriptions) into their media streams.
Implementations SHOULD support at least UTF-8 encoded text/plain and application/ssml+xml (both SSML 1.0 and 1.1 SHOULD be supported).
The existing timeupdate event is dispatched to report progress through the synthesized speech. If the synthesis is of type application/ssml+xml, timeupdate events should be fired for each mark element that is encountered.
The src, preload, autoplay, mediagroup, loop, muted, and controls attributes are the attributes common to all media elements.
The serviceURI attribute specifies the speech service to use in the constructed default request. If the serviceURI is unset then the User Agent MUST use the User Agent default service.
The new lastMark attribute must, on getting, return the name of the last SSML mark element that was encountered during playback. If no mark has been encountered yet, the attribute must return null.
Two constructors are provided for creating HTMLTTSElement objects (in addition to the factory methods from DOM Core such as createElement()): TTS() and TTS(src). When invoked as constructors, these MUST return a new HTMLTTSElement object (a new tts element). The element MUST have its preload attribute set to the literal value "auto". If the src argument is present, the object created MUST have its src content attribute set to the provided value, and the user agent MUST invoke the object's resource selection algorithm before returning. The element's document MUST be the active document of the browsing context of the Window object on which the interface object of the invoked constructor is found.
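The fragment below sketches one way the TTS element might be used from script; the SSML resource and service uris are hypothetical, and the playback members used (play, currentTime, timeupdate) are those inherited from media elements.

var tts = new TTS("https://example.org/prompts/welcome.ssml"); // an application/ssml+xml resource
tts.serviceURI = "https://example.org/ttsService";
tts.addEventListener("timeupdate", function() {
  // lastMark returns the name of the most recent SSML mark encountered, or null
  console.log("position: " + tts.currentTime + ", last mark: " + tts.lastMark);
}, false);
document.body.appendChild(tts);
tts.play();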
The speech input request interface is the scripted web API for controlling a given recognition.
This section is based on Debbie Dahl's proposal, Bjorn Bringert's proposal, and Olli Pettay's proposal.
[Constructor]
interface SpeechInputRequest {
// recognition parameters
SpeechGrammar[] grammars;
// misc parameter attributes
integer maxNBest;
DOMString language;
boolean saveForRereco;
boolean endpointDetection;
boolean finalizeBeforeEnd;
integer interimResults;
float confidenceThreshold;
float sensitivity;
float speedVsAccuracy;
integer completeTimeout;
integer incompleteTimeout;
integer maxSpeechTimeout;
DOMString inputWaveformURI;
// the generic set of parameters
SpeechParameter[] parameters;
// other attributes
attribute DOMString serviceURI;
attribute MediaStream input;
const unsigned short SPEECH_AUTHORIZATION_UNKNOWN = 0;
const unsigned short SPEECH_AUTHORIZATION_AUTHORIZED = 1;
const unsigned short SPEECH_AUTHORIZATION_NOT_AUTHORIZED = 2;
readonly attribute unsigned short authorizationState;
attribute boolean continuous;
// the generic send info method
void sendInfo(in DOMString type, in DOMString value);
// Default markup binding methods
void addGrammarFrom(in Element inputElement, optional float weight, optional boolean modal);
void outputToElement(in Element outputElement);
// methods to drive the speech interaction
void open();
void start();
void stop();
void abort();
void interpret(in DOMString text);
// event methods
attribute Function onaudiostart;
attribute Function onsoundstart;
attribute Function onspeechstart;
attribute Function onspeechend;
attribute Function onsoundend;
attribute Function onaudioend;
attribute Function onresult;
attribute Function onnomatch;
attribute Function onerror;
attribute Function onauthorizationchange;
attribute Function onopen;
attribute Function onstart;
attribute Function onend;
};
SpeechInputRequest implements EventTarget;
interface SpeechInputNomatchEvent : Event {
readonly attribute SpeechInputResult result;
};
interface SpeechInputErrorEvent : Event {
readonly attribute SpeechInputError error;
};
interface SpeechInputError {
const unsigned short SPEECH_INPUT_ERR_OTHER = 0;
const unsigned short SPEECH_INPUT_ERR_NO_SPEECH = 1;
const unsigned short SPEECH_INPUT_ERR_ABORTED = 2;
const unsigned short SPEECH_INPUT_ERR_AUDIO_CAPTURE = 3;
const unsigned short SPEECH_INPUT_ERR_NETWORK = 4;
const unsigned short SPEECH_INPUT_ERR_NOT_ALLOWED = 5;
const unsigned short SPEECH_INPUT_ERR_SERVICE_NOT_ALLOWED = 6;
const unsigned short SPEECH_INPUT_ERR_BAD_GRAMMAR = 7;
const unsigned short SPEECH_INPUT_ERR_LANGUAGE_NOT_SUPPORTED = 8;
readonly attribute unsigned short code;
readonly attribute DOMString message;
};
// Item in N-best list
interface SpeechInputAlternative {
readonly attribute DOMString utterance;
readonly attribute float confidence;
readonly attribute any interpretation;
};
// A complete one-shot simple response
interface SpeechInputResult {
readonly attribute Document resultEMMAXML;
readonly attribute DOMString resultEMMAText;
readonly attribute unsigned long length;
getter SpeechInputAlternative item(in unsigned long index);
readonly attribute boolean final;
};
// A full response, which could be interim or final, part of a continuous response or not
interface SpeechInputResultEvent : Event {
readonly attribute SpeechInputResult result;
readonly attribute short resultIndex;
readonly attribute SpeechInputResult[] results;
readonly attribute DOMString sessionId;
};
// The object representing a speech grammar
[Constructor]
interface SpeechGrammar {
attribute DOMString src;
attribute float weight;
attribute boolean modal;
};
// The object representing a speech parameter
[Constructor]
interface SpeechParameter {
attribute DOMString name;
attribute DOMString value;
};
The DOM Level 2 Event Model is used for speech recognition events. The methods in the EventTarget interface should be used for registering event listeners. The SpeechInputRequest interface also contains convenience attributes for registering a single event handler for each event type.
For all these events, the timeStamp attribute defined in the DOM Level 2 Event interface must be set to the best possible estimate of when the real-world event which the event object represents occurred.
Unless specified below, the ordering of the different events is undefined. For example, some implementations may fire audioend before speechstart or speechend if the audio detector is client-side and the speech detector is server-side.
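Both registration styles refer to the same underlying events, as in this minimal sketch:

var sir = new SpeechInputRequest();
// Convenience handler attribute...
sir.onresult = function(event) { console.log(event.result.item(0).utterance); };
// ...or the generic EventTarget interface
sir.addEventListener("speechstart", function(event) {
  console.log("speech detected at " + event.timeStamp);
}, false);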
The speech input error object has two attributes: code and message.
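A handler might switch on the code and fall back to the human-readable message, as in this sketch (sir is a SpeechInputRequest as in the earlier examples):

sir.onerror = function(event) {
  switch (event.error.code) {
    case SpeechInputError.SPEECH_INPUT_ERR_NO_SPEECH:
      alert("No speech was detected. Please try again.");
      break;
    case SpeechInputError.SPEECH_INPUT_ERR_NOT_ALLOWED:
      alert("Speech recognition was not authorized.");
      break;
    default:
      alert("Recognition error: " + event.error.message);
  }
};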
The SpeechInputAlternative represents a simple view of the response that gets used in an n-best list.
The SpeechInputResult object represents a single one-shot recognition match, either as one small part of a continuous recognition or as the complete return result of a non-continuous recognition.
The Speech Input Result event is the event that is raised each time there is an interim or final result. The event contains both the current most recent recognized bit (in the result object) as well as a history of the complete recognition session so far (in the results object).
The SpeechGrammar object represents a container for a grammar. This structure has the following attributes:
The SpeechParameter object represents the container for arbitrary name/value parameters. This extensible mechanism allows developers to take advantage of extensions that recognition services may allow.
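For example, a service-specific parameter could be passed like this (the parameter name and value are hypothetical and would be defined by the recognition service):

var p = new SpeechParameter();
p.name = "com.example.noise-robustness";
p.value = "high";
sir.parameters[0] = p;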
Some speech services may want to raise custom extension interim events either while doing speech recognition or while synthesizing audio. An example of this kind of event might be viseme events that encode lip and mouth positions while speech is being synthesized that can help with the creation of avatars and animation. These extension events MUST begin with "speech-x", so the hypothetical viseme event might be something like "speech-x-viseme".
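Such an event would be registered like any other DOM event, for instance (the viseme event and the application callback are hypothetical):

sir.addEventListener("speech-x-viseme", function(event) {
  // The structure of the event data is defined by the speech service
  updateAvatarMouth(event); // placeholder for application code
}, false);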
Here are the design decisions from the XG that are relevant to the Web API proposal:
Need other design decisions for the Face-to-face
This section covers what some of the requirements were for this API, as well as illustrates some use cases. Note that more extensive information can be found in HTML Speech XG Use Cases and Requirements as well as in the final XG note including requirements and use cases.
Voice Web Search. A user can speak a query and get a result.
Speech Command Interface. A Speech Command and Control Shell that allows multiple commands, many of which take arguments, such as "call [number]", "call [person]", "calculate [math expression]", "play [song]", or "search for [query]".
Speech UI present when no visible UI need be present. Some speech applications are oriented around determining the user's intent before gathering any specific input, and hence their first interaction may have no visible input fields whatsoever, or may accept speech input that is far less constrained than the fields on the screen. For example, the user may simply be presented with the text "how may I help you?" (maybe with some speech synthesis or an earcon), and then utter their request, which the application analyzes in order to route the user to an appropriate part of the application.
A Speech Enabled Email Client. The application reads out subjects and contents of email and also listens for commands, for instance, "archive", "reply: ok, let's meet at 2 pm", "forward to bob", "read message". When an email message is received, a summary notification may be raised that displays a small amount of content (for instance the person the email is from and a couple of words of the subject). It is desirable that a speech API be present and listening for the duration of this notification, allowing a user experience of being able to say "Reply to that" or "Read that email message". Note that this recognition UI could not be contingent on the user clicking a button, as that would defeat much of the benefit of this scenario (being able to reply and control the email without using the keyboard or mouse).
This proposal was developed by the HTML Speech XG.
This work builds on the existing work including:
Special thanks to the members of the XG: Andrei Popescu, Andy Mauro, Björn Bringert, Chaitanya Gharpure, Charles Chen, Dan Druta, Daniel Burnett, Dave Burke, David Bolter, Deborah Dahl, Fabio Paternò, Glen Shires, Ingmar Kliche, Jerry Carter, Jim Larson, Kazuyuki Ashimura, Marc Schröder, Markus Gylling, Masahiro Araki, Matt Womer, Michael Bodell, Michael Johnston, Milan Young, Olli Pettay, Paolo Baggia, Patrick Ehlen, Raj Tumuluri, Rania Elnaggar, Ravi Reddy, Robert Brown, Satish Kumar Sampath, Somnath Chandra, and T.V. Raman.