New Working Draft of EMMA

The W3C Multimodal Interaction Working Group is happy to 
announce the publication of the third working draft of 
EMMA: Extensible MultiModal Annotation markup language.

http://www.w3.org/TR/emma/

Abstract: The W3C Multimodal Interaction Working Group 
aims to develop specifications to enable access to the 
Web using multiple modes of interaction, such as speech, 
pen, keypad, bitmapped displays, etc. This document is 
part of a suite of specifications for multimodal systems. 
It provides markup for representing the interpretation of 
user input (speech, keystrokes, pen input, etc.) from 
multiple modalities, together with annotations for confidence 
scores, timestamps, input medium, etc. This document is 
produced as part of the W3C Multimodal Interaction 
Activity, and it is intended for use by systems that 
provide semantic interpretation and produce joint input 
events with multiple modalities for a variety of inputs, 
including, but not necessarily limited to, speech, natural 
language text, gesture, GUI and ink input.
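As a rough illustration of the kind of markup the draft describes, an EMMA document might wrap an application-specific interpretation of a spoken utterance together with annotations such as a confidence score, input medium, and timestamps. The element and attribute names below follow the emma namespace convention used by the specification, but the specific values and the application payload are hypothetical and may differ from the working draft:

```xml
<emma:emma version="1.0"
    xmlns:emma="http://www.w3.org/2003/04/emma">
  <!-- One candidate interpretation of a spoken request,
       annotated with confidence, medium/mode, and timestamps
       (all values here are illustrative) -->
  <emma:interpretation id="interp1"
      emma:confidence="0.8"
      emma:medium="acoustic"
      emma:mode="voice"
      emma:start="1094265120000"
      emma:end="1094265122000">
    <!-- application-specific semantic result -->
    <destination>Boston</destination>
  </emma:interpretation>
</emma:emma>
```

A multimodal system could emit several such interpretation elements for ambiguous input, each carrying its own confidence score, and a downstream dialog manager would then select among them.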

Please send comments to www-multimodal@w3.org.

best regards,

Deborah Dahl
Chair, Multimodal Interaction Working Group

Wu Chou
Chair, EMMA subgroup

Received on Saturday, 4 September 2004 02:32:48 UTC