Last Call Working Draft of EMMA (Extensible MultiModal Annotation) specification

The W3C Multimodal Interaction Working Group is pleased to 
announce the publication of the 16 September 2005 Last Call 
Working Draft of the EMMA (Extensible MultiModal Annotation) 
specification. 

Description (from the abstract)

This document is part of a set of specifications for multimodal 
systems, and provides details of an XML markup language for 
containing and annotating the interpretation of user input. 
Examples of interpretations of user input include the 
transcription of a raw signal into words (derived, for instance, 
from speech, pen, or keystroke input), a set of attribute/value 
pairs describing the meaning of those words, or a set of 
attribute/value pairs describing a gesture. The interpretation 
of the user's input is expected to 
be generated by signal interpretation processes, such as speech 
and ink recognition, semantic interpreters, and other types of 
processors for use by components that act on the user's inputs 
such as interaction managers.
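
As a rough illustration (a sketch based on the draft's general 
vocabulary, not an example quoted from the specification; the 
element content and confidence value here are invented), an EMMA 
document wrapping a single speech interpretation might look like:

```xml
<emma:emma version="1.0"
    xmlns:emma="http://www.w3.org/2003/04/emma">
  <!-- One candidate interpretation of a spoken utterance;
       emma:tokens carries the recognized words, and the child
       elements carry an application-specific semantic result. -->
  <emma:interpretation id="interp1"
      emma:medium="acoustic"
      emma:mode="voice"
      emma:confidence="0.75"
      emma:tokens="flights from boston to denver">
    <origin>Boston</origin>
    <destination>Denver</destination>
  </emma:interpretation>
</emma:emma>
```

A speech recognizer or semantic interpreter would emit a document 
of this shape, and a downstream component such as an interaction 
manager would consume the annotations (confidence, medium, mode) 
alongside the semantic payload.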

Comments should be sent to this mailing list by 28 October 2005. 

Best regards,

Debbie Dahl
Chair, Multimodal Interaction Working Group

Received on Thursday, 29 September 2005 15:24:24 UTC