Last Call Working Draft of EMMA (Extensible Multimodal Annotation) specification

The W3C Multimodal Interaction Working Group is pleased to
announce the publication of the second Last Call Working 
Draft of the EMMA (Extensible Multimodal Annotation) 
specification, http://www.w3.org/TR/emma/. This second
Last Call was published because substantive changes were
made to the specification after the first Last Call; the
changes since then are detailed in the specification itself.

Description (from the Abstract)
EMMA is part of a set of specifications for multimodal 
systems, and provides details of an XML markup language 
for containing and annotating the interpretation of user 
input. Examples of interpretations of user input are a
transcription into words of a raw signal (for instance,
derived from speech, pen, or keystroke input), a set of
attribute/value pairs describing its meaning, or a set
of attribute/value pairs describing a gesture. The
interpretation of the user's input is expected to be 
generated by signal interpretation processes, such as 
speech and ink recognition, semantic interpreters, and 
other types of processors for use by components that 
act on the user's inputs such as interaction managers.
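To give a flavor of the markup, here is a minimal sketch of an EMMA document annotating one interpretation of a spoken utterance. It is loosely modeled on the examples in the draft; the application-specific elements (origin, destination) and the attribute values shown are illustrative assumptions, not normative content, so consult the specification for the authoritative element and attribute definitions.

```xml
<!-- Illustrative sketch only: a single interpretation of the spoken
     input "flights from boston to denver", with EMMA annotations for
     the input medium, mode, recognized tokens, and confidence.
     The <origin>/<destination> elements are hypothetical
     application-level semantics, not defined by EMMA itself. -->
<emma:emma version="1.0"
           xmlns:emma="http://www.w3.org/2003/04/emma">
  <emma:interpretation id="int1"
                       emma:medium="acoustic"
                       emma:mode="voice"
                       emma:confidence="0.75"
                       emma:tokens="flights from boston to denver">
    <origin>Boston</origin>
    <destination>Denver</destination>
  </emma:interpretation>
</emma:emma>
```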

We invite your comments, which should be sent to 
www-multimodal@w3.org by May 1.

best regards,

Debbie Dahl
Chair, Multimodal Interaction Working Group

Received on Thursday, 12 April 2007 16:03:15 UTC