Extensible multimodal annotation language spec published

The W3C Multimodal Interaction working group has published the first
draft specification for EMMA, the Extensible MultiModal Annotation
markup language.


EMMA is designed as a data-exchange format for representing
application-specific interpretations of user input, together with
annotations such as confidence scores, time stamps, and the input
medium. Speech and handwriting recognizers, natural language engines,
media interpreters, and multimodal integration components generate
EMMA markup. Feedback on this draft is welcome. For more information,
visit the Multimodal Interaction home page, see:


More information about the role EMMA plays in the W3C Multimodal
Interaction Framework can be found in:


 Dave Raggett <dsr@w3.org>  W3C lead for voice and multimodal.
 http://www.w3.org/People/Raggett +44 1225 866240 (or 867351)

Received on Tuesday, 12 August 2003 05:54:52 UTC