new EMMA Working Draft published

I am pleased to announce the publication of a new Working Draft of the EMMA
(Extensible MultiModal Annotation) specification, which is being developed
by the W3C Multimodal Interaction Working Group. The specification can be
found at http://www.w3.org/TR/emma/. The Multimodal Interaction Working
Group very much welcomes any comments and suggestions you may have regarding
this specification. Please send your comments to this mailing list.

Here's the abstract from the Working Draft.

The W3C Multimodal Interaction Working Group aims to develop specifications
that enable access to the Web using multimodal interaction. This document is
part of a set of specifications for multimodal systems and provides details
of an XML markup language for describing the interpretation of user input.
Examples of such interpretations include a transcription into words of a raw
signal (for instance, one derived from speech, pen, or keystroke input), a
set of attribute/value pairs describing its meaning, or a set of
attribute/value pairs describing a gesture. These interpretations are
expected to be generated by signal interpretation processes, such as speech
and ink recognition, semantic interpreters, and other types of processors,
for use by components that act on the user's input, such as interaction
managers.
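
To give a flavor of the kind of markup the draft describes, here is a rough,
purely illustrative sketch of how a spoken travel request might be
represented as attribute/value pairs inside an interpretation element. The
element names, attributes, and namespace URI below are only indicative and
are not taken from the draft itself; please consult the specification at the
URL above for the normative vocabulary.

    <emma:emma version="1.0"
               xmlns:emma="http://www.w3.org/2003/04/emma">
      <!-- one interpretation of the user's spoken input,
           annotated with an illustrative confidence score -->
      <emma:interpretation id="int1"
                           emma:mode="speech"
                           emma:confidence="0.75">
        <!-- application-specific attribute/value pairs -->
        <origin>Boston</origin>
        <destination>Denver</destination>
      </emma:interpretation>
    </emma:emma>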

best regards,

Deborah Dahl, W3C MMI Working Group Chair
