First Working Draft of EMMA published

Dear www-multimodal,

The W3C Multimodal Interaction Working Group is pleased to announce
the publication of the first Working Draft of the Extensible Multimodal
Annotation language (EMMA):

http://www.w3.org/TR/2003/WD-emma-20030811/

Public feedback is very much welcome on this list, in particular
regarding the open issues outlined in the specification.

Abstract:

The W3C Multimodal Interaction working group aims to develop
specifications to enable access to the Web using multi-modal
interaction. This document is part of a set of specifications for
multi-modal systems, and provides details of an XML markup language
for describing the interpretation of user input. Examples of
interpretations of user input include a transcription into words of a
raw signal, for instance one derived from speech or pen input; a set
of attribute/value pairs describing its meaning; or a set of
attribute/value pairs describing a gesture. The interpretation of the
user's input is expected to be generated by signal interpretation
processes, such as speech and ink recognition, semantic interpreters,
and other types of processors, for use by components that act on the
user's inputs, such as interaction managers.
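
To give a flavour of the kind of markup the abstract describes, here
is a hypothetical sketch of an EMMA interpretation of a spoken
utterance. The element names, the namespace URI, the confidence
attribute, and the application-specific payload (<destination>) are
illustrative assumptions, not quotations from the Working Draft;
please consult the draft itself for the actual markup.

  <emma:emma xmlns:emma="http://www.w3.org/2003/04/emma">
    <!-- one candidate interpretation of the user's input;
         the confidence value is an illustrative assumption -->
    <emma:interpretation id="int1" emma:confidence="0.9">
      <!-- application-specific semantics: attribute/value pairs
           describing the meaning of "I want to fly to Boston" -->
      <destination>Boston</destination>
    </emma:interpretation>
  </emma:emma>

An interaction manager consuming this markup would read the
application-specific content inside the interpretation, using the
annotations (such as confidence) to decide how to act on it.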

Max Froumentin
Dave Raggett
for the Multimodal Interaction Working Group

Received on Tuesday, 12 August 2003 04:44:07 UTC