- From: Masahiro Araki <araki@kit.jp>
- Date: Thu, 30 Oct 2008 22:19:44 +0900
- To: www-multimodal@w3c.org
Dear colleague,
Here is a testimonial for EMMA 1.0 from Kyoto Institute of Technology.
-----------------------------------
Kyoto Institute of Technology (KIT) strongly supports the Extensible
MultiModal Annotation 1.0 (EMMA) specification. We have been using
EMMA within our multimodal human-robot interaction system. In our
implementation, EMMA documents are dynamically generated by (1) the
Automatic Speech Recognition (ASR) component and (2) the Face
Detection/Behavior Recognition component.
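As a rough illustration, the Python sketch below shows how such a component might dynamically emit an EMMA document for a single ASR hypothesis. The helper name emma_from_asr, the example utterance, and the application-specific <command> payload element are assumptions for illustration only; the emma:interpretation element and the emma:medium, emma:mode, emma:confidence, and emma:tokens attributes are defined in the EMMA 1.0 specification.

    import xml.etree.ElementTree as ET

    EMMA_NS = "http://www.w3.org/2003/04/emma"
    ET.register_namespace("emma", EMMA_NS)

    def emma_from_asr(tokens, confidence):
        """Build a minimal EMMA 1.0 document for one ASR hypothesis.
        The <command> payload element is application-specific (hypothetical)."""
        root = ET.Element("{%s}emma" % EMMA_NS, {"version": "1.0"})
        interp = ET.SubElement(root, "{%s}interpretation" % EMMA_NS, {
            "id": "asr1",
            "{%s}medium" % EMMA_NS: "acoustic",
            "{%s}mode" % EMMA_NS: "voice",
            "{%s}confidence" % EMMA_NS: str(confidence),
            "{%s}tokens" % EMMA_NS: tokens,
        })
        # Application-specific semantic payload carrying the recognized command.
        ET.SubElement(interp, "command").text = tokens
        return ET.tostring(root, encoding="unicode")

    print(emma_from_asr("bring the red cup", 0.87))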
In addition, the Information Technology Standards Commission of Japan
(ITSCJ), which includes KIT as a member, plans to use EMMA as the data
format for its own multimodal interaction architecture specification.
ITSCJ believes EMMA is very useful both for uni-modal recognition
components, e.g., ASR, and for multimodal integration components,
e.g., speech combined with pointing gestures.
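A corresponding sketch of the kind of input a multimodal integration component might consume, pairing a spoken command with a pointing gesture inside an emma:group container, follows. The spoken tokens, the gesture coordinates, the <location> payload, and the tactile/touch medium and mode values chosen for the gesture are illustrative assumptions; emma:group and emma:interpretation are defined in EMMA 1.0.

    import xml.etree.ElementTree as ET

    EMMA_NS = "http://www.w3.org/2003/04/emma"
    ET.register_namespace("emma", EMMA_NS)

    def q(name):
        # Qualify a local name in the EMMA namespace.
        return "{%s}%s" % (EMMA_NS, name)

    # An emma:group holding the two uni-modal inputs to be fused:
    # a spoken command and a pointing gesture.
    root = ET.Element(q("emma"), {"version": "1.0"})
    group = ET.SubElement(root, q("group"), {"id": "fusion1"})

    ET.SubElement(group, q("interpretation"), {
        "id": "speech1",
        q("medium"): "acoustic",
        q("mode"): "voice",
        q("tokens"): "put it there",
    })
    gesture = ET.SubElement(group, q("interpretation"), {
        "id": "point1",
        q("medium"): "tactile",  # illustrative choice of medium/mode
        q("mode"): "touch",
    })
    # Hypothetical application payload for the pointing location.
    ET.SubElement(gesture, "location", {"x": "120", "y": "45"})

    print(ET.tostring(root, encoding="unicode"))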
-----------------------------------
Best regards,
--
Masahiro Araki (Dr.) araki@kit.jp
Interactive Intelligence lab.
Department of Information Science
Graduate School of Science and Technology
Kyoto Institute of Technology
TEL: 075-724-7473 FAX: 075-724-7400
Received on Thursday, 30 October 2008 13:20:34 UTC