- From: B. Helena RODRIGUEZ <bhrodrig@telecom-paristech.fr>
- Date: Tue, 28 Feb 2012 09:40:48 +0100
- To: <www-multimodal@w3.org>
- Message-ID: <CB7251A0.10F93%bhrodrig@telecom-paristech.fr>
This report describes the implementation included in the SOA2M project of the MM Group - TSI Department (Institut Telecom - Telecom ParisTech). The research covers abstract multimodal user interfaces in Service Oriented Architectures for pervasive environments, and in particular Smart Conference Rooms. Our prototype implements a multimodal ambient system that provides personalized assistance services to different user profiles. With this prototype we aim to test intelligent automatic fusion and fission of modalities, and we are interested in using semantics with the W3C Multimodal Architecture specification.

The group has implemented a Flex/AIR Interaction Manager with an SCXML engine, available to Modality Components as a semantically annotated service (published as a web or Bonjour service), together with web-based RIA modality components at different levels: the basic ones (Pointer, Selector and Graphics IN/OUT) and the more complex ones (a voice synthesizer and a Carousel). A minimal illustrative SCXML sketch of such an Interaction Manager is given at the end of this message.

Best regards,

Helena Rodriguez
PhD student at the Institut Telecom
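
For illustration only, the skeleton of an SCXML state chart driving this kind of Interaction Manager could look roughly as follows; the state names, event names and the "#_carousel" target are placeholders, not the ones used in the prototype, whose actual implementation is in the attached report:

    <scxml xmlns="http://www.w3.org/2005/07/scxml" version="1.0" initial="idle">

      <!-- Wait for a Modality Component to request a new interaction context. -->
      <state id="idle">
        <transition event="newContextRequest" target="running"/>
      </state>

      <!-- A context is active: ask an output component (e.g. the Carousel,
           addressed here by the placeholder "#_carousel") to start, then
           return to idle when it reports completion. -->
      <state id="running">
        <onentry>
          <send event="startRequest" target="#_carousel"/>
        </onentry>
        <transition event="doneNotification" target="idle"/>
      </state>

    </scxml>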
Attachments
- application/octet-stream attachment: IR_SOA2M.zip
Received on Tuesday, 28 February 2012 08:41:26 UTC