Participate in multimodal demos at Technical Plenary

There is an opportunity to view some demos of multimodal technologies
(descriptions follow). It will be on Thursday afternoon from 4:00 to 6:00. I
think this would be interesting for the working group. How many people
would be interested in attending?

Jon

These are the demos we will be presenting (in this order). Each demo will
take about 15 minutes.

1. Tsuneo Nitta, Toyohashi University of Technology
The demo includes a rapid-prototyping tool for MMI, an MMI description
generator, and a video demo of a seamless web service with MMI. The same
service can be accessed from a mobile phone, a PDA, and an information
kiosk.

2. Michael Johnston (AT&T): Multimodal Access To City Help
This system provides a mobile interactive guide to New York City,
including restaurant and subway information. It runs on a pen- and
speech-enabled tablet and gives users the flexibility to interact using
speech, ink, or integrated combinations of the two modes. The system
responds with coordinated multimodal presentations that combine
synthetic speech and dynamically generated graphics.
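
The "integrated combinations" are the interesting part: an utterance and
a pen gesture arriving close together in time are interpreted as one
command. Here is a minimal Python sketch of that kind of time-windowed
pairing; the names (SpeechEvent, InkEvent, fuse) are illustrative
assumptions, not the actual City Help interface.

    from dataclasses import dataclass
    from typing import Optional, Tuple

    @dataclass
    class SpeechEvent:
        text: str     # recognized utterance, e.g. "cheap italian here"
        time: float   # arrival time in seconds

    @dataclass
    class InkEvent:
        region: Tuple[float, float, float, float]  # circled map area
        time: float

    def fuse(speech: SpeechEvent, ink: Optional[InkEvent],
             window: float = 2.0) -> dict:
        """Pair an utterance with a gesture that arrived within
        `window` seconds; otherwise treat the utterance as unimodal."""
        command = {"query": speech.text}
        if ink is not None and abs(speech.time - ink.time) <= window:
            command["area"] = ink.region  # "here" resolved by the circle
        return command

    # "cheap italian here" plus a circle on the map -> one command
    print(fuse(SpeechEvent("cheap italian here", 10.2),
               InkEvent((40.72, -74.00, 40.75, -73.97), 10.8)))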

3. Giovanni Seni (Motorola): form-filling with handwriting
4. Kuansan Wang (Microsoft): MapPoint map navigation demo

5. Jean-Daniel Fekete (INRIA): experimental system for managing
multimodality
             http://www.emn.fr/dragicevic/ICon/
ICon is an editor designed to select a set of input devices and
connect them to actions in a graphical interactive application. ICon
allows physically challenged users to connect alternative input
devices and/or configure their interaction techniques according to
their needs. It also allows skilled users - graphic designers or
musicians, for example - to configure any ICon-aware application to
use their favorite input devices and interaction techniques
(bimanual, voice-enabled, etc.).
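
To make the device-to-action idea concrete, here is a minimal Python
sketch of that style of binding; the names (InputConfig, connect,
dispatch) are illustrative assumptions, since ICon itself is a
graphical dataflow editor, not a Python API.

    class InputConfig:
        """Route (device, event) pairs to application actions."""
        def __init__(self):
            self.bindings = {}

        def connect(self, device, event, action):
            self.bindings[(device, event)] = action

        def dispatch(self, device, event, *args):
            action = self.bindings.get((device, event))
            if action:
                action(*args)

    cfg = InputConfig()
    # A musician might route a MIDI wheel to scrolling...
    cfg.connect("midi_wheel", "turn", lambda dy: print("scroll", dy))
    # ...while a physically challenged user binds a sip-and-puff
    # switch to selection instead of a mouse click.
    cfg.connect("sip_puff", "puff", lambda: print("select"))

    cfg.dispatch("midi_wheel", "turn", 3)
    cfg.dispatch("sip_puff", "puff")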

6. Sunil Kumar (V-Enable): multimodality on thin clients
We will be demonstrating two different forms of sequential multimodality on
two different categories of devices:
         1. Browser-only devices, such as phones with WAP browsers.
         2. Intelligent thin clients with WAP browsers and Java/BREW
capability.

We will be using email as the application to demonstrate sequential
multimodality. In the demonstration we will search the inbox using voice
and get the email results on the screen. For example: say "Search email
from Deborah" by voice and then see all the new emails from Deborah Dahl
on the screen. We will demonstrate how sequential multimodality can
provide two different experiences on two different devices. The focus
will be on the latency involved when switching between modes.
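
As a rough illustration of the voice-in, screen-out flow described
above, here is a minimal Python sketch; the inbox data and function
names are made up for the example and are not V-Enable's actual
platform.

    import time

    INBOX = [{"from": "Deborah Dahl", "subject": "MMI agenda"},
             {"from": "Jon Gunderson", "subject": "Demo signup"}]

    def voice_query(utterance: str) -> str:
        # Stand-in for speech recognition plus a grammar that
        # extracts the sender name from "Search email from <name>".
        return utterance.rsplit(" ", 1)[-1]

    def search_inbox(sender: str):
        return [m for m in INBOX if sender.lower() in m["from"].lower()]

    start = time.time()
    results = search_inbox(voice_query("Search email from Deborah"))
    # Mode switch: results are rendered on the screen, not spoken.
    for m in results:
        print(f"{m['from']}: {m['subject']}")
    print(f"(mode-switch latency: {time.time() - start:.3f}s)")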

7. Roberto Pieraccini (SpeechWorks): multimodal conversational system for
the Ford Concept Car
This is a prototype of a conversational system that was installed on the
Ford Concept Car Model U and shown at the latest Detroit International
Auto Show. The system, including a touch screen and a speech recognizer,
is used for controlling several non-critical automobile operations, such
as climate, entertainment, navigation, and telephone. The prototype
implements a natural language spoken dialog interface integrated with an
intuitive GUI, as opposed to the traditional, speech-only,
command-and-control interfaces deployed in some of the high-end cars
today.
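
As a last illustration, here is a minimal Python sketch of the coupling
between spoken commands and the GUI that such a system implies; the
keyword matching below is a toy stand-in for the real natural language
dialog engine, and all names are assumptions.

    state = {"temperature_f": 70, "station": "WNYC"}

    def refresh_gui():
        # The touch screen mirrors the dialog state, so the driver
        # can continue by voice or by touch at any point.
        print(f"[screen] climate: {state['temperature_f']}F  "
              f"radio: {state['station']}")

    def handle_utterance(utterance: str):
        words = utterance.lower().split()
        if "warmer" in words:
            state["temperature_f"] += 2
        elif "cooler" in words:
            state["temperature_f"] -= 2
        refresh_gui()

    handle_utterance("make it a little warmer")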

Jon Gunderson, Ph.D., ATP
Coordinator of Assistive Communication and Information Technology
Division of Rehabilitation - Education Services
MC-574
College of Applied Life Studies
University of Illinois at Urbana-Champaign
1207 S. Oak Street, Champaign, IL  61820

Voice: (217) 244-5870
Fax: (217) 333-0248

E-mail: jongund@uiuc.edu

WWW: http://cita.rehab.uiuc.edu/
WWW: http://www.staff.uiuc.edu/~jongund
