- From: Al Gilman <asgilman@iamdigex.net>
- Date: Tue, 09 Jul 2002 12:18:15 -0400
- To: Roni Rosenfeld <roni@cs.cmu.edu>
- Cc: brad.myers@cs.cmu.edu, ncits-v2@nist.gov, wai-xtech@w3.org, "Dahl, Deborah A." <Deborah.Dahl@unisys.com>
Roni,

Hope this is not too late to catch you before ACL is over.

At the recent meeting of the INCITS V2 Standards committee, Brad Myers stated that in your Universal Speech Interface work you have been auto-generating speech interfaces from a reference model of the interaction logic, expressed in the 'specification' format developed under the Personal Universal Controller (PUC) activity related to the Pittsburgh Pebbles project. This is very exciting news.

Under the aegis of the Web Accessibility Initiative, the Protocols and Formats Working Group <http://www.w3.org/WAI/PF> has been leaning on the XForms Working Group and the Voice Browser Working Group to demonstrate that they have, or are working toward by a clear roadmap, a specification for a model class that would serve as a single-source authoring basis for both voice and more visual forms-mode interactions. Your work sounds like a new high-water mark in demonstrating that this can be done, and how. But I may be interpreting it over-optimistically.

This has radical implications for Web Services and for how device-independent the W3C Multimodal Interaction work product can be.

http://lists.w3.org/Archives/Public/w3c-wai-ig/2002AprJun/0057.html

For accessibility purposes, it would be extremely valuable if "what you need to capture by way of interaction logic" were proven in multi-binding experiments and captured into a "take home and build" realization such as the XML syntax from the PUC project. This could be a major aspect of the specification of a "universally accessible" representation for Web Services.

Some of the questions I haven't been able to answer from a quick scan of your home page are:

* Is there a writeup somewhere that summarizes the commonality between the technology-utilization profiles employed in the running code for USI and PUC?

* Are you just re-using the abstract form (the language definition of the PUC 'specification' format), or have you been generating voice interfaces _from 'specification' instances_ developed in the PUC context, without editing the 'specification' instance?

I think those two questions show the general direction of our interest. Let me stop there for now.

Also, if you can possibly connect with Debbie Dahl while you are at ACL (presuming that you will be there), please do, so she understands the answers to the above questions, even if a review isn't available instantly on a public, WCAG1.0-AAA-accessible-HTML web page. Debbie is chairing the Multimodal Interaction Working Group within W3C and has a strong Voice background, so she will be a quick study.

Al
Received on Tuesday, 9 July 2002 12:18:39 UTC