Re: Overview paragraph

Thanks Dan.  One of the drivers last week for creating this paragraph  
was also to make clear why we needed to do this work when VoiceXML  
already exists.  We in the group understand this, but others may not  
(immediately).  I didn't see any mention of why HTMLSpeech is *not*  
VoiceXML.  Did you plan to add something about this later on?

-- dan

On Apr 19, 2011, at 4:35 PM, DRUTA, DAN (ATTSI) wrote:

> Group,
> At the last meeting I volunteered to put together a few paragraphs  
> that would set the context, the rationale, and the goals for the HTML  
> Speech Incubator Group.
> Below is my first attempt at capturing those points to be included  
> in the report introduction.
>
> Thanks,
> Dan
>
>
> Context:
>
> Speech technologies are available today from many software vendors,  
> on a variety of platforms and device types, with implementations  
> supporting a wide selection of development tools. These technologies  
> cover aspects of user interaction such as spoken commands, dictation,  
> text-to-speech synthesis, and speech-to-text recognition. While the  
> possibilities for speech technologies are nearly endless, from  
> accessibility to user convenience in fields such as medicine and  
> education, there is great potential for adoption in the rapidly  
> evolving realm of web applications.
> HTML5 has brought a rich user experience to the web, and advances in  
> speech recognition now allow real-time user-to-machine dialogue in  
> the car, on the phone, and at the desktop. With the web moving beyond  
> the desktop and the browser, it is imperative to streamline the  
> process of developing web applications and to create interoperable  
> speech APIs that work across multiple browsers and speech providers,  
> giving developers choice and consistency in implementing rich  
> speech-enabled web applications.
>
> Goals:
>
> The goal of the HTML Speech Incubator Group is to identify and  
> document the common requirements and use cases necessary to support  
> the standardization of API(s) that will enable web developers to  
> design and deploy speech-enabled web applications and that will  
> provide users with a consistent experience across different platforms  
> and devices, irrespective of the speech engine used. The group's  
> findings should result in design recommendation(s) that foster  
> innovation and promote consistency by leveraging and enhancing  
> existing work in the W3C as well as in other standards bodies.
> The driver for the standard design will be a common, agreed-upon  
> understanding of the elements and interactions necessary to create an  
> end-to-end multimodal user experience, and to avoid fragmentation  
> when using HTML and JavaScript to develop interoperable  
> speech-enabled web applications.
>

Received on Wednesday, 20 April 2011 10:36:40 UTC