
HTML5 and Speech Input

From: Adam Sobieski <adamsobieski@hotmail.com>
Date: Tue, 31 Jan 2012 18:02:10 +0000
Message-ID: <SNT138-W5298A3477182A250B06EEAC5720@phx.gbl>
To: <public-html@w3.org>

HTML5 Working Group,

Greetings. While the following may also have applicability to other working groups, such as the MMI WG (http://www.w3.org/2002/mmi/), its MMI Architecture (http://www.w3.org/TR/2011/WD-mmi-arch-20110125/), and the EMMA 2.0 use cases (http://www.w3.org/TR/2009/NOTE-emma-usecases-20091215/), I was wondering whether anybody had any thoughts about enhancing speech input techniques for HTML5.

I have some thoughts on the topic. If there is sufficient interest, HTML5 could be enhanced with speech input functionality that combines dialogue context and grammar to improve the ranking of recognition candidates from naturally spoken speech, in usage scenarios including chats, group discussions, mailing lists, and forums. Such a feature could be a convenience to users making use of speech-to-text functionality on the web, for example in mailing lists or web forums. Some browsers, such as those based on WebKit, already have some speech input functionality.

If there is sufficient interest, I would be interested in facilitating advanced speech-to-text scenarios, including APIs beyond the current generation of platform speech engine APIs: for example, providing discourse context to speech recognition engines so as to enhance speech recognition results in some web usage scenarios.

Kind regards,
Adam Sobieski
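To make the idea concrete, here is a minimal sketch of how discourse context might re-rank a recognizer's candidate transcripts. Everything in it is illustrative: the `rerank` function, the `(text, confidence)` candidate shape, and the overlap-based scoring are assumptions for discussion, not part of any existing browser or speech engine API.

```python
# Hypothetical sketch: re-ranking speech recognition candidates using
# discourse context. The function name, candidate format, and scoring
# formula are illustrative assumptions, not an existing API.

def rerank(candidates, context_words, context_weight=0.5):
    """Reorder (text, confidence) candidates, boosting those that
    share vocabulary with the recent discourse context."""
    context = {w.lower() for w in context_words}

    def score(candidate):
        text, confidence = candidate
        words = text.lower().split()
        # Fraction of the candidate's words that appear in the context.
        overlap = sum(1 for w in words if w in context) / max(len(words), 1)
        return confidence + context_weight * overlap

    return sorted(candidates, key=score, reverse=True)

# Candidates as a recognizer might emit them: (transcript, confidence).
candidates = [
    ("recognize speech", 0.60),
    ("wreck a nice beach", 0.62),
]
# Recent chat or forum messages supply the discourse context.
context = "the speech recognition demo works well".split()
ranked = rerank(candidates, context)
```

In this sketch the acoustically slightly stronger candidate loses to the one that fits the surrounding conversation, which is the behavior a chat or forum scenario would want; a real engine would of course integrate context into its language model rather than post-hoc scoring.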
Received on Tuesday, 31 January 2012 18:02:40 UTC
