W3C home > Mailing lists > Public > whatwg@whatwg.org > May 2010

[whatwg] Speech input element

From: Kazuyuki Ashimura <ashimura@w3.org>
Date: Tue, 18 May 2010 19:54:18 +0900
Message-ID: <4BF271DA.80304@w3.org>
Hi Bjorn,

Thank you for bringing this topic up (again :) on the WHATWG list.
I'd like to bring this to the W3C Voice Browser Working Group (and
maybe the Multimodal Interaction Working Group as well) and ask
the group participants for their opinions.

As you might know, the group recently created a task force named
"Voice on the Web" and is working hard to promote voice technology
in various possible Web applications.

Regards,

Kazuyuki


Bjorn Bringert wrote:
> Back in December there was a discussion about web APIs for speech
> recognition and synthesis that saw a decent amount of interest
> (http://lists.whatwg.org/pipermail/whatwg-whatwg.org/2009-December/thread.html#24281).
> Based on that discussion, we would like to propose a simple API for
> speech recognition, using a new <input type="speech"> element. An
> informal spec of the new API, along with some sample apps and use
> cases can be found at:
> http://docs.google.com/Doc?docid=0AaYxrITemjbxZGNmZzc5cHpfM2Ryajc5Zmhx&hl=en.
> 
> It would be very helpful if you could take a look and share your
> comments. Our next steps will be to implement the current design, get
> some feedback from web developers, continue to tweak, and seek
> standardization as soon as it looks mature enough and/or other vendors
> become interested in implementing it.
> 
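[Archive editor's note: for readers skimming this thread, the proposal above centered on a new input type. A minimal sketch of how such an element might be used follows; the assumption that the recognized text lands in the input's value and that an ordinary change event fires when recognition completes is illustrative only and is not quoted from the linked informal spec.]

```html
<!-- Hypothetical usage of the proposed speech input element.
     The behavior assumed here (result placed in .value, a "change"
     event on completion) is an illustration, not the spec. -->
<input type="speech" id="query">
<script>
  var el = document.getElementById('query');
  // Assumed: the recognition result is written to el.value and a
  // regular "change" event fires when the user finishes speaking.
  el.addEventListener('change', function () {
    console.log('Recognized: ' + el.value);
  });
</script>
```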

-- 
Kazuyuki Ashimura / W3C Multimodal & Voice Activity Lead
mailto: ashimura at w3.org
voice: +81.466.49.1170 / fax: +81.466.49.1171
Received on Tuesday, 18 May 2010 03:54:18 UTC