
[whatwg] Speech input element

From: Bjorn Bringert <bringert@google.com>
Date: Tue, 18 May 2010 09:36:01 +0100
Message-ID: <AANLkTimeMylOtmaSCILDzqVruTwgPH_9WPyQJBdhh2wk@mail.gmail.com>
On Mon, May 17, 2010 at 10:55 PM, James Salsman <jsalsman@gmail.com> wrote:
> On Mon, May 17, 2010 at 8:55 AM, Bjorn Bringert <bringert@google.com> wrote:
>>> - What exactly are the grammars builtin:dictation and builtin:search?
>> They are intended to be implementation-dependent large language
>> models, for dictation (e.g. e-mail writing) and search queries
>> respectively. I've tried to clarify them a bit in the spec now. There
>> should perhaps be more of these (e.g. builtin:address), some of them
>> optional and mapping to builtin:dictation if not available.
> Bjorn, are you interested in including speech recognition support for
> pronunciation assessment such as is done by http://englishcentral.com/ ,
> http://www.scilearn.com/products/reading-assistant/ ,
> http://www.eyespeakenglish.com/ , http://wizworldonline.com/ , and
> http://www.8dworld.com/en/home.html ?
> Those would require different sorts of language models and grammars,
> such as those described in
> http://www.springerlink.com/content/l0385t6v425j65h7/
> Please let me know your thoughts.
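
(Aside, for readers without the draft at hand: a builtin grammar is
selected with the grammar attribute on the proposed speech input
element, roughly as sketched below; the exact markup is of course
still under discussion on this list.)

  <!-- Rough sketch of the draft syntax; not final. -->
  <input type="speech" grammar="builtin:dictation">
  <input type="speech" grammar="builtin:search">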

I don't have SpringerLink access, so I couldn't read that article. As
far as I could tell from the abstract, they use phoneme-level speech
recognition and then calculate the edit distance to the "correct"
phoneme sequences. Do you have a concrete proposal for how this could
be supported? Would support for PLS
(http://www.w3.org/TR/pronunciation-lexicon/) links in SRGS be enough?
The SRGS spec already allows such links.
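
For concreteness, here is a minimal sketch (in Python, with made-up
phoneme symbols and an arbitrary 0..1 scoring scale) of the
phoneme-edit-distance approach the abstract seems to describe:

  # Score a learner's utterance by the Levenshtein distance between
  # the recognized phoneme sequence and a reference sequence.
  # Phoneme symbols and the scoring scale are illustrative only.

  def edit_distance(a, b):
      m, n = len(a), len(b)
      d = [[0] * (n + 1) for _ in range(m + 1)]
      for i in range(m + 1):
          d[i][0] = i
      for j in range(n + 1):
          d[0][j] = j
      for i in range(1, m + 1):
          for j in range(1, n + 1):
              cost = 0 if a[i - 1] == b[j - 1] else 1
              d[i][j] = min(d[i - 1][j] + 1,         # deletion
                            d[i][j - 1] + 1,         # insertion
                            d[i - 1][j - 1] + cost)  # substitution
      return d[m][n]

  reference  = ["dh", "ax", "k", "ae", "t"]  # "the cat"
  recognized = ["d", "ax", "k", "ae", "t"]   # learner said /d/ for /dh/
  errors = edit_distance(recognized, reference)
  print(1.0 - errors / len(reference))       # 0.8

A real assessment engine would presumably weight substitutions by
phonetic confusability rather than counting them all equally, but the
question for the API is the same either way.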

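To make the PLS question concrete: SRGS 1.0 already lets a grammar
reference a pronunciation lexicon via its lexicon element, so an
author could in principle supply the expected pronunciations along
these lines (the URI and entries are illustrative):

  <!-- SRGS grammar pointing at a PLS lexicon (illustrative URI) -->
  <grammar xmlns="http://www.w3.org/2001/06/grammar"
           version="1.0" xml:lang="en-US" root="phrase">
    <lexicon uri="http://example.com/lexicon.pls"/>
    <rule id="phrase">the cat</rule>
  </grammar>

  <!-- The referenced PLS document with the expected pronunciation -->
  <lexicon version="1.0" alphabet="ipa" xml:lang="en-US"
           xmlns="http://www.w3.org/2005/01/pronunciation-lexicon">
    <lexeme>
      <grapheme>the</grapheme>
      <phoneme>ðə</phoneme>
    </lexeme>
  </lexicon>
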
Bjorn Bringert
Google UK Limited, Registered Office: Belgrave House, 76 Buckingham
Palace Road, London, SW1W 9TQ
Registered in England Number: 3977902