
[whatwg] Speech input element

From: Anne van Kesteren <annevk@opera.com>
Date: Wed, 19 May 2010 10:30:44 +0200
Message-ID: <op.vcx1xiqd64w2qv@annevk-t60>
On Wed, 19 May 2010 10:22:54 +0200, Satish Sampath <satish@google.com> wrote:
>> I don't really see how the problem is the same as with synchronous
>> XMLHttpRequest. When you do a synchronous request nothing happens to the
>> event loop, so an alert() dialog could never happen. I think you want
>> recording to continue, though. Having a simple dialog stop video
>> conferencing, for instance, would be annoying. It's only script
>> execution that needs to be paused. I'm also not sure I'd really want
>> recording to stop while looking at a page in a different tab. Again, if
>> I'm in a conference call I'm almost always doing tasks on the side,
>> e.g. looking up past discussions, scrolling through a document we're
>> discussing, etc.
>
> Can you clarify how the speech input element (as described in the
> current API sketch) is related to video conferencing or a conference
> call, since it doesn't really stream audio to any place other than
> potentially a speech recognition server and feeds the result back to
> the element?

Well, as indicated in the other thread, I'm not sure whether this is the
best way to do it. Usually we start with a lower-level API (e.g.
microphone input) and build up from there. But maybe I'm wrong and speech
input is a case that needs to be considered separately. It would still
not be like synchronous XMLHttpRequest, though.
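[The event-loop point raised above can be illustrated with a small sketch.
This is not from the thread; it uses a busy-wait loop as a stand-in for a
synchronous XMLHttpRequest, to show that while synchronous code runs,
nothing else on the event loop (timers, dialogs, events) can fire:]

```javascript
// While synchronous code runs, the event loop is starved: a timer
// scheduled for 0 ms cannot fire until the blocking call returns,
// just as no alert() could interleave with a synchronous request.
// (The busy-wait stands in for a synchronous XMLHttpRequest.)
let fired = false;
setTimeout(() => { fired = true; }, 0);

function synchronousWork(ms) {
  const end = Date.now() + ms;
  while (Date.now() < end) { /* block the event loop */ }
}

synchronousWork(50);
console.log(fired); // still false: the callback is queued, not yet run

setTimeout(() => { console.log(fired); }, 0); // true once the loop turns
```

[Speech recording, by contrast, happens outside the event loop, which is
why pausing script execution need not pause capture.]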


-- 
Anne van Kesteren
http://annevankesteren.nl/
Received on Wednesday, 19 May 2010 01:30:44 UTC
