- From: Randell Jesup <randell-ietf@jesup.org>
- Date: Sun, 20 May 2012 10:55:45 -0700
- To: public-audio@w3.org
On 5/20/2012 9:26 AM, Olli Pettay wrote:
> On 05/18/2012 08:10 PM, Chris Wilson wrote:
>> For myself, I would answer "because I don't always see the value in
>> taking that overhead."
> What overhead?
>
>> Sometimes, it makes sense - but a lot of the time, it's
>> just programming overhead.
> It depends on the API whether there is any overhead.
>
> I'm just asking why the WebAudio API couldn't be designed to work on
> top of MSP. The actual API for JS developers could look very much the
> same as it is now (well, except JS-generated audio data, which we
> really should push to workers). I believe it would ease implementing
> the specs (including other specs for audio/streams) if all the specs
> would 'speak the same language'.

The ability to avoid a bunch of disparate specs that force the developer to do a lot of conversions and move between APIs would be a plus. We have a lot of media-oriented specs in progress right now, and the others are generally based around MediaStreams (Media Capture TF, WebRTC, DAP, etc.). And we'll be continuing to push into this space both for audio and video, given efforts like Boot-To-Gecko.

So I think clear ways to use MediaStreams and audio processing together are important, as is a framework for doing similar work to process video in MediaStreams.

I agree with Olli.

-- 
Randell Jesup
randell-ietf@jesup.org
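[Editor's sketch: Olli's parenthetical about pushing JS-generated audio into workers could look roughly like the following. The function name `fillSine`, the block size, and the sample rate are purely illustrative and not drawn from MSP or the Web Audio API; in a real worker this would run inside an `onmessage` handler.]

```javascript
// Hypothetical worker-side audio generation: the main thread would ask
// for a block of samples, and the worker fills a Float32Array off the
// main thread. All names here are illustrative, not from any spec.
function fillSine(buffer, sampleRate, frequency, startSample) {
  for (let i = 0; i < buffer.length; i++) {
    // Phase is tracked via an absolute sample index so successive
    // blocks join without clicks.
    buffer[i] =
      Math.sin((2 * Math.PI * frequency * (startSample + i)) / sampleRate);
  }
  return buffer;
}

// Inside a worker this call would sit in onmessage; here it is invoked
// directly to show the shape of one 128-sample block at 44.1 kHz.
const block = fillSine(new Float32Array(128), 44100, 440, 0);
```

The point of the worker split is just that this loop never blocks the page's main thread, so audio generation keeps up even when the UI is busy.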
Received on Sunday, 20 May 2012 17:56:12 UTC