- From: Grant Galitz <grantgalitz@gmail.com>
- Date: Mon, 11 Jul 2011 02:43:44 -0400
- To: public-audio@w3.org
- Message-ID: <CAD8zUBYYq21oNxPgkDcDOootuu7++Lj5FfyQEhyBAqtEnExgMg@mail.gmail.com>
---------- Forwarded message ----------
From: Grant Galitz <grantgalitz@gmail.com>
Date: Mon, Jul 11, 2011 at 2:43 AM
Subject: Re: Concerning the gap-less output of real-time generated audio in JavaScript
To: Jussi Kalliokoski <jussi.kalliokoski@gmail.com>

I agree it should be a mixed callback/write-based API that allows the developer to provide samples ahead of time. I do this with the exposed APIs of my XAudioJS lib, which thinly wraps mozAudio, Web Audio, a Flash fallback, and WAV PCM data URI generation together.

On Mon, Jul 11, 2011 at 2:38 AM, Jussi Kalliokoski <jussi.kalliokoski@gmail.com> wrote:

> Hello all, I'll jump in on this.
>
> On Mon, Jul 11, 2011 at 9:11 AM, Grant Galitz <grantgalitz@gmail.com> wrote:
>
>> I'll briefly compare the Mozilla Audio Data API and the Web Audio API and run through this list of what can be improved upon in Web Audio.
>>
>> - Web Audio does not allow resampling; this is a major thorn in probably a couple of people's butts, because I have to do it in JavaScript manually. If there is a security concern about bottlenecking, then I'd assume we could throw in some implementation-side limits on the number of concurrent resampling nodes that can run at the same time.
>
> It most certainly is! However, I disagree that there should be a resampling node; this is a simple matter and has a simple solution employed in most if not all client-side audio APIs: being able to select the sample rate. And you're right, I don't think it's going to earn much respect amongst existing audio devs if you can't even choose the sample rate for yourself. But I believe Chris knows this already.
>
>> - Web Audio forces the JavaScript developer to maintain an audio buffer in JavaScript. This applies to audio that cannot be timed to the Web Audio callback, such as an app timed by setInterval that has to produce x samples every y milliseconds. The Mozilla Audio Data API allows the JS developer to push samples to the browser and let the browser manage the buffer on its own. The callback grabbing x samples every call is not a buffer of its own; that's just the callback draining the larger buffer I'm talking about. Ring buffer management in JavaScript takes up some CPU load, and in my opinion it would always be better to let the browser manage such a task.
>
> This is a good point as well. But IMO a more useful approach would be to have the callback API and then an alternate write call that mixes the written buffers into the buffers provided by the callback (if provided), and another one that writes ahead of time, pushing callbacks back. And please, don't make the developer handle the tail, like in mozAudio. Something like node.write(buffer, channelCount = 2, sampleRate = [context default]) and node.writeAhead( -||- );
>
>> - "The callback method knows how often to fire" is a fallacy; even Flash falls for this and can produce clicks and pops on real-time generated audio (even their docs hint at this). This is because by the time the callback API figures out a delay, its buffering may already be premature due to previous calculations and may, as a result, gap the audio. It is imperative that you let the developer control the buffering process, since only the developer truly knows how much buffering is needed. Web Audio in Chrome gaps out, for instance, when we're drawing to a canvas stretched to fullscreen and a canvas op takes a few milliseconds to perform; to a reasonable person this would seem inappropriate. This basically ties in with the previous point of letting the browser manage the buffer passed to it, and allowing the JS developer to buffer ahead of time rather than having a real-time thread try to play catch-up with an inherently bad plan.
>
> You're right, if the callback blocks for longer than it takes the buffered audio to play out, pops and cracks are inevitable, but... that's the case with digital real-time audio, no matter the platform. Having a write-only API in this case is not an option.
>
>> - Building on the last point, in order to achieve ahead-of-time buffering, I believe it would be wise either to introduce a stub function that allows samples to be added at any time without waiting for a callback, just like mozWriteAudio, OR to allow the callback method to be fired when buffering reaches a low point *specified* by the developer. This low point is not how many samples are to be sent to the browser each callback; it tells the API WHEN to fire the callback, namely a certain number of samples before the buffer runs empty.
>>
>> I hope we can use some or all of the points listed here in providing a proper API for real-time generated audio output in JavaScript in a 21st-century browser. :D
>
> ;)
>
> Jussi
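To make the resampling point above concrete, here is a minimal sketch of the kind of conversion that currently has to happen in script when the output sample rate cannot be selected: naive linear interpolation over a mono Float32Array. The function name and parameters are made up for illustration and are not part of Web Audio, the Audio Data API, or XAudioJS.

    // Naive mono resampler: linear interpolation from sourceRate to targetRate.
    // Illustrative only; every name here is invented for this sketch.
    function resampleLinear(input, sourceRate, targetRate) {
        var ratio = sourceRate / targetRate;
        var outLength = Math.floor(input.length / ratio);
        var output = new Float32Array(outLength);
        for (var i = 0; i < outLength; i++) {
            var position = i * ratio;                 // fractional read position in the input
            var index = Math.floor(position);
            var frac = position - index;
            var next = (index + 1 < input.length) ? input[index + 1] : input[index];
            output[i] = input[index] + (next - input[index]) * frac;   // blend the two neighbours
        }
        return output;
    }

    // e.g. taking a 32000 Hz buffer from an app's mixer up to a 44100 Hz output:
    // var out = resampleLinear(sourceSamples, 32000, 44100);

Running even this simple loop for every buffer is the per-sample JavaScript work that goes away if the developer can simply choose the output sample rate.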
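The last two quoted points describe a write-ahead queue with a developer-chosen refill threshold. Below is a hypothetical sketch of that control flow only; none of these names exist in Web Audio, the Audio Data API, or the node.write/writeAhead proposal above, and the output side is reduced to a consume() stand-in for the hardware callback.

    // Hypothetical node: browser-managed queue, developer-specified low-water mark,
    // and a refill callback that fires only when the queue actually runs low.
    var node = {
        buffered: new Float32Array(0),   // samples the browser still has queued
        lowWaterMark: 4096,              // developer-chosen refill threshold, in samples
        onlowbuffer: null,               // called when buffered.length drops below lowWaterMark
        write: function (samples) {      // push samples at any time, mozWriteAudio-style
            var merged = new Float32Array(this.buffered.length + samples.length);
            merged.set(this.buffered);
            merged.set(samples, this.buffered.length);
            this.buffered = merged;
        },
        consume: function (frameSize) {  // stands in for the output thread draining the queue
            this.buffered = this.buffered.subarray(Math.min(frameSize, this.buffered.length));
            if (this.onlowbuffer && this.buffered.length < this.lowWaterMark) {
                this.onlowbuffer(this.lowWaterMark - this.buffered.length);
            }
        }
    };

    // Assumed app-side generator, purely for the example: a 440 Hz sine at 44100 Hz.
    var phase = 0;
    function generateSamples(count) {
        var out = new Float32Array(count);
        for (var i = 0; i < count; i++) {
            out[i] = Math.sin(phase);
            phase += 2 * Math.PI * 440 / 44100;
        }
        return out;
    }

    // The app can buffer ahead whenever it likes (e.g. from a setInterval tick)...
    node.write(generateSamples(8192));
    // ...and only tops up when told it is running low, instead of guessing a callback schedule.
    node.onlowbuffer = function (samplesNeeded) {
        node.write(generateSamples(samplesNeeded));
    };

This keeps the ring-buffer bookkeeping on the browser side while the app decides how much slack to keep, which is the crux of both mails above.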
Received on Monday, 11 July 2011 06:44:12 UTC