Re: Concerning the gap-less output of real-time generated audio in JavaScript

On Mon, Jul 11, 2011 at 6:37 PM, Grant Galitz <grantgalitz@gmail.com> wrote:

> Yes, the minimum buffering amount should be settable.


OK, good - just a misunderstanding, then.


>
>
> On Mon, Jul 11, 2011 at 9:36 PM, Chris Rogers <crogers@google.com> wrote:
>
>>
>>
>> On Mon, Jul 11, 2011 at 6:29 PM, Grant Galitz <grantgalitz@gmail.com> wrote:
>>
>>>
>>>
>>> ---------- Forwarded message ----------
>>> From: Grant Galitz <grantgalitz@gmail.com>
>>> Date: Mon, Jul 11, 2011 at 9:23 PM
>>> Subject: Re: Concerning the gap-less output of real-time generated audio
>>> in JavaScript
>>> To: Chris Rogers <crogers@google.com>
>>>
>>>
>>> So we are thinking about a callback-based system that allows buffering
>>> ahead of time, supports resampling, and uses some form of ring buffer
>>> for sample-count safety and easier management? Buffering in real time
>>> without letting the developer specify a minimum amount felt like a bad
>>> plan to implement, because so many things can keep a callback from being
>>> launched without major delays (single-threaded woes that we need to
>>> design APIs around, indeed...).
>>>
>>
>> I'm not quite sure I follow what you're saying.  But just to clarify what
>> I meant, I'm proposing *allowing* the developer to specify the "minimum
>> amount" of buffering.  So when the hardware drains the buffer below a
>> certain threshold (settable by the developer), the callback will be
>> fired.  Isn't that what you wanted?
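>>
>> Just to make that concrete, something along these lines is what I have in
>> mind (minBufferSize and the node setup here are purely illustrative, not a
>> finalized API):
>>
>>   // Hypothetical API shape, for illustration only.
>>   var node = context.createJavaScriptNode(2048, 0, 2); // callback size, inputs, outputs
>>   node.minBufferSize = 8192; // fire the callback once fewer than 8192 samples remain queued
>>   node.onaudioprocess = function (event) {
>>     // Because the callback fires at the low-water mark instead of on a
>>     // fixed period, the application can top the buffer up before it
>>     // underruns.  fillOutput() stands in for the app's own generator.
>>     fillOutput(event.outputBuffer);
>>   };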
>>
>> Chris
>>
>>
>>
>>
>>
>>
>>>
>>>
>>> On Mon, Jul 11, 2011 at 3:00 PM, Chris Rogers <crogers@google.com> wrote:
>>>
>>>>
>>>>
>>>> On Sun, Jul 10, 2011 at 11:11 PM, Grant Galitz <grantgalitz@gmail.com> wrote:
>>>>
>>>>> I'll briefly compare the Mozilla Audio Data API and the Web Audio API
>>>>> and run through a list of what can be improved in Web Audio.
>>>>>
>>>>> - Web Audio does not allow resampling; this is a major thorn in probably
>>>>> a couple of people's sides, because I have to do the resampling manually
>>>>> in JavaScript. If there is a concern that resampling could be abused to
>>>>> create a bottleneck, then I'd assume we could add implementation-side
>>>>> limits on the number of resampling nodes that can run at the same time.
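>>>>>
>>>>> For reference, the manual resampling I'm doing boils down to something
>>>>> like the following (a simplified linear-interpolation sketch, not my
>>>>> actual code):
>>>>>
>>>>>   // Simplified linear-interpolation resampler, illustrative only.
>>>>>   function resampleLinear(input, inputRate, outputRate) {
>>>>>     var ratio = inputRate / outputRate;
>>>>>     var outputLength = Math.floor(input.length / ratio);
>>>>>     var output = new Float32Array(outputLength);
>>>>>     for (var i = 0; i < outputLength; ++i) {
>>>>>       var position = i * ratio;
>>>>>       var index = Math.floor(position);
>>>>>       var fraction = position - index;
>>>>>       var next = (index + 1 < input.length) ? input[index + 1] : input[index];
>>>>>       output[i] = input[index] * (1 - fraction) + next * fraction;
>>>>>     }
>>>>>     return output;
>>>>>   }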
>>>>>
>>>>
>>>> I agree that it would be useful to allow the creation of AudioContexts
>>>> with user-settable sample-rates.  It could be as simple as:
>>>>
>>>> var context = new AudioContext(sampleRate);
>>>>
>>>> where sampleRate must have some kind of reasonable upper and lower
>>>> bound.
>>>>
>>>>
>>>>
>>>>>
>>>>> - Web Audio forces the JavaScript developer to maintain an audio buffer
>>>>> in JavaScript. This applies to audio that cannot be timed to the Web
>>>>> Audio callback, such as an app timed by setInterval that has to produce
>>>>> n samples every m milliseconds. The Mozilla Audio Data API lets the JS
>>>>> developer push samples to the browser and have the browser manage the
>>>>> buffer on its own. The callback grabbing a fixed number of samples each
>>>>> call is not itself that buffer; it is just the callback draining the
>>>>> buffer I'm talking about. Ring-buffer management in JavaScript takes up
>>>>> some CPU load, and in my opinion it would always be better to let the
>>>>> browser handle such a task.
>>>>
>>>> If sample-rate conversion is taken care of as proposed above, then the
>>>> CPU overhead of managing a simple ring-buffer in JavaScript should be
>>>> extremely small and can be implemented in just a few lines of code.  I
>>>> understand that in your current implementation, you're also dealing with
>>>> sample-rate conversion which is slower and complicates your code.  But a
>>>> simple ring-buffer is not very complex.
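>>>>
>>>> For example, a bare-bones single-channel ring buffer along these lines
>>>> (just an illustrative sketch) is all that's really needed:
>>>>
>>>>   // Minimal single-channel ring buffer, illustrative only.
>>>>   function RingBuffer(capacity) {
>>>>     this.buffer = new Float32Array(capacity);
>>>>     this.readIndex = 0;
>>>>     this.writeIndex = 0;
>>>>     this.available = 0;
>>>>   }
>>>>
>>>>   RingBuffer.prototype.write = function (samples) {
>>>>     for (var i = 0; i < samples.length && this.available < this.buffer.length; ++i) {
>>>>       this.buffer[this.writeIndex] = samples[i];
>>>>       this.writeIndex = (this.writeIndex + 1) % this.buffer.length;
>>>>       ++this.available;
>>>>     }
>>>>   };
>>>>
>>>>   RingBuffer.prototype.read = function (output) {
>>>>     for (var i = 0; i < output.length; ++i) {
>>>>       if (this.available > 0) {
>>>>         output[i] = this.buffer[this.readIndex];
>>>>         this.readIndex = (this.readIndex + 1) % this.buffer.length;
>>>>         --this.available;
>>>>       } else {
>>>>         output[i] = 0; // underrun: output silence rather than stale data
>>>>       }
>>>>     }
>>>>   };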
>>>>
>>>>
>>>>
>>>>>
>>>>> - "The callback method knows how often to fire," this is a fallacy,
>>>>> even flash falls for this issue and can produce clicks and pops on real-time
>>>>> generated audio (Even their docs hint at this). This is because by the time
>>>>> the callback API figures out a delay, its buffering may be premature due to
>>>>> previous calculations and may as a result gap the audio. It is imperative
>>>>> you let the developer control the buffering process, since only the
>>>>> developer would truly know how much buffering is needed. Web Audio in chrome
>>>>> gaps out for instance when we're drawing to a canvas stretched to fullscreen
>>>>> and a canvas op takes a few milliseconds to perform, to a reasonable person
>>>>> this would seem inappropriate. This ties in basically with the previous
>>>>> point of letting the browser manage the buffer passed to it, and allowing
>>>>> the JS developer to buffer ahead of time rather than having a real-time
>>>>> thread try to play catch-up with an inherently bad plan.
>>>>>
>>>>> - Building on the last point, to achieve ahead-of-time buffering I
>>>>> believe it would be wise either to introduce a stub function that allows
>>>>> samples to be added at any time without waiting for a callback, just
>>>>> like mozWriteAudio, OR to allow the callback method to be called when
>>>>> buffering reaches a low point *specified* by the developer. This low
>>>>> point is not how many samples are sent to the browser each callback; it
>>>>> tells the API WHEN to fire the callback, namely at a certain number of
>>>>> samples before the buffer empties.
>>>>>
>>>>
>>>> I like your second idea of having an internal buffer (in the
>>>> implementation) whose size can be specified by the developer.  This buffer
>>>> size is independent of the callback size.  There could also be a mode where
>>>> the internal buffer automatically adjusts its size based on runtime
>>>> characteristics, and that mode could be enabled or disabled by the
>>>> developer.
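>>>>
>>>> Purely as a sketch of the shape this could take (the property names here
>>>> are illustrative, not a proposal for final spelling):
>>>>
>>>>   // Illustrative only -- not a finalized API.
>>>>   var node = context.createJavaScriptNode(1024, 0, 2); // callback size unchanged
>>>>   node.internalBufferSize = 16384;  // developer-specified, independent of the callback size
>>>>   node.adaptiveBufferSize = true;   // optional mode: adjust size from runtime characteristics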
>>>>
>>>>
>>>>
>>>>>
>>>>> I hope we can use some or all of the points listed above in providing a
>>>>> proper API for real-time generated audio output in JavaScript in a
>>>>> 21st-century browser. :D
>>>>>
>>>>
>>>> I think we can.  My apologies for not yet implementing the ability to
>>>> choose sample-rates for an AudioContext.  It'll come...
>>>>
>>>> Chris
>>>>
>>>>
>>>>
>>>
>>>
>>
>

Received on Tuesday, 12 July 2011 01:40:32 UTC