Re: Sample-accurate JS output (was: scheduling subgraphs)

Hi Joe,

I think the confusion may be that you're imagining a scenario with many
JavaScriptAudioNodes, one per note.  I'm suggesting that we discourage
developers from creating large numbers of JavaScriptAudioNodes.  Instead, a
single JavaScriptAudioNode can render anything it wants, including
synthesizing and mixing down multiple notes in JavaScript.  This way,
there's only a single event listener to fire, instead of the many in your
scenario.
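
Roughly, I'm imagining something like this (a sketch only -- names such as
AudioContext, createJavaScriptNode, and onaudioprocess are as in the
current branch and may still change):

  var context = new AudioContext();
  var voices = [];  // active notes: { freq, phase }

  var node = context.createJavaScriptNode(1024);
  node.onaudioprocess = function (event) {
    var output = event.outputBuffer.getChannelData(0);
    for (var i = 0; i < output.length; i++) {
      var sample = 0;
      // Mix every active voice into the single output buffer.
      for (var v = 0; v < voices.length; v++) {
        var voice = voices[v];
        sample += 0.1 * Math.sin(voice.phase);
        voice.phase += 2 * Math.PI * voice.freq / context.sampleRate;
      }
      output[i] = sample;
    }
  };
  node.connect(context.destination);

  // Starting or stopping a note is just a data-structure change --
  // no new node and no additional event listener.
  voices.push({ freq: 440, phase: 0 });

All of the note scheduling and voice management then lives inside that one
callback's JavaScript.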

Chris

On Tue, Oct 19, 2010 at 3:56 PM, Joseph Berkovitz <joe@noteflight.com> wrote:

> Hi Chris,
>
> I'm a little puzzled by your response on this point -- I understand the
> perils of heavy thread traffic, but my proposal is designed to decrease that
> traffic relative to the current API, not increase it.
>
> I'm proposing a mechanism that basically prevents events from being
> dispatched to JavaScriptAudioNodes that don't need to be serviced because
> their start time hasn't arrived yet.  It seems to me that this approach
> actually cuts back on event listener servicing.  Without such a filtering
> mechanism, many AudioProcessingEvents are going to be fired off to JS nodes,
> which will look at the event playback time and then return a zero buffer
> because they discover they're quiescent. This seems like a waste of cycles
> to me. Wouldn't it be better to have the audio thread understand that there
> is no need for JS invocation on these nodes much of the time, and zero out
> the audio output on their behalf?
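>
> To make the waste concrete, here is roughly what every quiescent node's
> handler has to do today (a sketch only -- I'm assuming the event carries
> a playbackTime, and noteStartTime and renderNote are illustrative names):
>
>   node.onaudioprocess = function (event) {
>     var output = event.outputBuffer.getChannelData(0);
>     if (event.playbackTime < node.noteStartTime) {
>       // Quiescent: we crossed the thread boundary just to write zeros.
>       for (var i = 0; i < output.length; i++)
>         output[i] = 0;
>       return;
>     }
>     renderNote(output, event.playbackTime);  // hypothetical synthesis
>   };
>
> Under what I'm proposing, the audio thread would do that zero-fill itself
> and skip the dispatch entirely until the node's start time arrives.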
>
> I totally understand your concerns about reliability and robustness. I'm
> certainly willing to go to the codebase and demonstrate the feasibility of
> what I'm proposing, but would it perhaps make sense for us to have a direct
> implementation-level conversation first?  I'm not sure email is working very
> well here as a communication mechanism.
>
> Best,
>
> ...Joe
>
>
> On Oct 19, 2010, at 5:27 PM, Chris Rogers wrote:
>
>> Joe,
>>
>> I understand that it could be implemented to work as you suggest without
>> adding a large amount of code, but the point is that there could still be a
>> large amount of traffic between the audio thread and the main thread with
>> large numbers of event listeners being fired near the same time (for
>> overlapping notes).  The handling of timers and event listeners on the main
>> thread is fairly dicey and is in competition with page rendering and other
>> JavaScript running there.  There's also garbage collection which can stall
>> for significant amounts of time.  I know that to some extent we're already
>> accepting this scenario by having a JavaScriptAudioNode in the first place.
>> But the API you're proposing makes it likely that many more event
>> listeners will need to be serviced in a short span of time.
>>
>> That said, you're free to take the WebKit audio branch code and try some
>> experiments there.  My concern is mainly the reliability and robustness
>> of the system when it's pushed in different ways, run on a variety of
>> platforms (slow and fast), and combined with other work going on in the
>> rendering engine, like WebGL and canvas drawing.
>>

Received on Tuesday, 19 October 2010 23:18:29 UTC