Re: Fw: Sample-accurate JS output (was: scheduling subgraphs)

Hi Joe, and thanks for your clarification.  I'm more open to this idea with
that in mind, but I'm still a bit concerned that, given the nature of the
API, it has a high potential for abuse.  In any case, it's definitely a
feature we should keep in mind.  Over the next few days, I'll try to collect
the new feature ideas people have been proposing into a separate page which
I can link to from my specification.

Cheers,
Chris

On Tue, Oct 19, 2010 at 5:55 PM, <joe@noteflight.com> wrote:

> Sent from my Verizon Wireless BlackBerry
> ------------------------------
> From: joe@noteflight.com
> Date: Wed, 20 Oct 2010 00:40:52 +0000
> To: Chris Rogers <crogers@google.com>
> Reply-To: joe@noteflight.com
> Subject: Re: Sample-accurate JS output (was: scheduling subgraphs)
>
> Thanks for the clarification -- that's very helpful.
>
> We are in agreement: I am not thinking that JS nodes should be used as
> polyphonic building blocks. That's what audio buffer nodes are for. By all
> means let's discourage the creation of many such JS nodes that are --
> important emphasis -- simultaneously active.
>
> I believe that a simple scheduling mechanism of the type I described for JS
> nodes remains very appropriate to include in the API, especially since it
> allows a sequence of "monophonic" JS nodes to perform as well as a single JS
> node (inactive nodes don't incur much of a cost). Without scheduling /
> event filtering for inactive JS nodes, a sequence costs N times as much as a
> single node, where N is its length. And without it, JS nodes are harder to
> work with even for programming one-shot sounds.
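>
> To make the cost concrete: without that filtering, every node in such a
> sequence ends up doing something like the following on every buffer, even
> while it is quiescent (sketch only -- "myStartTime" and "renderNote" are
> just illustrative application-side names, and "node" is a
> JavaScriptAudioNode):
>
>   node.onaudioprocess = function (event) {
>     var output = event.outputBuffer.getChannelData(0);
>     if (event.playbackTime < myStartTime) {
>       // Not this node's turn yet: we still pay for the JS callback,
>       // just to hand back silence.
>       for (var i = 0; i < output.length; i++) output[i] = 0;
>       return;
>     }
>     renderNote(output, event.playbackTime);   // the node's real work
>   };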
>
> Hope this clarifies my p.o.v. as well!
>
> Best,
> ...Joe
>
> Sent from my Verizon Wireless BlackBerry
> ------------------------------
> From: Chris Rogers <crogers@google.com>
> Date: Tue, 19 Oct 2010 16:17:51 -0700
> To: Joseph Berkovitz <joe@noteflight.com>
> Cc: <public-xg-audio@w3.org>
> Subject: Re: Sample-accurate JS output (was: scheduling subgraphs)
>
> Hi Joe,
>
> I think maybe the confusion is that you're imagining a scenario with many
> JavaScriptAudioNodes, one per note.  I'm suggesting that we discourage
> developers from creating large numbers of JavaScriptAudioNodes.  Instead, a
> single JavaScriptAudioNode can render anything it wants, including
> synthesizing and mixing down multiple notes in JavaScript.  This way,
> there's only a single event listener to fire, instead of the many in your
> scenario.
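>
> Roughly, the shape would be a single handler that mixes all of the
> currently sounding notes (untested sketch -- node creation and the event
> follow my reading of the current draft, while the "voices" array and its
> render() method are just illustrative bookkeeping, not part of the API):
>
>   // One JavaScriptAudioNode renders and mixes every active note.
>   var node = context.createJavaScriptNode(4096);
>   var voices = [];   // application-managed list of sounding notes
>
>   node.onaudioprocess = function (event) {
>     var output = event.outputBuffer.getChannelData(0);
>     for (var i = 0; i < output.length; i++)
>       output[i] = 0;                           // start from silence
>     for (var v = 0; v < voices.length; v++)
>       voices[v].render(output, event.playbackTime);  // each voice adds its samples
>     // ...drop finished voices from the array...
>   };
>   node.connect(context.destination);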
>
> Chris
>
> On Tue, Oct 19, 2010 at 3:56 PM, Joseph Berkovitz <joe@noteflight.com> wrote:
>
>> Hi Chris,
>>
>> I'm a little puzzled by your response on this point -- I understand the
>> perils of heavy thread traffic, but my proposal is designed to decrease that
>> traffic relative to the current API, not increase it.
>>
>> I'm proposing a mechanism that basically prevents events from being
>> dispatched to JavaScriptAudioNodes that don't need to be serviced because
>> their start time hasn't arrived yet.  It seems to me that this approach
>> actually cuts back on event listener servicing.  Without such a filtering
>> mechanism, many AudioProcessingEvents are going to be fired off to JS nodes,
>> which will look at the event playback time and then return a zero buffer
>> because they discover they're quiescent. This seems like a waste of cycles
>> to me. Wouldn't it be better to have the audio thread understand that there
>> is no need for JS invocation on these nodes much of the time, and zero out
>> the audio output on their behalf?
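>>
>> To sketch what I mean (the "startTime" attribute below is hypothetical --
>> it just stands in for whatever scheduling surface we end up with, and
>> "noteStart" and "renderNote" are illustrative application-side names):
>>
>>   // One short-lived "monophonic" JS node per scheduled sound.
>>   var node = context.createJavaScriptNode(4096);
>>   node.startTime = noteStart;   // hypothetical attribute: before this time
>>                                 // the audio thread outputs silence for the
>>                                 // node and dispatches no events to it
>>   node.onaudioprocess = function (event) {
>>     // Only invoked once noteStart has arrived, so there's no need to
>>     // check playbackTime and hand back a zero buffer here.
>>     renderNote(event.outputBuffer);
>>   };
>>   node.connect(context.destination);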
>>
>> I totally understand your concerns about reliability and robustness. I'm
>> certainly willing to go to the codebase and demonstrate the feasibility of
>> what I'm proposing, but would it perhaps make sense for us to have a direct
>> implementation-level conversation first?  I'm not sure email is working very
>> well here as a communication mechanism.
>>
>> Best,
>>
>> ...Joe
>>
>>
>> On Oct 19, 2010, at 5:27 PM, Chris Rogers wrote:
>>
>>> Joe,
>>>
>>> I understand that it could be implemented to work as you suggest without
>>> adding a large amount of code, but the point is that there could still be a
>>> large amount of traffic between the audio thread and the main thread with
>>> large numbers of event listeners being fired near the same time (for
>>> overlapping notes).  The handling of timers and event listeners on the main
>>> thread is fairly dicey and is in competition with page rendering and other
>>> JavaScript running there.  There's also garbage collection which can stall
>>> for significant amounts of time.  I know that to some extent we're already
>>> accepting this scenario by having a JavaScriptAudioNode in the first place.
>>>  But the API you're proposing makes it more likely that many more event
>>> listeners will need to be serviced in a short span of time.
>>>
>>> That said, you're free to take the WebKit audio branch code and try some
>>> experiments there.  My concern is mostly about the reliability and
>>> robustness of the system when pushed in different ways, run on a variety of
>>> platforms (slow and fast), and combined with other stuff going on in the
>>> rendering engine like WebGL and canvas drawing.
>>>
>>
>>
>>
>>
>

Received on Wednesday, 20 October 2010 01:01:23 UTC