Re: Sample-accurate JS output (was: scheduling subgraphs)

Hi Chris,
Just a couple of final points of clarification from me.  I think I'm
getting there!  Since the idea is that we minimise the number of
JSAudioNodes (in theory running only one per app), how would I get
different graph routing for generated sounds?  For example, if I want
a saw wave at 100 Hz running through a low-pass filter, and another
saw wave at 400 Hz running through a reverb, what would I do?  Would
it be a matter of having a JSAudioNode with multiple output channels,
and then routing each channel differently?
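
To make the question concrete, here's roughly what I'm imagining,
written against the draft API as I understand it.  I'm guessing at
some of the names and signatures (createJavaScriptNode,
createChannelSplitter, createBiquadFilter, connect() taking an output
index), so please read this as pseudocode rather than something I've
actually run:

// One JSAudioNode writes the two saws into separate output channels;
// a splitter then routes each channel through a different effect.
var context = new AudioContext();  // or whatever the constructor ends up being

var jsNode = context.createJavaScriptNode(4096, 0, 2);  // 0 inputs, 2 output channels
var phase0 = 0, phase1 = 0;

jsNode.onaudioprocess = function (e) {
  var ch0 = e.outputBuffer.getChannelData(0);
  var ch1 = e.outputBuffer.getChannelData(1);
  var inc0 = 100 / context.sampleRate;   // 100 Hz saw
  var inc1 = 400 / context.sampleRate;   // 400 Hz saw
  for (var i = 0; i < ch0.length; i++) {
    ch0[i] = 2 * phase0 - 1;             // naive (aliasing) sawtooth
    ch1[i] = 2 * phase1 - 1;
    phase0 += inc0; if (phase0 >= 1) phase0 -= 1;
    phase1 += inc1; if (phase1 >= 1) phase1 -= 1;
  }
};

var splitter = context.createChannelSplitter();
var lowpass  = context.createBiquadFilter();   // low-pass filter node
var reverb   = context.createConvolver();      // impulse response loading not shown

jsNode.connect(splitter);
splitter.connect(lowpass, 0);   // channel 0 -> low-pass
splitter.connect(reverb, 1);    // channel 1 -> reverb
lowpass.connect(context.destination);
reverb.connect(context.destination);

Or is the intention that we just create two separate JSAudioNodes, one
per effect chain, and accept that as an exception to the one-per-app
guideline?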

On scheduling again, is the idea that we take the new "playbackTime"
attribute of an AudioProcessingEvent and use it to "tick" a
JavaScript scheduler?  I think that's how synchronisation between JS
and native timing should work - is that correct?
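
For reference, the kind of "tick" I mean (assuming playbackTime is
the context time, in seconds, of the first sample of the buffer being
filled, and a context and jsNode created as in the sketch above):

// Using playbackTime to drive a JS scheduler.
var pending = [];  // entries of { time: seconds, fire: function }, kept sorted by time

jsNode.onaudioprocess = function (e) {
  var blockStart = e.playbackTime;
  var blockEnd = blockStart + e.outputBuffer.length / context.sampleRate;

  // Fire everything scheduled to happen within this processing block.
  while (pending.length > 0 && pending[0].time < blockEnd) {
    pending.shift().fire();
  }

  // ...then synthesize this block's audio into e.outputBuffer as usual...
};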

Also, if I were looking to synchronise visuals with
AudioBufferSourceNodes scheduled using noteOn, would I then need a
JSAudioNode purely for scheduling the visuals, and just write zeros
for the sound data?  That sounds a little counterintuitive (although
not difficult or harmful, really).
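
Something along these lines, in other words, where the node exists
purely as a clock and updateVisuals() is a stand-in for whatever the
application actually does with the timing:

// A JSAudioNode used only as a clock for visuals: it outputs silence,
// but its playbackTime tells us where the audio clock is, so visuals
// can be lined up with sounds already scheduled via noteOn(time).
var clockNode = context.createJavaScriptNode(4096, 0, 1);

clockNode.onaudioprocess = function (e) {
  var out = e.outputBuffer.getChannelData(0);
  for (var i = 0; i < out.length; i++) out[i] = 0;  // silence

  updateVisuals(e.playbackTime);  // hypothetical app-level callback
};

clockNode.connect(context.destination);  // presumably must be connected to get callbacks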

Finally, just to echo Joe B's sentiment - I really appreciate the work
you're putting in.  I'm mainly asking questions because I'm excited to
see what will be possible.
Cheers,
Joe

On Wed, Oct 20, 2010 at 2:05 AM,  <joe@noteflight.com> wrote:
> Thanks -- and I hugely appreciate all the heavy lifting you're doing with
> Webkit right now. I don't ever want to imply that all this commentary means
> you should down tools to deal with it right now!
>
> Best regards,
>
> ...Joe
>
> Sent from my Verizon Wireless BlackBerry
>
> ________________________________
> From: Chris Rogers <crogers@google.com>
> Date: Tue, 19 Oct 2010 17:55:34 -0700
> To: <joe@noteflight.com>
> Subject: Re: Sample-accurate JS output (was: scheduling subgraphs)
> Hi Joe, and thanks for your clarification.  I'm more open to this idea with
> that in mind, but I'm still a bit concerned that due to the nature of the
> API it may have a high potential for abuse.  In any case, it's definitely a
> feature we should keep in mind.  Over the next few days, I'll try to put
> together some of the new feature ideas people have been proposing and put
> them into a separate page which I can link to from my specification.  As I
> mentioned in the meeting, my highest priority right now is to land and
> stabilize all the code I have in my branch into WebKit trunk.  It turns out
> to be more work than you would think due to the stringent code reviews they
> put you through in WebKit :)
> Cheers,
> Chris
>
> On Tue, Oct 19, 2010 at 5:40 PM, <joe@noteflight.com> wrote:
>>
>> Thanks for the clarification -- that's very helpful.
>>
>> We are in agreement: I am not thinking that js nodes should be used as
>> polyphonic building blocks. That's what audio buffer nodes are for. By all
>> means let's discourage the creation of many such js nodes that are --
>> important emphasis -- simultaneously active.
>>
>> I believe that a simple scheduling mechanism of the type I described for
>> js nodes remains very appropriate to include in the API, especially since it
>> allows a sequence of "monophonic" js nodes to perform as well as a single js
>> node (since inactive nodes don't incur much of a cost). Without scheduling /
>> event filtering for inactive js nodes, a sequence costs N times as much as a
>> single node, where N is its length. And without it, js nodes are harder to
>> work with even for programming one-shot sounds.
>>
>> Hope this clarifies my p.o.v. as well!
>>
>> Best,
>> ...Joe
>>
>> Sent from my Verizon Wireless BlackBerry
>>
>> ________________________________
>> From: Chris Rogers <crogers@google.com>
>> Date: Tue, 19 Oct 2010 16:17:51 -0700
>> To: Joseph Berkovitz <joe@noteflight.com>
>> Cc: <public-xg-audio@w3.org>
>> Subject: Re: Sample-accurate JS output (was: scheduling subgraphs)
>> Hi Joe,
>> I think maybe the confusion is that you're imagining a scenario with many
>> JavaScriptAudioNodes, one per note.  I'm suggesting that we discourage
>> developers from creating large numbers of JavaScriptAudioNodes.  Instead, a
>> single JavaScriptAudioNode can be used to render anything it wants,
>> including synthesizing and mixing down multiple notes using JavaScript.
>>  This way, there's only a single event listener to fire, instead of many as
>> in your case.
>> Chris
>>
>> On Tue, Oct 19, 2010 at 3:56 PM, Joseph Berkovitz <joe@noteflight.com>
>> wrote:
>>>
>>> Hi Chris,
>>>
>>> I'm a little puzzled by your response on this point -- I understand the
>>> perils of heavy thread traffic, but my proposal is designed to decrease that
>>> traffic relative to the current API, not increase it.
>>>
>>> I'm proposing a mechanism that basically prevents events from being
>>> dispatched to JavaScriptAudioNodes that don't need to be serviced because
>>> their start time hasn't arrived yet.  It seems to me that this approach
>>> actually cuts back on event listener servicing.  Without such a filtering
>>> mechanism, many AudioProcessingEvents are going to be fired off to JS nodes,
>>> which will look at the event playback time and then return a zero buffer
>>> because they discover they're quiescent. This seems like a waste of cycles
>>> to me. Wouldn't it be better to have the audio thread understand that there
>>> is no need for JS invocation on these nodes much of the time, and zero out
>>> the audio output on their behalf?
>>>
>>> I totally understand your concerns about reliability and robustness. I'm
>>> certainly willing to go to the codebase and demonstrate the feasibility of
>>> what I'm proposing, but would it perhaps make sense for us to have a direct
>>> implementation-level conversation first?  I'm not sure email is working very
>>> well here as a communication mechanism.
>>>
>>> Best,
>>>
>>> ...Joe
>>>
>>> On Oct 19, 2010, at 5:27 PM, Chris Rogers wrote:
>>>
>>>> Joe,
>>>>
>>>> I understand that it could be implemented to work as you suggest without
>>>> adding a large amount of code, but the point is that there could still be a
>>>> large amount of traffic between the audio thread and the main thread with
>>>> large numbers of event listeners being fired near the same time (for
>>>> overlapping notes).  The handling of timers and event listeners on the main
>>>> thread is fairly dicey and is in competition with page rendering and other
>>>> JavaScript running there.  There's also garbage collection which can stall
>>>> for significant amounts of time.  I know that to some extent we're already
>>>> accepting this scenario by having a JavaScriptAudioNode in the first place.
>>>>  But the API you're proposing encourages the possibility of many more
>>>> event listeners needing to be serviced in a short span of time.
>>>>
>>>> That said, you're free to take the WebKit audio branch code and try some
>>>> experiments there.  My concern is mostly oriented around the reliability and
>>>> robustness of the system when pushed in different ways, run on a variety of
>>>> platforms (slow and fast), and combined with other stuff going on in the
>>>> rendering engine like WebGL and canvas drawing.
>>>
>>>
>>>
>>
>
>

Received on Wednesday, 20 October 2010 07:46:39 UTC