
Re: Help needed with a sync-problem

From: Chris Rogers <crogers@google.com>
Date: Fri, 3 Aug 2012 16:26:53 -0700
Message-ID: <CA+EzO0kQpRrODF=VVwgq+fFM5R=dgKfUyNYOP9wLk5jhVzckgw@mail.gmail.com>
To: Peter van der Noord <peterdunord@gmail.com>
Cc: Chris Wilson <cwilso@google.com>, Jussi Kalliokoski <jussi.kalliokoski@gmail.com>, Adam Goode <agoode@google.com>, public-audio@w3.org
On Fri, Aug 3, 2012 at 3:53 PM, Chris Rogers <crogers@google.com> wrote:

>
>
> On Fri, Aug 3, 2012 at 10:40 AM, Peter van der Noord <
> peterdunord@gmail.com> wrote:
>
>> I agree fully that it won't be what most developers want or need to do;
>> the API will be used mostly for games and site music/effects, but creating
>> custom nodes would be my primary focus. To be honest, the list of native
>> nodes that I wanted to use has thinned out, due to some behaviours and
>> implementations that were not appropriate for what I wanted. That's all
>> fine by itself, but if I can't recreate them myself...
>>
>> I personally don't think "a lot of people won't be using custom nodes
>> anyway" is a good argument for not implementing them correctly. If they
>> add lag, I can't use them.
>>
>
> Peter, as Jussi has pointed out, this is a sad fact of life and "not a
> bug".  If you read more carefully what Jussi says, this lag/latency is not
> something we can fix by creating a different API.  This is simply how a
> real-time audio thread interacting with a main thread (which can have its
> own delays, such as garbage collection) must behave.  It's a law of
> physics.
>
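To put a number on it: the delay Jussi describes follows directly from the buffer size. A rough sketch (the function name is illustrative, not part of the Web Audio API; it assumes, per Jussi's description, that one buffer is collected before the callback fires while a previously processed buffer plays back, i.e. at least two buffer lengths round trip):

```javascript
// Round-trip latency of a JavaScriptAudioNode, per the mechanism Jussi
// describes: one buffer must be collected from the inputs before the JS
// callback can fire, and meanwhile the audio thread plays back the
// previously processed buffer, so at least two buffer lengths elapse.
function jsNodeLatencySeconds(bufferSize, sampleRate) {
  return (2 * bufferSize) / sampleRate;
}

// At a 44.1 kHz sample rate:
jsNodeLatencySeconds(256, 44100);  // ~0.0116 s, barely noticeable
jsNodeLatencySeconds(8192, 44100); // ~0.37 s, a very audible lag
```

This is also why the gap Peter hears grows with the buffer-size setting in his patch.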

Native nodes are there for at least three reasons:

1.  To give developers and users the very lowest possible latency for
applications requiring near-instantaneous response: MIDI synths, live
audio processing, digital-audio-workstation live recording, etc.  Nobody
wants a guitar amp simulation where you feel the lag as you pluck a
guitar string, or a MIDI synth where your fingers are practically
stumbling across the notes because of the compromised latency.

2.  Because a built-in, optimized, high-level API is genuinely useful
compared with just a low-level API.  Many developers want something that
"just works", where it's possible to do interesting things with audio in a
few lines of code.  We have high-level APIs like canvas 2D because they
make sense to offer instead of just a much lower-level mathematical
library for poking pixels into a bitmap.  Drawing lines, circles,
styled text, etc. is natural to offer as a "built-in" API in a web
browser.  We don't tell developers that JS libraries for poking pixels
will be enough, and I think that audio is much the same.  Operating
systems like Mac OS X, which are known for their compelling audio
applications, don't simply offer a low-level way to get audio out to the
hardware plus a vector math library (although they offer both).  They
offer much more to audio developers in terms of higher-level APIs.
People can argue that assembly language is sufficient for writing any
program and that we don't need high-level languages, but does anybody
actually want that?

3.  Because the native nodes are able to perform much more complex
processing than is possible directly in JS.  Simply offering a math
library does not solve the problem, because it requires JS code to make
sequential function calls, where garbage-collection and thread-scheduling
latency make (1) fail, and it is vastly more difficult to understand and
use than (2).  Vector math libraries are also unhelpful for processing
one sample at a time, which is important for covering many (if not most)
of the synthesis/processing cases not covered by the native Web Audio
nodes.  Thus I don't think a math library alone significantly extends the
possibilities already offered.
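For example, even a trivial recursive filter needs per-sample feedback. A minimal sketch of the kind of inner loop a JS node's onaudioprocess callback would run (the helper is illustrative, not a proposed API):

```javascript
// Per-sample processing with feedback: a one-pole lowpass filter.
// Each output sample depends on the previous output, so the loop
// cannot be replaced by a single whole-buffer vector operation.
function onePoleLowpass(input, coefficient) {
  const output = new Float32Array(input.length);
  let previous = 0;
  for (let i = 0; i < input.length; i++) {
    previous = output[i] = (1 - coefficient) * input[i] + coefficient * previous;
  }
  return output;
}

// Smoothing a step from 0 to 1:
const out = onePoleLowpass(new Float32Array([1, 1, 1, 1]), 0.5);
// out approaches 1: [0.5, 0.75, 0.875, 0.9375]
```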

Chris



>
> Chris
>
>
>>
>>
>> Peter
>>
>>
>> 2012/8/3 Chris Wilson <cwilso@google.com>
>>
>>> How would you empower the JS node/DSP API to fix this?
>>>
>>> I still think, personally, that there's an awful lot of focus on custom
>>> processing in our discussions here.  I haven't felt the need to build a
>>> JSNode yet - the first one I will build is probably a noise gate/expander,
>>> since that's the only thing I can't easily replicate from the nodes already
>>> available.  I'm not really convinced that what most application developers
>>> want to do - NEED to do - is process audio bits themselves directly.
>>>
>>>
>>> On Fri, Aug 3, 2012 at 10:11 AM, Jussi Kalliokoski <
>>> jussi.kalliokoski@gmail.com> wrote:
>>>
>>>> It's a known and major issue all right, but it's not a bug. There's not
>>>> much that can be done about it though, afaict. The processing thread has to
>>>> buffer enough data (the buffer size) from the inputs of the JSNode before
>>>> its callback can be invoked, and next it just sends an event to the JS
>>>> thread to process the buffer. The audio thread, however, can't wait for the
>>>> JS thread to process the buffer but instead plays back the previously
>>>> processed buffer.
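The scheme Jussi outlines can be modeled in a few lines (a simplified, synchronous model for illustration only; in the real node the event dispatch to the JS thread is asynchronous):

```javascript
// A minimal model of the double-buffering Jussi describes: the audio
// thread hands a full input buffer to the JS thread, and meanwhile
// plays back whatever buffer the JS thread finished last.
class BufferedProcessor {
  constructor(bufferSize) {
    this.processed = new Float32Array(bufferSize); // starts silent
  }
  // Called once per buffer from the "audio thread": returns the
  // previously processed buffer immediately, then processes `input`.
  exchange(input, processCallback) {
    const playNow = this.processed;
    this.processed = processCallback(input); // in reality: async event
    return playNow;
  }
}

const p = new BufferedProcessor(4);
const identity = (buf) => buf;
const first = p.exchange(new Float32Array([1, 2, 3, 4]), identity);
// `first` is still silence; the input only emerges one exchange later.
```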
>>>>
>>>> This is one of the reasons why I think we should focus on empowering
>>>> the JS node / DSP API. If you want to add any custom processing to the
>>>> graph, you're going to have to adjust the rest of the graph accordingly and
>>>> you'll end up with more latency. This means that if you want to do
>>>> extensive custom processing, you'll probably need to work around the graph
>>>> or just go with plain JS and have your own routing, which means you're in
>>>> a much more flexible place already anyway. I think the graph serves best as
>>>> an IO abstraction, and that's the part we should focus on.
>>>>
>>>> Cheers,
>>>> Jussi
>>>>
>>>>
>>>> On Fri, Aug 3, 2012 at 4:16 PM, Peter van der Noord <
>>>> peterdunord@gmail.com> wrote:
>>>>
>>>>> Well, it seems indeed that custom nodes add a delay to the
>>>>> signal. I've connected a few bypass modules (they write their input to the
>>>>> output) and I'm magically creating an echo...
>>>>>
>>>>> http://www.petervandernoord.nl/patchwork_js/?patch=2&buffer_size=8192
>>>>>
>>>>> (to hear sound, you have to select the loaded buffer from the pulldown
>>>>> in the buffersource-module)
>>>>>
>>>>> I'm getting somewhat confused and concerned about this: why does this
>>>>> happen, and isn't this a major bug/issue?
>>>>>
>>>>> Peter
>>>>>
>>>>>
>>>>>
>>>>> 2012/8/2 Adam Goode <agoode@google.com>
>>>>>
>>>>>> I think you can use playbackTime to determine the absolute a-rate
>>>>>> time of the beginning of the javascript buffer. But last I checked it
>>>>>> wasn't present in webkit.
>>>>>>
>>>>>> You might be able to count samples, assuming you know the node's
>>>>>> noteOn time, to keep track of the a-rate time. But with a short buffer
>>>>>> size, sometimes you can have problems as you've noticed.
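Adam's sample-counting suggestion might look something like this (a sketch; `startTime` and `sampleRate` are assumed known from the noteOn call and the context, and the function names are illustrative):

```javascript
// Tracking absolute "a-rate" time by counting samples, as a fallback
// when playbackTime is unavailable: starting from a known noteOn time,
// each callback advances the clock by one buffer length.
function makeSampleClock(startTime, sampleRate) {
  let samplesSeen = 0;
  return function advance(bufferSize) {
    const bufferStartTime = startTime + samplesSeen / sampleRate;
    samplesSeen += bufferSize;
    return bufferStartTime; // time of the first sample in this buffer
  };
}

const clock = makeSampleClock(0, 44100);
clock(8192); // → 0 (first buffer starts at the noteOn time)
clock(8192); // → 8192 / 44100 ≈ 0.186 s
```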
>>>>>>
>>>>>>
>>>>>> On Thu, Aug 2, 2012 at 4:47 PM, Peter van der Noord <
>>>>>> peterdunord@gmail.com> wrote:
>>>>>>
>>>>>>> Ermmm.....wait, what? And that is intened behavior?
>>>>>>>
>>>>>>> Peter
>>>>>>>
>>>>>>>
>>>>>>> 2012/8/2 Jussi Kalliokoski <jussi.kalliokoski@gmail.com>
>>>>>>>
>>>>>>>> Hey Peter!
>>>>>>>>
>>>>>>>> I think this is because the JSNode has a delay equal to the
>>>>>>>> buffer size, hence if you have parallel graphs that contain a different
>>>>>>>> number of JSNodes, they will arrive at the common destination with
>>>>>>>> different delays.
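If one assumes each JSNode adds exactly one buffer of delay, the mismatch could in principle be compensated by delaying the other paths. A sketch (both the function and the one-buffer-per-node assumption are illustrative, not part of the API):

```javascript
// Given the number of JSNodes on each parallel path, compute the extra
// delay (in seconds) each path needs so all paths arrive at the common
// destination together, e.g. to feed a DelayNode's delayTime.
function alignmentDelays(jsNodeCountPerPath, bufferSize, sampleRate) {
  const maxCount = Math.max(...jsNodeCountPerPath);
  return jsNodeCountPerPath.map(
    (count) => ((maxCount - count) * bufferSize) / sampleRate
  );
}

// Two paths: one with 2 JSNodes, one with none, at bufferSize 8192:
alignmentDelays([2, 0], 8192, 44100); // → [0, ~0.371] seconds
```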
>>>>>>>>
>>>>>>>> Cheers,
>>>>>>>> Jussi
>>>>>>>>
>>>>>>>>
>>>>>>>> On Thu, Aug 2, 2012 at 11:35 PM, Peter van der Noord <
>>>>>>>> peterdunord@gmail.com> wrote:
>>>>>>>>
>>>>>>>>> I'm having a strange problem with some signals at the moment and
>>>>>>>>> I've been staring at it for way too long now, so I thought: why not put it
>>>>>>>>> up here, maybe someone sees what's going on. It's a lengthy story, so if
>>>>>>>>> you want to hang on... I'll try to explain :)
>>>>>>>>>
>>>>>>>>> As you may know, I'm writing a modular synthesizer:
>>>>>>>>>
>>>>>>>>> http://petervandernoord.nl/patchwork_js (maybe clear your cache
>>>>>>>>> if you've been there before)
>>>>>>>>>
>>>>>>>>> If you click the 'json to patch' button, a test patch will be set.
>>>>>>>>> (Important to know: all custom nodes will be created with the buffer size
>>>>>>>>> that's selected in the pulldown on the right.) The patch contains 3 modules
>>>>>>>>> (in Patchwork, a module can contain one or more audio nodes, with the
>>>>>>>>> module's in/outputs mapped to certain in/outputs of the contained nodes):
>>>>>>>>>
>>>>>>>>> - The destination, which contains a normal destinationNode:
>>>>>>>>> http://localhost/patchworkjs/js/modules/DestinationModule.js
>>>>>>>>>
>>>>>>>>> - a clock module, one custom JS node:
>>>>>>>>> http://localhost/patchworkjs/js/modules/DestinationModule.js
>>>>>>>>>
>>>>>>>>> - a trigger sequencer, also one custom JS node:
>>>>>>>>> http://localhost/patchworkjs/js/modules/TriggerSequencerModule.js
>>>>>>>>> (the audioprocess callback is at the bottom)
>>>>>>>>>
>>>>>>>>> What is happening in the patch: the clock sends out single values
>>>>>>>>> of 1 (all other values are 0) on a given interval (set in BPM). The
>>>>>>>>> sequencer checks for every incoming value whether that value is >0 AND the
>>>>>>>>> previous one was <=0 (I'll call that a clock-pulse). If that is the case,
>>>>>>>>> its SequencerParameter will advance to the next step. A sequencer parameter
>>>>>>>>> (actually it is a LogicSequencerParameter, but that's almost the same; it
>>>>>>>>> has one extra method) can be found here:
>>>>>>>>>
>>>>>>>>> http://localhost/patchworkjs/js/params/SequenceParameter.js
>>>>>>>>>
>>>>>>>>> It's basically just an array filled with 0s and 1s (you can set a 1
>>>>>>>>> by clicking somewhere on the sequencer), and it increases the current
>>>>>>>>> position when it gets a next() command. So, back to the sequencer module:
>>>>>>>>> if it received a clock-pulse, it advances the sequencer. Then, if the
>>>>>>>>> (new) value of the sequencer parameter is 1, the sequencer will write a 1
>>>>>>>>> in its output buffer as well.
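The rising-edge ("clock-pulse") test Peter describes reduces to a few lines. A sketch of just the detection logic, outside the audio callback (the function name is illustrative):

```javascript
// A pulse is counted when the current value is > 0 and the previous
// value was <= 0, i.e. on each rising edge of the clock signal.
function detectPulses(samples) {
  const pulses = [];
  let previous = 0;
  for (let i = 0; i < samples.length; i++) {
    if (samples[i] > 0 && previous <= 0) pulses.push(i);
    previous = samples[i];
  }
  return pulses;
}

detectPulses([0, 0, 1, 0, 0, 1, 1, 0]); // → [2, 5]
```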
>>>>>>>>>
>>>>>>>>> My issues:
>>>>>>>>> - In the test patch, both the clock module and the sequencer module
>>>>>>>>> are connected to the output. If you activate some steps in the sequencer,
>>>>>>>>> you will hear that the clicks do not run in sync. I have no idea why that
>>>>>>>>> is; the step sequencer writes a 1 in exactly the same iteration as it
>>>>>>>>> reads the incoming 1s from the clock. In my opinion, they should run
>>>>>>>>> exactly in sync.
>>>>>>>>> - When you change the buffer size (which applies to the custom nodes),
>>>>>>>>> you will hear that the time difference between the ticks changes (since
>>>>>>>>> there's no clear, you have to refresh the page, set another buffer size
>>>>>>>>> and click 'json to patch').
>>>>>>>>> - Something else I noticed: when I run just a clock module
>>>>>>>>> connected to the output, with a very low buffer size (256, 512), the
>>>>>>>>> clock seems to run very irregularly.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> So, my main question: does anyone have any idea why those two
>>>>>>>>> modules do not run in sync when both are connected to the output?
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Peter
>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>
Received on Friday, 3 August 2012 23:27:22 GMT

This archive was generated by hypermail 2.2.0+W3C-0.50 : Friday, 3 August 2012 23:27:23 GMT