
Re: Missing information in the Web Audio spec

From: Philip Jägenstedt <philipj@opera.com>
Date: Mon, 21 May 2012 17:20:52 +0200
To: "Chris Rogers" <crogers@google.com>
Cc: "Robert O'Callahan" <robert@ocallahan.org>, public-audio@w3.org
Message-ID: <op.wenzk2nisr6mfa@kirk>
On Fri, 18 May 2012 19:38:41 +0200, Chris Rogers <crogers@google.com>  
wrote:

> On Fri, May 18, 2012 at 9:23 AM, Philip Jägenstedt <philipj@opera.com>
> wrote:
>
>> On Thu, 17 May 2012 01:36:15 +0200, Robert O'Callahan
>> <robert@ocallahan.org> wrote:
>>
>>> On Thu, May 17, 2012 at 11:02 AM, Chris Rogers <crogers@google.com>
>>> wrote:
>>>
>>>> As it stands right now, the Web Audio API makes no claims about whether
>>>> the underlying implementation uses a block-based or per-sample approach.
>>>>
>>>
>>>
>>> That is good and we should definitely preserve it.
>>>
>>>> From a purist API perspective it really doesn't have to, because in the
>>>> future such performance limitations may become moot.  But until that time
>>>> is reached, practically speaking we may have to spell out some limitations
>>>> (minimum delay time with feedback...).  This is what I would suggest.
>>>>
>>>
>>>
>>> So then, one approach would be to specify that in any cycle of nodes,
>>> there should be at least one DelayNode with a minimum delay, where the
>>> minimum is set in the spec. The spec would still need to define what
>>> happens if that constraint is violated. That behavior needs to be
>>> carefully chosen so that later we can lower the minimum delay (possibly
>>> all the way to zero) without having to worry about Web content having
>>> accidentally used a too-small delay and relying on the old spec behavior
>>> in some way. (I know it sounds crazy, but spec changes breaking
>>> clearly-invalid-but-still-deployed content is a real and common problem.)
>>>
>>> Alternatively we can set the minimum to zero now, but then we need to
>>> write tests for cycles with very small delays and ensure implementations
>>> support them. If there's a JS processing node in the cycle that will not
>>> be pleasant...
>>>
>>
>> I think this is a sane approach unless everyone is prepared to support
>> per-sample processing, which I suspect is not the case. Chris, how large
>> are the work buffers in your implementation? How large can we make the
>> limit before it becomes a problem to generate useful, real-world effects?
>>
>
> Hi Philip, the buffer size we use for rendering is 128 sample-frames.  In
> our implementation it's a power-of-two size because some of the effects
> use FFTs, where this makes the buffering easier.  We also like to keep
> this a relatively small power-of-two size (and would even consider going
> down to 64) to reduce latency for those audio back-ends which can support
> it.  For those audio back-ends which don't, we simply process multiple
> work buffers to satisfy one hardware request for more data.
>
> I think this size is small enough to allow for a good range of useful
> real-world delay effects.  I don't want to go larger because of the
> latency hit.
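Chris's description above, of satisfying one large hardware request by rendering several fixed-size work buffers, can be sketched roughly as follows. This is a hypothetical illustration, not WebKit's actual code; `renderBlock` stands in for the engine's per-block graph render:

```javascript
// Hypothetical sketch of block-based rendering, not actual WebKit code.
// The engine always renders fixed 128-frame work buffers; a hardware
// back-end asking for more frames per callback is served by looping.
const WORK_BUFFER_SIZE = 128; // sample-frames

function renderHardwareRequest(framesRequested, renderBlock) {
  const out = new Float32Array(framesRequested);
  let offset = 0;
  while (offset < framesRequested) {
    // renderBlock() would pull one 128-frame buffer through the graph.
    const block = renderBlock(WORK_BUFFER_SIZE);
    const n = Math.min(WORK_BUFFER_SIZE, framesRequested - offset);
    out.set(block.subarray(0, n), offset);
    offset += n;
  }
  return out;
}
```

For example, a back-end asking for 300 frames would consume two full 128-frame buffers plus the first 44 frames of a third.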

OK, so it sounds like if it is necessary to allow loops, then the spec
should require that each cycle contain a DelayNode with a delay equivalent
to at least 128 samples. What should happen if it doesn't? Since
DelayNode.delayTime is an AudioParam it could change at any time, so it
can't be checked only when constructing the graph.
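For illustration, a 128-sample floor translates into a sample-rate-dependent delayTime value. This is a hedged sketch; `minDelaySeconds` is a hypothetical helper, not part of the Web Audio API:

```javascript
// Hypothetical helper, not part of the Web Audio API: the smallest
// delayTime (in seconds) equivalent to a delay of 128 sample-frames.
function minDelaySeconds(sampleRate, blockSize = 128) {
  return blockSize / sampleRate;
}

// At 44.1 kHz the floor is about 2.9 ms; at 48 kHz, about 2.67 ms.
// In a real graph one might then set:
//   delayNode.delayTime.value = minDelaySeconds(context.sampleRate);
// but, as noted above, delayTime is an AudioParam and can be automated
// below that floor at any time after the graph is built.
```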

-- 
Philip Jägenstedt
Core Developer
Opera Software
Received on Monday, 21 May 2012 15:21:37 GMT
