
Re: Missing information in the Web Audio spec

From: Raymond Toy <rtoy@google.com>
Date: Fri, 18 May 2012 10:04:49 -0700
Message-ID: <CAE3TgXF8LA4DMbA8fYBvoQMO8BedOQZLTuuKrhUXeM+GyQ9g4w@mail.gmail.com>
To: Philip Jägenstedt <philipj@opera.com>
Cc: Chris Rogers <crogers@google.com>, "Robert O'Callahan" <robert@ocallahan.org>, public-audio@w3.org

On Fri, May 18, 2012 at 9:23 AM, Philip Jägenstedt <philipj@opera.com> wrote:

> On Thu, 17 May 2012 01:36:15 +0200, Robert O'Callahan <robert@ocallahan.org> wrote:
>
>> On Thu, May 17, 2012 at 11:02 AM, Chris Rogers <crogers@google.com> wrote:
>>
>>> As it stands right now, the Web Audio API makes no claims about whether
>>> the underlying implementation uses a block-based or per-sample approach.
>>
>> That is good and we should definitely preserve it.
>>
>>> From a purist API perspective it really doesn't have to, because in the
>>> future such performance limitations may become moot.  But until that
>>> time is reached, practically speaking we may have to spell out some
>>> limitations (minimum delay time with feedback...).  This is what I
>>> would suggest.
>>
>> So then, one approach would be to specify that in any cycle of nodes,
>> there should be at least one DelayNode with a minimum delay, where the
>> minimum is set in the spec. The spec would still need to define what
>> happens if that constraint is violated. That behavior needs to be
>> carefully chosen so that later we can lower the minimum delay (possibly
>> all the way to zero) without having to worry about Web content having
>> accidentally used a too-small delay and relying on the old spec behavior
>> in some way. (I know it sounds crazy, but spec changes breaking
>> clearly-invalid-but-still-deployed content is a real and common problem.)
>>
>> Alternatively we can set the minimum to zero now, but then we need to
>> write tests for cycles with very small delays and ensure implementations
>> support them. If there's a JS processing node in the cycle that will not
>> be pleasant...
>>
>
> I think this is a sane approach unless everyone is prepared to support
> per-sample processing, which I suspect is not the case. Chris, how large
> are the work buffers in your implementation?


I believe the current implementation works on 128 samples at a time.
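
If that's the case, then with block-based rendering the smallest delay a
feedback loop can actually honor is one block: 128 / 44100, i.e. roughly
2.9 ms at 44.1 kHz (about 2.7 ms at 48 kHz).  (Just spelling out the
implication; the spec doesn't currently state such a bound.)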

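For concreteness, the kind of feedback cycle being discussed might be set
up roughly like this (illustrative only, not code from either
implementation; method names follow the current draft and may still need a
vendor prefix):

    // A source feeding a delay whose output is routed back into itself
    // through a gain node: the cycle Robert's proposal would constrain.
    var context = new webkitAudioContext();     // AudioContext where unprefixed
    var source = context.createBufferSource();  // assume source.buffer is set elsewhere
    var delay = context.createDelayNode();
    var feedback = context.createGainNode();

    delay.delayTime.value = 0.001;  // 1 ms: below one 128-sample block at
                                    // 44.1 kHz, i.e. the case in question
    feedback.gain.value = 0.5;      // attenuate so the loop doesn't blow up

    source.connect(delay);
    delay.connect(feedback);
    feedback.connect(delay);        // closes the cycle: delay -> gain -> delay
    delay.connect(context.destination);

A per-sample implementation could honor delays down to a single sample in
such a loop; a block-based one can't go below one block without special
handling, which is what a spec-level minimum (or defined fallback
behavior) would address.
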
Ray
Received on Friday, 18 May 2012 17:05:20 GMT
