
Re: Missing information in the Web Audio spec

From: Chris Rogers <crogers@google.com>
Date: Wed, 16 May 2012 16:02:49 -0700
Message-ID: <CA+EzO0k1euGiDbOr4C-pgoUZ_tdtB4hqcFdovuYj+RgbbcN0qA@mail.gmail.com>
To: robert@ocallahan.org
Cc: Philip Jägenstedt <philipj@opera.com>, public-audio@w3.org
On Wed, May 16, 2012 at 3:41 PM, Robert O'Callahan <robert@ocallahan.org> wrote:

> On Thu, May 17, 2012 at 10:37 AM, Robert O'Callahan <robert@ocallahan.org> wrote:
>
>> I understand that that is well defined. Very small delays do impose
>> significant implementation constraints, however.
>>
>
> What I'm getting at, of course, is that if you have a very large arbitrary
> cycle of nodes, containing a single DelayNode which delays by a single
> sample, then you need to iterate over all the nodes in the cycle producing
> one output sample per node, once per sample. That will require alternative
> code paths to the normal, more efficient paths that use SIMD to process
> multiple samples per node.
>

I know; this is well known in computer-music systems.  It's a basic
trade-off between performance and flexibility.  Many (perhaps most) audio
processing implementations use a block-based approach to processing instead
of a per-sample approach because of the huge performance win.  A
block-based implementation imposes some modest limitations on how small
delays can get.  Nevertheless, a large number of common real-world
feedback-delay effects are practical with it.  It's the implementation
approach many modular systems use.
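To make the trade-off concrete, here is a minimal sketch (not the Web Audio API's actual implementation, and NumPy rather than a real-time engine) of a feedback delay processed block-by-block. Because each node produces a whole block of output before that output can be fed back to its input, the effective minimum feedback-delay length is one block; the clamp below mirrors that limitation. The block size of 128 is just an illustrative choice.

```python
# Hypothetical sketch of block-based feedback-delay processing.
# The minimum achievable feedback delay is one block, because a node's
# output only becomes available to its own input a full block later.
import numpy as np

BLOCK_SIZE = 128  # illustrative block size; nothing here is normative

def process_feedback_delay(input_signal, delay_samples, feedback=0.5,
                           block_size=BLOCK_SIZE):
    """Delay-with-feedback, computed one block at a time.

    delay_samples is clamped to at least one block, which is exactly the
    kind of limitation a block-based implementation imposes.
    """
    delay_samples = max(delay_samples, block_size)
    n = len(input_signal)
    out = np.zeros_like(input_signal)
    # Delay line long enough to hold the whole output, shifted by the delay.
    delay_line = np.zeros(n + delay_samples)
    for start in range(0, n, block_size):
        end = min(start + block_size, n)
        # Read what this node wrote delay_samples ago (zeros initially).
        delayed = delay_line[start:end]
        out[start:end] = input_signal[start:end] + feedback * delayed
        # Write this block's output into the delay line, delay_samples ahead.
        delay_line[start + delay_samples:end + delay_samples] = out[start:end]
    return out
```

Feeding an impulse through a one-block (128-sample) feedback delay with gain 0.5 produces echoes at samples 128, 256, ... with amplitudes 0.5, 0.25, and so on; asking for a shorter delay silently gets one block instead.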

As it stands right now, the Web Audio API makes no claims about whether the
underlying implementation uses a block-based or per-sample approach.  From
a purist API perspective it really doesn't have to, because in the future
such performance limitations may become moot.  But until then, practically
speaking, we may have to spell out some limitations (e.g. a minimum delay
time when feedback is present).  This is what I would suggest.  There are
systems (such as Pure Data) which explicitly expose block size and allow
sub-graphs to run at smaller block sizes (down to one sample).  If we
wanted to get this fancy, we certainly could.  But because this is a much
more specialized use case, I don't consider it essential to get that fancy,
at least not right now.
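For contrast with the block-based sketch, here is what per-sample processing looks like: a one-sample feedback delay, i.e. the one-pole recursion y[n] = x[n] + g * y[n-1]. This is the case a pure block-based engine cannot express, and the kind of thing Pure Data's sub-block reblocking exists for. The function name and structure are illustrative, not drawn from any particular system.

```python
# Hypothetical sketch of per-sample processing: a feedback loop with a
# single-sample delay. Each output sample depends on the immediately
# preceding one, so samples must be computed one at a time.
def process_per_sample(input_signal, gain=0.5):
    """One-pole feedback: y[n] = x[n] + gain * y[n-1]."""
    out = []
    prev = 0.0
    for x in input_signal:
        y = x + gain * prev  # feedback through a one-sample delay
        out.append(y)
        prev = y
    return out
```

An impulse in yields the geometric tail 1, 0.5, 0.25, ... with no block-size floor on the delay; the cost is losing the SIMD-friendly inner loop that block processing enables.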

Thanks for bringing this up; it's a very interesting topic.

Cheers,
Chris



>
> Rob
> --
> “You have heard that it was said, ‘Love your neighbor and hate your
> enemy.’ But I tell you, love your enemies and pray for those who persecute
> you, that you may be children of your Father in heaven. ... If you love
> those who love you, what reward will you get? Are not even the tax
> collectors doing that? And if you greet only your own people, what are you
> doing more than others?" [Matthew 5:43-47]
>
>
Received on Wednesday, 16 May 2012 23:03:19 GMT
