W3C home > Mailing lists > Public > public-audio@w3.org > July to September 2012

Re: AudioPannerNode and Spatialization

From: Chris Rogers <crogers@google.com>
Date: Mon, 30 Jul 2012 13:05:20 -0700
Message-ID: <CA+EzO0=2kLbUQt5dx1m=Ustup5mWgtr+UffBVTbDBVtaP9UHzA@mail.gmail.com>
To: Samuel Goldszmidt <samuel.goldszmidt@ircam.fr>
Cc: public-audio@w3.org
On Thu, Jul 12, 2012 at 3:54 AM, Samuel Goldszmidt <
samuel.goldszmidt@ircam.fr> wrote:

>  Hello list,
>

Hi Samuel, it's good to see your comments.


>
> I get back to you to get some information about the AudioPannerNode (and
> more widely about spatialization).
> At Ircam, one of the research team works on spatialization, and I have
> been asked to help building an interface from HRTF files.
> For what we understood, the AudioPannerNode is
> - a panning effect
> - a distance related sound attenuation
> - a beam directivity
>
> 1. Panning effect
> The panning effect seems to use HRTF filters, and we have some audio
> sample libraries with those kinds of filters (based on the shape of the
> user):
>     a. is it a 'default' human body that is used for rendering in
> AudioPannerNode?
>
>

As it turns out, in WebKit we're actually using a composite HRTF (combined
from several of the IRCAM datasets) :)


>     b. how could we use our own HRTF impulse files ?
>

There is currently no way to do this.  However, it would be possible to add
a method to assign one or more AudioBuffer objects containing the impulse
responses for the various orientations.
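As a sketch of what such a custom-HRTF mechanism would need internally, the engine has to pick the measured impulse response closest to the source's current direction. Nothing below is a real Web Audio API; `hrtfSet`, `azimuth`, and `elevation` are illustrative names for a hypothetical set of user-supplied measurements tagged with their orientation:

```javascript
// Hypothetical sketch (not part of the Web Audio API): given HRTF
// measurements tagged with azimuth/elevation in degrees, return the one
// whose direction is nearest the requested source direction.
function nearestHrtf(hrtfSet, azimuth, elevation) {
  const rad = (d) => (d * Math.PI) / 180;
  // Great-circle angular distance between two directions, in radians.
  const angle = (a1, e1, a2, e2) =>
    Math.acos(
      Math.sin(rad(e1)) * Math.sin(rad(e2)) +
      Math.cos(rad(e1)) * Math.cos(rad(e2)) * Math.cos(rad(a1 - a2))
    );
  let best = null;
  let bestDist = Infinity;
  for (const h of hrtfSet) {
    const d = angle(h.azimuth, h.elevation, azimuth, elevation);
    if (d < bestDist) { bestDist = d; best = h; }
  }
  return best;
}
```

A real implementation would interpolate between neighboring measurements rather than snapping to the nearest one, but the lookup problem is the same.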


>
> 2. Distance attenuation
> For distance attenuation, in our model, the distance also affects the
> spectrum (closer sources typically get a low-frequency boost).
>    a. how is it implemented in the Web Audio API?
>

It implements a simple gain attenuation according to the OpenAL
specification.
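For reference, the default "inverse distance clamped" model from the OpenAL specification can be sketched as a plain function (parameter names mirror the AudioPannerNode attributes refDistance, maxDistance, and rolloffFactor):

```javascript
// OpenAL-style inverse distance clamped gain: the distance is clamped to
// [refDistance, maxDistance], then attenuated by the inverse-distance law.
function inverseDistanceGain(distance, refDistance, maxDistance, rolloffFactor) {
  const d = Math.min(Math.max(distance, refDistance), maxDistance);
  return refDistance / (refDistance + rolloffFactor * (d - refDistance));
}
```

So with refDistance = 1 and rolloffFactor = 1, a source at distance 2 is attenuated to half gain, and a source at or inside refDistance stays at unity.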


>    b. Is there a way to achieve this kind of rendering using
> AudioPannerNode ?
>

There currently is not.  However, I've been aware of the possible need for
this since the very beginning of the design.  One possibility is to define
a read-only .distance AudioParam attribute on AudioPannerNode.  That way,
the dynamically calculated distance value could control one or more biquad
filters, allowing quite a range of distance-controlled frequency effects.
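To make the idea concrete: the proposed .distance AudioParam does not exist today, but the kind of curve it could drive into a BiquadFilterNode's frequency is easy to sketch. The shape below (exponential decay of the lowpass cutoff with distance, loosely imitating high-frequency air absorption) and all its constants are illustrative assumptions, not part of any spec:

```javascript
// Hypothetical mapping from source distance to a lowpass cutoff in Hz,
// approximating the high-frequency loss of distant sources. The falloff
// constant and frequency bounds are arbitrary illustrative choices.
function distanceToCutoff(distance, minHz = 200, maxHz = 20000, falloff = 0.05) {
  // Exponential decay from maxHz toward minHz as distance grows.
  return minHz + (maxHz - minHz) * Math.exp(-falloff * Math.max(distance, 0));
}
```

In a browser, the output of such a mapping would be fed to `biquad.frequency` each frame (or, with the proposed AudioParam, wired up once and updated automatically).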


>
> 3. 'Beam' (or 'sound' may be a better word for that) directivity
> We would like to understand how it has been implemented: is it a first-
> or second-order lowpass filter?
> In our case (implemented in a software called 'the Spat'), the directive
> beam interacts with a room effect (through a ConvolverNode, for instance).
> Is there a way to achieve this as well?
>

The current implementation is simple and follows the OpenAL specification.
The cone effect (as defined in OpenAL) simply controls the gain of the
sound "beam".  But, as I just mentioned as a possibility for more complex
distance effects (by exposing a .distance param), we could also expose a
.cone AudioParam that could control arbitrary filters, gains, and "reverb"
dry/wet blends, for example.
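The OpenAL cone gain mentioned above is just a piecewise-linear ramp on the angle between the source orientation and the listener: full gain inside the inner cone, coneOuterGain outside the outer cone, and linear interpolation in between. A sketch (angles in degrees, matching the coneInnerAngle / coneOuterAngle / coneOuterGain attributes, which span the full cone width, hence the divisions by 2):

```javascript
// OpenAL-style cone gain: unity inside the inner cone, outerGain outside
// the outer cone, linear ramp on the angle between the two.
function coneGain(angle, innerAngle, outerAngle, outerGain) {
  const absAngle = Math.abs(angle);
  if (absAngle <= innerAngle / 2) return 1;
  if (absAngle >= outerAngle / 2) return outerGain;
  const t = (absAngle - innerAngle / 2) / (outerAngle / 2 - innerAngle / 2);
  return 1 + t * (outerGain - 1);
}
```

A hypothetical .cone AudioParam could expose this per-frame value (or the raw angle) so it could drive a filter or a convolver dry/wet mix instead of just a gain.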


>
> Thanks for all your answers. (We would like to test our spatialization
> effects (and models) through the Web Audio API, to create rich end-user
> experiences.)
>

Thanks for having a look :)  For many developers, I'm quite sure the basic
OpenAL behavior will be sufficient, but I think we can address many or all
of your more advanced applications through a couple of small, targeted
additions to AudioPannerNode.

Cheers,
Chris


>
> Regards,
>
>
>
>
>
> --
> Samuel Goldszmidt
> IRCAM - APM / CRI
> 01 44 78 14 78
>
Received on Monday, 30 July 2012 20:05:49 GMT
