
AudioPannerNode and Spatialization

From: Samuel Goldszmidt <samuel.goldszmidt@ircam.fr>
Date: Thu, 12 Jul 2012 12:54:21 +0200
Message-ID: <4FFEACDD.7040407@ircam.fr>
To: public-audio@w3.org
Hello list,

I am getting back to you for some information about the AudioPannerNode
(and, more broadly, about spatialization).
At IRCAM, one of the research teams works on spatialization, and I have
been asked to help build an interface based on HRTF files.
From what we understand, the AudioPannerNode combines (a minimal usage
sketch follows this list):
- a panning effect
- a distance-related sound attenuation
- a sound beam directivity
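
For context, here is roughly how we are driving the node so far (a
minimal TypeScript sketch against the current draft; property and
method names have changed between revisions, and Chrome still prefixes
the context as webkitAudioContext):

    // Wire a source through an AudioPannerNode.
    const context = new AudioContext();
    const source = context.createBufferSource();
    const panner = context.createPanner();

    source.connect(panner);
    panner.connect(context.destination);

    // 1. panning: source position relative to the listener
    panner.setPosition(1, 0, 0);

    // 2. distance attenuation parameters
    panner.refDistance = 1;
    panner.rolloffFactor = 1;

    // 3. directivity cone
    panner.setOrientation(0, 0, -1);  // beam axis
    panner.coneInnerAngle = 60;       // full gain inside this cone
    panner.coneOuterAngle = 120;      // attenuated down to coneOuterGain
    panner.coneOuterGain = 0.25;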

1. Panning effect
The panning effect seems to use HRTF filters, and we have some audio
sample libraries with this kind of filter (measured on the listener's
morphology):
     a. is a 'default' human body used for the rendering in
AudioPannerNode?
     b. how could we use our own HRTF impulse response files?
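
To make question (b) concrete: since the API does not seem to expose
its HRTF data set, the only workaround we can imagine is bypassing the
panner and convolving the source with our own left/right HRIRs, along
these lines (a rough sketch; the HRIR AudioBuffers would come from
decoding our own impulse files):

    // One convolver per ear, loaded with our own measured HRIRs.
    function binauralChain(context: AudioContext,
                           source: AudioNode,
                           leftHRIR: AudioBuffer,
                           rightHRIR: AudioBuffer): AudioNode {
      const left = context.createConvolver();
      const right = context.createConvolver();
      left.buffer = leftHRIR;
      right.buffer = rightHRIR;
      left.normalize = false;   // keep our measured levels
      right.normalize = false;

      // Recombine the two ear signals into one stereo stream.
      const merger = context.createChannelMerger(2);
      source.connect(left);
      source.connect(right);
      left.connect(merger, 0, 0);   // left ear  -> channel 0
      right.connect(merger, 0, 1);  // right ear -> channel 1
      return merger;
    }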

2. Distance attenuation
In our model, distance also affects the spectrum (closer sources
typically get a low-frequency boost):
    a. how is it implemented in the Web Audio API?
    b. is there a way to achieve this kind of rendering using
AudioPannerNode?
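
Regarding (a), if we read the draft correctly, the distance models are
pure gain curves (for the 'inverse' model, gain = refDistance /
(refDistance + rolloffFactor * (distance - refDistance))), with no
spectral component. For (b), the closest approximation we can see is a
per-source lowshelf BiquadFilterNode driven by the same distance (a
sketch; the boost law below is our own assumption, not anything in the
spec):

    // Approximate the proximity bass boost with a lowshelf filter.
    function makeProximityShelf(context: AudioContext): BiquadFilterNode {
      const shelf = context.createBiquadFilter();
      shelf.type = 'lowshelf';      // older drafts use numeric constants
      shelf.frequency.value = 250;  // shelf corner in Hz (our choice)
      return shelf;
    }

    function updateProximityShelf(shelf: BiquadFilterNode,
                                  distance: number): void {
      // 0 dB at or beyond 1 m, +6 dB at 0.5 m, clamped to +12 dB
      const gainDb = -20 * Math.log10(Math.max(distance, 0.1));
      shelf.gain.value = Math.min(Math.max(gainDb, 0), 12);
    }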

3. 'Beam' (or 'sound' may be a better word for it) directivity
We would like to understand how it has been implemented: is it a
first- or second-order lowpass filter?
In our case (implemented in a software called 'the Spat'), the
directivity beam interacts with a room effect (through a ConvolverNode,
for instance). Is there a way to achieve this as well?
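
To make this concrete, what we would like to reproduce is,
schematically, a direct path shaped by the panner's cone plus a
separate send into a room convolver (a sketch; roomIR stands for an
already-decoded room impulse response):

    function directPlusRoom(context: AudioContext,
                            source: AudioNode,
                            panner: PannerNode,
                            roomIR: AudioBuffer): void {
      const reverb = context.createConvolver();
      reverb.buffer = roomIR;

      const send = context.createGain(); // createGainNode() in older builds
      send.gain.value = 0.3;             // wet level (our choice)

      // Direct path: shaped by the cone and distance model.
      source.connect(panner);
      panner.connect(context.destination);

      // Room path: a plain send, not shaped by the cone; this is the
      // interaction we would like to be able to control.
      source.connect(send);
      send.connect(reverb);
      reverb.connect(context.destination);
    }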

Thanks for all your answers; we would like to test our spatialization
effects (and models) through the Web Audio API, to build rich end-user
experiences.

Regards,

-- 
Samuel Goldszmidt
IRCAM - APM / CRI
01 44 78 14 78
Received on Thursday, 12 July 2012 10:55:01 GMT
