Re: AudioPannerNode and Spatialization

Hello Samuel,

I think Chris Rogers would be the best person to answer your questions,
but from what I can tell the AudioPannerNode does not allow you to use
custom HRTF filters, nor to apply custom filtering effects (e.g. depending
on distance, angle, etc.). AFAICT, you would have to create your own
custom panner node for that purpose (either as a complex node graph or as
a JavaScript node). The alternative would be to extend the functionality
of the AudioPannerNode to make it more customizable.
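
For reference, here is roughly what the node-graph variant could look like
for a single, static source. This is only a sketch: hrirLeftBuffer,
hrirRightBuffer and source are placeholders (you would decode your own
HRIR measurements into AudioBuffers yourself), and I'm using the
webkit-prefixed context constructor that current builds expose:

  var context = new webkitAudioContext();

  // One convolver per ear, loaded with your own HRIR recordings.
  var convolverL = context.createConvolver();
  var convolverR = context.createConvolver();
  convolverL.normalize = false;  // keep the measured HRIR gains
  convolverR.normalize = false;
  convolverL.buffer = hrirLeftBuffer;
  convolverR.buffer = hrirRightBuffer;

  // Merge the two ear signals into one stereo stream.
  var merger = context.createChannelMerger(2);
  convolverL.connect(merger, 0, 0);  // left ear -> channel 0
  convolverR.connect(merger, 0, 1);  // right ear -> channel 1
  merger.connect(context.destination);

  // source is any mono node, e.g. an AudioBufferSourceNode.
  source.connect(convolverL);
  source.connect(convolverR);

To actually move the source you would have to swap (or cross-fade between)
HRIR pairs as the azimuth/elevation changes, and layer your own distance
and directivity filters on top, so the graph grows quickly. That is why
the JavaScript node route may end up being more attractive.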

As I have pointed out earlier on this list, I think it would be very
useful if we could do these kinds of custom effects in JavaScript without
losing performance. If we do the processing in workers (for low latency)
and provide a set of signal processing primitives as JavaScript built-in
functions (e.g. convolution and FFT), we should get performance that is on
par with (or at least very close to) the currently provided native audio
nodes. That way we wouldn't have to choose between offering Web developers
limited functionality and over-engineering the audio nodes to accommodate
every possible use case.
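
To make the performance argument concrete, this is roughly what the
per-ear HRIR convolution looks like if you do it in a JavaScript node
today. Again just a sketch: hrirL/hrirR are Float32Arrays holding your
impulse responses, and I'm assuming the 1024-sample block is at least as
long as the HRIRs:

  var N = hrirL.length;
  // The last N-1 input samples of the previous block (block overlap).
  var delay = new Float32Array(N - 1);

  var node = context.createJavaScriptNode(1024, 1, 2);
  node.onaudioprocess = function (e) {
    var x = e.inputBuffer.getChannelData(0);
    var outL = e.outputBuffer.getChannelData(0);
    var outR = e.outputBuffer.getChannelData(1);

    for (var n = 0; n < x.length; n++) {
      var accL = 0, accR = 0;
      for (var k = 0; k < N; k++) {
        // x[n - k], taken from the previous block when n - k < 0.
        var s = (n - k >= 0) ? x[n - k] : delay[delay.length + n - k];
        accL += hrirL[k] * s;
        accR += hrirR[k] * s;
      }
      outL[n] = accL;
      outR[n] = accR;
    }

    // Remember the tail of this block for the next callback.
    delay.set(x.subarray(x.length - delay.length));
  };

  source.connect(node);
  node.connect(context.destination);

Those inner loops run per sample, in script, on the main thread, which is
where both the latency and the performance go. With worker-based
processing and a native convolution (or FFT) primitive, the same script
structure could stay while the heavy lifting is done natively.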

/Marcus


On 2012-07-12 12:54:21, Samuel Goldszmidt
<samuel.goldszmidt@ircam.fr> wrote:

> Hello list,
>
> I'm getting back to you for some information about the AudioPannerNode (and
> more widely about spatialization).
> At Ircam, one of the research teams works on spatialization, and I have
> been asked to help build an interface from HRTF files.
> From what we understood, the AudioPannerNode provides:
> - a panning effect
> - distance-related sound attenuation
> - beam directivity
>
> 1. Panning effect
> The panning effect seems to use HRTF filters, and we have some audio
> sample libraries with those kinds of filters (based on the shape of the
> user):
>      a. is a 'default' human body used for rendering in the
> AudioPannerNode?
>      b. how could we use our own HRTF impulse response files?
>
> 2. Distance attenuation
> For distance attenuation, in our model the distance also affects the
> spectrum (closer sources typically get a low-frequency boost).
>     a. how is it implemented in the Web Audio API?
>     b. is there a way to achieve this kind of rendering using the
> AudioPannerNode?
>
> 3. 'Beam' (or 'sound' may be a better word for it) directivity
> We would like to understand how it has been implemented: is it a
> first- or second-order lowpass filter?
> In our case (implemented in a software called 'the spat'), the directive
> beam interacts with a room effect (through a ConvolverNode, for instance).
> Is there a way to achieve this as well?
>
> Thanks for all your answers. (We would like to test our spatialization
> effects (and models) through the Web Audio API, to create rich end-user
> experiences.)
>
> Regards,
>


-- 
Marcus Geelnard
Core Graphics Developer
Opera Software ASA

Received on Thursday, 12 July 2012 11:56:40 UTC