Re: About AudioPannerNode

On Tue, Jun 19, 2012 at 2:50 AM, Marcus Geelnard <mage@opera.com> wrote:

> Here's another subject... :)
>
> I've looked a bit at the AudioPannerNode/AudioListenerNode pair, and I
> think there are a few things that need to be discussed.
>
>
> First of all, the AudioPannerNode is a bit of an odd one out among the
> audio nodes, since it has an implicit dependency on the AudioListenerNode
> of the context to which the panner node belongs. If we wanted to decouple
> audio nodes from the audio context (e.g. to make it possible to connect
> nodes from different audio contexts, as has been suggested), the
> AudioPannerNode becomes a special case. Not sure how an alternate solution
> should be designed (discussions welcome), but one idea could be to decouple
> the AudioListenerNode from the AudioContext, and manually construct
> listener nodes and set the listener node for each panner node. This would
> also make it possible to have any number of listeners (e.g. for doing
> background sounds or other listener-aligned sound sources).
>

I think it's worth considering allowing AudioListeners to be constructed
and assigned to a .listener attribute of AudioPannerNode.  This attribute
would simply default to the AudioContext "default" listener.
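
A hypothetical sketch of what that might look like (a constructible
AudioListener and a .listener attribute on the panner are proposals here,
not part of the current spec):

    // All illustrative -- assumes the proposed constructible AudioListener
    // and a .listener attribute on AudioPannerNode.
    var context = new webkitAudioContext();
    var panner = context.createPanner();

    // By default the panner would keep using the context's listener:
    // panner.listener === context.listener

    // An independently constructed and positioned listener (proposed):
    var rearListener = new AudioListener();
    rearListener.setPosition(0, 0, -10);
    panner.listener = rearListener;

That would also cover the multiple-listener case you mention, since each
panner could point at whichever listener it needs.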


>
> Btw, the current design also makes it difficult/impossible to implement
> the AudioPannerNode using a JavaScriptAudioNode.
>
>
> Another thing is that I don't really see how a non-mono input signal would
> make sense for a panner node, at least not if we think of it as a 3D
> spatialization tool. For instance, in an HRTF model, I think an audio
> source should be in mono to make sense. Would it be a limitation if all
> inputs are down-mixed to mono?
>

Sorry, I need to add that to the spec.  I've just added details about how
to do mono->stereo and stereo->stereo equal-power panning.  But it's
possible to pan stereo sources with HRTF too.  Basically, the idea is to
process the left input channel with the left impulse-response and the
right input channel with the right impulse-response.
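
For reference, the mono->stereo equal-power case boils down to something
like the following sketch (illustrative only, not the normative spec text;
'pan' is assumed to be already normalized from the source/listener
azimuth):

    // Equal-power panning gains for a mono input (illustrative sketch).
    // pan is assumed normalized to [0, 1]: 0 = hard left, 1 = hard right.
    function equalPowerGains(pan) {
        var gainL = Math.cos(0.5 * Math.PI * pan);
        var gainR = Math.sin(0.5 * Math.PI * pan);
        return { left: gainL, right: gainR };  // gainL^2 + gainR^2 == 1
    }

    // Per sample:  outputL = input * left;  outputR = input * right;

The cos/sin pair is what keeps the total power constant as the source
moves across the stereo field.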

But if the input to the panner has more than two channels, then it will
have to be mixed down to stereo first.
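
As for that downmix, one plausible rule (purely an assumption on my part --
the actual mixing coefficients would need to be specified) is to fold the
surplus channels into both outputs at reduced gain:

    // Naive N-channel -> stereo downmix for one sample frame
    // (an assumption for illustration, not spec text).
    function downmixToStereo(frame) {
        var left = frame[0];
        var right = frame.length > 1 ? frame[1] : frame[0];
        for (var i = 2; i < frame.length; i++) {
            left  += Math.SQRT1_2 * frame[i];  // fold extras in at -3 dB
            right += Math.SQRT1_2 * frame[i];
        }
        return [left, right];
    }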



> On the other hand, in music applications you may want to do left-right
> panning of stereo signals. Should that be treated as a special case
> (configuration) of the AudioPannerNode, or would it be better to split the
> current interface into two (e.g. StereoPannerNode and SpatializationNode)?
>

Right now the AudioPannerNode, in both the equal-power and HRTF modes, will
automatically do the right thing for both mono and stereo sources without
any special configuration.  I think it's better to keep that part simple
for developers and just make sure the "right thing" happens.
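
So from the developer's side it's the same few lines either way (sketched
with the current WebKit prefix, and assuming myBuffer is an already-decoded
AudioBuffer):

    // Same code path whether myBuffer is mono or stereo; the panner
    // picks the appropriate processing internally.
    var context = new webkitAudioContext();
    var source = context.createBufferSource();
    source.buffer = myBuffer;

    var panner = context.createPanner();
    panner.setPosition(3, 0, -2);  // somewhere ahead and to the right

    source.connect(panner);
    panner.connect(context.destination);
    source.noteOn(0);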


>
>
> Lastly, how should complex spatialization models (thinking about HRTF
> here) be handled (should they even be supported)? I fear that a fair amount
> of spec'ing and testing must be done to support this, not to mention that
> HRTF in general relies on data files from real-world measurements (should
> these be shared among implementations or not?


I'm happy to share the measured HRTF files that we use in WebKit.  I'm not
sure whether they should be normative or not...


> should there be high/low-quality versions of the impulse responses? etc).
> Would it perhaps be a good idea to leave this for a later revision or
> another level of the spec?
>

I think an implementation could be allowed to pick a high- or low-quality
set of impulse responses depending on the hardware.  But I think this is
unlikely to be necessary.  In WebKit, we run on quite a variety of
hardware, using several different FFT implementations, and (as far as I
know) this has not been an issue.

Game developers are using the Web Audio API now, exploiting the
spatialization feature, so I think it needs to be included.  We have
reusable code we can share to get you bootstrapped.

Cheers,
Chris


>
>
> Regards,
>
>  Marcus
>
>

Received on Tuesday, 19 June 2012 17:40:22 UTC