Re: Web Audio API Feedback from Spaceport.io

From: Patrick Borgeat <patrick@borgeat.de>
Date: Tue, 27 Mar 2012 23:55:10 +0200
Cc: public-audio@w3.org, matt@spaceport.io, ben@sibblingz.com
Message-Id: <574822A8-485A-4505-BAC5-C8F9951F4800@borgeat.de>
To: Alistair MacDonald <al@signedon.com>

On 27.03.2012, at 23:18, Alistair MacDonald wrote:

> Spaceport's goals for audio on the web
> 

> 4.4. The AudioDestinationNode Interface
> 
> I assume there is only one implementation of AudioDestinationNode (a speaker), and it is created when you create a new AudioContext.  This does make sense to me, but I don't see why it was done this way.
> 
> ...
> 
> In other words, can I mix into something other than a speaker?  What if I just want to mix sound and then ship it off to the internet, or to disk, or to another buffer?

I totally agree.
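To make the request concrete, here is a plain-JS sketch (an illustration, not anything in the current spec) of what "mix to a buffer instead of a speaker" could mean: summing several source buffers, each with its own gain, into a freshly allocated output buffer that could then be written to disk or sent over the network.

```javascript
// Hypothetical helper, not part of the Web Audio API: mix any number of
// mono Float32Array buffers into a new buffer, applying a per-input gain.
function mixToBuffer(inputs, gains) {
  const length = Math.max(...inputs.map(b => b.length));
  const out = new Float32Array(length);
  inputs.forEach((buf, i) => {
    const g = gains[i];
    for (let n = 0; n < buf.length; n++) {
      out[n] += g * buf[n];  // accumulate the scaled sample
    }
  });
  return out;
}
```

An API-level equivalent would simply let an AudioContext render into such a buffer rather than forcing the graph to terminate at the hardware.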

> 4.5. The AudioParam Interface
> 
> It seems AudioParam is a prime candidate for JavaScript control.  It doesn't seem I can easily implement tremolo (or vibrato) using the current API.
> 
> In short - people will never be happy and will always demand more and more methods.  Linear and Exponential are great, but why not just let someone specify an arbitrary function that returns a value?  This allows a far better level of control.

I totally agree.
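As a sketch of the "arbitrary function" idea (my illustration, not the current API): instead of being limited to linear/exponential ramps, the caller could supply any function of time and have it sampled once per frame into a parameter curve. Tremolo then falls out for free.

```javascript
// Hypothetical: sample a user-supplied function of time (seconds) into a
// per-frame parameter curve, the way an AudioParam could be driven.
function sampleParamCurve(fn, sampleRate, numFrames) {
  const curve = new Float32Array(numFrames);
  for (let n = 0; n < numFrames; n++) {
    curve[n] = fn(n / sampleRate);  // evaluate the arbitrary function
  }
  return curve;
}

// 5 Hz tremolo with depth 0.5: gain oscillates between 0.5 and 1.0.
const tremolo = t => 1 - 0.25 * (1 - Math.cos(2 * Math.PI * 5 * t));
```

Applying `sampleParamCurve(tremolo, context.sampleRate, frames)` to a gain parameter would give tremolo without any new named ramp methods.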

> 4.12. The JavaScriptAudioNode Interface
> 
> The text:
> 
>     numberOfInputs  : 1
>     numberOfOutputs : 1
> 
> is confusing, especially because a few paragraphs later the spec says the number of inputs and outputs can be variable:
> 
> numberOfInputChannels and numberOfOutputChannels determine the number of input and output channels. It is invalid for both numberOfInputChannels and numberOfOutputChannels to be zero.

I totally agree.
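The likely source of the confusion is that one *input* can carry several *channels*. A plain-JS sketch of what a JavaScriptAudioNode-style callback does may help (illustrative only; the real node passes event objects, not bare arrays): a single input arrives as an array of per-channel Float32Arrays, and a single output is returned the same way.

```javascript
// Illustrative stereo-swap processor: numberOfInputs = numberOfOutputs = 1,
// but numberOfInputChannels = numberOfOutputChannels = 2. The one input is
// an array of per-channel Float32Arrays.
function processStereoSwap(inputChannels) {
  const [left, right] = inputChannels;
  // Output channel 0 gets the old right channel, channel 1 the old left.
  return [Float32Array.from(right), Float32Array.from(left)];
}
```

So the fixed "1 input / 1 output" and the variable channel counts describe different axes of the node.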
> 4.15. The AudioListener Interface
> 
> Why is AudioListener not an AudioNode?  It seems odd to special-case this type.

I thought about AudioListener too and found it quite odd that it's part of the AudioContext rather than bound to AudioPanner nodes. I can imagine use cases with multiple AudioPanner nodes, each with its own AudioListener, especially if multiple DestinationNodes were allowed.


I also want to add that limiting AudioChannelSplitter and AudioChannelMerger to 6 inputs/outputs seems artificial to me: in sonic-art installations and media art, 8 or more speakers are quite common. I don't think this needs an upper bound.

(Another use case for more than 6 channels: a museum builds an info terminal and provides multiple headphones, with several speakers sounding simultaneously in different languages. This would be quite easy to implement with a modern, Web Audio capable browser and an 8-channel audio interface.)
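For illustration (an assumed helper, not spec text), the splitter's job generalises to any channel count with no natural ceiling: deinterleaving an N-channel frame stream into per-channel buffers works identically for 2, 8, or 32 channels.

```javascript
// Hypothetical sketch of what an AudioChannelSplitter does, for an
// arbitrary channel count: split interleaved frames into one
// Float32Array per channel.
function deinterleave(interleaved, numChannels) {
  const frames = interleaved.length / numChannels;
  const channels = [];
  for (let c = 0; c < numChannels; c++) {
    const buf = new Float32Array(frames);
    for (let n = 0; n < frames; n++) {
      buf[n] = interleaved[n * numChannels + c];
    }
    channels.push(buf);
  }
  return channels;
}
```

Nothing in this operation depends on N being at most 6.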

Patrick
Received on Tuesday, 27 March 2012 21:55:42 GMT
