
Revisiting AudioParam constructors - proposal

From: Ray Bellis <ray@bellis.me.uk>
Date: Sun, 25 Nov 2012 09:40:24 +0000
Message-ID: <50B1E788.1030807@bellis.me.uk>
To: public-audio@w3.org
I know there's been previous discussion on the desire to somehow expose 
the functionality of AudioParams to ScriptProcessorNodes, and bug #17388 
(https://www.w3.org/Bugs/Public/show_bug.cgi?id=17388) proposed the 
ability to construct an AudioParam directly.

It occurred to me last night that we probably don't really need the 
ability to directly instantiate a standalone AudioParam; we just need to 
allow a ScriptProcessorNode to own them, hence:

     partial interface AudioContext {
         ScriptProcessorNode createScriptProcessor(unsigned long bufferSize,
             optional unsigned long numberOfInputChannels = 2,
             optional unsigned long numberOfOutputChannels = 2,
             optional unsigned long numberOfAudioParams = 0);        // new
     };

As far as I can see, the AudioParam minValue, maxValue and units fields 
are all informational, so I've ignored them.

The ScriptProcessorNode does of course need to expose these 
AudioParams so they can receive connections:

     partial interface ScriptProcessorNode {
         readonly attribute AudioParam param[];
     };

There's no need for direct name support - the developer can add a 
read-only named attribute that references the appropriate index into the 
param[] array.
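
That aliasing could look something like the sketch below. The node here is a 
plain mock object standing in for a ScriptProcessorNode with the proposed 
param[] array, and the names "frequency" and "q" are just ones a developer 
might choose:

```javascript
// Sketch: expose param[index] under a developer-chosen name as a
// read-only named attribute. "node" is a mock standing in for a
// ScriptProcessorNode carrying the proposed param[] array.
function addNamedParam(node, name, index) {
  Object.defineProperty(node, name, {
    get: () => node.param[index],  // read-only alias onto param[]
    enumerable: true,
  });
}

// Mock node with two AudioParam-like objects.
const node = { param: [{ value: 440 }, { value: 0.5 }] };
addNamedParam(node, 'frequency', 0);
addNamedParam(node, 'q', 1);

console.log(node.frequency.value); // → 440
```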

The hard part is allowing the "onaudioprocess" event to obtain access to 
the AudioParams' values as they change over time.

I see two possibilities other than the "getParamValues()" proposed in 
the bug entry, but don't know which (if any) is feasible within current 
implementations.

The first would be to augment AudioParam with a function that can 
calculate the computedValue for any future time "t".

The second is to have the automation curves pre-calculated and passed as 
part of the AudioProcessingEvent interface just as inputBuffer data is now:

     partial interface AudioProcessingEvent {
         readonly attribute AudioBuffer paramBuffer[];
     };

Personally, I think the latter makes more sense - if WebAudio can buffer 
a series of input samples then it surely ought to be equally able to 
buffer a series of AudioParam values for each time window.
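
To illustrate how an onaudioprocess handler might consume such a 
pre-computed buffer, here is a sketch using plain Float32Arrays in place of 
the AudioBuffer channel data (paramBuffer itself is the proposed, not 
existing, attribute):

```javascript
// Sketch: inside onaudioprocess, a pre-computed per-block automation
// curve could be applied sample-by-sample, just like inputBuffer data.
// Plain Float32Arrays stand in for AudioBuffer channel data here.
function applyGainCurve(input, gainCurve, output) {
  for (let i = 0; i < input.length; i++) {
    output[i] = input[i] * gainCurve[i];  // per-sample automation value
  }
}

const input = new Float32Array([1, 1, 1, 1]);
const gain = new Float32Array([0, 0.25, 0.5, 1]);  // "paramBuffer" slice
const output = new Float32Array(input.length);
applyGainCurve(input, gain, output);
console.log(Array.from(output)); // → [0, 0.25, 0.5, 1]
```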

What do you folks think?

Received on Sunday, 25 November 2012 09:40:50 UTC
