
Re: Dropping AudioBuffer in AudioProcessingEvent?

From: Chris Wilson <cwilso@google.com>
Date: Wed, 27 Aug 2014 11:01:26 -0700
Message-ID: <CAJK2wqXhzAZSLhwEi=iG3b6m_fpxDny6jWjUoSn77_v0x=TqcA@mail.gmail.com>
To: Marcus Geelnard <mage@opera.com>
Cc: Joseph Berkovitz <joe@noteflight.com>, "public-audio@w3.org" <public-audio@w3.org>
Actually, I thought of one negative - if we're doing this, we should make a
SEPARATE AudioWorkerProcessingEvent.  We would not want to enforce this
change on the (hopefully deprecated) AudioProcessingEvent that's dispatched
for the current ScriptProcessorNode, as it would break all usage of
ScriptProcessorNode.
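Such a separate worker event might look something like this (a sketch only; the interface name and members simply mirror the Float32Array-based shape proposed further down the thread, and are not from any spec):

```
interface AudioWorkerProcessingEvent : Event {
    readonly    attribute double           playbackTime;
    readonly    attribute Float32Array[]   inputChannelBuffers;
    readonly    attribute Float32Array[]   outputChannelBuffers;
    readonly    attribute object           parameters;
};
```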


On Wed, Aug 27, 2014 at 9:34 AM, Marcus Geelnard <mage@opera.com> wrote:

> Yes, it's not really a big change, and probably does not do much for
> performance at the moment.
>
> My reasoning comes from two observations:
>
> 1) All attributes are redundant (as pointed out by Chris), and I'm
> personally not a big fan of API overlap (it tends to confuse
> developers and lead to diverging coding styles).
>
> 2) I have always thought of the AudioBuffer as an abstract interface
> to data that is primarily owned and managed by the audio engine, and
> should typically be used more like a "handle" than for actually
> inspecting/modifying the data (typical example usage: load & decode
> sound file -> AudioBuffer, then tell audio engine to play
> AudioBuffer). The AudioProcessingEvent is the most notable exception
> from this model, where the AudioBuffer is used exclusively for
> inspecting and modifying the raw data. As we've discussed earlier (too
> long ago for me to remember), if at some point we want to add support
> for non-Float32 internal formats (e.g. for saving memory on the
> client), it might be beneficial to minimize JS-side data access to
> AudioBuffers (since each getChannelData() call would potentially
> require a format conversion operation).
>
> In short: it seems a bit odd to use AudioBuffers in the
> AudioProcessingEvent, and it might even be a bad thing (TM) for the
> API in the future.
>
> Anyway, just my two cents.
>
> /Marcus
>
> 2014-08-27 18:13 GMT+02:00 Chris Wilson <cwilso@google.com>:
> > I'm not sure there's a significant win from doing so, although I guess I'm
> > okay with it.  None of the descriptive info in the AudioBuffer interface is
> > needed: the .sampleRate is redundant (same as the GlobalScope sampleRate),
> > as is the .length (since it's the length of each of the Float32Arrays) and
> > the .numberOfChannels (the length of the channel buffer arrays); duration
> > is just a nice-to-have (i.e. calculable from the length and the
> > sampleRate).  But you'd still need to have a couple of sequences of
> > Float32Arrays, so you're not really simplifying or enabling fewer objects,
> > just getting rid of the use of AudioBuffer and dropping a few parameters
> > that don't change.
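The redundancy described above can be illustrated in a few lines of plain JavaScript (no Web Audio runtime needed; the sizes here are illustrative):

```javascript
// Every descriptive AudioBuffer attribute can be derived from the raw
// channel data plus the context sample rate.
const sampleRate = 44100; // same value the GlobalScope sampleRate would hold
const channelBuffers = [new Float32Array(128), new Float32Array(128)];

const length = channelBuffers[0].length;        // stands in for AudioBuffer.length
const numberOfChannels = channelBuffers.length; // stands in for AudioBuffer.numberOfChannels
const duration = length / sampleRate;           // the "nice-to-have" AudioBuffer.duration

console.log(length, numberOfChannels, duration);
```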
> >
> > Current interface:
> >
> > interface AudioProcessingEvent : Event {
> >     readonly    attribute double      playbackTime;
> >     readonly    attribute AudioBuffer inputBuffer;
> >     readonly    attribute AudioBuffer outputBuffer;
> >     readonly    attribute object      parameters;
> > };
> >
> >
> > Interface with arrays of Float32Arrays:
> >
> > interface AudioProcessingEvent : Event {
> >     readonly    attribute double      playbackTime;
> >     readonly    attribute Float32Array[] inputChannelBuffers;
> >     readonly    attribute Float32Array[] outputChannelBuffers;
> >     readonly    attribute object      parameters;
> > };
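A handler written against that proposed shape might look like this (a sketch with a mocked event object; the member names follow the IDL above, and no Web Audio runtime is assumed):

```javascript
// Hypothetical processing callback for the arrays-of-Float32Arrays shape:
// iterate the channels, read input samples, write output samples.
function process(e) {
  for (let ch = 0; ch < e.inputChannelBuffers.length; ch++) {
    const input = e.inputChannelBuffers[ch];
    const output = e.outputChannelBuffers[ch];
    for (let i = 0; i < input.length; i++) {
      output[i] = input[i] * 0.5; // simple fixed gain, for illustration
    }
  }
}

// Mocked event: one channel, four samples.
const event = {
  playbackTime: 0,
  inputChannelBuffers: [Float32Array.from([0.2, 0.4, 0.6, 0.8])],
  outputChannelBuffers: [new Float32Array(4)],
};
process(event);
console.log(Array.from(event.outputChannelBuffers[0]));
```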
> >
> > Thoughts from others?
> >
> >
> > On Wed, Aug 27, 2014 at 6:54 AM, Joseph Berkovitz <joe@noteflight.com>
> > wrote:
> >>
> >> Marcus,
> >>
> >> I would presently favor keeping AudioBuffer since it includes useful
> >> descriptive info and since people have already written code to handle
> >> it.  Even if the descriptive info is redundant (and I’m not sure it is),
> >> we may want to add other attributes in the future, and an array of
> >> arrays will not afford that opportunity for extension.
> >>
> >> It seems to me that getChannelData() simply becomes a trivial operation
> >> and that we remove the verbiage about “acquiring contents of an
> >> AudioBuffer” from the spec. The copy() methods can be deprecated but do
> >> not need to be broken.
> >>
> >> I might have missed some point of discussion in the past, please correct
> >> me if I have.
> >>
> >> …Joe
> >>
> >>
> >> On Aug 27, 2014, at 2:49 AM, Marcus Geelnard <mage@opera.com> wrote:
> >>
> >> Hi!
> >>
> >> First off, the interface looks good to me. The wording related to
> >> ScriptProcessorNode and AudioProcessingEvent may need to be updated,
> >> though (e.g. in 2.15, AudioWorkerNodes should be mentioned).
> >>
> >> Now to my question: Is now a good time to replace the AudioBuffer
> >> attributes in AudioProcessingEvent (inputBuffer, outputBuffer) with
> >> arrays of Float32Arrays? Or do we need to keep the AudioBuffer
> >> interface for some reason?
> >>
> >> /Marcus
> >>
> >>
> >> On 2014-08-25 17:29, Chris Wilson wrote:
> >>
> >> I've done some tweaking to the Audio Worker (issue #113) proposal, and
> >> most significantly added the ability to create AudioParams on Audio
> >> Workers (issue #134).
> >>
> >> The proposal is hosted on my fork (http://cwilso.github.io/web-audio-api/).
> >> Start here to review the creation method; the bulk of the text begins
> >> at http://cwilso.github.io/web-audio-api/#the-audio-worker.
> >>
> >>
> >>
> >> --
> >> Marcus Geelnard
> >> Opera Software
> >>
> >>
> >>
> >>
> >> .            .       .    .  . ...Joe
> >>
> >> Joe Berkovitz
> >> President
> >>
> >> Noteflight LLC
> >> Boston, Mass.
> >> phone: +1 978 314 6271
> >> www.noteflight.com
> >> "Your music, everywhere"
> >>
> >
>
Received on Wednesday, 27 August 2014 18:01:54 UTC
