Re: Decoding audio w/o an AudioContext object

From: Chris Wilson <cwilso@google.com>
Date: Fri, 17 Aug 2012 11:24:17 -0700
Message-ID: <CAJK2wqXE1CFeecwAJq6ht5fKmJDghQ+JssQXz-q-QpaCJFmJ-A@mail.gmail.com>
To: Chris Rogers <crogers@google.com>
Cc: Marcus Geelnard <mage@opera.com>, public-audio@w3.org
On Fri, Aug 17, 2012 at 10:43 AM, Chris Rogers <crogers@google.com> wrote:

> On Thu, Aug 16, 2012 at 11:29 PM, Marcus Geelnard <mage@opera.com> wrote:
>
>> An alternative could be to pass an optional resampleTo argument to
>> decodeAudioData() and createBuffer(), just as with mixToMono, to let the
>> developer decide which sounds to optimize for 1:1 playback.
>
>
> Yes, this could be possible as an optional argument.
>

That was my reaction - that we should add control over the resampling as an
optional argument - but resampling should still be the default (i.e. it
should be a "dontResampleToContextRate" flag [and no, that was not a
proposed name]).  Otherwise, we're just going to pay a performance cost in
a large number of scenarios.  You're not usually going to care about
getting the precise audio bits as much as you are going to care about
getting good sound quality and low performance impact.
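To make the shape of that concrete, here is a purely hypothetical sketch: neither option name below exists in the spec, and decodeAudioData() takes no options argument today; the names just illustrate the "resample by default, opt out explicitly" idea.

```javascript
// Hypothetical option names - illustrative only, not in the spec.
const DECODE_DEFAULTS = {
  resampleToContextRate: true, // default: resample at decode time (the fast path)
  mixToMono: false,
};

function decodeOptions(overrides) {
  // Callers who need the original sample data untouched would pass
  // { resampleToContextRate: false } and accept the playback-time cost.
  return Object.assign({}, DECODE_DEFAULTS, overrides);
}

// Hypothetical usage (again, the 2012 spec does NOT accept this argument):
// ctx.decodeAudioData(bytes, onOk, onErr,
//                     decodeOptions({ resampleToContextRate: false }));
```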

>> By the way, what is the use case for mixToMono, and why is it not
>> available as an argument to decodeAudioData()?
>
>
> Yes, I know, the synchronous method is older and not consistent.  We might
> even consider removing it from the spec since async is better.
>

Heh.  You know, given that the two methods were named so differently, I
didn't even realize the synchronous equivalent to decodeAudioData() was
still in the spec.

+1 to removing the synchronous method (that is, removing the
createBuffer(ArrayBuffer buffer, boolean mixToMono) call.)
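For reference, a minimal sketch of the asynchronous path that would remain after that removal; in a real page `ctx` would be an AudioContext, but here it's stubbed so the call shape is visible end to end (the stub's behavior is illustrative, not the spec's).

```javascript
// Sketch: the async replacement for the synchronous
// createBuffer(ArrayBuffer buffer, boolean mixToMono) overload.
function decodeAsync(ctx, encodedBytes, onDecoded) {
  // decodeAudioData decodes asynchronously and (per the discussion above)
  // resamples to the context sample rate before invoking the callback.
  ctx.decodeAudioData(encodedBytes, onDecoded, function (e) {
    console.error("decode failed", e);
  });
}

// Minimal stub standing in for a browser AudioContext:
const stubCtx = {
  sampleRate: 44100,
  decodeAudioData(bytes, ok, err) {
    // Pretend-decode: hand back a buffer-like object at the context rate.
    ok({ length: bytes.byteLength, sampleRate: this.sampleRate });
  },
};

decodeAsync(stubCtx, new ArrayBuffer(8), function (buf) {
  console.log(buf.sampleRate); // 44100 with the stub above
});
```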

-Chris(2)
Received on Friday, 17 August 2012 18:24:58 GMT
