Re: Decoding audio w/o an AudioContext object

On Fri, Aug 17, 2012 at 10:43 AM, Chris Rogers <crogers@google.com> wrote:

> On Thu, Aug 16, 2012 at 11:29 PM, Marcus Geelnard <mage@opera.com> wrote:
>
>> An alternative could be to pass an optional resampleTo argument to
>> decodeAudioData() and createBuffer(), just as with mixToMono, to let the
>> developer decide which sounds to optimize for 1:1 playback.
>
>
> Yes, this could be possible as an optional argument.
>

That was my reaction - that we should add control over the resampling as an
optional argument - but resampling should still be the default (i.e. it
should be a "dontResampleToContextRate" flag [and no, that was not a
proposed name]).  Otherwise, we're just going to pay a performance cost in a
large number of scenarios.  You're usually going to care less about getting
the precise audio bits than about getting good sound quality with a low
performance impact.
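
Just to make that concrete, here's a quick sketch of what the async call
could look like with such an option.  To be clear, the extra options
argument and the "resampleToContextRate" name below are made up purely for
illustration - nothing here is proposed or in the draft:

    var context = new webkitAudioContext();

    function decodeWithoutResampling(encodedData) {
      // encodedData: an ArrayBuffer holding a compressed audio file.
      // The hypothetical fourth argument opts out of resampling to the
      // context rate; omitting it would keep today's behavior.
      context.decodeAudioData(encodedData, function (decoded) {
        // With the made-up flag set to false, decoded.sampleRate would be
        // the file's native rate rather than context.sampleRate.
        var source = context.createBufferSource();
        source.buffer = decoded;
        source.connect(context.destination);
        source.noteOn(0);
      }, function () {
        console.log("decode failed");
      }, { resampleToContextRate: false });
    }

The default (no option passed) would keep resampling to context.sampleRate
at decode time, so existing content wouldn't change behavior.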

>> By the way, what is the use case for mixToMono, and why is it not available
>> as an argument to decodeAudioData()?
>
>
> Yes, I know, the synchronous method is older and not consistent.  We might
> even consider removing it from the spec since async is better.
>

Heh.  You know, given that the two methods were named so differently, I
didn't even realize the synchronous equivalent to decodeAudioData() was
still in the spec.

+1 to removing the synchronous method (that is, removing the
createBuffer(ArrayBuffer buffer, boolean mixToMono) overload).
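
For anyone else who'd lost track of it, the two decode paths currently in
the draft look roughly like this (from memory, error handling mostly
elided):

    var context = new webkitAudioContext();
    // encodedData: an ArrayBuffer containing a compressed audio file,
    // e.g. fetched via XHR with responseType = "arraybuffer".

    // Synchronous form (the one I'm +1 on removing): decodes on the
    // calling thread and takes mixToMono directly.
    var syncBuffer = context.createBuffer(encodedData, /* mixToMono */ false);

    // Asynchronous form: decodes asynchronously and delivers the result
    // to a callback; no mixToMono equivalent today.
    context.decodeAudioData(encodedData,
        function (asyncBuffer) { /* use asyncBuffer */ },
        function () { console.log("decode failed"); });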

-Chris(2)

Received on Friday, 17 August 2012 18:24:58 UTC