W3C home > Mailing lists > Public > public-audio@w3.org > July to September 2012

Re: Decoding audio w/o an AudioContext object

From: Chris Rogers <crogers@google.com>
Date: Thu, 16 Aug 2012 10:27:05 -0700
Message-ID: <CA+EzO0=3cO3E1jxFPiNzm+-2RDM91AMG+vsp9xP3YhgOEO7urA@mail.gmail.com>
To: Marcus Geelnard <mage@opera.com>
Cc: public-audio@w3.org
On Wed, Aug 15, 2012 at 11:22 PM, Marcus Geelnard <mage@opera.com> wrote:

> Hi!
>
> AudioContext provides two methods for decoding audio (both the synchronous
> createBuffer and the asynchronous decodeAudioData), and people will quite
> likely want to use these methods for decoding audio files without actually
> wanting to play them using an AudioContext.
>
> Is there anything preventing us from allowing users to do things like:
>
> function decoded(data) {
>   // Do stuff
> }
>
> AudioContext.decodeAudioData(rawData, decoded);
>
>
> Also, both versions of the createBuffer method could be treated similarly.
>
> Any opinions?
>

Hi Marcus, one reason that the methods are based on the AudioContext is
that the audio data needs to be decoded *and* sample-rate converted to
the correct sample-rate for the AudioContext.  So, for example, on most Mac
OS X machines this would usually be 44.1 kHz, but most Windows machines
would be 48 kHz.  But depending on settings these values could be different.
The methods will *do the right thing* and sample-rate convert
appropriately.
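
To illustrate the point, here is a minimal sketch of the kind of conversion
step involved, using simple linear interpolation.  This is only an
illustration of what "sample-rate convert" means here, not the actual
implementation -- real decoders use higher-quality resampling filters:

```javascript
// Resample a mono Float32Array from one sample-rate to another using
// linear interpolation (illustrative only; real implementations use
// higher-quality filters).
function resample(samples, fromRate, toRate) {
  const outLength = Math.round(samples.length * toRate / fromRate);
  const out = new Float32Array(outLength);
  for (let i = 0; i < outLength; i++) {
    const srcPos = i * fromRate / toRate;      // fractional source index
    const i0 = Math.floor(srcPos);
    const i1 = Math.min(i0 + 1, samples.length - 1);
    const frac = srcPos - i0;
    out[i] = samples[i0] * (1 - frac) + samples[i1] * frac;
  }
  return out;
}

// One second of 44.1 kHz audio becomes one second of 48 kHz audio:
const src = new Float32Array(44100);
const dst = resample(src, 44100, 48000);
console.log(dst.length); // 48000
```

Decoding without a context would mean either picking some arbitrary target
rate or returning data at the file's native rate, which the caller would
then have to convert anyway.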

Chris
Received on Thursday, 16 August 2012 17:27:32 GMT

This archive was generated by hypermail 2.2.0+W3C-0.50 : Thursday, 16 August 2012 17:27:33 GMT