- From: Jussi Kalliokoski <jussi.kalliokoski@gmail.com>
- Date: Thu, 16 Aug 2012 20:54:08 +0300
- To: Chris Rogers <crogers@google.com>
- Cc: Marcus Geelnard <mage@opera.com>, public-audio@w3.org
Received on Thursday, 16 August 2012 17:54:35 UTC
On Thu, Aug 16, 2012 at 8:27 PM, Chris Rogers <crogers@google.com> wrote:

> On Wed, Aug 15, 2012 at 11:22 PM, Marcus Geelnard <mage@opera.com> wrote:
>
>> Hi!
>>
>> AudioContext provides two methods for decoding audio (both the
>> synchronous createBuffer and the asynchronous decodeAudioData), and
>> people will quite likely want to use these methods for decoding audio
>> files without actually wanting to play them through an AudioContext.
>>
>> Is there anything preventing us from allowing users to do things like:
>>
>> function decoded(data) {
>>   // Do stuff
>> }
>>
>> AudioContext.decodeAudioData(rawData, decoded);
>>
>> Also, both versions of the createBuffer method could be treated
>> similarly.
>>
>> Any opinions?
>
> Hi Marcus, one reason that the methods are based on the AudioContext is
> that the audio data needs to be decoded *and* sample-rate converted to
> the correct sample rate for the AudioContext. So, for example, on most
> Mac OS X machines this would usually be 44.1 kHz, while most Windows
> machines would be 48 kHz, and depending on settings these values could
> differ. The methods will *do the right thing* and sample-rate convert
> appropriately.

Maybe add an argument for the sample rate, so this:

ctx.decodeAudioData(rawData, ...)

would become:

AudioContext.decodeAudioData(rawData, ctx.sampleRate, ...)

Cheers,
Jussi
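
A minimal sketch of how the suggested context-free call might look in use, purely illustrative and assuming the extra sample-rate argument proposed above (neither the static form nor that argument is in the current draft):

// Hypothetical usage: decode and resample to 48 kHz without ever
// constructing an AudioContext for playback.
function decoded(buffer) {
  // buffer would be an AudioBuffer at the requested sample rate
  console.log(buffer.sampleRate, buffer.length, buffer.numberOfChannels);
}

function failed(error) {
  console.error("Decoding failed:", error);
}

AudioContext.decodeAudioData(rawData, 48000, decoded, failed);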