- From: Chris Rogers <crogers@google.com>
- Date: Thu, 16 Aug 2012 10:27:05 -0700
- To: Marcus Geelnard <mage@opera.com>
- Cc: public-audio@w3.org
On Wed, Aug 15, 2012 at 11:22 PM, Marcus Geelnard <mage@opera.com> wrote:

> Hi!
>
> AudioContext provides two methods for decoding audio (both the synchronous
> createBuffer and the asynchronous decodeAudioData), and people will quite
> likely want to use these methods for decoding audio files without actually
> wanting to play them using an AudioContext.
>
> Is there anything preventing us from allowing users to do things like:
>
> function decoded(data) {
>   // Do stuff
> }
>
> AudioContext.decodeAudioData(rawData, decoded);
>
> Also, both versions of the createBuffer method could be treated similarly.
>
> Any opinions?

Hi Marcus, one reason the methods are based on the AudioContext is that the audio data needs to be decoded *and* sample-rate converted to the correct sample rate for the AudioContext. So, for example, on most Mac OS X machines this would usually be 44.1 kHz, but most Windows machines would be 48 kHz. Depending on settings, these values could be different. The methods will *do the right thing* and sample-rate convert appropriately.

Chris
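[A minimal sketch, not part of the original exchange, of what Chris describes: decoding through an AudioContext returns an AudioBuffer already resampled to the context's sample rate, whether or not it is ever played. The URL and variable names are illustrative only.]

```js
var ctx = new AudioContext();

// Fetch an encoded audio file as an ArrayBuffer (url is assumed).
var request = new XMLHttpRequest();
request.open('GET', url, true);
request.responseType = 'arraybuffer';

request.onload = function () {
  ctx.decodeAudioData(request.response, function (buffer) {
    // The decoded AudioBuffer has already been sample-rate converted
    // to the context's rate, so these two values match.
    console.log(buffer.sampleRate === ctx.sampleRate); // true

    // The buffer can be inspected or processed without ever being played.
    var samples = buffer.getChannelData(0);
    console.log('decoded ' + samples.length + ' samples per channel');
  }, function () {
    console.error('decodeAudioData failed');
  });
};

request.send();
```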
Received on Thursday, 16 August 2012 17:27:32 UTC