
Re: Decoding audio w/o an AudioContext object

From: Marcus Geelnard <mage@opera.com>
Date: Fri, 17 Aug 2012 06:29:47 +0000
Message-ID: <20120817062947.hclirgvlrx8g0g8g@staff.opera.com>
To: Chris Rogers <crogers@google.com>
Cc: public-audio@w3.org
Citerar Chris Rogers <crogers@google.com>:

> On Wed, Aug 15, 2012 at 11:22 PM, Marcus Geelnard <mage@opera.com> wrote:
>
>> Hi!
>>
>> AudioContext provides two methods for decoding audio (both the synchronous
>> createBuffer and the asynchronous decodeAudioData), and people will quite
>> likely want to use these methods for decoding audio files without actually
>> wanting to play them using an AudioContext.
>>
>> Is there anything preventing us from allowing users to do things like:
>>
>> function decoded(data) {
>>   // Do stuff
>> }
>>
>> AudioContext.decodeAudioData(rawData, decoded);
>>
>>
>> Also, both versions of the createBuffer method could be treated similarly.
>>
>> Any opinions?
>>
>
> Hi Marcus, one reason that the methods are based on the AudioContext is
> because the audio data needs to be decoded *and* sample-rate converted to
> the correct sample-rate for the AudioContext.

Why do you need to do that? You can just as well compensate for it in  
the AudioBufferSourceNode: playbackRate' = playbackRate *  
buffer.sampleRate / ctx.sampleRate.
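
The compensation above is just one multiplication. As a sketch (the  
helper name is mine, not anything from the spec):

```javascript
// Compute the playbackRate needed so that a buffer decoded at its
// native sample rate plays at the desired speed through a context
// running at a different rate: playbackRate' =
// playbackRate * buffer.sampleRate / ctx.sampleRate.
function compensatedPlaybackRate(desiredRate, bufferSampleRate, contextSampleRate) {
  return desiredRate * bufferSampleRate / contextSampleRate;
}

// A 44.1 kHz buffer played at nominal speed through a 48 kHz context:
// compensatedPlaybackRate(1.0, 44100, 48000) === 0.91875
```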

I think it's kind of counter-intuitive that an audio resource gets  
decoded differently on different machines, since the decoded data is  
exposed to the script (this could lead to false assumptions about  
decoded data lengths etc).
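
To illustrate the length concern: a one-second resource resampled at  
decode time exposes a different number of frames depending on the  
machine's context rate (just the arithmetic, not spec behavior):

```javascript
// Frame count of a resource of the given duration after decoding and
// resampling to the context's sample rate. The same file yields
// different lengths on machines with different context rates.
function decodedLength(durationSeconds, contextSampleRate) {
  return Math.round(durationSeconds * contextSampleRate);
}

decodedLength(1, 44100); // 44100 frames on a 44.1 kHz context
decodedLength(1, 48000); // 48000 frames on a 48 kHz context
```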

If it's for performance reasons (which typically makes sense for  
doppler-free, single-pitch sound effects in games), it's kind of a  
guess, since you can't know beforehand whether sounds will be played  
at playbackRate = 1 or used as musical instruments, for instance.

An alternative could be to pass an optional resampleTo argument to  
decodeAudioData() and createBuffer(), just as with mixToMono, to let  
the developer decide which sounds to optimize for 1:1 playback.

By the way, what is the use case for mixToMono, and why is it not  
available as an argument to decodeAudioData()?

>  So, for example, on most Mac
> OS X machines this would usually be 44.1KHz, but most Windows machines
> would be 48KHz.  But depending on settings these values could be different.
>  The methods will *do the right thing* and sample-rate convert
> appropriately.
>

/Marcus
Received on Friday, 17 August 2012 06:31:27 GMT
