Fwd: Large AudioBuffers in Web Audio API

I realized that I picked reply instead of reply all, so I'm forwarding to
the list these two messages I originally sent just to Noah.

Jussi

---------- Forwarded message ----------
From: Jussi Kalliokoski <jussi.kalliokoski@gmail.com>
Date: Tue, Mar 1, 2011 at 7:21 AM
Subject: Re: Large AudioBuffers in Web Audio API
To: Noah Mendelsohn <nrm@arcanedomain.com>


Hmm, as far as I know the API does allow you to cache items longer than one
minute; it's just generally not recommended, for scalability reasons, since
caching even a 3-minute clip as a Float32Array amounts to roughly 64 MB of
memory. Of course that might be OK, depending on the use case. I also think
very few DAWs cache clips that long; they stream from the file instead,
which is the idea here. Reading from a file and buffering portions of it may
be somewhat more CPU-expensive (not by much, really), but it saves huge
amounts of memory, which often becomes the limiting factor in big projects.
However, using <audio> for items like already-rendered tracks in a mixer has
the problem that <audio> cannot start playback sample-accurately, which
means we'd get unsynced playback.
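
To put a number on that ~64 MB figure, here's the back-of-the-envelope
arithmetic, assuming a 44.1 kHz stereo clip:

    // Rough memory footprint of caching a 3-minute stereo clip as
    // Float32 PCM, assuming a 44.1 kHz sample rate.
    var seconds = 3 * 60;
    var sampleRate = 44100;
    var channels = 2;
    var bytesPerSample = 4; // Float32
    var bytes = seconds * sampleRate * channels * bytesPerSample;
    console.log((bytes / 1e6).toFixed(1) + ' MB'); // ~63.5 MB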

Anyway, I think we're waiting for the <audio> implementations to catch up
with our use cases as well, and in the meantime, we'll have to sacrifice
some precious memory and cache our items if we need synced playback.
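
As a rough sketch of that "decode it all and cache it" route, which would
also cover Ryan's reverse-playback case, something like the following could
work. The names here (AudioContext, decodeAudioData, createBufferSource)
follow the Web Audio API as currently specified and may differ from the
draft discussed in this thread, and 'track.mp3' is just a placeholder:

    // Decode a whole file into an AudioBuffer, reverse each channel,
    // and schedule sample-accurate playback. Memory-hungry, but synced.
    var context = new AudioContext();
    var request = new XMLHttpRequest();
    request.open('GET', 'track.mp3', true);
    request.responseType = 'arraybuffer';
    request.onload = function () {
      context.decodeAudioData(request.response, function (buffer) {
        // Reverse every channel in place to play the clip backwards.
        for (var ch = 0; ch < buffer.numberOfChannels; ch++) {
          var data = buffer.getChannelData(ch);
          for (var i = 0, j = data.length - 1; i < j; i++, j--) {
            var tmp = data[i];
            data[i] = data[j];
            data[j] = tmp;
          }
        }
        var source = context.createBufferSource();
        source.buffer = buffer;
        source.connect(context.destination);
        source.start(0); // start() takes a sample-accurate start time
      });
    };
    request.send();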

If, instead, we're just playing back a single file and analyzing it, we
should use <audio>.
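
For that playback-plus-analysis case, a minimal sketch (again using current
spec names, with 'song.mp3' and the polling interval as placeholders) would
stream through an <audio> element and tap it with an AnalyserNode:

    // Stream a long file via <audio> and analyse it without buffering
    // the whole PCM: <audio> -> MediaElementAudioSourceNode -> AnalyserNode.
    var context = new AudioContext();
    var audio = new Audio('song.mp3');
    var source = context.createMediaElementSource(audio);
    var analyser = context.createAnalyser();

    source.connect(analyser);
    analyser.connect(context.destination);
    audio.play();

    // Poll the analyser for time-domain data while the element streams.
    var samples = new Float32Array(analyser.fftSize);
    setInterval(function () {
      analyser.getFloatTimeDomainData(samples);
      // ...draw a waveform or inspect 'samples' here...
    }, 100);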

Best Regards,
Jussi


On Mon, Feb 28, 2011 at 6:02 PM, Noah Mendelsohn <nrm@arcanedomain.com> wrote:

> I agree. It should be possible, memory permitting, to buffer the full data,
> or any selected areas of data, for clips of much longer than a minute or so.
> Whether AudioBuffer is the right abstraction for this, I'm not sure, but if
> I want to build something like an audio mixer application, or if I'm
> building a scientific application that analyses audio, I might want lots of
> flexibility to buffer various bits of the audio as necessary to maintain
> performance, graph waveforms, etc. Which parts of the proposed API are to be
> used for such things? (I admit I'm just learning about the API, so maybe
> I've missed something obvious.)
>
> Noah
>
>
> On 2/27/2011 7:25 PM, Ryan Berdeen wrote:
>
>> Using the Web Audio API, how should I access the data of an audio file
>> longer than a minute?
>>
>> From the current specification on AudioBuffers at
>> http://chromium.googlecode.com/svn/trunk/samples/audio/specification/specification.html#AudioBuffer-section :
>>
>>> Typically, it would be expected that the length of the PCM data would be
>>> fairly short (usually somewhat less than a minute). For longer sounds,
>>> such as music soundtracks, streaming should be used with the audio
>>> element and MediaElementAudioSourceNode.
>>
>> I can use AudioBuffers in conjunction with JavaScriptAudioNodes to
>> manipulate the audio data, but I don't see a way to do this with a
>> MediaElementAudioSourceNode. Let's say I wish to play an MP3 file of a
>> typical 3-minute-long song backwards. How should I accomplish this?
>> The API discourages me from loading the song into an AudioBuffer due
>> to its length, but this seems to be the only way to directly access
>> and manipulate the audio data.
>>
>> For comparison, with Flash and ActionScript 3, I can load an MP3 as a
>> Sound object and use the "extract" method to get a subset of the audio
>> data:
>> http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/flash/media/Sound.html#extract()
>>
>> - Ryan
>>

Received on Thursday, 3 March 2011 14:03:42 UTC