Re: Large AudioBuffers in Web Audio API

David-

I've been working on basically the same concept for MP3: split the
file on frames, then have the audio API decode the individual
chunks. I have a proof-of-concept MP3 parser here:
https://github.com/also/js-audio/blob/master/mp3.js. I also made a
simple WAV file reader, but yours looks much more complete.
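For what it's worth, the core idea is simple enough to sketch inline (a simplified illustration only, not the actual mp3.js code -- a real parser also reads each frame header to compute the frame's exact length):

```javascript
// Hypothetical sketch: find candidate MPEG audio frame boundaries by
// scanning for the 11-bit frame sync -- a 0xFF byte followed by a byte
// whose top three bits are set. Slicing the file at these offsets gives
// chunks that each start on a frame, which is what the decoder needs.
function findFrameSyncs(bytes) {
  var offsets = [];
  for (var i = 0; i + 1 < bytes.length; i++) {
    if (bytes[i] === 0xFF && (bytes[i + 1] & 0xE0) === 0xE0) {
      offsets.push(i);
    }
  }
  return offsets;
}
```

In practice you'd then use the header fields (MPEG version, layer, bitrate, sample rate) to jump frame to frame instead of scanning every byte.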

I focused on getting the data into an AudioBuffer, which you could
then schedule or use in a JavaScriptAudioNode. I think that allows
more flexibility, and it's what I'd like to see in the API.
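To make the "schedule" part concrete, here's a rough sketch (names are illustrative, not from any real library; it assumes each chunk was cut on a frame boundary so the decoded sample counts are exact). The only real logic is computing each chunk's start time from the running total of the previous chunks' durations:

```javascript
// Simplified sketch: compute back-to-back start times (in seconds) for a
// list of decoded chunks, given each chunk's length in samples. Passing
// these times to AudioBufferSourceNode.noteOn() should give gapless
// playback, since each chunk starts exactly where the previous one ends.
function chunkStartTimes(sampleCounts, sampleRate, firstStart) {
  var times = [];
  var t = firstStart;
  for (var i = 0; i < sampleCounts.length; i++) {
    times.push(t);
    t += sampleCounts[i] / sampleRate;
  }
  return times;
}
```

In the browser you'd then create one AudioBufferSourceNode per chunk, connect it to the destination, and call noteOn(times[i]) on each.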

I'm curious whether this is anywhere on the upcoming API roadmap, or
whether it's worth continuing to work on it in JavaScript. I'm
guessing something like this will show up in the spec at some point.

Ryan

On Mon, May 16, 2011 at 5:28 PM, David Lindkvist
<david.lindkvist@shapeshift.se> wrote:
> Hi everyone,
>
> Sorry for entering this discussion a bit late. I recently did an experiment
> with scheduling large audio samples as buffered chunks, and would be
> interested in hearing your feedback on the approach:
>
> 1 - It uses the FileSystem API to store a WAV file on disk.
> 2 - Uses the File API to read slices of the WAV file into memory.
> 3 - Schedules these chunks in sequence using AudioBufferSourceNodes (Web
> Audio API).
>
> Demo URL: http://jsbin.com/ffdead-audio-wav-slice/233  (feel free to play
> with the source code and save new revisions)
>
> I have tested this on Mac OS X and the Windows Canary build of Chrome and
> am very happy with the results so far - no audible clicks as long as the
> tab has focus.
> Topics for discussion:
>
>  - Will this approach scale up to multiple tracks with realtime effects?
>  - Is window.requestAnimationFrame a good solution for scheduling/updating
> the UI without stealing priority from audio playback?
>  - Using the Web Audio API, how would I go about bouncing a mix of X tracks
> with effects? Do I route the main mix through a JavaScriptAudioNode and
> create a WAV on the fly using something like XAudioJS? I would very much
> like to see a save method provided by the API (as I believe you have
> already discussed in this group).
> Thanks for a very promising API, Chris - I'm happy to see DAWs explicitly
> called out as a use case in the Web Audio spec.
> Thanks,
> David Lindkvist
> ---------- Forwarded message ----------
> From: Jussi Kalliokoski <jussi.kalliokoski@gmail.com>
> Date: Wed, Mar 2, 2011 at 7:38 AM
> Subject: Re: Large AudioBuffers in Web Audio API
> To: Noah Mendelsohn <nrm@arcanedomain.com>
>
> On Tue, Mar 1, 2011 at 5:22 PM, Noah Mendelsohn <nrm@arcanedomain.com> wrote:
>> Yes, I assume that's what they do, but I'm also assuming that they need a
>> lot of flexibility in what they read and buffer, and it wouldn't surprise
>> me if there are times when storing more is what you need to do.
>>
>>
> Yes, but the problem here lies in the fact that native DAWs have the
> luxury of reading files in chunks, with a read head, whereas that's not at
> all the nature of the web: we can only download the whole file and hope the
> user has enough memory to hold it. However, there are ways to work around
> this, such as creating a node server that would serve portions of a file
> through, ehh, websockets, maybe? Also, there's the File API that may save us
> some of that pain, but I'm not sure how we can bend these APIs to play
> around with compressed material as well...
>>
>> So, I think what I'm arguing is that implementing a DAW using this API is
>> a good requirement/use case for the spec. Exactly how to design such an
>> API, I'm not the one to suggest. Certainly, if the API itself doesn't
>> support synced multi-stream playback, then the buffering and computational
>> capabilities of the APIs, when used with a modern JavaScript runtime and
>> associated libraries, need to be shown to be adequate for computing the
>> merged/synced streams in real time, and for arranging capture and playback
>> with minimal latency.
>>
>> Maybe not all of this will be achievable with quite the performance or
>> flexibility one would want on day 1, but I think the API should be
>> architected from the start to support it. Building DAWs, (the sound
>> portion of) video editors, etc. seems like something you really want to do
>> with this API. Doing visualizations and effects processing on single
>> streams is fun, but is that really where the high value is?
>
> A valid point - I also think DAWs should be a use case for the API, and in
> fact they already are, because the Web Audio API is very similar to Core
> Audio on the Mac, which is, as we have seen, very suitable for that use
> case. I also think that the things that put us in a weaker position
> compared to native do not include this API, but many other APIs that are
> still in progress. This API pretty much does what it's supposed to do, and
> if we extend it to cover areas that are supposed to be covered by other
> APIs, I think we're going to have a very hard time getting this API to
> Recommendation status.
> And also, yes, the API is designed to be very easily extensible, especially
> by other APIs. :)
> Best Regards,
> Jussi Kalliokoski

Received on Thursday, 19 May 2011 05:15:37 UTC