Fwd: Large AudioBuffers in Web Audio API

---------- Forwarded message ----------
From: Jussi Kalliokoski <jussi.kalliokoski@gmail.com>
Date: Wed, Mar 2, 2011 at 7:38 AM
Subject: Re: Large AudioBuffers in Web Audio API
To: Noah Mendelsohn <nrm@arcanedomain.com>




On Tue, Mar 1, 2011 at 5:22 PM, Noah Mendelsohn <nrm@arcanedomain.com> wrote:
>
>
> Ah, OK. I find the current spec to be more like a formatted IDL than an
> actual functional specification or user manual for the API, so it can be
> hard to tell what's actually intended.
>
>
Yes, but I think that will change soon, and we'll get the developer side of
the API spec as well. :)


>
> Yes, I assume that's what they do, but I'm also assuming that they need a
> lot of flexibility in what they read and buffer, and it wouldn't surprise me
> if there are times when storing more is what you need to do.
>
>
Yes, but the problem here lies in the fact that native DAWs have the
luxury of reading files in chunks, with a read head, whereas that's not at
all the nature of the web; we can only download the whole file and hope the
user has enough memory to hold it. However, there are ways to work around
this, such as creating a Node server that serves portions of a file
through, ehh, WebSockets, maybe? There's also the File API, which may save us
some of that pain, but I'm not sure how we can bend these APIs to handle
compressed material as well...
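To illustrate the File API route: something like the sketch below could read a local file in fixed-size chunks via Blob.slice() instead of holding the whole thing in memory at once. The chunk size and the processChunk callback are my own illustrative assumptions, not anything the spec prescribes.

```javascript
// Hypothetical sketch: consuming an audio file chunk by chunk using
// Blob.slice() from the File API, instead of decoding the whole file at once.
// CHUNK_SIZE and processChunk are illustrative assumptions.

const CHUNK_SIZE = 4; // bytes per chunk (tiny here, just for demonstration)

async function readInChunks(blob, processChunk) {
  for (let offset = 0; offset < blob.size; offset += CHUNK_SIZE) {
    // slice() does not copy data up front; each chunk is read on demand
    const slice = blob.slice(offset, Math.min(offset + CHUNK_SIZE, blob.size));
    const buffer = await slice.arrayBuffer(); // ArrayBuffer for this chunk
    processChunk(new Uint8Array(buffer), offset);
  }
}
```

In a browser, the blob would come from an `<input type="file">` element's File object; how the decoded chunks would then feed a graph of Web Audio nodes is exactly the open question above, especially for compressed material.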


>
> So, I think what I'm arguing is that implementing a DAW using this API is a
> good requirement/use case for the spec. Exactly how to design an API that
> I'm not the one to suggest. Certainly, if the API itself doesn't support
> sync'd multi-stream playback, then the buffering and computational
> capabilities of the APIs, when used with a modern Javascript runtime and
> associated libraries, need to be shown to be adequate to computing the
> merged/synced streams in real time, and arranging for capture and playback
> with minimal latency.
>
> Maybe not all of this will be achievable with quite the performance or
> flexibility one would want on day 1, but I think the API should be
> architected from the start to support it. Building DAWs, (the sound portion
> of) video editors, etc. seems like something you really want to do with this
> API. Doing visualizations and effects processing on single streams is fun,
> but is that really where the high value is.


A valid point; I also think DAWs should be a use case for the API, and in
fact that's already the case: the Web Audio API is very similar to Core
Audio on the Mac, which, as we have seen, is very suitable for that use
case. I also think that the things that put us in a weaker position
compared to native are not this API, but many other APIs that are still in
progress. This API pretty much does what it's supposed to do, and if we
extend it to cover areas that are supposed to be covered by other APIs, I
think we'll have a very hard time getting it to Recommendation status.

And yes, the API is designed to be easily extensible, especially by other
APIs. :)

Best Regards,
Jussi Kalliokoski

Received on Thursday, 3 March 2011 14:04:05 UTC