Re: OfflineAudioContext specification gaps

From: Russell McClellan <russell@motu.com>
Date: Fri, 3 May 2013 19:06:19 -0400
Cc: "Robert O'Callahan" <robert@ocallahan.org>, Joseph Berkovitz <joe@noteflight.com>, "public-audio@w3.org WG" <public-audio@w3.org>
Message-Id: <B8C56CCF-4E3F-40D1-BA32-05BF48B7A62F@motu.com>
To: Chris Rogers <crogers@google.com>
On May 3, 2013, at 6:59 PM, Chris Rogers <crogers@google.com> wrote:
> 
> 
> 
> On Fri, May 3, 2013 at 3:51 PM, Robert O'Callahan <robert@ocallahan.org> wrote:
> It might work well to give startRendering() an optional duration parameter in samples. Then the OfflineAudioContext would render exactly that many samples (which must be a multiple of the block size, of course) and stop (firing some kind of notification event). Then the application could make changes to the graph and call startRendering() again to render another batch of samples.
> 
> Yes, I was thinking something like that too.  Do you think it would be necessary to call startRendering() again each time, or could it be implied that, after returning from the event handler, processing would continue, repeatedly firing the notification, then finally calling the oncomplete handler?

Another advantage of something like this is that it would allow streaming of rendered audio data as it is produced, rather than delivering a single buffer at the end of the render, which is something I've been asking for as a user.

Currently, nothing in the specification says you can't modify the audio graph from a ScriptProcessorNode, so one might expect to be able to do that.  If that were explicitly allowed, I don't see why any other mechanism for scheduling graph changes would be needed.

Thanks,
-Russell
Received on Friday, 3 May 2013 23:06:44 UTC