
Re: [web-audio-api] OfflineAudioContext needs a way to handle audio of arbitrary duration (#21)

From: Olivier Thereaux <notifications@github.com>
Date: Wed, 11 Sep 2013 07:29:24 -0700
To: WebAudio/web-audio-api <web-audio-api@noreply.github.com>
Message-ID: <WebAudio/web-audio-api/issues/21/24244094@github.com>
> [Original comment](https://www.w3.org/Bugs/Public/show_bug.cgi?id=21311#1) by Robert O'Callahan (Mozilla) on W3C Bugzilla. Sun, 17 Mar 2013 22:57:19 GMT

(In reply to [comment #1](#issuecomment-24244089))
> However, to speculate a bit, another approach for handling long renderings
> is to direct the output to a MediaStreamRecorder which will hopefully in the
> future allow extracting compressed formats as well.

The whole point of MediaStreamRecorder is to support compression, so I think that will be supported as soon as MediaStreamRecorder is implemented.

> This would allow the UA to
> take care of the render/compress cycle and block sizes, and the API would
> just hand the developer a readily rendered blob to use however needed. I'm
> not quite up to date on MediaStreamRecorder though, perhaps Roc knows more
> about whether this is feasible or not.

It's feasible, but we need to specify how MediaStreamRecorder and OfflineAudioContext interact, since currently MediaStreams are real-time only.
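
For illustration only, here is a minimal sketch of how the real-time half of this already fits together, assuming the recorder ships under its eventual MediaRecorder name and the graph is routed through a MediaStreamAudioDestinationNode. The open question above is how to do the same hookup from an OfflineAudioContext, whose streams would not run in real time.

```js
// Sketch (assumptions: MediaRecorder as the shipped form of MediaStreamRecorder,
// graph routed to a MediaStreamAudioDestinationNode instead of ctx.destination,
// and 'audio/webm' supported by the UA's encoder).
const ctx = new AudioContext();
const dest = ctx.createMediaStreamDestination();
const osc = ctx.createOscillator();      // stand-in for the real audio graph
osc.connect(dest);
osc.start();

const recorder = new MediaRecorder(dest.stream, { mimeType: 'audio/webm' });
const chunks = [];
recorder.ondataavailable = (e) => chunks.push(e.data);   // compressed data arrives incrementally
recorder.onstop = () => {
  const blob = new Blob(chunks, { type: 'audio/webm' }); // the "readily rendered blob"
  // hand `blob` to the developer: save, upload, etc.
};
recorder.start(1000);   // ask the UA for a chunk roughly every second
// ... later, when the performance is over: recorder.stop();
```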

I think it's pretty important to have some mechanism whereby OfflineAudioContext can deliver data incrementally.
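
As a rough illustration of the incremental idea (not a spec proposal): one workaround is to render a long timeline in fixed-size chunks with successive OfflineAudioContext instances and hand each chunk off as it completes. This only suits graphs with no state carried across chunk boundaries (reverb tails and envelopes do not survive a new context), and it assumes the promise form of startRendering(); `buildGraph` and `onChunk` are hypothetical caller-supplied helpers.

```js
// Sketch only: chunked offline rendering so each chunk can be encoded or uploaded
// as soon as it is ready, instead of holding the entire rendering in memory.
async function renderInChunks(buildGraph, onChunk, {
  totalSeconds = 3600,   // e.g. an hour-long rendering
  chunkSeconds = 30,
  sampleRate = 44100,
  channels = 2,
} = {}) {
  const chunkFrames = chunkSeconds * sampleRate;
  for (let offset = 0; offset < totalSeconds; offset += chunkSeconds) {
    const ctx = new OfflineAudioContext(channels, chunkFrames, sampleRate);
    buildGraph(ctx, offset);                  // schedule sources shifted by `offset` seconds
    const buffer = await ctx.startRendering();
    onChunk(buffer, offset);                  // deliver this chunk incrementally
  }
}
```

Each chunk can then be fed to an encoder (or, per the suggestion above, to a recorder) without the full rendering ever living in memory, which is the property a built-in incremental mechanism would provide directly.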

---
Reply to this email directly or view it on GitHub:
https://github.com/WebAudio/web-audio-api/issues/21#issuecomment-24244094
Received on Wednesday, 11 September 2013 14:32:09 UTC
