Re: Extending MediaRecorder to record from Web Audio node faster than real time?

Please note that we currently have an open issue to remove media stream and
element creations from OfflineAudioContext -
https://github.com/WebAudio/web-audio-api/issues/308.

OfflineAudioContext is not *required* to render faster than real time - but
that is its primary purpose.  I think if we really want streaming that is
unlinked from real time, we need a different source node that includes the
media stream controls on the node itself - rather than relying on a running
stream coming from the <audio> element, with its playback controls there.
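Purely hypothetical, but this is the kind of shape I have in mind - a source
node that owns its own transport, so nothing depends on an <audio> element's
clock. None of these names (MediaStreamSourceControlNode, start/pause/seek on
the node) exist in any spec; this is only an illustration:

```javascript
// Hypothetical API sketch - MediaStreamSourceControlNode and its transport
// methods are invented here for illustration; nothing like this is specified.
function sketchControlledSource(ctx, stream) {
  const source = new MediaStreamSourceControlNode(ctx, { stream });
  source.connect(ctx.destination);
  // Transport lives on the node itself, not on an <audio> element:
  source.start();   // begin pulling from the stream
  source.pause();   // pause without touching any media element
  source.seek(30);  // hypothetical: jump 30s into the stream
  return source;
}
```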

I'm open to trying to specify this all together, but it will be complex;
currently, the plan of record (PoR) is to remove the ability to create media
stream sources and destinations from OfflineAudioContext.
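For reference, a minimal sketch of the route the PoR would close off - this
assumes createMediaStreamDestination() were allowed on an OfflineAudioContext,
which is exactly what is in question, and whether any clock drives the
resulting stream is the open problem discussed below:

```javascript
// Sketch only: assumes an OfflineAudioContext may create a media stream
// destination - the very ability the plan of record would remove.
function recordOfflineGraph(offlineCtx, sourceNode) {
  // Route the graph into a MediaStream instead of the offline destination.
  const dest = offlineCtx.createMediaStreamDestination();
  sourceNode.connect(dest);

  // Hand the stream to MediaRecorder; with no real-time clock behind the
  // stream, the timing of dataavailable events is undefined today.
  const recorder = new MediaRecorder(dest.stream);
  const chunks = [];
  recorder.ondataavailable = (e) => chunks.push(e.data);
  recorder.start(200); // ask for ~200ms buffers

  offlineCtx.startRendering();
  return { recorder, chunks };
}
```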


On Wed, Aug 20, 2014 at 9:19 AM, Jim Barnett <1jhbarnett@gmail.com> wrote:

> The OfflineAudioContext is not required to render faster than real time,
> is it?  Could we say that when attached to a PeerConnection it should
> render in real time?
>
> By the way, is it correct to say that an OfflineAudioContext isn't linked
> to a clock at all?  It must have some sense of what the timing would be
> when the audio is played in real time.  Otherwise, MediaRecorder can't
> process it.  The MediaRecorder API lets the caller say "give me 200ms
> buffers of audio".  It would be fine for the MediaRecorder to return those
> buffers more often than once every 200ms, but the amount of data has to be
> correct.
>
> In any case, I agree that since it is already possible to create a
> MediaStream from an AudioNode (and hence to record from the AudioNode), we
> should figure out how that works (or forbid it) before we allow passing an
> AudioNode directly to MediaRecorder.
>
> - Jim
>
>
> On 8/20/2014 6:57 AM, Harald Alvestrand wrote:
>
>> On 08/19/2014 10:23 AM, John Lin wrote:
>>
>>> Hi all,
>>>    Currently MediaRecorder only records data from a media stream, which,
>>> AIUI [1], is a real-time source. That means recording takes as long as the
>>> content's duration.
>>>    Some use cases, such as cropping out the 1st half of a one-hour speech
>>> audio clip, would not be very useful if saving the result takes that long
>>> to complete.
>>>    Web Audio API already defines OfflineAudioContext [2] to support
>>> processing faster than real time use cases.
>>>    By adding a new constructor to the MediaRecorder API:
>>>           Constructor(AudioNode node, optional unsigned long output = 0,
>>> optional MediaRecorderOptions options)
>>>       web applications can implement use cases that need to save
>>> processed audio with OfflineAudioContext and MediaRecorder.
>>>      What do you think?
>>>
>> Hmm.... this has more hair than most things, but mostly behind the
>> scenes; it's already possible to do this using
>> http://webaudio.github.io/web-audio-api/#the-mediastreamaudiodestinationnode-interface
>>
>> destinationNode = audioContext.createMediaStreamDestination()
>> recorder = new MediaRecorder(destinationNode.stream)
>>
>> But having MediaStreams that are not linked to a clock at all - we might
>> have to think about that; what happens if we link a stream from an
>> OfflineAudioContext to a PeerConnection?
>>
>> Since it's an apparently legal thing to do, it SHOULD have a well
>> defined behaviour - but I'm not sure whether the result should be "no,
>> you can't do that" or whether it should do something useful in all cases.
>>
>>> [1] final paragraph of
>>> http://www.w3.org/TR/mediacapture-streams/#introduction
>>> [2] http://webaudio.github.io/web-audio-api/#the-offlineaudiocontext-interface
>>>
>>> —
>>> John
>>>
>>>
>>
>
>

Received on Wednesday, 20 August 2014 19:00:36 UTC