RE: Generic media recording proposal

>-----Original Message-----
>From: Harald Alvestrand [mailto:harald@alvestrand.no]
>On 12/08/2011 06:50 PM, Travis Leithead wrote:
>> What is the group's thoughts on creating a recorder interface that could
>>be used for more than just MediaStreams? For example, it seems like it
>>might be neat to be able to record a<video>  or<audio>  tag, or record
>>video from an animating<canvas>?
>>
>> If this sounds like an interesting scenario for future extension, I would
>>suggest that the "recording" capability be logically separated from the
>>MediaStream.
>
>I am in favour of the separation, but that's because I think the
>functionality is not needed for the main purposes of MediaStream, which
>makes it strange to require implementations to have it.
>
>I would prefer to think of those different recording sources as
>producing a MediaStream that can be consumed by a Recorder, though. It
>decreases the amount of required linkage.

In other words, should those different sources wish to support recording, they would need to add an API to "extract" a MediaStream object that could then be handed to a Recorder, correct? That does appear to reduce the linkage: the Recorder interface would only need to understand MediaStream, rather than being extended for each new source type.
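To make the shape of that pattern concrete, here is a toy sketch of it in plain script. Every name here (captureStream, Recorder, setSource) is purely illustrative, not agreed API; the point is only that sources produce a MediaStream and a single Recorder type consumes it:

```javascript
// Hypothetical sketch: each recordable source exposes an extraction
// API returning a MediaStream; the Recorder only ever sees streams.
class MediaStream {
  constructor(label) { this.label = label; }
}

class CanvasSource {
  // Illustrative extraction API a recordable source might add.
  captureStream() { return new MediaStream("canvas"); }
}

class Recorder {
  constructor(encodeType) { this.encodeType = encodeType; this.src = null; }
  setSource(stream) { this.src = stream; }
}

const recorder = new Recorder("video/webm");
recorder.setSource(new CanvasSource().captureStream());
console.log(recorder.src.label); // "canvas"
```

The Recorder never needs per-source knowledge; only the sources grow a (small) stream-extraction surface.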

>> [Constructor(DOMString encodeType, optional DOMString quality)]
>
>What do you mean by "quality" here? This is one of those words that
>turns out to be either a hard link to a very big specification or an
>exercise in wishful thinking ("high", "low", "medium").

My initial thoughts were more along the lines of the "wishful thinking" approach, which is what the Win8 Developer Preview implemented [1][2]. However, as Rich previously pointed out, HTML5's "toBlob" API already has a notion of a "quality" parameter [3], which is loosely defined for image/jpeg but not for other types. That approach might work here, but it may not be applicable to a wider range of image/video formats.
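To illustrate the two options being weighed, here is a small sketch of how an implementation could accept either form of "quality". This is not proposed spec text; normalizeQuality, the keyword set, and the mapped values are all invented for illustration:

```javascript
// Hypothetical normalization of a "quality" argument that accepts
// either a keyword (the "wishful thinking"/Win8-style approach) or a
// 0..1 number (the canvas toBlob-style approach).
function normalizeQuality(quality) {
  const keywords = { low: 0.25, medium: 0.5, high: 0.75 }; // invented mapping
  if (quality === undefined) return 0.5;                   // assumed default
  if (typeof quality === "string" && quality in keywords) {
    return keywords[quality];
  }
  const n = Number(quality);
  if (Number.isFinite(n) && n >= 0 && n <= 1) return n;
  return 0.5; // toBlob ignores out-of-range quality; mirror that here
}

console.log(normalizeQuality("high")); // 0.75
console.log(normalizeQuality(0.9));    // 0.9
```

The open question remains what the keywords or the numeric scale actually *mean* per codec, which is exactly the "hard link to a very big specification" problem.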

>> * start - fired when encoded data is available in the [internal] buffer
>>after the call to start() has been made (the stream result will then have
>>data available)
>> * stop - fired after the stop() request was made and the encoder stopped
>>processing new data from the src (data may still be in the buffer waiting
>>to be read by the stream sink)
>
>should these be onstarted / onstopped, to reflect that they're fired
>when the event has happened, not when it is asked for?

Yep. Totally agree.

>> * bufferempty - fired when the MediaRecorder's buffer is empty (i.e., the
>>stream sink has finished reading all the data from the [internal] recorded
>>buffer)
>
>Seems undefined to me. Why should there be a buffer visible in the
>abstraction, and why should there be a control on it?

May just be a bad name; the concept is that a consumer of the Stream output of this Recorder interface does not need to start reading the Stream immediately. In fact, the consumer might read only a portion of the Stream, convert it into a Blob (via StreamReader), execute some post-processing on it, then return and read more data from the Stream. Meanwhile, the Recorder will still be recording and the [internal] buffer will grow. The intent of this event was to notify the reader of the Stream that all the content from the recording has been read.

It appears that the StreamReader interface already notifies when the Stream being read is empty. Since that notification fulfills this scenario, it's entirely likely that this event is not needed.
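A toy in-memory model of that scenario, in case it helps: the recorder keeps appending while the consumer reads in chunks, and the consumer can tell it has caught up from the stream itself, with no separate recorder-level event. All names here are hypothetical:

```javascript
// Sketch: recorder writes into an internal buffer; consumer drains it
// at its own pace. "Caught up" is observable from the stream side.
class RecordedStream {
  constructor() { this.buffer = []; this.readPos = 0; }
  append(chunk) { this.buffer.push(chunk); }            // recorder side
  read(count) {                                         // consumer side
    const out = this.buffer.slice(this.readPos, this.readPos + count);
    this.readPos += out.length;
    return out;
  }
  get drained() { return this.readPos >= this.buffer.length; }
}

const stream = new RecordedStream();
stream.append("chunk-1");
stream.append("chunk-2");
console.log(stream.read(1));              // consumer reads a portion...
stream.append("chunk-3");                 // ...while recording continues
console.log(stream.read(10), stream.drained); // catches up; drained is true
```

Which supports the conclusion above: if the Stream/StreamReader layer already reports emptiness, a separate "bufferempty" event on the Recorder is redundant.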


>> * change - fired when the recording source is changed (via setSource()
>>before load fires)
>
>Not at all clear to me what use case this would provide support for. Do
>you intend for there to be the ability to do setSource() multiple times,
>or is setSource() an one-time operation?

It seemed useful to me to create a single Recorder instance from MIME type/quality inputs, and then to be able to use that instance to record a variety of things. That implies the Recorder instance can be re-used (setSource() supported multiple times); otherwise, the Recorder is one-time-use, and you may as well make the source a required parameter of the constructor.

The scenario that is not possible with a one-time-use Recorder is recording segments of a MediaStream into one contiguous file (i.e., not recording the gaps between the time stop() is called and start() is called again). For example, if your capture device reports periods of silence on an audio track, you could stop and start the recording to trim the silence out of a conversation (or, as another example, commercials out of a video stream).
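A minimal sketch of that reuse scenario, again with entirely hypothetical API names (start/stop/onFrame are illustrative): repeated start()/stop() calls append segments to one contiguous recording, and anything arriving while stopped is simply not captured:

```javascript
// Toy model: a reusable Recorder whose start()/stop() can be called
// repeatedly, concatenating the recorded segments into one result.
class Recorder {
  constructor(encodeType) {
    this.encodeType = encodeType;
    this.recording = false;
    this.segments = [];
  }
  start() { this.recording = true; }
  stop() { this.recording = false; }
  onFrame(frame) {                 // invoked by the source per frame
    if (this.recording) this.segments.push(frame);
  }
}

const rec = new Recorder("audio/ogg");
rec.start();
rec.onFrame("speech-1");
rec.stop();                        // e.g. silence detected
rec.onFrame("silence");            // gap: not captured
rec.start();
rec.onFrame("speech-2");
console.log(rec.segments);         // ["speech-1", "speech-2"]
```

With a one-time-use Recorder, producing that single gap-free result would instead require recording everything and splicing afterwards, or stitching multiple recordings together.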


[1] http://msdn.microsoft.com/en-us/library/windows/apps/windows.media.capture.audioencodingquality.aspx
[2] http://msdn.microsoft.com/en-us/library/windows/apps/windows.media.capture.videoencodingquality.aspx
[3] http://dev.w3.org/html5/spec/Overview.html#dom-canvas-toblob

Received on Monday, 12 December 2011 22:05:11 UTC