
Re: Generic media recording proposal

From: Harald Alvestrand <harald@alvestrand.no>
Date: Fri, 09 Dec 2011 23:22:15 +0100
Message-ID: <4EE28A17.7080406@alvestrand.no>
To: Travis Leithead <travis.leithead@microsoft.com>
CC: "public-media-capture@w3.org" <public-media-capture@w3.org>
On 12/08/2011 06:50 PM, Travis Leithead wrote:
> What are the group's thoughts on creating a recorder interface that could be used for more than just MediaStreams? For example, it seems like it might be neat to be able to record a <video> or <audio> tag, or record video from an animating <canvas>?
>
> If this sounds like an interesting scenario for future extension, I would suggest that the "recording" capability be logically separated from the MediaStream.
I am in favour of the separation, but that's because I think the 
functionality is not needed for the main purposes of MediaStream, which 
makes it strange to require implementations to have it.

I would prefer to think of those different recording sources as 
producing a MediaStream that can be consumed by a Recorder, though. It 
decreases the amount of required linkage.
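To make that composition concrete, here is a toy, purely illustrative sketch (none of these names exist in any spec; FakeMediaStream, captureStreamFrom and Recorder are my inventions): every source is adapted into a MediaStream-like object, and the recorder only ever consumes that one type, so the linkage point stays single.

```javascript
// Purely illustrative: any source (camera, <canvas>, <video>) is
// adapted into a MediaStream-like object; the Recorder needs no
// source-specific code. All names here are hypothetical.
class FakeMediaStream {
  constructor(label) {
    this.label = label;
    this.chunks = [];
  }
  push(chunk) { this.chunks.push(chunk); }
}

// Hypothetical adapter: a frame source exposes its output as a stream.
function captureStreamFrom(source) {
  const stream = new FakeMediaStream(source.name);
  source.onFrame = (frame) => stream.push(frame);
  return stream;
}

// The recorder consumes only MediaStreams -- the single linkage point.
class Recorder {
  constructor(stream) { this.stream = stream; }
  collect() { return this.stream.chunks.join(','); }
}

// Usage: a toy canvas-like source.
const canvas = { name: 'canvas' };
const stream = captureStreamFrom(canvas);
const recorder = new Recorder(stream);
canvas.onFrame('frame1');
canvas.onFrame('frame2');
console.log(recorder.collect()); // -> frame1,frame2
```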

That said - I like this. It's very like what Tommy proposed, but with 
somewhat more controls.
> I would propose something akin to the FileReader, but for recording:
>
> // encodeType required here, but could be made optional if the type is left to UA discretion
> [Constructor(DOMString encodeType, optional DOMString quality)]
What do you mean by "quality" here? This is one of those words that 
turns out to be either a hard link to a very big specification or an 
exercise in wishful thinking ("high", "low", "medium").
> interface MediaEncoder {
>      // similar to canPlayType() of HTMLMediaElement
>      static boolean canRecordType(DOMString type);
>
>      readonly attribute DOMString encodingType; // Reports the constructor's value
>      readonly attribute DOMString encodingQuality; // Reports the constructor's encoding quality (or the default)
>
>      // encoder state
>      readonly attribute DOMString readyState; // "idle", "encoding", "error"
>      readonly attribute DOMError error;  // Reports the error state.
>
>      // data source
>      void setSource(MediaStream source);  // The source of the stream, including access to related tracks, etc.
>      // Could add more overloads here in the future...e.g.:
>      // void setSource(HTMLVideoElement source);
>      readonly attribute any src; // null or reference to the current source (MediaStream)
>
>      // encoder controls
>      void start(); // async start encoding request
>      void stop(); // async end request
>
>      // encoded result
>      readonly attribute Stream stream; // suitable for using StreamReader to collect into a Blob if desired
>
>      // Support [encoder] media events
>      [TreatNonCallableAsNull] attribute Function? onload;
>      [TreatNonCallableAsNull] attribute Function? onstart;
>      [TreatNonCallableAsNull] attribute Function? onstop;
>      [TreatNonCallableAsNull] attribute Function? onbufferempty;
>      [TreatNonCallableAsNull] attribute Function? onchange;
>      [TreatNonCallableAsNull] attribute Function? onerror;
> };
>
> * load - fired after a new src is established, and the encoder has finished analyzing the media stream and prepared its internal state
> * start - fired when encoded data is available in the [internal] buffer after the call to start() has been made (the stream result will then have data available)
> * stop - fired after the stop() request was made and the encoder stopped processing new data from the src (data may still be in the buffer waiting to be read by the stream sink)
Should these be onstarted / onstopped, to reflect that they're fired 
when the event has happened, not when it is asked for?
> * bufferempty - fired when the MediaRecorder's buffer is empty (i.e., the stream sink has finished reading all the data from the [internal] recorded buffer)
Seems undefined to me. Why should there be a buffer visible in the 
abstraction, and why should there be a control on it?
> * change - fired when the recording source is changed (via setSource() before load fires)
Not at all clear to me what use case this supports. Do you intend 
for setSource() to be callable multiple times, or is it a one-time 
operation?
> * error - fired for a variety of possible reasons, such as calling stop() before start(), attempting to assign a new setSource() while encoding is in progress, incompatible mutations to the MediaStream's tracks, etc.
>
> The above proposal is built around audio/video capture, but might be extended for photo/snapshot use too.
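To check my reading of the proposed lifecycle, here is a toy mock of the state machine as I understand the event list above (illustrative only; MockMediaEncoder and every behavior in it are my assumptions about the proposal, not normative semantics):

```javascript
// Toy mock of the proposed MediaEncoder lifecycle: idle -> encoding
// -> idle, with load/start/stop/error callbacks. Hypothetical code,
// not a real API.
class MockMediaEncoder {
  constructor(encodeType, quality) {
    this.encodingType = encodeType;
    this.encodingQuality = quality || 'default'; // assumed default
    this.readyState = 'idle';
    this.src = null;
    this.onload = this.onstart = this.onstop = this.onerror = null;
  }
  static canRecordType(type) {
    // Stand-in for the canPlayType()-style probe.
    return type === 'video/webm';
  }
  setSource(source) {
    if (this.readyState === 'encoding') {
      // Per the proposal, reassigning the source mid-encode is an error.
      if (this.onerror) this.onerror(new Error('source change while encoding'));
      return;
    }
    this.src = source;
    if (this.onload) this.onload(); // "load": new src established
  }
  start() {
    if (!this.src || this.readyState !== 'idle') {
      if (this.onerror) this.onerror(new Error('cannot start'));
      return;
    }
    this.readyState = 'encoding';
    if (this.onstart) this.onstart();
  }
  stop() {
    if (this.readyState !== 'encoding') {
      // e.g. stop() before start(), per the error list above.
      if (this.onerror) this.onerror(new Error('stop before start'));
      return;
    }
    this.readyState = 'idle';
    if (this.onstop) this.onstop();
  }
}

// Usage sketch: one clean pass through the lifecycle.
const enc = new MockMediaEncoder('video/webm');
const events = [];
enc.onload = () => events.push('load');
enc.onstart = () => events.push('start');
enc.onstop = () => events.push('stop');
enc.setSource({ kind: 'fake MediaStream' });
enc.start();
enc.stop();
console.log(enc.readyState, events); // -> idle [ 'load', 'start', 'stop' ]
```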
Received on Friday, 9 December 2011 22:22:51 GMT
