RE: recording

Yang
Huawei

From: Harald Alvestrand [mailto:harald@alvestrand.no]
Sent: Tuesday, November 13, 2012 7:19 PM
To: public-media-capture@w3.org
Subject: Re: recording

On 11/12/2012 08:42 PM, Jim Barnett wrote:
Here's a summary of what I think we agreed to in Lyon.  If we still agree to it, I will start writing it up.  (Please send comments on any or all of the points below.)
I think I like this. I'd like to see this as a proposal.


Recording should be implemented by a separate class. Its constructor will take a MediaStream as an argument (we can define it to take other types as well, if we choose, but no one is suggesting any at the moment.)

There are two kinds of recording:

1) Incremental, in which (smaller) Blobs of data are returned to the application as they are available.

2) All-at-once, in which one big Blob of data is made available when recording is finished.
So are these 2 classes that both take a MediaStream as a constructor argument?
This might be simpler than having one merged class that does both.

[yang] I think Jim described one class, with two methods for the two purposes.


There will be different methods for these types. To make a distinction, I will call type 1 'capture' and type 2 'recording' (I tend to think of 'recording' as producing one big file, while 'capture' sounds more incremental).

To do incremental recording, the app calls startCapture(bufferSize), where bufferSize specifies the number of milliseconds of data it wants to receive in each Blob. The Blobs are made available in dataavailable events (with an ondataavailable handler to extract/process the data.) Capture continues until the MediaStream is ended or the app calls stopCapture. When this happens, it receives a final dataavailable event, and then a captureDone event (which indicates that no further data will be available.) While capture is going on, the app can also call getData, which causes the UA to generate a dataavailable event containing all the data that it has gathered since the last dataavailable event. (The app can use getData to do polling on its own schedule if it doesn't want to rely on a fixed buffer size. It would have to set 'bufferSize' to a very large value in this case.)
This makes sense to me - but is there a possible race condition, where the buffer fills up at the very instant that getData is called? It may not matter - one always gets a dataavailable event, either with a short buffer or a full buffer. As long as it's clear that the client must be written to handle dataavailable at any time, I think it works no matter what.

Is it OK for recording to wait a bit (say, until one has a full frame) before returning data after a getData call? Allowing this saves us from requiring that the app handle partial data stream constructs.

[yang] The proposal has said that only after a dataavailable event is fired can we use getData; I guess this is the same meaning you mentioned. (A minimal usage sketch follows below.)
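
[yang] To make the flow above concrete, here is a minimal usage sketch. Only startCapture/stopCapture/getData and the dataavailable/captureDone events come from the proposal; the class name Recorder, the handler property names, and the helper functions are placeholders I made up for illustration.

  // Sketch only: "Recorder" and the on* handler names are placeholders.
  // mediaStream is assumed to come from elsewhere, e.g. getUserMedia.
  var recorder = new Recorder(mediaStream);

  recorder.ondataavailable = function (evt) {
    // evt.data is assumed to be a Blob holding ~bufferSize ms of media
    appendChunk(evt.data); // app-defined processing
  };

  recorder.oncapturedone = function () {
    // no further dataavailable events will fire after this
  };

  recorder.startCapture(1000); // ask for ~1000 ms of data per Blob

  // Polling variant: set a very large bufferSize and pull on demand.
  // Each getData() call makes the UA fire a dataavailable event with
  // everything gathered since the previous one:
  //   recorder.startCapture(Number.MAX_VALUE);
  //   setInterval(function () { recorder.getData(); }, 5000);

  recorder.stopCapture(); // final dataavailable, then captureDone
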


To do all-at-once recording, the app calls startRecording, which returns immediately. The UA accumulates data until the MediaStream is ended or the app calls stopRecording. At that point a single recordingDone event is generated, containing a Blob with all the accumulated data.

It is an open question whether an app can make overlapping calls to startCapture and startRecording. We can define it either way.
If they are methods on different classes, this becomes a non-question.

[yang] I think we only need one class, Record, and I suggest we do not have two methods. Why not have startCapture do both jobs? We could use an argument to control the Blob size.
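
[yang] For reference, the all-at-once flow as proposed would look something like the sketch below (same caveats: the class name and handler property names are placeholders; only startRecording/stopRecording and the recordingDone event come from the proposal).

  // All-at-once sketch; names outside the proposal are invented.
  var recorder = new Recorder(mediaStream);

  recorder.onrecordingdone = function (evt) {
    // evt.data is assumed to carry the single accumulated Blob
    saveOrUpload(evt.data); // app-defined
  };

  recorder.startRecording(); // returns immediately; the UA accumulates
  // ... later, or automatically when the MediaStream ends ...
  recorder.stopRecording();  // one recordingDone event with all the data
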

There will be methods to read and set the UA's recording options.  Details are TBD but at the least the app would be able to choose among the available recording/container formats (if the UA supports more than one.)
I agree with Tim that MIME types (possibly with parameters) seem like the place to start.

[yang] I also agree with this.
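
[yang] For example, the option methods could look something like this; only the idea of choosing a format via a MIME type with parameters comes from the thread, and the method names are invented.

  // Hypothetical option API; method names are placeholders.
  var formats = recorder.getSupportedFormats(); // e.g. ['video/webm; codecs="vp8, vorbis"', ...]
  recorder.setOptions({ mimeType: 'video/webm; codecs="vp8, vorbis"' });
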

There were a couple of issues left open in the discussion.  The first was whether there would be an MTI (mandatory-to-implement) format.  It seems to me that at the very least we would want to require that the UA be able to play back any format that it can record/capture to.
That makes sense to me too. Should we say "play back via the MediaSource API" to be explicit?
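
[yang] A minimal playback sketch, using an object URL rather than the MediaSource API just to keep it short (the onrecordingdone handler name and the evt.data field are assumptions, as above):

  // Play back a recorded Blob via an object URL (illustration only).
  recorder.onrecordingdone = function (evt) {
    var video = document.createElement('video');
    video.src = URL.createObjectURL(evt.data); // evt.data: recorded Blob (assumed)
    document.body.appendChild(video);
    video.play();
  };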

I think that it would also make sense to require that the UA support a format/container allowing it to record/capture at least 2 simultaneous video and 2 audio streams (i.e. both legs of a simple peer-to-peer audio+video call). If we can agree on an MTI format that allows this, so much the better (but I don't have one to propose.)

The second open issue was the definition of errors.  There wasn't any consensus on a set of errors that we should define, and many people seemed to think that different platforms might raise errors under different conditions, depending on their underlying capabilities.  Whatever we decide to do, it seems to me that there are two cases that we should distinguish.  Some errors are fatal, in that recording cannot continue (running out of memory would be an example.)  In this case, the UA should raise an Error, followed by dataavailable and captureDone (in the capture case) or recordingDone (in the recording case.)  In other cases, recording may be able to continue, though the results may not be what the app is expecting.  The best example of this that came up during discussion was adding a Track while capture was underway.  Platforms may not be able to do this, but could continue capturing the original set of Tracks.  In this case, perhaps the UA should raise a Warning event, indicating "I'm still recording, but it's probably not what you want." It would be up to the App to decide whether to stop capture or let it continue.
It seems to me that Error (recording will end) and Warning (recording will not end) is a sensible distinction to make available to the user. I suggest you write up a starting point for this.

[yang] I suggest we do not have "will end" and "will not end"; they make little sense to me. Just signaling the end is enough, and we can report reasons such as running out of memory or a disk error.
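
[yang] Either way, the app-side handling of the two-event version Harald describes might look like this (all event and field names are placeholders):

  // Illustrative only; event and field names are invented.
  recorder.onerror = function (evt) {
    // fatal: recording cannot continue (e.g. out of memory or disk error);
    // per the proposal, dataavailable and captureDone/recordingDone still
    // follow, so the app receives whatever data was gathered
    console.log('recording ended: ' + evt.reason);
  };

  recorder.onwarning = function (evt) {
    // non-fatal: capture continues, but perhaps not as the app expects
    // (e.g. a Track was added that the platform cannot capture)
    if (!appWantsToContinue(evt.reason)) { // app-defined policy
      recorder.stopCapture();
    }
  };
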


- Jim

Received on Friday, 16 November 2012 09:15:43 UTC