
Re: recording

From: Adam Bergkvist <adam.bergkvist@ericsson.com>
Date: Tue, 13 Nov 2012 13:32:14 +0100
Message-ID: <50A23DCE.7030302@ericsson.com>
To: Jim Barnett <Jim.Barnett@genesyslab.com>
CC: public-media-capture@w3.org

Hi,

To sum up - I really liked your previous proposal that only talked about 
recording. :) See comments inline.

On 2012-11-12 20:42, Jim Barnett wrote:
> Here’s a summary of what I think we agreed to in Lyon.  If we still
> agree to it, I will start writing it up.  (Please send comments on any
> or all of the points below.)
>
> Recording should be implemented by a separate class.  Its constructor
> will take a MediaStream as an argument  (we can define it to take other
> types as well, if we choose, but no one is suggesting any at the moment.)
>
> There are two kinds of recording:
>
> 1)incremental, in which (smaller) Blobs of data are returned to the
> application as they are available.
>
> 2)All-at-once, in which one big Blob of data is made available when
> recording is finished.
>
> There will be different methods for these types.  To make a distinction,
> I will call type 1 ‘capture’ and type 2 ‘recording’  (I tend to think of
> ‘recording’ as producing one big file, while ‘capture’ sounds more
> incremental).

I'm not sure these definitions are universal. We've discussed recording 
vs. capturing before, and the rough conclusion was that recording means 
saving data to a file, while capturing means grabbing data from, e.g., 
a camera. I can dig up the minutes from that discussion if people want.

It's OK to call getData() (previously requestData()) in both incremental 
and all-at-once mode, right? In that case both modes are incremental in 
some sense.

I liked the version with a single record() method and a timeSlice 
argument, where all the other methods used "recording" in their names 
as well.
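
As a sketch of what I mean, a single record() method could cover both 
modes, with an omitted timeSlice meaning all-at-once. None of these 
names are from an actual draft; the class and the _push() hook are just 
placeholders to show the semantics:

```javascript
// Illustrative mock of a single record() method; all names here are
// placeholders, not taken from any draft.
class Recorder {
  constructor(stream) {
    this.stream = stream;        // MediaStream (unused by this mock)
    this.chunks = [];            // data gathered since last dataavailable
    this.elapsed = 0;            // ms of media since last dataavailable
    this.timeSlice = Infinity;   // Infinity => all-at-once
    this.ondataavailable = null;
    this.onrecordingdone = null;
  }

  // timeSlice in ms; omitting it gives all-at-once recording
  record(timeSlice) {
    this.timeSlice = timeSlice === undefined ? Infinity : timeSlice;
  }

  // Mock arrival of 'ms' milliseconds of media data
  _push(data, ms) {
    this.chunks.push(data);
    this.elapsed += ms;
    if (this.elapsed >= this.timeSlice) this._emitData();
  }

  requestData() { this._emitData(); }   // app polls on its own schedule

  stopRecording() {
    this._emitData();                   // final dataavailable
    if (this.onrecordingdone) this.onrecordingdone();
  }

  _emitData() {
    if (this.ondataavailable && this.chunks.length)
      this.ondataavailable({ data: this.chunks.splice(0) });
    this.elapsed = 0;
  }
}
```

With this shape, requestData() behaves identically in both modes, and 
there is no startCapture/startRecording pair that could overlap.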

> To do incremental recording, the app calls startCapture(bufferSize),
> where bufferSize specifies the number of milliseconds of data it wants
> to receive in each Blob.

I think the old timeSlice argument name is better than bufferSize, 
since the value is a duration in milliseconds, not a size.

> The Blobs are made available in  dataavailable
> events (with an ondataavailable handler to extract/process the data.)
>   Capture continues until the MediaStream is ended or the app calls
> stopCapture.  When this happens it receives  a final dataavailable
> event, and then a captureDone event (which indicates that no further
> data will be available.)  While capture is going on, the app can also
> call getData, which causes the UA to generate a dataavailable event

I liked the old requestData() better than getData(). A method prefixed 
with "get" tempts you to expect a return value.

> containing all the data that it has gathered since the last
> dataavailable event.  (The app can use getData to do polling on its own
> schedule if it doesn’t want to rely on a fixed buffer size.  It would
> have to set ‘bufferSize’ to a very large value in this case.)
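
Just to check my reading of the flow, here is a stub of the app-side 
polling pattern. The method and event names follow your description; 
the class name (StreamCapturer) is my own placeholder:

```javascript
// Stub of the described capture flow; method/event names follow the
// proposal, but the class name is a placeholder.
class StreamCapturer {
  constructor(stream) {
    this.stream = stream;        // MediaStream (unused by this stub)
    this.buffered = [];          // data since the last dataavailable
    this.ondataavailable = null;
    this.oncapturedone = null;
  }

  startCapture(bufferSize) {     // bufferSize: ms of data per Blob
    this.bufferSize = bufferSize;
  }

  getData() {                    // flush whatever has been gathered
    if (this.ondataavailable)
      this.ondataavailable({ data: this.buffered.splice(0) });
  }

  stopCapture() {                // final dataavailable, then captureDone
    this.getData();
    if (this.oncapturedone) this.oncapturedone();
  }
}

// App side: poll on our own schedule with a very large bufferSize.
const capturer = new StreamCapturer(null);
const log = [];
capturer.ondataavailable = () => log.push('dataavailable');
capturer.oncapturedone = () => log.push('captureDone');
capturer.startCapture(1e9);     // effectively "never fire on your own"
capturer.getData();             // app-driven poll
capturer.stopCapture();         // final dataavailable + captureDone
```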
>
> To do all-at-once recording, the app calls startRecording, which returns
> immediately.   The UA accumulates data until the MediaStream is ended or
> the app calls stopRecording.  At that point a single recordingDone event
> is generated which contains a Blob with all the accumulated data.
>
> It is an open question whether an app can make overlapping calls to
> startCapture and startRecording.  We can define it either way.

This issue could be avoided with the single record() method from your 
initial proposal.

/Adam
Received on Tuesday, 13 November 2012 12:32:38 UTC
