MediaStream Recording : record() method

A couple more points about the record(optional long? timeslice) method.

First, I think the recording API should be tuned for latency-tolerant
(non-real-time) behavior, so that the recorded product can be stored to
long-term storage. That is, an app shouldn't expect to be able to get
real-time recording from the API.
For use cases where that's important, we have PeerConnection. This means
that not all timeslice values will be honored. We could look at language
like the setTimeout() spec uses -- where timeout values set inside a nested
setTimeout that are less than 4 ms won't be honored and are instead clamped
up to 4 ms.

Here we could do something more UA-specific: either say that the timeslice
value is to be regarded as a hint, or state up-front that sufficiently low
values of |timeslice| won't be honored and that some UA-specific minimum
will be used instead. It'd be good to mention in the spec that apps should
not rely on the API for real-time behavior.
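
To illustrate the caller's side, here is a minimal sketch of consuming
timesliced data, assuming the record(timeslice) entry point and
dataavailable events carrying Blobs as drafted. The 100 ms value and the
|stream| variable are placeholders, not values from the spec:

    // stream: a MediaStream obtained elsewhere (e.g. via getUserMedia)
    const recorder = new MediaRecorder(stream);
    const chunks: Blob[] = [];
    recorder.ondataavailable = (e) => {
      // Each chunk covers at least the requested slice; a UA-imposed
      // minimum may make chunks arrive less often than requested.
      chunks.push(e.data);
    };
    recorder.record(100); // ask for ~100 ms slices; treat as a hint/minimum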

Concrete edit:

3. If the timeSlice argument has been provided, then once timeSlice
milliseconds of data have been collected, raise a dataavailable event
containing the Blob of collected data, and start gathering a new Blob of
data. Otherwise (if timeSlice has not been provided), continue gathering
data into the original Blob.

could become:

3. If the timeSlice argument has been provided, then once at least
timeSlice milliseconds of data have been collected, or some minimum time
slice imposed by the user agent, whichever is greater, raise a
dataavailable event containing the Blob of collected data, and start
gathering a new Blob of data. Otherwise (if timeSlice has not been
provided), continue gathering data into the original Blob. Callers should
not rely on the exactness of the timeSlice value, especially if the
timeSlice value is small. Callers should consider timeSlice as a minimum
value.
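
As a rough sketch of the intended "whichever is greater" semantics (the
minimum below is hypothetical and purely illustrative; a real UA would pick
its own value):

    // Hypothetical UA-side clamping of the requested timeslice; the
    // minimum value is implementation-defined, not taken from the spec.
    const UA_MIN_TIMESLICE_MS = 250;
    function effectiveTimeslice(requested?: number): number | undefined {
      if (requested === undefined) return undefined; // no slicing: one Blob
      return Math.max(requested, UA_MIN_TIMESLICE_MS); // whichever is greater
    }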


Another thing I noticed in the API is the language that "The UA must record
the MediaStream in such a way that the original Tracks can be retrieved at
playback time."

Shouldn't this be a "should" requirement? The UA should maintain fidelity
to the inputs, but suppose an app configures the recorder to record multiple
simultaneous audio tracks into a format that can't represent them as
separate tracks, so they get mixed down. That seems like a pretty nice use
case (web-based audio editing) that apps would want to deliberately support.
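
A sketch of that use case, assuming a MediaStream assembled from two audio
tracks (narrationTrack and musicTrack are hypothetical MediaStreamTracks
obtained elsewhere); whether they end up mixed or kept separate would depend
on the recording options and the output format:

    // Two simultaneous audio tracks fed to one recorder, e.g. narration
    // plus music in a web-based audio editor. A container that can't hold
    // multiple audio tracks would force them to be mixed down rather than
    // preserved as the "original Tracks".
    const mixed = new MediaStream([narrationTrack, musicTrack]);
    const recorder = new MediaRecorder(mixed);
    recorder.record();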

The phrase "the original Tracks" makes it sound like playback would produce
either the same JS objects that went in, or ones with the same bits. I don't
think this is the intended interpretation. Perhaps a better interpretation
is that the recorder should represent all recorded tracks in the output
(not, for example, just the first track in a set).

Concrete edit:

"The UA should record the MediaStream in such a way that all compatible
Tracks in the original are represented at playback time. The UA should do
so in the highest fidelity to the input track composition which it can,
given recording options and output format."
