
Re: Playability of individual Blobs produced by a MediaRecorder

From: Rachel Blum <groby@chromium.org>
Date: Thu, 1 Aug 2013 12:12:59 -0700
Message-ID: <CACmqxcx0VHQT8-xKN_tj+4zmVvu=_WnuJ6q9Cq4Lv8EaH=4hMg@mail.gmail.com>
To: Jim Barnett <Jim.Barnett@genesyslab.com>
Cc: Martin Thomson <martin.thomson@gmail.com>, "robert@ocallahan.org" <robert@ocallahan.org>, Travis Leithead <travis.leithead@microsoft.com>, "public-media-capture@w3.org" <public-media-capture@w3.org>
I don't think we want to do that for each specific timeslice. If we take
the security-cam example, that'd either result in lots of really small
files or in occasional humongous blobs that would need to be handled by
dataavailable. Neither one seems like a particularly good solution.

I'd prefer a split() call that would make sure the current encoded sequence
is properly terminated and a new one is started, but it would be hard to
tell which blob is what in the ondataavailable event. (I _suppose_ we could
abuse the blob type to encode that, but that'd be quite a hack)
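A toy sketch of that labeling problem (split(), the segment counter, and the
chunk shape here are all made up for illustration, not proposed IDL):

```javascript
// Toy model of the proposed split() semantics. In the real ondataavailable
// event the consumer only gets a Blob, so without extra labeling it can't
// tell which encoded sequence a given blob belongs to.
class ToyRecorder {
  constructor() {
    this.segment = 0; // index of the sequence currently being encoded
    this.chunks = []; // delivered { segment, data } records
  }
  // Deliver encoded data, tagged with the sequence it came from.
  push(data) {
    this.chunks.push({ segment: this.segment, data });
  }
  // split(): terminate the current encoded sequence, start a new one.
  split() {
    this.segment += 1;
  }
}

const r = new ToyRecorder();
r.push("header+frames-A");
r.split(); // finalize segment 0, begin segment 1
r.push("header+frames-B");
console.log(r.chunks.map((c) => c.segment)); // [0, 1]
```

The point is that the segment tag has to live somewhere; stuffing it into the
blob's type field is the hack alluded to above.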

If we were to use a Streams API instead, split() would simply start
recording to a new stream. The UA can then properly finalize the previous
stream, and set up the appropriate headers/metadata for the new stream. In
fact, the split() behavior could probably be implicit in the record() call,
so we wouldn't even need a bigger API surface.
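Roughly, under that Streams-flavored shape (everything here is hypothetical;
a real integration would hand back actual streams, not plain objects):

```javascript
// Hypothetical sketch: each record() call finalizes the previous stream and
// returns a fresh one, so segment identity travels with the stream object
// itself and no blob-type hack is needed.
class StreamRecorder {
  constructor() {
    this.current = null;
  }
  // record() doubles as split(): the UA closes out the old stream's
  // headers/metadata and opens a new, independently playable one.
  record() {
    if (this.current) this.current.finalized = true;
    this.current = { chunks: [], finalized: false };
    return this.current;
  }
  push(data) {
    this.current.chunks.push(data);
  }
}

const rec = new StreamRecorder();
const seg1 = rec.record();
rec.push("frames-A");
const seg2 = rec.record(); // implicitly finalizes seg1
rec.push("frames-B");
console.log(seg1.finalized, seg2.chunks); // true [ 'frames-B' ]
```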

 - rachel

On Thu, Aug 1, 2013 at 10:13 AM, Jim Barnett <Jim.Barnett@genesyslab.com> wrote:

> One option would be an optional parameter to requestData() that would
> specify whether to deliver the current data as the end of one file and to
> start another.  We could do the same thing with record() when the timeslice
> is specified, specifying whether each returned segment was a complete file
> or not.  However, such a parameter wouldn't make much sense for record()
> without a timeslice specified.  What would it mean to call record() without
> a timeslice and with createfile (or whatever it's called) set to 'false'?
> Would we wait till the stream ended and then dump out one huge buffer
> without any headers or metadata?
>
> - Jim
>
> -----Original Message-----
> From: Martin Thomson [mailto:martin.thomson@gmail.com]
> Sent: Thursday, August 01, 2013 7:23 AM
> To: Jim Barnett
> Cc: Rachel Blum; robert@ocallahan.org; Travis Leithead;
> public-media-capture@w3.org
> Subject: Re: Playability of individual Blobs produced by a MediaRecorder
>
> On 1 August 2013 13:20, Martin Thomson <martin.thomson@gmail.com> wrote:
> > On 30 July 2013 22:17, Jim Barnett <Jim.Barnett@genesyslab.com> wrote:
> >> I can't think of a good use case for a series of contiguous playable
> blobs.
> >
> > I can.  You are maintaining a continuous record of the output of a
> > security camera.  You don't want to archive an arbitrarily long stream
> > into a single file, instead you want to ensure that you can separately
> > access records from a given time (slice) without having to scan
> > through a single big stream.
>
> (Whoops, fat-fingered the enter key...)
>
> A command that triggers the atomic creation of the end of one file and the
> start of the next seems like an appropriate solution, though I'm sure that
> other options are possible.
>
> I'd like to avoid the need for applications to create overlapping
> recording contexts.  That sucks for any number of reasons.
>
Received on Thursday, 1 August 2013 19:13:46 UTC
