
RE: Lossless modes (Re: approaches to recording)

From: Young, Milan <Milan.Young@nuance.com>
Date: Mon, 15 Oct 2012 16:47:21 +0000
To: Harald Alvestrand <harald@alvestrand.no>, "public-media-capture@w3.org" <public-media-capture@w3.org>
Message-ID: <B236B24082A4094A85003E8FFB8DDC3C1A4AABCD@SOM-EXCH04.nuance.com>


From: Harald Alvestrand [mailto:harald@alvestrand.no]
Sent: Monday, October 15, 2012 6:33 AM
To: public-media-capture@w3.org
Subject: Re: Lossless modes (Re: approaches to recording)

On 10/15/2012 03:24 PM, Jim Barnett wrote:
Harald,
Yes, if this complicates the stack, it won't be worth the effort.  We will use an async API that delivers buffers of data (of configurable size) as they become available.  That's somewhat different from the common recording case, where you just want the whole Blob when it's done (or you want it written out to a file without the JS code ever seeing it).  This async API is the same one we'd use for real-time media processing as well (for example, drawing a box around the bouncing ball).  There seems to be some disagreement about whether this API is part of the recording API or a separate one.
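[Editor's note: a minimal sketch of the two usage patterns Jim describes, assuming a hypothetical recorder that delivers configurable-size buffers through a callback. The Recorder class and its method names are illustrative only and are not part of any proposed spec; the point is that the whole-Blob case falls out of the chunked case by concatenating at the end.]

```javascript
// Hypothetical recorder: emits a buffer via callbacks each time
// `chunkSize` samples have accumulated (the "async API" case).
class Recorder {
  constructor(chunkSize) {
    this.chunkSize = chunkSize;   // configurable buffer size
    this.listeners = [];
    this.pending = [];
  }
  ondata(cb) { this.listeners.push(cb); }
  // Accept raw samples; deliver a chunk whenever chunkSize is reached.
  write(samples) {
    this.pending.push(...samples);
    while (this.pending.length >= this.chunkSize) {
      const chunk = this.pending.splice(0, this.chunkSize);
      this.listeners.forEach(cb => cb(chunk));
    }
  }
  // Flush any remaining samples as a final (short) chunk.
  stop() {
    if (this.pending.length) {
      const chunk = this.pending.splice(0);
      this.listeners.forEach(cb => cb(chunk));
    }
  }
}

// Real-time case: process each buffer as it becomes available.
const rec = new Recorder(4);
const chunks = [];
rec.ondata(c => chunks.push(c));
rec.write([1, 2, 3, 4, 5]);  // first 4 samples delivered immediately
rec.stop();                  // remainder flushed on stop

// "Whole recording" case: concatenate every delivered chunk at the end.
const whole = chunks.flat();
console.log(chunks.length, whole.length); // 2 chunks, 5 samples total
```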

I think that's mainly a naming issue. "Recording" sounds like it deals with a whole conversation, with all its media flows, which implies a MediaStream-level operation that need not happen in real time; "data processing" sounds like it may need to deal with one component at a time, and may need to be fast.
[Milan] I prefer "Media Access" over "Data Processing".  The latter is a pretty broad label.


If all new features were named "featureXYZ", we'd all be less likely to jump to conclusions (and very much less likely to remember which is which!)

                Harald
Received on Monday, 15 October 2012 16:47:50 GMT

This archive was generated by hypermail 2.3.1 : Tuesday, 26 March 2013 16:15:02 GMT