RE: revised recording proposal

> there are differences in how the actual collection happens because the GC knows about the backing array used to store the data in ArrayBuffers, but has to call a finalizer to NS_RELEASE the nsIDOMBlob for Blobs, but I think those are completely irrelevant to this discussion

Thanks - this is along the lines of what I wanted to point out regarding implementations of typed arrays versus Blobs (though not all of it).  I don't believe this distinction is irrelevant at all, but I can understand why discussing specific implementations can be problematic in this group.

I think one goal should be to avoid replicating multimedia buffers in virtual memory when multiple references to the same data exist.  With a typed array, I don't expect a new copy of the data to be created for every reference.  You can't even read the contents of an ArrayBuffer without an ArrayBufferView, and creating that view should not produce an additional copy of the underlying data.  For example, according to MDN (https://developer.mozilla.org/en-US/docs/JavaScript_typed_arrays/ArrayBuffer), "You can't directly manipulate the contents of an ArrayBuffer; instead, you create an ArrayBufferView object which represents the buffer in a specific format, and use that to read and write the contents of the buffer."  In other words, no copy is created.
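To make this concrete, here is a quick JavaScript sketch (the variable names are mine): two views over one ArrayBuffer observe the same bytes, so creating a view allocates nothing beyond the view object itself.

    var buf = new ArrayBuffer(16);  // one backing allocation of 16 bytes
    var a = new Uint8Array(buf);    // view #1 - no copy of the data
    var b = new Uint32Array(buf);   // view #2 over the same 16 bytes
    a[0] = 0xff;
    // b[0] now reflects the write made through a, which shows that both
    // views share the single underlying buffer rather than holding copies.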

Blobs could (and I suppose should) be implemented in a similar fashion, but the current specification's slice algorithm (http://www.w3.org/TR/FileAPI/#slice-method-algo) states only that "The slice method returns a new Blob object with bytes ranging from the optional start parameter up to but not including the optional end parameter, and with a type attribute that is the value of the optional contentType parameter."  So on my reading it is conceivable that the new Blob created by a slice could result in a fresh copy of the data in a poorly designed UA - there is no normative text preventing it.
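Again a sketch of my own to show the contrast (the copy behavior flagged in the last line is implementation-defined, not observable from script):

    var bytes = new Uint8Array(1024 * 1024);  // 1 MB of sample data
    var view = bytes.subarray(0, 512);   // specified to return a view over
                                         // the same buffer - no copy
    var blob = new Blob([bytes.buffer]); // Blob wrapping the same data
    var piece = blob.slice(0, 512);      // a new Blob object; nothing in
                                         // the spec forbids the UA copying
                                         // the 512 bytes here

subarray is specified to return a new view onto the same ArrayBuffer, whereas slice only promises a new Blob with the right bytes and type - which is exactly the gap I am pointing at.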

-----Original Message-----
From: Timothy B. Terriberry [mailto:tterriberry@mozilla.com] 
Sent: Thursday, November 29, 2012 8:52 AM
To: public-media-capture@w3.org
Subject: Re: revised recording proposal

Jim Barnett wrote:
> There hasn't been much discussion of this on the list.  (If it matters,
> in the speech recognition case, the buffers are likely to be about 200ms
> in size, though of course we can't guarantee that apps won't ask for
> other sizes.)

For any audio-only use-case, we could store several minutes of compressed audio in the same space it takes to store one uncompressed HD video frame for display.

> Mandyam, Giridhar wrote:
>> My understanding is that existing GC's handle the two data types very 
>> differently (if I go into more details I may have to start discussing 
>> proprietary implementations).

If your argument is, "You should do X, but I can't tell you why because secretz," you're not likely to get a lot of agreement from the rest of us engineers. AFAIK, in _our_ implementation the lifetimes of Blobs and ArrayBuffers are controlled in exactly the same way (there are differences in how the actual collection happens because the GC knows about the backing array used to store the data in ArrayBuffers, but has to call a finalizer to NS_RELEASE the nsIDOMBlob for Blobs, but I think those are completely irrelevant to this discussion).
