RE: Communicating memory pressure in mediastream recording

One thing to consider about Low_Memory is that the spec currently says that when the UA raises an error, it should then fire a dataavailable event followed by a stop event (and also stop recording). That's probably right for a fatal error, but Low_Memory isn't fatal. If we want it to be an Error, we'll need to change that definition. Firing a dataavailable event might still make sense, but do we remove the stop altogether, or make it contingent on whether the UA is able to continue?
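The contingency question can be pictured as a small decision function. This is only a sketch of the alternative being discussed, not spec text; the "low_memory" error name and the canContinue flag are hypothetical:

```javascript
// Sketch of the event-ordering question: after an error, the spec today
// always fires dataavailable then stop. A non-fatal error could instead
// make the stop contingent on whether the UA can keep recording.
// The "low_memory" name and canContinue flag are illustrative only.
function eventsAfterError(errorName, canContinue) {
  const fatal = errorName !== "low_memory"; // assume only low_memory is recoverable
  const events = ["error", "dataavailable"]; // flush buffered data either way
  if (fatal || !canContinue) {
    events.push("stop"); // current spec behavior: recording ends
  }
  return events;
}

console.log(eventsAfterError("security_error", false)); // ["error","dataavailable","stop"]
console.log(eventsAfterError("low_memory", true));      // ["error","dataavailable"]
```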


- Jim

From: Greg Billock [mailto:gbillock@google.com]
Sent: Tuesday, July 30, 2013 5:16 PM
To: Robert O'Callahan
Cc: public-media-capture@w3.org
Subject: Re: Communicating memory pressure in mediastream recording

On Mon, Jul 29, 2013 at 4:39 PM, Robert O'Callahan <robert@ocallahan.org> wrote:
On Tue, Jul 30, 2013 at 4:34 AM, Greg Billock <gbillock@google.com> wrote:
In the record(timeslice) case, the implementation will probably not want to incur the latency penalty of writing each intermediate block to disk before creating the blob handle. This means some coordination is required to make sure the app doesn't let too much in-memory content accumulate. (The no-timeslice overload shouldn't have this weakness.)

Can you give a specific example of a use-case for this?


Just the one you mention below -- if the browser is keeping the blobs in memory, the app may for some reason keep the blobs in a list. Suppose, for example, I queue the blobs for sending in order via XHR, but my network connection is flaky. If those references pin the blobs in memory, it could create pressure quite quickly.
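The failure mode above can be sketched as a bounded upload queue. This is a minimal model of the app-side problem, not browser behavior; the class name, byte cap, and chunk shape are all assumptions for illustration:

```javascript
// Sketch: an app queues recorded chunks for in-order upload, but the
// network stalls, so queued chunks pin memory. The cap and names here
// are illustrative, not from any spec or browser.
class UploadQueue {
  constructor(maxBufferedBytes) {
    this.maxBufferedBytes = maxBufferedBytes;
    this.bufferedBytes = 0;
    this.chunks = [];
  }
  // Returns false once accepting the chunk would exceed the cap --
  // the point at which the app needs a pressure signal or a fallback.
  enqueue(chunk) {
    if (this.bufferedBytes + chunk.size > this.maxBufferedBytes) return false;
    this.chunks.push(chunk);
    this.bufferedBytes += chunk.size;
    return true;
  }
  // Called when an upload completes; never runs while the network is down.
  dequeue() {
    const chunk = this.chunks.shift();
    if (chunk) this.bufferedBytes -= chunk.size;
    return chunk;
  }
}
```

With a flaky connection, dequeue never fires, so enqueue starts refusing chunks quickly -- which is exactly when an app would want a low-memory/storage-pressure signal to react to.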

Moving the blobs to disk under memory pressure is a good idea. It's basically the app's only option anyway in some cases. Even with that available, however, there's still an issue: suppose the disk is full. I think we need a memory-pressure (or, more generically, storage-pressure) signal regardless of whether the client is accepting streaming blocks. And in other cases, the app may prefer to take another approach to the signal (e.g., stop and then restart the recording).

I think an implementation that sent LOW_MEMORY, then soon after began transferring to disk, then sent OUT_OF_MEMORY on exhaustion would be pretty robust in the use cases we've discussed.



In Gecko, data accumulated by the recorder is initially kept in memory. Once it exceeds a certain size (1MB) we transfer it to a file. So, Blobs below the threshold are memory-backed, Blobs above the threshold are file-backed. It would be possible for an application to exhaust memory by using a small timeslice and accumulating all the Blobs in a list, but why would an application need to do that? AFAIK the only reason to use a small timeslice is if you're going to stream the recorded data to a file or across the network.
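The Gecko strategy described here can be sketched as a threshold-based buffer. The storage objects below are stand-ins, not Gecko internals; only the 1MB threshold and the memory-then-file behavior come from the paragraph above:

```javascript
// Sketch of the described Gecko behavior: buffer recorded data in memory
// until it crosses a threshold (1MB), then back it with a file. The app's
// view of the Blob is unchanged; only the backing store moves.
const THRESHOLD = 1 << 20; // 1MB, per the message above

function makeRecorderBuffer() {
  let bytes = 0;
  let backing = "memory";
  return {
    append(chunkSize) {
      bytes += chunkSize;
      if (backing === "memory" && bytes > THRESHOLD) {
        backing = "file"; // transparent spill; resulting Blobs are file-backed
      }
    },
    backing() { return backing; },
    size() { return bytes; },
  };
}
```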

Even if an application does need to do that, I'd much rather handle the low-memory situation by having Gecko transparently move Blob contents from memory to files than by asking the Web application to handle it. I have low confidence Web developers would bother handling low memory, or would be able to handle it robustly if they did try.

Rob
--

Received on Tuesday, 30 July 2013 22:16:26 UTC