
Re: Communicating memory pressure in mediastream recording

From: Greg Billock <gbillock@google.com>
Date: Tue, 30 Jul 2013 14:15:41 -0700
Message-ID: <CAAxVY9dHuHKHkKVMi1bwhB-V+HAOPB2Fc8NNFADGubZZKmMnTA@mail.gmail.com>
To: "Robert O'Callahan" <robert@ocallahan.org>
Cc: "public-media-capture@w3.org" <public-media-capture@w3.org>

On Mon, Jul 29, 2013 at 4:39 PM, Robert O'Callahan <robert@ocallahan.org> wrote:

> On Tue, Jul 30, 2013 at 4:34 AM, Greg Billock <gbillock@google.com> wrote:
>> In the record(timeslice) case, the implementation will probably not want
>> to incur the latency penalty of writing each intermediate block to disk
>> before creating the blob handle. This means some coordination is required
>> to make sure the app doesn't let too much in-memory content accumulate.
>> (The no-timeslice overload shouldn't have this weakness.)
> Can you give a specific example of a use-case for this?

Just the one you mention below -- if the browser is keeping the blobs
in memory, the app may for some reason keep the blobs in a list. Suppose,
for example, I queue the blobs for sending in order via XHR, but my network
connection is flaky. If those references pin the blobs in memory, it could
create pressure quite quickly.
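To make that concrete, here's a minimal sketch of the scenario: an ordered queue of recorded chunks awaiting upload, tracking how many bytes its references pin in memory. The class name and budget are illustrative only, not from any spec.

```javascript
// Hypothetical budget for buffered-but-unsent recording data.
const MAX_BUFFERED_BYTES = 4 * 1024 * 1024;

class ChunkQueue {
  constructor() {
    this.chunks = [];      // chunks awaiting upload, in order
    this.bufferedBytes = 0;
  }
  // Called from something like recorder.ondataavailable.
  push(chunk) {
    this.chunks.push(chunk);
    this.bufferedBytes += chunk.byteLength;
  }
  // True when the queue's references would pin more memory than
  // the budget allows -- the situation described above when the
  // network connection is flaky and uploads fall behind.
  underPressure() {
    return this.bufferedBytes > MAX_BUFFERED_BYTES;
  }
  // Remove the oldest chunk once its upload has completed.
  shift() {
    const chunk = this.chunks.shift();
    if (chunk) this.bufferedBytes -= chunk.byteLength;
    return chunk;
  }
}
```

With a fast network the queue drains as quickly as it fills; with a flaky one, `underPressure()` flips true and the app has to do *something* -- which is exactly where a browser-provided pressure signal would help.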

Moving the blobs to disk under memory pressure is a good idea. It's
basically the app's only option anyway in some cases. Even with that
available, however, there's still an issue: suppose the disk is full. I
think we need a memory pressure (or, more generically, storage pressure)
signal regardless of whether the client is accepting streamed blocks. And in
other cases, the app may prefer a different response to the signal (e.g.
stopping and then restarting the recording).

I think an implementation that sent LOW_MEMORY, then soon after began
transferring to disk, then sent OUT_OF_MEMORY on exhaustion would be pretty
robust in the use cases we've discussed.
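As a sketch only -- LOW_MEMORY and OUT_OF_MEMORY are proposed signals here, not spec text, and the signal names and return values below are assumptions -- an app's policy for the two-stage scheme might look like:

```javascript
// Map a hypothetical pressure signal to an app-level response.
// The signal names mirror the two-stage proposal above: the UA
// sends LOW_MEMORY first, starts spilling to disk itself, and
// sends OUT_OF_MEMORY only when storage is exhausted too.
function handlePressureSignal(signal) {
  switch (signal) {
    case 'LOW_MEMORY':
      // Headroom remains: flush buffered chunks to storage or
      // over the network before the UA has to intervene.
      return 'flush-buffered-chunks';
    case 'OUT_OF_MEMORY':
      // No headroom left anywhere: stop the recording, upload
      // what we have, then restart.
      return 'stop-and-restart';
    default:
      // No pressure reported; keep recording as normal.
      return 'continue';
  }
}
```

The point of the two-stage design is that an app ignoring both signals still degrades gracefully (the UA spills to disk on its own), while an app that does listen gets a chance to act before anything is lost.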

> In Gecko, data accumulated by the recorder is initially kept in memory.
> Once it exceeds a certain size (1MB) we transfer it to a file. So, Blobs
> below the threshold are memory-backed, Blobs above the threshold are
> file-backed. It would be possible for an application to exhaust memory by
> using a small timeslice and accumulating all the Blobs in a list, but why
> would an application need to do that? AFAIK the only reason to use a small
> timeslice is if you're going to stream the recorded data to a file or
> across the network.
> Even if an application does need to do that, I'd much rather handle the
> low-memory situation by having Gecko transparently move Blob contents from
> memory to files than by asking the Web application to handle it. I have low
> confidence Web developers would bother handling low memory, or would be
> able to handle it robustly if they did try.
> Rob
Received on Tuesday, 30 July 2013 21:16:08 UTC
