- From: Rachel Blum <groby@chromium.org>
- Date: Tue, 30 Jul 2013 14:29:59 -0700
- To: Greg Billock <gbillock@google.com>
- Cc: "Robert O'Callahan" <robert@ocallahan.org>, "public-media-capture@w3.org" <public-media-capture@w3.org>
- Message-ID: <CACmqxcynjQPFDsM8vojRReNRt7N_D7UYSU_DwFHB4zezb43Dxw@mail.gmail.com>
Curious - if we implement a disk backing store, should the disk space usage count against the temporary HTML5 storage? (In which case the app could query usage/quota for a bit more fine-grained control than LOW_MEMORY.)

- rachel

On Tue, Jul 30, 2013 at 2:15 PM, Greg Billock <gbillock@google.com> wrote:

> On Mon, Jul 29, 2013 at 4:39 PM, Robert O'Callahan <robert@ocallahan.org> wrote:
>
>> On Tue, Jul 30, 2013 at 4:34 AM, Greg Billock <gbillock@google.com> wrote:
>>
>>> In the record(timeslice) case, the implementation will probably not want
>>> to incur the latency penalty of writing each intermediate block to disk
>>> before creating the blob handle. This means some coordination is required
>>> to make sure the app doesn't let too much in-memory content accumulate.
>>> (The no-timeslice overload shouldn't have this weakness.)
>>
>> Can you give a specific example of a use-case for this?
>
> Just the one you mention below -- if the browser is keeping the blobs
> in memory, the app may for some reason keep the blobs in a list. Suppose,
> for example, I queue the blobs for sending in order via XHR, but my network
> connection is flaky. If those references pin the blobs in memory, it could
> create pressure quite quickly.
>
> Moving the blobs to disk under memory pressure is a good idea. It's
> basically the app's only option anyway in some cases. Even with that
> available, however, there's still an issue: suppose the disk is full. I
> think we need a memory-pressure (or, more generically, storage-pressure)
> signal regardless of whether the client is accepting streaming blocks. And
> in other cases, the app may prefer to take another approach to the signal
> (i.e. stop and then restart the recording).
>
> I think an implementation that sent LOW_MEMORY, then soon after began
> transferring to disk, then sent OUT_OF_MEMORY on exhaustion would be pretty
> robust in the use cases we've discussed.
>
>> In Gecko, data accumulated by the recorder is initially kept in memory.
>> Once it exceeds a certain size (1MB) we transfer it to a file. So, Blobs
>> below the threshold are memory-backed, Blobs above the threshold are
>> file-backed. It would be possible for an application to exhaust memory by
>> using a small timeslice and accumulating all the Blobs in a list, but why
>> would an application need to do that? AFAIK the only reason to use a small
>> timeslice is if you're going to stream the recorded data to a file or
>> across the network.
>>
>> Even if an application does need to do that, I'd much rather handle the
>> low-memory situation by having Gecko transparently move Blob contents from
>> memory to files than by asking the Web application to handle it. I have low
>> confidence Web developers would bother handling low memory, or would be
>> able to handle it robustly if they did try.
>>
>> Rob
>> --
>> Jtehsauts tshaei dS,o n" Wohfy Mdaon yhoaus eanuttehrotraiitny eovni
>> le atrhtohu gthot sf oirng iyvoeu rs ihnesa.r"t sS?o Whhei csha iids teoa
>> stiheer :p atroa lsyazye,d 'mYaonu,r "sGients uapr,e tfaokreg iyvoeunr,
>> 'm aotr atnod sgaoy ,h o'mGee.t" uTph eann dt hwea lmka'n? gBoutt uIp
>> waanndt wyeonut thoo mken.o w
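For context on Rachel's suggestion that the app could query usage/quota: a minimal sketch of such a check, assuming the StorageManager API (navigator.storage.estimate()). At the time of this thread the rough Chrome equivalent was the prefixed webkitTemporaryStorage.queryUsageAndQuota. The 10 MB threshold and the reaction to it are illustrative assumptions, not anything proposed in the thread.

```js
// Sketch: ask the browser how much of the origin's quota is left, and
// react before recording data exhausts it. Whether recorder data spilled
// to disk would be charged against this quota is exactly the open question.
if (navigator.storage && navigator.storage.estimate) {
  navigator.storage.estimate().then(function (estimate) {
    var used = estimate.usage;   // bytes this origin currently uses
    var quota = estimate.quota;  // bytes this origin is allowed to use
    if (quota - used < 10 * 1024 * 1024) {       // illustrative 10 MB floor
      // e.g. stop the recording, or flush and release queued Blobs
      console.warn('Storage nearly exhausted:', used, 'of', quota, 'bytes used');
    }
  });
}
```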
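The scenario Greg describes -- queuing timesliced Blobs for in-order XHR upload over a flaky network, where every queued reference pins its data -- looks roughly like the sketch below. It uses the shipped start(timeslice) method rather than the record(timeslice) name from the draft under discussion; the upload endpoint and retry policy are hypothetical.

```js
// Sketch: record with a timeslice, queue each Blob, and upload in order.
// If the network is flaky, Blobs pile up in the queue and (when they are
// memory-backed) keep their data alive -- the pressure Greg is describing.
navigator.mediaDevices.getUserMedia({ audio: true, video: true })
  .then(function (stream) {
    var recorder = new MediaRecorder(stream);
    var queue = [];       // Blobs waiting to be sent; each reference keeps its data alive
    var sending = false;

    recorder.ondataavailable = function (event) {
      queue.push(event.data);   // one Blob per timeslice
      sendNext();
    };

    function sendNext() {
      if (sending || queue.length === 0) return;
      sending = true;
      var xhr = new XMLHttpRequest();
      xhr.open('POST', '/upload');           // hypothetical endpoint
      xhr.onload = function () {
        queue.shift();                       // release the reference once sent
        sending = false;
        sendNext();
      };
      xhr.onerror = function () {
        sending = false;                     // flaky network: Blob stays queued, memory stays pinned
        setTimeout(sendNext, 5000);          // retry later
      };
      xhr.send(queue[0]);
    }

    recorder.start(1000);  // timeslice in ms: a dataavailable event roughly every second
  });
```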
Received on Thursday, 1 August 2013 09:20:46 UTC