- From: Robert O'Callahan <robert@ocallahan.org>
- Date: Tue, 30 Jul 2013 11:39:11 +1200
- To: Greg Billock <gbillock@google.com>
- Cc: "public-media-capture@w3.org" <public-media-capture@w3.org>
- Message-ID: <CAOp6jLZgL9arAbby4ZqOC_OwUQ_F_NgJaRT0Lzx=b7-ctKs7Nw@mail.gmail.com>
On Tue, Jul 30, 2013 at 4:34 AM, Greg Billock <gbillock@google.com> wrote:

> In the record(timeslice) case, the implementation will probably not want
> to incur the latency penalty of writing each intermediate block to disk
> before creating the blob handle. This means some coordination is required
> to make sure the app doesn't let too much in-memory content accumulate.
> (The no-timeslice overload shouldn't have this weakness.)

Can you give a specific example of a use-case for this?

In Gecko, data accumulated by the recorder is initially kept in memory. Once it exceeds a certain size (1MB) we transfer it to a file. So Blobs below the threshold are memory-backed, and Blobs above the threshold are file-backed.

It would be possible for an application to exhaust memory by using a small timeslice and accumulating all the Blobs in a list, but why would an application need to do that? AFAIK the only reason to use a small timeslice is if you're going to stream the recorded data to a file or across the network.

Even if an application does need to do that, I'd much rather handle the low-memory situation by having Gecko transparently move Blob contents from memory to files than by asking the Web application to handle it. I have low confidence Web developers would bother handling low memory, or would be able to handle it robustly if they did try.

Rob
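[Editor's sketch of the small-timeslice streaming use-case discussed above: each Blob is uploaded as soon as `ondataavailable` delivers it, so the page never accumulates a long list of Blobs. The `PendingChunks` class and the `uploadChunk` helper are illustrative assumptions, not part of the MediaRecorder spec; the 1MB limit mirrors the Gecko threshold mentioned in the message.]

```javascript
// A tiny bounded counter tracking how many bytes of recorded data are
// still waiting to be uploaded -- the "don't let too much in-memory
// content accumulate" concern from the thread, made concrete.
class PendingChunks {
  constructor(limitBytes = 1 << 20) { // 1MB, the Gecko threshold cited above
    this.limitBytes = limitBytes;
    this.pending = [];
    this.pendingBytes = 0;
  }
  push(size) {
    this.pending.push(size);
    this.pendingBytes += size;
    // false signals the caller that buffered data exceeds the limit
    return this.pendingBytes <= this.limitBytes;
  }
  shift() {
    const size = this.pending.shift();
    if (size !== undefined) this.pendingBytes -= size;
    return size;
  }
}

// Browser-only portion: record with a 1-second timeslice and upload each
// chunk as it arrives. Guarded so the file also loads outside a browser.
if (typeof MediaRecorder !== "undefined" && typeof navigator !== "undefined") {
  navigator.mediaDevices.getUserMedia({ audio: true }).then((stream) => {
    const recorder = new MediaRecorder(stream);
    const queue = new PendingChunks();
    recorder.ondataavailable = (event) => {
      if (!queue.push(event.data.size)) {
        console.warn("more than 1MB of un-uploaded recording buffered");
      }
      // uploadChunk is a hypothetical app-supplied function that POSTs the
      // Blob somewhere; when it resolves, the chunk leaves the pending count.
      uploadChunk(event.data).then(() => queue.shift());
    };
    recorder.start(1000); // timeslice: deliver a Blob roughly every second
  });
}
```

With this pattern only one timeslice worth of data (plus whatever is in flight) is held by the application at once, which is why the small-timeslice case need not accumulate unbounded memory.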
Received on Monday, 29 July 2013 23:39:38 UTC