- From: Joshua Bell <jsbell@google.com>
- Date: Tue, 3 Dec 2013 11:55:52 -0800
- To: Aymeric Vitte <vitteaymeric@gmail.com>
- Cc: "Web Applications Working Group WG (public-webapps@w3.org)" <public-webapps@w3.org>
- Message-ID: <CAD649j7wBw9m4mLH4NMX_yi+SDmYLmS8MVE3PPz++9BLQo8etQ@mail.gmail.com>
On Tue, Dec 3, 2013 at 4:07 AM, Aymeric Vitte <vitteaymeric@gmail.com> wrote:

> I am aware of [1], and really waiting for this to be available.
>
> So you are suggesting something like {id:file_id, chunk1:chunk1,
> chunk2:chunk2, etc}?

No, because you'd still have to fetch, modify, and re-insert the value each
time. Hopefully implementations store blobs by reference so that doesn't
involve huge data copies, at least.

I was imagining that if you're building up a record in a store with primary
key file_id, you could store the chunks as entirely separate records with
primary keys [file_id, 1], [file_id, 2], etc., either in the same store or
in a separate chunk store. Once the last chunk arrives, fetch all the
chunks and delete those records.

> Related to [1] I have tried a "workaround" (not for fun, but because I
> needed to test with at least two different browsers): store the chunks as
> ArrayBuffers in an Array {id:file_id, [chunk1, chunk2,... ]}. After
> testing different methods, the idea was to do new Blob([chunk1,
> chunk2,... ]) on query and avoid creating a big ArrayBuffer on update.
>
> Unfortunately, with my configuration, Chrome crashes systematically on
> update for "big" files (tested with a 250 MB file and chunks of 2 MB,
> which does not seem to be anything really enormous).

Please file a bug at http://crbug.com if you can reproduce it.

> Then I was thinking of using different keys as you suggest, but maybe
> that's not very easy to manipulate, and you still have to use an Array to
> concatenate; what's the best method?
>
> Regards,
>
> Aymeric
>
> [1] http://code.google.com/p/chromium/issues/detail?id=108012
>
> On 02/12/2013 23:38, Joshua Bell wrote:
>
> On Mon, Dec 2, 2013 at 9:26 AM, Aymeric Vitte <vitteaymeric@gmail.com> wrote:
>
>> This is about retrieving a large file with partial data and storing it
>> incrementally in indexedDB.
>
> ...
>
>> This does not seem efficient at all; was the possibility of appending
>> data directly in indexedDB never discussed?
>
> You're correct: IndexedDB doesn't have a notion of updating part of a
> value, or even querying part of a value (other than via indexes). We've
> received developer feedback that partial data update and query would both
> be valuable, but we haven't put significant thought into how they would
> be implemented. Conceivably you could imagine an API for "get" or "put"
> with an additional keypath into the object. We (Chromium) currently treat
> the stored value as opaque, so we'd need to deserialize/reserialize the
> entire thing anyway unless we added extra smarts in there, at which point
> a smart caching layer implemented in JS and tuned for the webapp might be
> more effective.
>
> Blobs are pesky since they're not mutable. So even with the above
> hand-waved API you'd still be paying for a fetch/concatenate/store.
> (FWIW, Chromium's support for Blobs in IndexedDB is still in progress, so
> this is all in the abstract.)
>
> I think the best advice at the moment for dealing with incremental data
> in IDB is to store the chunks under separate keys, and concatenate either
> when all of the data has arrived or lazily on use.
>
> --
> Peersm : http://www.peersm.com
> node-Tor : https://www.github.com/Ayms/node-Tor
> GitHub : https://www.github.com/Ayms
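[Editor's sketch of the chunk-per-record pattern described above: each chunk
is its own record under a composite key [fileId, seq], and the final Blob is
assembled lazily with the Blob constructor rather than by building one big
ArrayBuffer. The database name, store name, and the saveChunk/assembleFile
helpers are illustrative assumptions, not part of the thread or any spec.]

// Hypothetical setup: one object store whose composite primary key
// [fileId, seq] keeps a file's chunks grouped and ordered, so a single
// key range can address all of them.
const openReq = indexedDB.open('downloads', 1);
openReq.onupgradeneeded = () => {
  openReq.result.createObjectStore('chunks', { keyPath: ['fileId', 'seq'] });
};

// Append one chunk as its own record -- no fetch/modify/re-insert of a
// growing value on every update.
function saveChunk(db, fileId, seq, arrayBuffer) {
  return new Promise((resolve, reject) => {
    const tx = db.transaction('chunks', 'readwrite');
    tx.objectStore('chunks').put({ fileId, seq, data: arrayBuffer });
    tx.oncomplete = resolve;
    tx.onerror = () => reject(tx.error);
  });
}

// Once the last chunk has arrived: walk the key range with a cursor,
// collect the ArrayBuffers, delete the now-consumed chunk records, and
// concatenate via new Blob([...]), which references the buffers instead
// of copying them into one giant ArrayBuffer.
function assembleFile(db, fileId, mimeType) {
  return new Promise((resolve, reject) => {
    const tx = db.transaction('chunks', 'readwrite');
    const store = tx.objectStore('chunks');
    const range = IDBKeyRange.bound([fileId, 0], [fileId, Infinity]);
    const parts = [];
    const cursorReq = store.openCursor(range);
    cursorReq.onsuccess = () => {
      const cursor = cursorReq.result;
      if (cursor) {
        parts.push(cursor.value.data); // collect this chunk's buffer
        cursor.delete();               // drop the chunk record
        cursor.continue();
      }
    };
    tx.oncomplete = () => resolve(new Blob(parts, { type: mimeType }));
    tx.onerror = () => reject(tx.error);
  });
}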
Received on Tuesday, 3 December 2013 19:56:19 UTC