- From: Aymeric Vitte <vitteaymeric@gmail.com>
- Date: Mon, 09 Dec 2013 19:12:59 +0100
- To: Joshua Bell <jsbell@google.com>
- CC: Jonas Sicking <jonas@sicking.cc>, "Web Applications Working Group WG (public-webapps@w3.org)" <public-webapps@w3.org>
- Message-ID: <52A6082B.30703@gmail.com>
I have implemented the {[file_id, i], data:chunk} method.
This works better (notwithstanding [1] for FF, and the fact that Chrome
still crashes after the chunks are concatenated and stored back into IDB;
I will file a bug), but it seems a bit slow.
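For context, a minimal sketch of how such a store could be set up (the
database name 'test_db' is an assumption; the keyPath 'k' is inferred from
the code below):

var openReq=indexedDB.open('test_db',1); // database name assumed
openReq.onupgradeneeded=function(evt) {
  // one record per chunk, keyed on the compound key [file_id, chunk_nb]
  evt.target.result.createObjectStore('test',{keyPath:'k'});
};
openReq.onsuccess=function(evt) {
  db=evt.target.result;
};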
Two questions about the proper use of IDB and Blobs:
1- storing chunks, the code looks like:
var tx=db.transaction(['test'],'readwrite');
var objectStore=tx.objectStore('test');
// one record per chunk, keyed on [file_id, chunk_nb]
objectStore.put({k:[request.file_id,request.chunk_nb],data:data});
//tx.oncomplete=function() {
var queue=request.queue_;
queue.shift();
if (queue.length) {
  queue[0]();
}
//};
Should the commented part stay commented out (or be removed), or should
the queue only advance in tx.oncomplete?
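A sketch of the alternative I am asking about, where the queue advances
only once the transaction has committed (assuming the same request/queue
structure as above):

var tx=db.transaction(['test'],'readwrite');
tx.objectStore('test').put({k:[request.file_id,request.chunk_nb],data:data});
// advance the queue only after the put has been committed
tx.oncomplete=function() {
  var queue=request.queue_;
  queue.shift();
  if (queue.length) {
    queue[0]();
  }
};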
2- fetching chunks (I am not sure how to fetch the [file_id, i] keys
efficiently, nor whether the blob should be grown incrementally or
concatenated once we have all the chunks):
var i=0;
var blob=new Blob();
var a=chunkStore.get([request.file_id,i]);
a.onsuccess=function(evt) {
  var res=evt.target.result;
  if (res) {
    // append the chunk to the blob, delete the record, fetch the next one
    blob=new Blob([blob,res.data],{type:type});
    chunkStore.delete([request.file_id,i]);
    i++;
    a=chunkStore.get([request.file_id,i]);
    a.onsuccess=this.onsuccess;
  }
};
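One way to answer the efficiency question, as a sketch: a single cursor
over a key range bound to the file, collecting the chunks into an array
and concatenating once at the end, which avoids creating an intermediate
Blob per chunk (the [request.file_id,Infinity] upper bound assumes
numeric chunk indexes):

var chunks=[];
var range=IDBKeyRange.bound([request.file_id,0],[request.file_id,Infinity]);
chunkStore.openCursor(range).onsuccess=function(evt) {
  var cursor=evt.target.result;
  if (cursor) {
    chunks.push(cursor.value.data); // keep a reference to each chunk
    cursor.delete(); // remove the record once read
    cursor.continue();
  } else {
    var blob=new Blob(chunks,{type:type}); // concatenate once at the end
  }
};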
Regards
Aymeric
[1] https://bugzilla.mozilla.org/show_bug.cgi?id=947994
On 04/12/2013 at 18:04, Joshua Bell wrote:
> On Wed, Dec 4, 2013 at 2:13 AM, Aymeric Vitte
> <vitteaymeric@gmail.com> wrote:
>
> OK for the different records, but just to understand correctly:
> when you fetch {chunk1, chunk2, etc} or [chunk1, chunk2, etc],
> does it do anything other than keep references to the chunks
> and store them again with (new?) references, if you didn't do
> anything with the chunks?
>
>
> I believe you understand correctly, assuming a reasonable[1] IDB
> implementation. Updating one record with multiple chunk references vs.
> storing one record per chunk really comes down to personal preference.
>
> [1] A conforming IDB implementation *could* store blobs by copying the
> data into the record, which would be extremely slow. Gecko uses
> references (per Jonas); Chromium will as well, so updating a record
> with [chunk1, chunk2, ...] shouldn't be significantly slower than
> updating a record not containing Blobs. In Chromium's case there will
> be extra book-keeping going on but no huge data copies.
>
> Regards
>
> Aymeric
>
> On 03/12/2013 at 22:12, Jonas Sicking wrote:
>
> On Tue, Dec 3, 2013 at 11:55 AM, Joshua Bell
> <jsbell@google.com> wrote:
>
> On Tue, Dec 3, 2013 at 4:07 AM, Aymeric Vitte
> <vitteaymeric@gmail.com> wrote:
>
> I am aware of [1], and really waiting for this to be available.
>
> So you are suggesting something like {id:file_id,
> chunk1:chunk1, chunk2:chunk2, etc}?
>
> No, because you'd still have to fetch, modify, and re-insert
> the value each time. Hopefully implementations store blobs by
> reference so that doesn't involve huge data copies, at least.
>
> That's what the Gecko implementation does. When you read a Blob
> from IndexedDB and then store the same Blob again, that will not
> copy any of the Blob data, but simply create another reference
> to the already existing data.
>
> / Jonas
>
--
Peersm : http://www.peersm.com
node-Tor : https://www.github.com/Ayms/node-Tor
GitHub : https://www.github.com/Ayms