Re: IndexedDB, Blobs and partial Blobs - Large Files

OK for the different records, but just to be sure I understand correctly:
when you fetch {chunk1, chunk2, etc} or [chunk1, chunk2, etc], does it do
anything more than keep references to the chunks and store them again
under (new?) references, assuming you didn't modify the chunks?

Regards

Aymeric

On 03/12/2013 22:12, Jonas Sicking wrote:
> On Tue, Dec 3, 2013 at 11:55 AM, Joshua Bell <jsbell@google.com> wrote:
>> On Tue, Dec 3, 2013 at 4:07 AM, Aymeric Vitte <vitteaymeric@gmail.com>
>> wrote:
>>> I am aware of [1], and really waiting for this to be available.
>>>
>>> So you are suggesting something like {id:file_id, chunk1:chunk1,
>>> chunk2:chunk2, etc}?
>> No, because you'd still have to fetch, modify, and re-insert the value each
>> time. Hopefully implementations store blobs by reference so that doesn't
>> involve huge data copies, at least.
> That's what the Gecko implementation does. When reading a Blob from
> IndexedDB and then storing the same Blob again, none of the Blob data
> is copied; it simply creates another reference to the already
> existing data.
>
> / Jonas

-- 
Peersm : http://www.peersm.com
node-Tor : https://www.github.com/Ayms/node-Tor
GitHub : https://www.github.com/Ayms

Received on Wednesday, 4 December 2013 10:14:09 UTC