
Re: [IndexedDB] Two Real World Use-Cases

From: Joran Greef <joran@ronomon.com>
Date: Thu, 3 Mar 2011 11:15:49 +0200
Message-Id: <431802B7-68E7-4E29-948C-3CE20651BB1D@ronomon.com>
To: public-webapps@w3.org
Hi Jonas

I have been trying out your suggestion of using a separate object store to do manual indexing (and so support compound indexes or index object properties with arrays as values).
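For readers following along, here is a minimal sketch of what that manual-indexing scheme looks like (the store names, key format, and helper names are mine, not from the thread): each index entry becomes one record in a separate "indexes" object store, keyed by a compound string of index name, indexed value, and object key, so a key-range scan over "indexName/value/" finds all members.

```javascript
// Sketch of manual indexing via a separate object store (illustrative,
// not part of the IndexedDB spec). Each index entry is a record keyed
// "indexName/indexedValue/objectKey".

// Pure helper: build the index records for one object.
function buildIndexRecords(indexName, values, objectKey) {
  return values.map(function (value) {
    return { key: indexName + '/' + value + '/' + objectKey };
  });
}

// How it would be used against IndexedDB (db is an open IDBDatabase with
// object stores "objects" and "indexes"). Note the one extra put() per
// index entry -- 50 of these per email is where the latency goes.
function putWithManualIndexes(db, object, objectKey, words) {
  var tx = db.transaction(['objects', 'indexes'], 'readwrite');
  tx.objectStore('objects').put(object, objectKey);
  var indexes = tx.objectStore('indexes');
  buildIndexRecords('words', words, objectKey).forEach(function (record) {
    indexes.put(record, record.key);
  });
  return tx;
}
```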

There are some problems with this approach:

1. It's far too slow. To put an object and insert 50 index records (typical when updating an inverted index) this way takes 100ms using IDB versus 10ms using WebSQL (with a separate indexes table and compound primary key on index name and object key). For instance, my application has a real requirement to replicate 4,000,000 emails between client and server and I would not be prepared to accept latencies of 100ms to store each object. That's more than the network latency.

2. It's a waste of space.

Using a separate object store to do manual indexing may work in theory, but it does not work in practice. I do not think it can be suggested as a solution even as a temporary measure, let alone a panacea.

We can fix all of this right now very simply:

1. Enable objectStore.put and objectStore.delete to accept a setIndexes option and an unsetIndexes option. The value passed for either option would be an array of index names (strings).

2. The object would first be removed as a member from any indexes referenced by the unsetIndexes option. Any referenced indexes which would be empty thereafter would be removed.

3. The object would then be added as a member to any indexes referenced by the setIndexes option. Any referenced indexes which do not yet exist would be created.
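The three steps above can be modelled in a few lines. This is only an in-memory sketch of the proposed semantics (setIndexes and unsetIndexes are this proposal, not part of the IndexedDB API; createIndexModel is a name of my own): each index is a set of object keys, dropped when it empties and created on demand.

```javascript
// In-memory model of the proposed put semantics (hypothetical API).
// Indexes are sets of object keys, created and dropped on demand.
function createIndexModel() {
  var indexes = new Map(); // index name -> Set of object keys

  return {
    // Mirrors the proposed objectStore.put(..., { setIndexes, unsetIndexes }).
    put: function (objectKey, options) {
      // Step 2: remove membership; drop any index left empty.
      (options.unsetIndexes || []).forEach(function (name) {
        var members = indexes.get(name);
        if (!members) return;
        members.delete(objectKey);
        if (members.size === 0) indexes.delete(name);
      });
      // Step 3: add membership; create any missing index.
      (options.setIndexes || []).forEach(function (name) {
        if (!indexes.has(name)) indexes.set(name, new Set());
        indexes.get(name).add(objectKey);
      });
    },
    members: function (name) {
      return indexes.has(name) ? Array.from(indexes.get(name)) : [];
    },
    exists: function (name) {
      return indexes.has(name);
    }
  };
}
```

Because membership changes ride along with the put itself, the engine can apply them in one transaction and one round trip, which is where the performance win over fifty separate index-record puts would come from.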

This would provide the much-needed indexing capabilities presently lacking in IDB without sacrificing performance.

It would also enable developers to use IDB statefully (MySQL-like pre-defined schemas, with the DB taking on the complexities of schema migration and data migration) or statelessly (as with Berkeley DB, with the application responsible for the complexities of data maintenance) rather than enforcing an assumption at such an early stage.

Regards

Joran Greef
Received on Thursday, 3 March 2011 09:16:30 GMT
