Re: [IndexedDB] Two Real World Use-Cases

On 3 March 2011 09:15, Joran Greef <joran@ronomon.com> wrote:

> Hi Jonas
>
> I have been trying out your suggestion of using a separate object store to
> do manual indexing (and so support compound indexes or index object
> properties with arrays as values).
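>
> For reference, this is roughly what the manual approach looks like (a
> sketch; the store names and the string key encoding are mine):
>
>   // One record per index entry in a second object store, keyed by
>   // "indexName/objectKey" so entries for one index sort together.
>   function putWithIndexes(db: IDBDatabase, key: string, value: object,
>                           indexNames: string[]): void {
>     const tx = db.transaction(['objects', 'indexes'], 'readwrite');
>     tx.objectStore('objects').put(value, key);
>     const indexes = tx.objectStore('indexes');
>     for (const name of indexNames) {
>       // 50 index entries means 50 separate put calls.
>       indexes.put({ objectKey: key }, name + '/' + key);
>     }
>   }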
>
> There are some problems with this approach:
>
> 1. It's far too slow. To put an object and insert 50 index records (typical
> when updating an inverted index) this way takes 100ms using IDB, versus 10ms
> using WebSQL (with a separate indexes table and a compound primary key on
> index name and object key). My application has a real requirement to
> replicate 4,000,000 emails between client and server, and at 100ms per
> object that is over 111 hours of write time alone. I would not be prepared
> to accept a 100ms latency to store each object. That's more than the
> network latency.
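>
> For comparison, a sketch of the WebSQL version (the table and database
> names are mine):
>
>   // A compound primary key on (name, key) lets SQLite do the
>   // per-entry bookkeeping natively.
>   const db: any = (window as any).openDatabase(
>     'mail', '1', 'mail', 5 * 1024 * 1024);
>   db.transaction((tx: any) => {
>     tx.executeSql(
>       'CREATE TABLE IF NOT EXISTS indexes ' +
>       '(name TEXT, key TEXT, PRIMARY KEY (name, key))');
>     tx.executeSql(
>       'INSERT OR REPLACE INTO indexes (name, key) VALUES (?, ?)',
>       ['from:alice', 'message-1']);
>   });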
>
> 2. It's a waste of space. Each index entry becomes a full record in the
> second object store, duplicating the object's key once for every index the
> object appears in.
>
> Using a separate object store to do manual indexing may work in theory, but
> it does not work in practice. I do not think it can seriously be suggested
> as a solution, however temporary it may be.
>
> We can fix all of this right now very simply:
>
> 1. Enable objectStore.put and objectStore.delete to accept a setIndexes
> option and an unsetIndexes option. The value passed for either option would
> be an array (string list) of index references.
>
> 2. The object would first be removed as a member from any indexes
> referenced by the unsetIndexes option. Any referenced indexes which would be
> empty thereafter would be removed.
>
> 3. The object would then be added as a member to any indexes referenced by
> the setIndexes option. Any referenced indexes which do not yet exist would
> be created.
>
> This would provide the much-needed indexing capabilities presently lacking
> in IDB without sacrificing performance.
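>
> A minimal sketch of what the proposed calls might look like (the store and
> index names are mine; this is the proposed extension, not the current API):
>
>   // Hypothetical API, per the proposal above: index membership
>   // travels with the put/delete call itself.
>   declare const db: IDBDatabase;
>   const store: any =
>     db.transaction(['emails'], 'readwrite').objectStore('emails');
>   store.put({ from: 'alice' }, 'message-1',
>             { setIndexes: ['from:alice', 'word:invoice'] });
>   store.delete('message-1',
>                { unsetIndexes: ['from:alice', 'word:invoice'] });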
>
> It would also enable developers to use IDB statefully (MySQL-like
> pre-defined schemas, with the DB taking on the complexities of schema and
> data migration) or statelessly (as with Berkeley DB, where the application
> is responsible for the complexities of data maintenance) rather than
> enforcing an assumption at such an early stage.
>
> Regards
>
> Joran Greef
>


Why would this be faster? Surely most of the time in inserting the 50 index
records is the search time within the index, and the JavaScript function
call overhead would be minimal (it's only 50 calls; even at, say, 0.1ms per
call that is 5ms, nowhere near the 90ms difference)?
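
A quick way to check where the time goes would be to time the batch
directly (a rough sketch, assuming 'objects' and 'indexes' stores already
exist):

  // Time one object put plus 50 index-record puts in one transaction,
  // to separate the per-call cost from the transaction commit cost.
  function timeBatch(db: IDBDatabase, done: (ms: number) => void): void {
    const start = Date.now();
    const tx = db.transaction(['objects', 'indexes'], 'readwrite');
    tx.objectStore('objects').put({ body: '...' }, 'message-1');
    const indexes = tx.objectStore('indexes');
    for (let i = 0; i < 50; i++) {
      indexes.put({ objectKey: 'message-1' }, 'term-' + i + '/message-1');
    }
    // oncomplete fires only after the transaction has committed.
    tx.oncomplete = () => done(Date.now() - start);
  }

If the loop returns almost immediately and the remaining time elapses
before oncomplete fires, the cost is in the commit rather than in the 50
calls.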

Cheers,
Keean.

Received on Thursday, 3 March 2011 09:27:18 UTC