Re: [IndexedDB] Two Real World Use-Cases

Do you expect that allowing multiple keys per object in a store will
improve performance? Presumably it will save space, but it may also make
things slower because of the need to reference-count. Will we assume
that two identical objects inserted under different keys, in different
store commands, are not references to the same object?


Cheers,
Keean


On 5 March 2011 01:50, Jonas Sicking <jonas@sicking.cc> wrote:

> Like I said, I agree that we need to do something to allow for more
> powerful indexes. We already have two options for allowing essentially
> arbitrary indexes:
>
> 1. Use a separate objectStore which is manually managed.
> 2. Modify the object before inserting it to add a special property
> which can then be indexed.
>
> There are downsides with both solutions. The former is a bit more work
> and might have performance impact. The latter requires modifying the
> data as it goes into the objectStore.
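Option 2 above can be sketched in a few lines: the object is given an extra derived property before `put`, and an ordinary index is declared over that property. The helper name and the `_bySenderDate` property are illustrative, not from any spec.

```javascript
// Sketch of option 2: derive a special property before insertion so a
// plain IndexedDB index can cover it. The helper and the
// '_bySenderDate' property name are illustrative assumptions.
function withDerivedKey(email) {
  // Copy the object and add a property combining two fields, emulating
  // a compound index over (sender, date). '\u0000' separates the parts.
  return Object.assign({}, email, {
    _bySenderDate: email.sender + '\u0000' + email.date,
  });
}

// During a version-change transaction (illustrative):
//   store.createIndex('bySenderDate', '_bySenderDate');
// On every write:
//   store.put(withDerivedKey(email));
```

The downside Jonas notes is visible here: the stored record now carries a `_bySenderDate` property the application never asked for.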
>
> For version 2 we should come up with something better. If it ends up
> being what you are proposing, or something like the function I was
> suggesting, or both, or neither, that remains to be seen.
>
> What we do need to do sooner rather than later, though, is allow
> multiple index values for a given entry using arrays. We also need to
> add support for compound keys. But let's deal with those issues in a
> separate thread.
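Both features Jonas mentions did later land in IndexedDB: multiEntry indexes cover array-valued properties, and array key paths cover compound keys. A minimal sketch of a schema using them (the store and index names are illustrative):

```javascript
// Sketch of the two index features discussed above, using the
// IndexedDB API as it eventually shipped. Store and index names are
// illustrative assumptions, not from the thread.
function createEmailIndexes(db) {
  const store = db.createObjectStore('emails', { keyPath: 'id' });
  // Multiple index values per entry: with multiEntry, an array-valued
  // 'tags' property yields one index record per array element.
  store.createIndex('byTag', 'tags', { multiEntry: true });
  // Compound key: an array keyPath indexes on [sender, date] pairs.
  store.createIndex('bySenderDate', ['sender', 'date']);
  return store;
}
```

This would run inside an `onupgradeneeded` handler, where `db` is the `IDBDatabase` being upgraded.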
>
> / Jonas
>
> On Thu, Mar 3, 2011 at 1:26 AM, Keean Schupke <keean@fry-it.com> wrote:
> > On 3 March 2011 09:15, Joran Greef <joran@ronomon.com> wrote:
> >>
> >> Hi Jonas
> >>
> >> I have been trying out your suggestion of using a separate object
> >> store to do manual indexing (and so support compound indexes, or
> >> index object properties with arrays as values).
> >>
> >> There are some problems with this approach:
> >>
> >> 1. It's far too slow. To put an object and insert 50 index records
> >> (typical when updating an inverted index) this way takes 100ms using
> >> IDB, versus 10ms using WebSQL (with a separate indexes table and a
> >> compound primary key on index name and object key). For instance, my
> >> application has a real requirement to replicate 4,000,000 emails
> >> between client and server, and I would not be prepared to accept a
> >> latency of 100ms to store each object. That's more than the network
> >> latency.
> >>
> >> 2. It's a waste of space.
> >>
> >> Using a separate object store to do manual indexing may work in
> >> theory, but it does not work in practice. I do not think it can even
> >> be remotely suggested as a panacea, however temporary it may be.
> >>
> >> We can fix all of this right now very simply:
> >>
> >> 1. Enable objectStore.put and objectStore.delete to accept a
> >> setIndexes option and an unsetIndexes option. The value passed for
> >> either option would be an array (string list) of index references.
> >>
> >> 2. The object would first be removed as a member from any indexes
> >> referenced by the unsetIndexes option. Any referenced indexes which
> >> would be empty thereafter would be removed.
> >>
> >> 3. The object would then be added as a member to any indexes
> >> referenced by the setIndexes option. Any referenced indexes which do
> >> not yet exist would be created.
> >>
> >> This would provide the much-needed indexing capabilities presently
> >> lacking in IDB without sacrificing performance.
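The proposed setIndexes/unsetIndexes options never shipped in IndexedDB, but the semantics of steps 1-3 can be sketched over plain Maps. Everything here, the option names included, is hypothetical and follows the proposal only:

```javascript
// Hypothetical model of the proposed setIndexes/unsetIndexes options:
// each named index is a Set of object keys, created on first use and
// dropped once empty. A sketch of the proposal's semantics, not a real
// IndexedDB API.
const indexes = new Map(); // index name -> Set of object keys

function put(key, { setIndexes = [], unsetIndexes = [] } = {}) {
  // Step 2: remove the object from each referenced index, removing any
  // index that would be empty thereafter.
  for (const name of unsetIndexes) {
    const members = indexes.get(name);
    if (members) {
      members.delete(key);
      if (members.size === 0) indexes.delete(name);
    }
  }
  // Step 3: add the object to each referenced index, creating any index
  // that does not yet exist.
  for (const name of setIndexes) {
    if (!indexes.has(name)) indexes.set(name, new Set());
    indexes.get(name).add(key);
  }
}
```

For an inverted index over email bodies, `put('msg1', { setIndexes: ['term:hello'] })` would create the `term:hello` index on first use, and a later `put('msg1', { unsetIndexes: ['term:hello'] })` would drop it again once empty.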
> >>
> >> It would also enable developers to use IDB statefully (MySQL-like
> >> pre-defined schemas, with the DB taking on the complexities of
> >> schema and data migration) or statelessly (as with Berkeley DB, with
> >> the application responsible for the complexities of data
> >> maintenance), rather than enforcing an assumption at such an early
> >> stage.
> >>
> >> Regards
> >>
> >> Joran Greef
> >
> >
> > Why would this be faster? Surely most of the time spent inserting
> > the 50 index records is the search time of the index, and the
> > JavaScript function call overhead would be minimal (it's only 50
> > calls)?
> >
> > Cheers,
> > Keean.
>

Received on Saturday, 5 March 2011 19:28:18 UTC