Re: IndexedDB: ambiguity around createIndex order-of-operations

On Mon, Aug 13, 2012 at 11:16 AM, Alec Flett <alecflett@google.com> wrote:
> jsbell and I have been discussing a possible ambiguity in the IndexedDB spec
> w.r.t. error handling around createIndex calls.
>
>
> In particular, createIndex() is supposed to behave somewhat synchronously in
> that calling:
>
>>
>> the implementation must create a new index and return an IDBIndex object
>> representing it.
>
>
> so that this is reasonable:
> objectStore.createIndex('foo',...)
> objectStore.put(...)
> objectStore.index('foo').get(...)
>
> But at the same time createIndex() behaves somewhat asynchronously - while
> the metadata for the index needs to be there immediately, the actual
> indexing data doesn't have to:
>
>> In some implementations it's possible for the implementation to
>> asynchronously run into problems creating the index after the createIndex
>> function has returned. For example in implementations where metadata about
>> the newly created index is queued up to be inserted into the database
>> asynchronously, or where the implementation might need to ask the user for
>> permission for quota reasons. Such implementations must still create and
>> return an IDBIndex object. Instead, once the implementation realizes that
>> creating the index has failed, it must abort the transaction using the steps
>> for aborting a transaction using the appropriate error as error parameter.
>
>
> The issue in question is how to handle this:
>
> objectStore.put({"foo": 1, "message": "hello"});
> req = objectStore.put({"foo": 1, "message": "goodbye"});
> objectStore.createIndex("foo", "foo", {unique: true});  // will fail asynchronously
>
> The question is, should req's onerror fire or not? Depending on the
> implementation, createIndex() could fully create and populate the 'foo'
> index before the puts are serviced, which means that by the time the 2nd
> put() happens, the index already says that the put is invalid. On the
> other hand, if the actual indexing happens later (asynchronously), but in
> the order written (i.e. put(), put(), createIndex()), then the 2nd put
> would succeed, and THEN the index build would fail. In either case the
> transaction is aborted.
>
> From a developer's perspective, I feel like making the 2nd put() fail is
> confusing, because it seems strange that a later API call (createIndex)
> could make an earlier put() fail - you might remove the createIndex() to
> debug the code and then magically the put() would succeed! On the other
> hand, that behavior does allow the caller to call preventDefault() on the
> error event, which could keep the transaction from aborting.
>
> Either way, this is a fairly degenerate case, and I feel like we should
> optimize this behavior for debuggability, since normal usage patterns of
> IndexedDB shouldn't be doing this.

I think the two puts need to succeed. The implementation would be very
complex and suboptimal otherwise: you would need to know that there is a
pending index-create operation and hold back the success events for all
requests until both the requests and the index-create operation have
succeeded.
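To make the buffering cost concrete, here is a minimal sketch (hypothetical names, no real IndexedDB) of what the rejected semantics would force on an implementation: completed requests could not fire their events as results arrive, because a pending createIndex that later fails would have to flip buffered successes into errors.

```javascript
// Hypothetical model: requestResults are the already-known outcomes of
// earlier requests; pendingCreateIndex is null, or an object describing a
// still-unsettled index build.
function fireEvents(requestResults, pendingCreateIndex) {
  const fired = [];
  if (pendingCreateIndex === null || pendingCreateIndex.succeeded) {
    // no pending index build (or it succeeded): fire each result as-is
    for (const r of requestResults) fired.push(r);
  } else {
    // a failed createIndex retroactively turns buffered successes into
    // errors - the confusing behavior argued against above
    for (const r of requestResults) fired.push("error");
  }
  return fired;
}

// With no pending createIndex, results fire immediately:
const immediate = fireEvents(["success", "success"], null);
// With a pending createIndex that fails, both buffered puts become errors:
const flipped = fireEvents(["success", "success"], { succeeded: false });
```

The point of the sketch is that every request event would have to wait on the createIndex outcome before firing, instead of firing in request order.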

On top of that, I think you can get circular dependencies: if one of the
two put operations failed for reasons unrelated to the index-create, then
the index-create operation would succeed - so the put's outcome would
depend on the index-create, and the index-create's outcome on the put.
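That circularity can be made explicit with a toy dependency graph (the operation names are assumptions for illustration, not anything in the spec): under the "earlier puts can fail" semantics, put2's visible outcome waits on createIndex, while createIndex's back-fill depends on which puts actually committed.

```javascript
// Hypothetical dependency graph for the example in the thread.
const deps = new Map([
  ["put1", []],
  ["put2", ["createIndex"]],         // put2's outcome waits on the index
  ["createIndex", ["put1", "put2"]], // back-fill depends on committed puts
]);

// Simple DFS cycle detector: a back edge to a node still being visited
// means the graph has a cycle.
function hasCycle(graph) {
  const state = new Map(); // node -> "visiting" | "done"
  function visit(node) {
    if (state.get(node) === "done") return false;
    if (state.get(node) === "visiting") return true; // back edge => cycle
    state.set(node, "visiting");
    for (const dep of graph.get(node) || []) {
      if (visit(dep)) return true;
    }
    state.set(node, "done");
    return false;
  }
  for (const node of graph.keys()) {
    if (visit(node)) return true;
  }
  return false;
}

const circular = hasCycle(deps);
// circular === true: the two outcomes each depend on the other
```

Treating createIndex as an ordinary queued operation (as described below for Gecko) removes the put2 → createIndex edge and breaks the cycle.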

The way we handle this in Gecko is that we treat index-create as a normal
async operation. We do create the metadata on the main thread so that we
can return the index object synchronously, but otherwise the index-create
operation is a normal async operation which runs on the database thread in
the same order as normal requests. The only difference is that we don't
expose the request object anywhere.
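The queue-ordered behavior described above can be sketched as follows (a simulation with assumed names, not Gecko's actual code and not real IndexedDB): createIndex is enqueued like any other request, so both puts are serviced first, and the index back-fill then hits the unique constraint and aborts the transaction.

```javascript
// Hypothetical model of a transaction as an ordered queue of operations.
function runTransaction(ops) {
  const results = [];
  const store = [];     // records committed so far within the transaction
  let aborted = false;

  for (const op of ops) {
    if (aborted) break; // an abort undoes the transaction's work
    if (op.type === "put") {
      // serviced in queue order; no unique index exists yet, so it succeeds
      store.push(op.value);
      results.push("success");
    } else if (op.type === "createIndex") {
      // back-fill the index from already-committed records; a duplicate
      // key violates {unique: true} and aborts the transaction
      const seen = new Set();
      const duplicate = store.some(v => {
        if (seen.has(v[op.keyPath])) return true;
        seen.add(v[op.keyPath]);
        return false;
      });
      if (duplicate) {
        results.push("ConstraintError");
        aborted = true;
      } else {
        results.push("success");
      }
    }
  }
  return { results, aborted };
}

// put, put, createIndex - the order from the example in the thread:
const outcome = runTransaction([
  { type: "put", value: { foo: 1, message: "hello" } },
  { type: "put", value: { foo: 1, message: "goodbye" } },
  { type: "createIndex", keyPath: "foo" },
]);
// outcome.results: ["success", "success", "ConstraintError"]
// outcome.aborted: true
```

Both puts report success before the index build fails, matching the "puts need to succeed" position: the failure surfaces at the createIndex step, and the transaction as a whole aborts.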

Suggestions for how to clarify this in the spec are welcome. At the
very least we need a bug.

/ Jonas

Received on Monday, 13 August 2012 19:24:29 UTC