Re: [IndexedDB] Proposal for async API changes

On Tue, May 18, 2010 at 12:10 PM, Jeremy Orlow <jorlow@chromium.org> wrote:
>>>> 10. You are allowed to have multiple transactions per database
>>>> connection. However if they use overlapping tables, only the first one
>>>> will receive events until it is finished (with the usual exceptions of
>>>> allowing multiple readers of the same table).
>>>
>>> Can you please clarify what you mean here?  This seems like simply an
>>> implementation detail to me, so maybe I'm missing something?
>>
>> What this is trying to say is that you can have an object store being used
>> in more than one transaction, but they cannot access it at the same time.
>>  However, I think it's best for Jonas to chime in here because this doesn't
>> quite seem right to me like it did yesterday.
>
> Oh, I see.  The problem is that if you open an entity store and start
> multiple transactions, it's not clear which it's associated with.  I guess I
> feel like what Jonas described would be pretty confusing.

The objectStore accessor lives on the IDBTransactionRequest interface,
so an object store is always associated with the transaction it was
retrieved through.

I agree it's a little confusing that you can have several references
to the "same" objectStore through multiple transactions. However, if
you look at the examples, the code works out quite nicely and is quite
understandable IMHO.
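
To make that concrete, here is roughly the pattern I have in mind (the
exact transaction() arguments are just illustrative, not draft text):

  // Two transactions over the same store; each hands out its own
  // objectStore reference, bound to that transaction.
  var txn1 = db.transaction(["books"]);
  var txn2 = db.transaction(["books"]);

  var store1 = txn1.objectStore("books");
  var store2 = txn2.objectStore("books");

  // Requests made through store1 run, and fire events, in txn1;
  // requests made through store2 run in txn2.
  store1.get("key").onsuccess = function(event) { /* txn1 */ };
  store2.get("key").onsuccess = function(event) { /* txn2 */ };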

> What about creating a IDBSuccessEvents.transaction that's the transaction
> the request is associated with (or null)?

That's already there, though it's on IDBTransactionEvent, which is the
event type fired for successful operations that involve transactions
(i.e. all of them except the success event for opening the database).
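
So in a success handler you can already do something like this
(a sketch, assuming the get()/onsuccess shape from the proposal):

  var request = store.get("some-key");
  request.onsuccess = function(event) {
    // For operations that involve a transaction, the success event
    // is an IDBTransactionEvent carrying that transaction.
    var txn = event.transaction;
    // ... issue further requests against the same transaction here.
  };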

> Another option is to only allow
> one (top level) transaction per connection.  (I still think we should
> support open nested transactions.)

This is what the spec says right now. I don't really see the advantage,
though, since you can create several connections to the same database,
which leaves you in basically the same situation. The only difference
is that you now have several database connections floating around as
well.
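
I.e. nothing stops a page from doing this (the exact open() shape is
just a sketch):

  // Two connections to the same database.
  var req1 = indexedDB.open("library");
  var req2 = indexedDB.open("library");
  // Once both succeed, each connection can start its own transaction
  // over the same object stores, so the same concurrency question
  // comes back, just with an extra connection object in play.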

>>> 8)   We can't leave deciding whether a cursor is pre-loaded up to UAs
>>> since people will code for their favorite UA and then access
>>> IDBCursorPreloadedRequest.count when some other UA does it as a
>>> non-preloaded request.  Even in the same UA this will cause problems when
>>> users have different datasets than developers are testing with.
>>
>> I think that you might have been confused by our wording there.  Sorry
>> about that!  IDBCursorPreloadedRequest is what you get if you pass sync=true
>> into openCursor or openObjectCursor.  Basically, sync cursors will give you
>> a count, whereas async ones will not.
>
>
> Ohhhhh.  I missed that parameter and I guess let my imagination run wild.
>  :-)
> I'm not sure I like the idea of offering sync cursors either since the UA
> will either need to load everything into memory before starting or risk
> blocking on disk IO for large data sets.  Thus I'm not sure I support the
> idea of synchronous cursors.  But, at the same time, I'm concerned about the
> overhead of firing one event per value with async cursors.  Which is why I
> was suggesting an interface where the common case (the data is in memory) is
> done synchronously but the uncommon case (we'd block if we had to respond
> synchronously) has to be handled since we guarantee that the first time will
> be forced to be asynchronous.
> Like I said, I'm not super happy with what I proposed, but I think some
> hybrid async/sync interface is really what we need.  Have you guys spent any
> time thinking about something like this?  How dead-set are you on
> synchronous cursors?

The idea is that synchronous cursors load all the required data into
memory, yes. I think it would help authors a lot to be able to load a
small chunk of data into memory and read and write to it synchronously.
Dealing with asynchronous operations constantly is certainly possible,
but it's a bit of a pain for authors.
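
Roughly what I picture for the synchronous case (very much a sketch:
the sync parameter and the preloaded cursor come from the current
draft, the rest is illustrative):

  // Ask for a preloaded ("sync") cursor; the range is read into
  // memory up front.
  var cursor = store.openCursor(keyRange, /* sync = */ true);

  // A preloaded cursor knows its count, and stepping through it with
  // value/continue() never has to wait on disk IO.
  for (var i = 0; i < cursor.count; i++) {
    doSomethingWith(cursor.value);   // doSomethingWith() is hypothetical
    cursor.continue();
  }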

I don't think we should obsess too much about not keeping things in
memory; we already have things like canvas and the DOM, which add up
to non-trivial amounts of memory.

Just because data is loaded from a database doesn't mean it's huge.

I do note that you're not as concerned about getAll(), which actually
has worse memory characteristics than synchronous cursors, since you
need to create the full JS object graph in memory.
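
For comparison, a getAll()-style call (sketched here) hands you the
whole result set as live JS objects at once:

  store.getAll(keyRange).onsuccess = function(event) {
    // Every matching value has been deserialized into a JS object
    // graph before this fires; a sync cursor only needs the raw
    // record data in memory and builds one value at a time.
    var everything = event.result;
  };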

/ Jonas

Received on Tuesday, 18 May 2010 19:35:17 UTC