Re: [IndexedDB] Current editor's draft

On Thu, Jul 22, 2010 at 8:39 PM, Pablo Castro <Pablo.Castro@microsoft.com> wrote:

>
> From: Jonas Sicking [mailto:jonas@sicking.cc]
> Sent: Thursday, July 22, 2010 5:25 PM
>
> >> >> Regarding deadlocks, that's right, the implementation cannot
> >> >> determine if a deadlock will occur ahead of time. Sophisticated
> >> >> implementations could track locks/owners and do deadlock detection,
> >> >> although a simple timeout-based mechanism is probably enough for
> >> >> IndexedDB.
> >> >
> >> > Simple implementations will not deadlock because they're only doing
> >> > object store level locking in a constant locking order.
>
> Well, it's not really simple vs sophisticated, but whether they do
> dynamically scoped transactions or not, isn't it? If you do dynamic
> transactions, then regardless of the granularity of your locks, code will
> grow the lock space in a way that you cannot predict so you can't use a
> well-known locking order, so deadlocks are not avoidable.
>
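That is also why the purely static case is easy to make deadlock-free: the
scope is declared up front, so an implementation can always take the locks
in one constant global order. A minimal sketch of that idea (illustrative
TypeScript; none of these names come from the draft):

    type Release = () => void;

    class StoreLocks {
      private tail = new Map<string, Promise<void>>();

      // Queue behind the previous holder of this store, then become the holder.
      async acquire(store: string): Promise<Release> {
        const prev = this.tail.get(store) ?? Promise.resolve();
        let release: Release = () => {};
        const held = new Promise<void>(resolve => { release = () => resolve(); });
        this.tail.set(store, held);
        await prev;
        return release;
      }

      // A static transaction's scope is known up front, so its locks can be
      // taken in one constant order (sorted store names). No lock-order
      // cycles means no deadlocks, however transactions overlap.
      async lockStatic(scope: string[]): Promise<Release[]> {
        const releases: Release[] = [];
        for (const store of [...scope].sort()) {
          releases.push(await this.acquire(store));
        }
        return releases;
      }
    }

A dynamic transaction can't go through lockStatic(): it would call acquire()
in whatever order the page happens to touch stores, so no constant order can
be imposed and wait cycles become possible -- which is exactly the problem
described above.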

As I've mentioned before, an implementation can support dynamic transactions
at the API level simply by never running more than one dynamic transaction
at a time (and by starting a static transaction only once all of its locks
are available, taking them atomically).
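
To make that concrete, here is roughly the scheduler I have in mind (a
sketch only, in TypeScript; all of the names are made up and nothing here is
from the draft). Static transactions are granted their whole declared scope
atomically or not at all, and at most one dynamic transaction exists at a
time:

    class SimpleScheduler {
      private held = new Set<string>();            // object stores currently locked
      private dynamicActive = false;               // at most one dynamic transaction
      private waiters: Array<() => boolean> = [];  // retried whenever something is released

      // Static transaction: starts only when every store in its scope is
      // free, and takes all of them at once, so it never waits while
      // holding anything.
      acquireStatic(scope: string[]): Promise<void> {
        return this.enqueue(() => {
          if (scope.some(s => this.held.has(s))) return false;
          scope.forEach(s => this.held.add(s));
          return true;
        });
      }

      // Dynamic transaction: only one at a time; it then locks stores one
      // by one as the application touches them.
      beginDynamic(): Promise<void> {
        return this.enqueue(() => {
          if (this.dynamicActive) return false;
          this.dynamicActive = true;
          return true;
        });
      }

      lockStoreForDynamic(store: string): Promise<void> {
        return this.enqueue(() => {
          if (this.held.has(store)) return false;
          this.held.add(store);
          return true;
        });
      }

      releaseStores(stores: string[]) {
        stores.forEach(s => this.held.delete(s));
        this.retry();
      }

      endDynamic(stores: string[]) {
        this.dynamicActive = false;
        this.releaseStores(stores);
      }

      private enqueue(tryGrant: () => boolean): Promise<void> {
        return new Promise<void>(resolve => {
          const attempt = () => {
            if (!tryGrant()) return false;
            resolve();
            return true;
          };
          if (!attempt()) this.waiters.push(attempt);
        });
      }

      // Fairness/starvation is ignored here; the point is only the shape of
      // the scheme.
      private retry() {
        this.waiters = this.waiters.filter(attempt => !attempt());
      }
    }

No wait cycle can form: a static transaction never waits while it holds
locks, and there is never more than one dynamic transaction, so deadlocks
are impossible without any detection or timeout machinery.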


> >> > Sophisticated implementations will be doing key level (IndexedDB's
> >> > analog to row level) locking with deadlock detection or using methods
> >> > to completely avoid it.  I'm not sure I'm comfortable with having one
> >> > or two in-between implementations relying on timeouts to resolve
> >> > deadlocks.
>
> Deadlock detection is quite a bit to ask from the storage engine. From the
> developer's perspective, the difference between deadlock detection and
> timeouts for deadlocks is the fact that the timeout approach will take a bit
> longer, and the error won't be as definitive. I don't think this particular
> difference is enough to require deadlock detection.
>

This means that some web apps on some platforms will hang for seconds (or
minutes?) at a time in a hard-to-debug fashion.  I don't think this is
acceptable for a web standard.
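
To spell out that difference (an illustrative sketch only; neither of these
error names nor the helper below is from the draft): with a timeout the page
sits blocked until some arbitrary deadline passes and then gets a generic
"too slow" error that looks exactly like a slow disk, whereas with detection
the lock manager can fail one participant the moment the cycle forms, with
an error that says what actually happened.

    class TimeoutError extends Error {}    // "this took too long" -- vague
    class DeadlockError extends Error {}   // "you were in a wait cycle" -- definitive

    // Timeout-style resolution: wrap the lock wait and give up after ms.
    function withTimeout<T>(lockWait: Promise<T>, ms: number): Promise<T> {
      return new Promise<T>((resolve, reject) => {
        const timer = setTimeout(
            () => reject(new TimeoutError("no progress after " + ms + "ms")), ms);
        lockWait.then(
            value => { clearTimeout(timer); resolve(value); },
            err => { clearTimeout(timer); reject(err); });
      });
    }

    // Detection-style resolution (not shown): the lock manager maintains a
    // waits-for graph and rejects one waiter with DeadlockError as soon as
    // adding an edge would create a cycle.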


> >> > Of course, if we're breaking deadlocks that means that web developers
> >> > need to handle this error case on every async request they make.  As
> >> > such, I'd rather that we require implementations to make deadlocks
> >> > impossible.  This means that they either need to be conservative about
> >> > locking or to do MVCC (or something similar) so that transactions can
> >> > continue on even beyond the point where we know they can't be
> >> > serialized.  This would be consistent with our usual policy of trying
> >> > to put as much of the burden as is practical on the browser developers
> >> > rather than web developers.
>
> Same as above...MVCC is quite a bit to mandate from all implementations.
> For example, I'm not sure but from my basic understanding of SQLite I think
> it always does straight up locking and doesn't have support for versioning.
>
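
For reference, the core of what I mean by versioning/MVCC is small, even if
a production implementation is not (a bare-bones sketch in TypeScript;
nothing below comes from the draft or from any particular engine). Readers
work against the snapshot that existed when their transaction started, so
they never block behind writers:

    type Version = number;

    class VersionedStore {
      private clock: Version = 0;
      private rows = new Map<string, Array<{ version: Version; value: unknown }>>();

      // Readers see everything committed before their transaction started.
      beginSnapshot(): Version {
        return this.clock;
      }

      get(key: string, snapshot: Version): unknown {
        const versions = this.rows.get(key) ?? [];
        // Newest committed version visible to this snapshot.
        for (let i = versions.length - 1; i >= 0; i--) {
          if (versions[i].version <= snapshot) return versions[i].value;
        }
        return undefined;
      }

      // Commit a batch of writes at one new version so they appear
      // atomically. (A real engine also has to validate or lock so that
      // concurrent writers stay serializable, and to garbage-collect old
      // versions; none of that is shown.)
      commit(writes: Map<string, unknown>): Version {
        const commitVersion = ++this.clock;
        for (const [key, value] of writes) {
          const versions = this.rows.get(key) ?? [];
          versions.push({ version: commitVersion, value });
          this.rows.set(key, versions);
        }
        return commitVersion;
      }
    }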

As I mentioned, there's a simpler behavior that implementations can
implement if they feel MVCC is too complicated.  If dynamic transactions are
included in v1 of the spec, this will almost certainly be what we do
initially in Chromium.

Of course, I'd rather we just take it out of v1, for reasons like the ones
coming up in this thread.


> >> >>
> >> >> As for locking only existing rows, that depends on how much isolation
> >> >> we want to provide. If we want "serializable", then we'd have to put
> >> >> in things such as range locks and locks on non-existing keys so reads
> >> >> are consistent w.r.t. newly created rows.
> >> >
> >> > For the record, I am completely against anything other than
> >> > "serializable" being the default.  Everything a web developer deals
> >> > with follows run to completion.  If you want to have optional modes
> >> > that relax things in terms of serializability, maybe we should start a
> >> > new thread?
> >>
> >> Agreed.
> >>
> >> I was against dynamic transactions even when they used
> >> whole-objectStore locking. So I'm even more so now that people are
> >> proposing row-level locking. But I'd like to understand what people
> >> are proposing, and make sure that what is being proposed is a coherent
> >> solution, so that we can correctly evaluate its risks versus
> >> benefits.
>
> The way I see the risk/benefit tradeoff of dynamic transactions: they bring
> better concurrency and more flexibility at the cost of new failure modes. I
> think that weighing them in those terms is more important than the specifics
> such as whether it's okay to have timeouts versus explicit deadlock errors.
>
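
Going back to the serializability point quoted above: the reason
"serializable" drags in locks on keys that don't exist yet is phantom rows.
A bare-bones sketch of the range-lock idea, pretending keys are plain
strings -- illustrative only, nothing here is from the draft:

    interface RangeLock {
      lower: string;
      upper: string;
      owner: number;   // transaction id
    }

    class RangeLockTable {
      private locks: RangeLock[] = [];

      // A reader locks the whole key range it scanned, not just the keys it
      // found. Returns false if a conflicting lock exists; the caller waits.
      lockRange(owner: number, lower: string, upper: string): boolean {
        if (this.locks.some(l => l.owner !== owner &&
                                 l.lower <= upper && lower <= l.upper)) {
          return false;
        }
        this.locks.push({ lower, upper, owner });
        return true;
      }

      // Inserting a brand-new key conflicts with any other transaction's
      // range lock covering it -- that is the "lock on a non-existing key".
      canInsert(owner: number, key: string): boolean {
        return !this.locks.some(l => l.owner !== owner &&
                                     l.lower <= key && key <= l.upper);
      }

      releaseAll(owner: number) {
        this.locks = this.locks.filter(l => l.owner !== owner);
      }
    }

Without that range check, a transaction could scan a range twice and see a
row the second time that wasn't there the first, which breaks
serializability.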

I think we should add new failure modes only when there are very strong
reasons why they're worth it.  And simplifying things for implementors is
not an acceptable reason to add (fairly complex, non-deterministic) failure
modes.

J

Received on Friday, 23 July 2010 01:13:00 UTC