Re: [IndexedDB] Current editor's draft

On Tue, Jul 27, 2010 at 12:22 AM, Jonas Sicking <jonas@sicking.cc> wrote:

> On Sat, Jul 24, 2010 at 8:29 AM, Jeremy Orlow <jorlow@chromium.org> wrote:
> >> >> And is it
> >> >> only possible to lock existing rows, or can you prevent new records
> >> >> from being created?
> >> >
> >> > There's no way to lock yet to be created rows since until a
> transaction
> >> > ends, its effects cannot be made visible to other transactions.
> >>
> >> So if you have an objectStore with auto-incrementing indexes, there is
> >> the possibility that two dynamic transactions both can add a row to
> >> said objectStore at the same time. Both transactions would then add a
> >> row with the same autogenerated id (one higher than the highest id in
> >> the table). Upon commit, how is this conflict resolved?
> >>
> >> What if the objectStore didn't use auto-incrementing indexes, but you
> >> still had two separate dynamic transactions which both insert a row
> >> with the same key. How is the conflict resolved?
> >
> > I believe a common trick to reconcile this is stipulating that if you add
> > 1000 "rows" the IDs may not necessarily be 1000 sequential numbers.
>  This
> > allows transactions to increment the id and leave it incremented even if
> the
> > transaction fails.  Which also means that other transactions can be
> grabbing
> > an ID of their own as well.  And if a transaction fails, well, we've
> wasted
> > one possible ID.
>
> This doesn't answer the question of what happens if two transactions
> add the same key value, though.
>

If you're using optimistic transactions, whichever commits first succeeds;
the other aborts with a conflict at commit time.  I'm not sure how the
pessimistic/lock-based implementations would resolve it.

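To make "whichever commits first succeeds" concrete, here is a toy sketch of
first-committer-wins validation. Everything in it (OptimisticStore, Txn, the
version map) is invented for illustration; it is not the draft IndexedDB API.
Two transactions buffer an insert under the same key; commit-time validation
lets the first through and aborts the second:

```javascript
// Toy optimistic-concurrency sketch -- illustrative names only, not the
// IndexedDB draft API. Each transaction buffers its writes and records the
// version of every key it touches; commit validates those versions.
class Txn {
  constructor(store) {
    this.store = store;
    this.reads = new Map();   // key -> version observed at first touch
    this.writes = new Map();  // key -> buffered value
  }
  put(key, value) {
    if (!this.reads.has(key)) {
      this.reads.set(key, this.store.versions.get(key) ?? 0);
    }
    this.writes.set(key, value);
  }
  commit() {
    // First committer wins: abort if any touched key changed since we saw it.
    for (const [key, seen] of this.reads) {
      if ((this.store.versions.get(key) ?? 0) !== seen) return false;
    }
    for (const [key, value] of this.writes) {
      this.store.data.set(key, value);
      this.store.versions.set(key, (this.store.versions.get(key) ?? 0) + 1);
    }
    return true;
  }
}

class OptimisticStore {
  constructor() { this.data = new Map(); this.versions = new Map(); }
  begin() { return new Txn(this); }
}

// Two concurrent transactions insert a record under the same key.
const store = new OptimisticStore();
const t1 = store.begin();
const t2 = store.begin();
t1.put(42, "from t1");
t2.put(42, "from t2");
console.log(t1.commit()); // true  -- t1 reaches commit first
console.log(t2.commit()); // false -- t2's validation fails; it must retry
```

A lock-based implementation would instead block or fail the second put() up
front, which is exactly the behavioral difference the spec would need to pin
down.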

> >> >> And is it possible to only use read-locking for
> >> >> some rows, but write-locking for others, in the same objectStore?
> >> >
> >> > An implementation could use shared locks for read operations even
> though
> >> > the object store might have been opened in READ_WRITE mode, and later
> >> > upgrade the locks if the read data is being modified. However, I am
> not keen
> >> > to push for this as a specced behavior.
> >>
> >> What do you mean by "an implementation could"? Is this left
> >> intentionally undefined and left up to the implementation? Doesn't
> >> that mean that there is significant risk that code could work very
> >> well in a conservative implementation, but often cause race conditions
> >> in an implementation that uses narrower locks? Wouldn't this result in
> >> a "race to the bottom" where implementations are forced to eventually
> >> use very wide locks in order to work well in websites?
> >>
> >> In general, there are a lot of details that are unclear in the dynamic
> >> transactions proposals. I'm also not sure if these things are unclear
> >> to me because they are intentionally left undefined, or if you guys
> >> just haven't had time yet to define the details?
> >>
> >> As the spec is now, as an implementor I'd have no idea of how to
> >> implement dynamic transactions. And as a user I'd have no idea what
> >> level of protection to expect from implementations, nor what
> >> strategies to use to avoid bugs.
> >>
> >> In all the development I've done deadlocks and race conditions are
> >> generally unacceptable, and instead strategies are developed that
> >> avoid them, such as always grabbing locks in the same order, and always
> >> grabbing locks when using shared data. I currently have no idea what
> >> strategy to recommend in IndexedDB documentation to developers to
> >> allow them to avoid race conditions and deadlocks.
> >>
> >> To get clarity in these questions, I'd *really* *really* like to see a
> >> more detailed proposal.
> >
> > I think a detailed proposal would be a good thing (maybe from Pablo or
> > Nikunj since they're who are really pushing them at this point), but at
> the
> > same time, I think you're getting really bogged down in the details,
> Jonas.
> > What we should be concerned about and speccing is the behavior the user
> > sees.  For example, can any operation on data fail due to transient
> issues
> > (like deadlocks, serialization issues) or will the implementation shield
> web
> > developers from this?  And will we guarantee 100% serializable semantics?
> >  (I strongly believe we should on both counts.)  How things are
> implemented,
> > granularity of locks, or even if an implementation uses locks at all for
> > dynamic transactions should be explicitly out of scope for any spec.
>  After
> > all, it's only the behavior users care about.
>
> If we can guarantee no deadlocks and 100% serializable semantics, then
> I agree, it doesn't matter beyond that. However I don't think the
> current proposals for dynamic transactions guarantee that. In fact, a
> central point of the dynamic transactions proposal seems to be that
> the author can grow the lock space dynamically, in an author defined
> order. As long as that is the case you can't prevent deadlocks other
> than by forbidding multiple concurrent (dynamic) transactions.
>

There has been a lot of talk on-list about how you could implement dynamic
transactions with locks, and I agree that I don't see how anything mentioned
so far can be guaranteed to be serializable and/or deadlock-free.  But my
point is: why are we talking about this?  The spec should describe semantics
and behavior, not the underlying engine, right?

I think we need to take a BIG step back here, talk about what the behavior
should be from the user's point of view, and leave implementation details to
the implementations.  Otherwise we're going to keep getting nowhere in this
discussion.
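(Parenthetically, the "always grab locks in the same order" discipline Jonas
mentions above can be sketched generically. Nothing here is IndexedDB API --
IndexedDB exposes no lock objects -- and SimpleLock/withLocks are invented
names. Two tasks request the same pair of locks in opposite orders; sorting
into one global order before acquiring avoids the classic deadlock:)

```javascript
// Deadlock-avoidance sketch: always acquire locks in one global order.
// SimpleLock and withLocks are invented for illustration; this shows the
// discipline, not any IndexedDB API.
class SimpleLock {
  constructor(name) {
    this.name = name;
    this.tail = Promise.resolve(); // queue of waiters
  }
  acquire() {
    let release;
    const held = new Promise((resolve) => { release = resolve; });
    const turn = this.tail.then(() => release); // our turn: hand back release()
    this.tail = this.tail.then(() => held);     // next waiter blocks until release()
    return turn;
  }
}

// Acquire every lock in a canonical (name) order, run fn, then release all.
async function withLocks(locks, fn) {
  const ordered = [...locks].sort((x, y) => x.name.localeCompare(y.name));
  const releases = [];
  for (const lock of ordered) releases.push(await lock.acquire());
  try {
    return await fn();
  } finally {
    releases.reverse().forEach((release) => release());
  }
}

// Both tasks want locks A and B, requested in opposite orders. Without the
// sort, this interleaving can deadlock; with it, both tasks complete.
const A = new SimpleLock("A");
const B = new SimpleLock("B");
Promise.all([
  withLocks([A, B], async () => "task 1 done"),
  withLocks([B, A], async () => "task 2 done"),
]).then((results) => console.log(results));
```

The catch for dynamic transactions is that this discipline only works when
all locks are known up front, which is precisely what growing the lock space
mid-transaction gives up.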

I guess the best way forward is for Nikunj and/or Pablo to come up with a
concrete proposal for the behavior/semantics (leaving implementation out of
it, please), and then we can discuss its merits.  If we can't settle on
something soon, I suggest we take dynamic transactions out of the spec for
the time being, since what's there is pretty half-baked.

J

Received on Tuesday, 27 July 2010 10:15:32 UTC