- From: Jonas Sicking <jonas@sicking.cc>
- Date: Mon, 26 Jul 2010 16:22:20 -0700
- To: Jeremy Orlow <jorlow@chromium.org>
- Cc: Nikunj Mehta <nikunj@o-micron.com>, Pablo Castro <Pablo.Castro@microsoft.com>, Andrei Popescu <andreip@google.com>, public-webapps <public-webapps@w3.org>
On Sat, Jul 24, 2010 at 8:29 AM, Jeremy Orlow <jorlow@chromium.org> wrote:
>> >> And is it
>> >> only possible to lock existing rows, or can you prevent new records
>> >> from being created?
>> >
>> > There's no way to lock yet-to-be-created rows, since until a
>> > transaction ends, its effects cannot be made visible to other
>> > transactions.
>>
>> So if you have an objectStore with auto-incrementing indexes, there is
>> the possibility that two dynamic transactions can both add a row to
>> said objectStore at the same time. Both transactions would then add a
>> row with the same autogenerated id (one higher than the highest id in
>> the table). Upon commit, how is this conflict resolved?
>>
>> What if the objectStore didn't use auto-incrementing indexes, but you
>> still had two separate dynamic transactions which both insert a row
>> with the same key? How is the conflict resolved?
>
> I believe a common trick to reconcile this is stipulating that if you
> add 1000 "rows", the ids may not necessarily be 1000 sequential
> numbers. This allows a transaction to increment the id and leave it
> incremented even if the transaction fails, which also means that other
> transactions can be grabbing an id of their own as well. And if a
> transaction fails, well, we've wasted one possible id.

This does not answer the question of what happens if two transactions
add the same key value, though.
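For concreteness, here is a minimal sketch of the duplicate-key scenario, written against the IndexedDB API that eventually shipped (static-scope transactions; the dynamic transactions discussed in this thread never made it into the standard). The database and store names are invented. In the shipped model the two transactions' scopes overlap, so they are serialized and the second add() simply fails with a ConstraintError, rather than raising the commit-time question posed above:

```typescript
// Hypothetical demo: "conflict-demo" and "items" are made-up names.
const openReq = indexedDB.open("conflict-demo", 1);

openReq.onupgradeneeded = () => {
  // Out-of-line, explicit keys, matching the non-auto-increment scenario.
  openReq.result.createObjectStore("items");
};

openReq.onsuccess = () => {
  const db = openReq.result;

  // Both transactions insert the same key. Because their scopes overlap,
  // tx2 is queued until tx1 commits; tx2's add() then fails with a
  // ConstraintError since the key already exists.
  const tx1 = db.transaction("items", "readwrite");
  tx1.objectStore("items").add({ value: "first" }, "shared-key");

  const tx2 = db.transaction("items", "readwrite");
  const req = tx2.objectStore("items").add({ value: "second" }, "shared-key");
  req.onerror = () => {
    console.log(req.error?.name); // "ConstraintError"
  };
};
```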
>> >> And is it possible to only use read-locking for
>> >> some rows, but write-locking for others, in the same objectStore?
>> >
>> > An implementation could use shared locks for read operations even
>> > though the object store might have been opened in READ_WRITE mode,
>> > and later upgrade the locks if the read data is being modified.
>> > However, I am not keen to push for this as a specced behavior.
>>
>> What do you mean by "an implementation could"? Is this left
>> intentionally undefined and up to the implementation? Doesn't that
>> mean there is a significant risk that code could work very well in a
>> conservative implementation, but often cause race conditions in an
>> implementation that uses narrower locks? Wouldn't this result in a
>> "race to the bottom" where implementations are forced to eventually
>> use very wide locks in order to work well on websites?
>>
>> In general, there are a lot of details that are unclear in the dynamic
>> transactions proposals. I'm also not sure if these things are unclear
>> to me because they are intentionally left undefined, or if you guys
>> just haven't had time yet to define the details.
>>
>> As the spec is now, as an implementor I'd have no idea how to
>> implement dynamic transactions. And as a user I'd have no idea what
>> level of protection to expect from implementations, nor what
>> strategies to use to avoid bugs.
>>
>> In all the development I've done, deadlocks and race conditions are
>> generally unacceptable, so strategies are developed that avoid them,
>> such as always grabbing locks in the same order, and always grabbing
>> locks when using shared data. I currently have no idea what strategy
>> to recommend in IndexedDB documentation to developers to allow them
>> to avoid race conditions and deadlocks.
>>
>> To get clarity on these questions, I'd *really* *really* like to see
>> a more detailed proposal.
>
> I think a detailed proposal would be a good thing (maybe from Pablo or
> Nikunj, since they're the ones really pushing them at this point), but
> at the same time, I think you're getting really bogged down in the
> details, Jonas.
>
> What we should be concerned about and speccing is the behavior the
> user sees. For example, can any operation on data fail due to
> transient issues (like deadlocks or serialization issues), or will the
> implementation shield web developers from this? And will we guarantee
> 100% serializable semantics? (I strongly believe we should on both
> counts.) How things are implemented, the granularity of locks, or even
> whether an implementation uses locks at all for dynamic transactions
> should be explicitly out of scope for any spec. After all, it's only
> the behavior users care about.

If we can guarantee no deadlocks and 100% serializable semantics, then
I agree, it doesn't matter beyond that.

However, I don't think the current proposals for dynamic transactions
guarantee that. In fact, a central point of the dynamic transactions
proposal seems to be that the author can grow the lock space
dynamically, in an author-defined order. As long as that is the case,
you can't prevent deadlocks other than by forbidding multiple
concurrent (dynamic) transactions.

/ Jonas
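The deadlock in that last paragraph is worth spelling out. Below is a hypothetical sketch in plain TypeScript, not any proposed IndexedDB API: the Mutex class merely stands in for an implementation's per-objectStore write lock, and the two functions play the role of dynamic transactions that grow their lock space in opposite, author-defined orders.

```typescript
// Minimal FIFO async mutex, standing in for a per-objectStore lock.
class Mutex {
  private tail: Promise<void> = Promise.resolve();
  lock(): Promise<() => void> {
    let release!: () => void;
    const gate = new Promise<void>((resolve) => (release = resolve));
    const acquired = this.tail.then(() => release);
    this.tail = gate;
    return acquired;
  }
}

const lockA = new Mutex(); // lock guarding objectStore "A" (made up)
const lockB = new Mutex(); // lock guarding objectStore "B" (made up)

const sleep = (ms: number) => new Promise((r) => setTimeout(r, ms));

// Transaction 1 grows its scope A -> B; transaction 2 grows it B -> A.
async function tx1(): Promise<void> {
  const releaseA = await lockA.lock();
  await sleep(10); // work on A while tx2 grabs B
  const releaseB = await lockB.lock(); // never resolves: tx2 holds B
  releaseB();
  releaseA();
}

async function tx2(): Promise<void> {
  const releaseB = await lockB.lock();
  await sleep(10); // work on B while tx1 grabs A
  const releaseA = await lockA.lock(); // never resolves: tx1 holds A
  releaseA();
  releaseB();
}

// Each transaction ends up holding one lock while waiting forever on
// the other, so neither promise settles. The classic fix is a global
// acquisition order (always A before B), which is exactly the
// discipline an author-defined locking order cannot enforce.
tx1();
tx2();
```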
Received on Monday, 26 July 2010 23:23:13 UTC