[IndexDB] Proposal for async API changes

Hi All,

I, together with Ben Turner and Shawn Wilsher, have been looking at the
asynchronous API defined in the IndexedDB specification and have a set
of changes to propose. The main goal of these changes is to simplify
the API that we expose to authors, making it easier for them to work
with. Another goal has been to reduce the risk that authors misuse the
API and use long-running transactions. Finally, it has been a goal to
reduce the risk of race conditions.

It has explicitly not been a goal to simplify the implementation. In
some cases it is definitely harder to implement the proposed API.
However, we believe that the extra complexity in implementation is
outweighed by simplicity for users of the API.

The main changes are:

1. Once a database has been opened (a database connection has been
established), read access to meta-data, such as objectStore and index
names, is synchronous. Changes to such meta-data, such as creating
objectStores and indexes, are still asynchronous (see the sketch after
this list).
2. You can only add "requests" to read and write data to a transaction
during a transaction callback. There is one exception to this rule
(more below).
3. Transactions are automatically committed. Once a request in a
transaction finishes and there are no more requests queued against the
transaction, the transaction is committed.
4. Cursors do not fire error events if a request to open a cursor
yields zero results or when iteration using a cursor reaches the end
of the found results. Instead, a success event is fired which
indicates that no more results are available.
5. All reads and writes are done through transactions. However in some
places the transaction is implicit (but defined).
6. Access to index objects is done through API on objectStore objects.
7. Separate functions for add/modify/add-or-modify.
8. Calling abort() on a read request always cancels the request, even if
the implementation has already read the data and is ready to fire a
success event. The error event is always fired if abort() is called,
and the success event is suppressed.
9. IDBKeyRanges are created using functions on IndexedDatabaseRequest.
We couldn't figure out how the old API allowed you to create a range
object without first having a range object.
10. You are allowed to have multiple transactions per database
connection. However, if they use overlapping tables, only the first one
will receive events until it is finished (with the usual exception of
allowing multiple readers of the same table).
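
To illustrate point 1, here is a minimal sketch of what synchronous
meta-data access could look like once the connection is open. The
property names (objectStoreNames, indexNames) and the exact
createObjectStore() signature are placeholders for illustration, not
quotes from the draft:

var request = indexedDB.open("School", "My school database");
request.onsuccess = function(event) {
  var db = event.result;

  // Meta-data reads need no callback once the connection is open.
  var storeNames = db.objectStoreNames;                    // placeholder name
  var indexNames = db.objectStore("students").indexNames;  // placeholder name

  // Changing meta-data is still asynchronous.
  var createRequest = db.createObjectStore("teachers", "name");
  createRequest.onsuccess = function(event) {
    // The new objectStore is ready for use here.
  };
};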

A draft of the proposed API is here:

http://docs.google.com/View?id=dfs2skx2_4g3s5f857


You get an IDBDatabaseRequest as before, using:

var request = indexedDB.open("School", "My school database");
request.onsuccess = function(event) {
  var db = event.result;
  ...
}

Once you have an IDBDatabaseRequest object, things are however
different. You can read data using:

request = db.objectStore("students").get("Benny");
request.onsuccess = function(event) {
  displayStudent(event.result);
}

And write using:

request = db.objectStore("students").add({ name: "Benny", year: 8 });
request.onerror = function(event) {
  displayError("Writing Benny failed");
}


If you need to operate on multiple stores, you can use an
explicit transaction:

trans = db.transaction(["students", "classes"]);
trans.objectStore("students").get("Benny").onsuccess = function(event) {
  trans.objectStore("classes").get(event.result.year).onsuccess = ...
}

This also shows the exception for when you are allowed to add requests
to a transaction outside of a callback. When the transaction()
function is called, it synchronously returns a transaction object.
You are allowed to immediately start making requests on this object
despite not being in a callback. In fact, no callbacks will happen
until you start making requests. However no reads or writes will be
performed until the implementation has managed to grab the correct
(read vs. write) lock on the specified tables, and thus no callbacks
will happen until that time.


Reading using an index is similar to reading from an objectStore directly.

request = db.objectStore("students").index("year").get(...);
request.onsuccess = ...
and
request = db.objectStore("students").index("year").getObject(...);
request.onsuccess = ...

Since indexes can return multiple entries for a given key, the above
functions use the first matching entry.
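
As a rough sketch of the difference between the two lookups (this
assumes get() resolves to the key of the first matching record while
getObject() resolves to the record itself, which is our reading of the
draft):

var keyRequest = db.objectStore("students").index("year").get(8);
keyRequest.onsuccess = function(event) {
  // event.result would be the store key of the first student in year 8,
  // e.g. "Benny".
};

var objRequest = db.objectStore("students").index("year").getObject(8);
objRequest.onsuccess = function(event) {
  // event.result would be the stored record itself,
  // e.g. { name: "Benny", year: 8 }.
};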

Cursors are, as before, available both on objectStores and indexes.
However, using them is simpler since you don't have to listen for error
events during normal iteration. In the current spec draft, you need to
register error event handlers if you don't know which is the last
result in a search, or if there is a risk that a search will return
zero results. With our proposal you'll get a normal success event
once the end of a search is reached, but the event will have a null
result property. An empty result set is treated just like a result set
where you've immediately reached the end.

myResults = [];
request = db.objectStore("students").openCursor(range);
request.onsuccess = function(event) {
  cursor = event.result;
  if (!cursor) {
    // This could happen on the first callback
    displayResult(myResults);
    return;
  }
  myResults.push(cursor.value);
  cursor.continue();
}


For the above use case, we have however added a convenience function.
The following will do the same thing:

request = db.objectStore("students").getAll(range);
request.onsuccess = function(event) {
  displayResult(event.result);
}

Similarly, on indexes you can do

request = db.objectStore("students").index("year").getObjectAll(range);
request.onsuccess = function(event) {
  displayResult(event.result);
}


One thing to note is that none of these examples call
transaction.commit(). Instead, transactions are automatically committed
as soon as there are no more requests queued on them. This has the
advantage that it strongly discourages long-running transactions; i.e.
a web author can't easily keep a transaction open while waiting for
input from the user. Instead, all needed data has to be accumulated
before the transaction is initiated. This is the same model as the
WebSQLDatabase spec uses, and it seems to have worked there based on
current deployment experience.
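
A rough sketch of the pattern this encourages (the form fields
nameInput and yearInput are made up for illustration): gather
everything from the user first, then open the transaction and queue all
requests at once, so it can auto-commit as soon as the last one
finishes.

// Collect all user input before touching the database...
var newStudent = {
  name: document.getElementById("nameInput").value,
  year: Number(document.getElementById("yearInput").value)
};

// ...then start the transaction; it commits automatically once the
// queued request has finished.
var trans = db.transaction(["students"]);
trans.objectStore("students").add(newStudent).onerror = function(event) {
  displayError("Writing " + newStudent.name + " failed");
};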

We've created some examples of what using this proposed API would look like:

http://docs.google.com/document/pub?id=1I__XnwvvSwyjvxi-FAAE0ecnUDhk5DF7L2GI6O31o18

We've also implemented the same examples using the currently drafted API:

http://docs.google.com/document/pub?id=1KKMAg_oHLeBvFUWND5km6FJtKi4jWxwKR0paKfZc8vU


We have a few open issues:
1. What should happen when IDBRequest.abort() is called on a write
request, such as modify()? The data might have already been written to
the database. And additional data might have been written on top of it
using a different request. A simple solution is to make abort() on
write requests throw.
2. Do we need to add support for temporary objectStores? I.e. stores
with a lifetime only as long as a transaction, used solely to
implement a complex query. If so, we can add a createObjectStore
function on IDBTransactionRequest which synchronously returns a
nameless, newly created objectStore.
3. Should an error in a read or write always result in the full
transaction getting rolled back? Or should we simply fire an error
event on the failed request? Or something in between, such as firing an
error event and making the default action be to roll back the
transaction (i.e. if the page doesn't want the rollback to happen it
has to call event.preventDefault(); see the sketch after this list).
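
For the last option in issue 3, the handler could look roughly like the
following, assuming trans is an explicit transaction as in the earlier
example and the usual DOM event pattern where calling preventDefault()
cancels the default action (here, the rollback):

request = trans.objectStore("students").add({ name: "Benny", year: 8 });
request.onerror = function(event) {
  // Opt out of the default rollback and keep the rest of the
  // transaction going.
  event.preventDefault();
  displayError("Writing Benny failed; continuing with the transaction");
};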

/ Jonas
