[whatwg] Asynchronous database API feedback

On Dec 9, 2007, at 1:29 AM, Aaron Boodman wrote:

> I like the new Database API design a lot, but I wish there was an
> option for synchronous DB access.

[...snip...]

>
> To be clear, I understand that the asynchronous database API was
> motivated by a desire to avoid another mistake like XMLHttpRequest's
> sync mode. But I think this situation is different because:
>
> a) Disk access is typically going to be a lot faster than network  
> access

I think this assumption is, if not exactly incorrect, somewhat  
misleading.

For users on network home directories, disk access /is/ network
access. This is a common setup at large corporations and educational
institutions. We have specific experience in WebKit where doing
SQLite database access from the UI thread resulted in frequent, long
UI stalls for Apple's users on network home directories.(*)

Now, you might argue that network home directories are not "typical",
and that's true, but a web application has no way of knowing when it
will hit the atypical case and block the user's UI, just as with
synchronous XMLHttpRequest it has no way of knowing when the user is
on a slow network connection.

So I continue to believe that it's not safe to do synchronous I/O from  
the UI thread.
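
For comparison, here is a rough sketch of what non-blocking usage
looks like with the asynchronous API (assuming roughly the
openDatabase / transaction / executeSql shape in the current draft;
the database name, size and SQL are purely illustrative). The page
only queues the work and handles callbacks, so the UI thread never
waits on the disk:

    var db = openDatabase("notes", "1.0", "Example notes store",
                          5 * 1024 * 1024);

    // The transaction callback runs asynchronously; the page stays
    // responsive even if the underlying file lives on a slow
    // network home directory.
    db.transaction(function (tx) {
      tx.executeSql("CREATE TABLE IF NOT EXISTS notes " +
                    "(id INTEGER PRIMARY KEY, body TEXT)");
      tx.executeSql("INSERT INTO notes (body) VALUES (?)", ["hello"]);
      tx.executeSql("SELECT body FROM notes", [],
        function (tx, results) {
          // Update the UI here, after the I/O has completed.
          for (var i = 0; i < results.rows.length; i++) {
            console.log(results.rows.item(i).body);
          }
        });
    });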


Another important consideration: even ignoring distributed
filesystems, how do web application developers decide whether the
writes they are doing are small enough that it's safe to use the
sync API?

Your test shows 3KB written in a tenth of a second, but datastores
could easily be much larger than that. If the time scales linearly,
then even a modest 300KB of data could take 10 seconds to write,
clearly an unacceptable length of time to stall the UI. (I hope it
doesn't actually scale linearly, since that would be alarmingly slow,
but clearly it gets slow at some size.)
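
Spelled out, that linear extrapolation is simply:

    3 KB    ->  0.1 s                        (the measured case)
    300 KB  ->  (300 / 3) * 0.1 s  =  10 s   (assuming write time
                                              scales linearly with size)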

Given how wildly hardware varies, I'm not sure how web developers can  
safely make this choice. It seems likely that they'd choose whatever  
seems to work for them in simple cases, and not test at all on slower  
filesystems. If the queries they do vary in size, they may not test at  
extreme sizes. These are the same kinds of cases where synchronous
XHR creates surprising problems: it seems fine on the developer's
fast local network, so why would they expect end users to see one?

Regards,
Maciej


* - For users with local home directories we only saw this for large
transactions, but it was still a problem. The specific case I'm
talking about is our favicon database, and we ultimately moved all
of the database access to a background thread to solve these
problems. In any case, I do not think it likely that typical web
developers will navigate the synchronous database I/O pitfall any
better than we did.
