
Re: Colliding FileWriters

From: Glenn Maynard <glenn@zewt.org>
Date: Tue, 28 Feb 2012 18:56:25 -0600
Message-ID: <CABirCh8HReQtUO3pZ2eg2SeJ7Wy=7gRkNGOzODRswFQARyj9aQ@mail.gmail.com>
To: Jonas Sicking <jonas@sicking.cc>
Cc: Eric U <ericu@google.com>, Webapps WG <public-webapps@w3.org>, Jian Li <jianli@chromium.org>
On Mon, Feb 27, 2012 at 6:40 PM, Jonas Sicking <jonas@sicking.cc> wrote:

>  To do the locking without requiring calls to .close() or relying on GC
> we use a similar setup to IndexedDB transactions. I.e. you get an
> object which represents a locked file. As long as you use that lock to
> read from and write to the file the lock keeps being held. However as
> soon as you return to the event loop from the last progress
> notification from the last read/write operation, the lock is
> automatically released.
>

This sounds a lot like "microtasks", described here:
http://lists.w3.org/Archives/Public/public-webapps/2011JulSep/1622.html.  I
don't know where it's described in IndexedDB, but it's a notion that keeps
coming up again and again, and it seems like it should be introduced as a
consistent concept in the event model.

Actually, it looks like this was just discussed on IRC:
http://krijnhoetmer.nl/irc-logs/whatwg/20120228#l-39.

I was a little confused by the above explanation.  I think what you mean is
that the lock is held as long as a FileRequest object is active (e.g. has
yet to dispatch a success or error event).  More concretely, at the end of
each microtask (if you want to use that terminology), all LockedFiles
without any active FileRequests are released.  That's sort of like the
"release when the LockedFile is GC'd" approach, except it's deterministic
and doesn't expose GC.
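To make sure we're talking about the same bookkeeping, here's a minimal
synchronous sketch of that model.  The LockedFile and FileRequest names
follow the proposal, but the classes and the checkpoint function are
entirely illustrative, not the actual API:

```javascript
// Sketch of deterministic lock release: a LockedFile stays held while
// any of its FileRequests has yet to dispatch a success or error event.
class LockedFile {
  constructor(name) {
    this.name = name;
    this.activeRequests = new Set();
    this.released = false;
  }
}

class FileRequest {
  constructor(lockedFile) {
    this.lockedFile = lockedFile;
    lockedFile.activeRequests.add(this);
  }
  dispatchSuccess() {  // (or error) -- either way, the request finishes
    this.lockedFile.activeRequests.delete(this);
  }
}

// Hypothetical end-of-microtask checkpoint: release every LockedFile
// with no active requests left.  Deterministic -- no GC involved.
function microtaskCheckpoint(lockedFiles) {
  for (const lf of lockedFiles) {
    if (!lf.released && lf.activeRequests.size === 0) {
      lf.released = true;
    }
  }
}

const lf = new LockedFile("log.txt");
const req = new FileRequest(lf);
microtaskCheckpoint([lf]);
console.log(lf.released);  // false -- a request is still pending
req.dispatchSuccess();
microtaskCheckpoint([lf]);
console.log(lf.released);  // true -- no active requests remain
```

The point being that release depends only on the set of outstanding
requests at each checkpoint, never on when the object happens to be
collected.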

(I think that's equivalent to what you said later, but I want to make sure
I'm following correctly.)


> One downside of this is that it means that if you're doing a bunch of
> separate read/write operations in separate locks, each lock is held
> until we've had a chance to fire the final success event for the
> operation. So if you queue up a ton of small write operations you can
> end up mostly sitting waiting for the main thread to finish posting
> events.
>

It'd only slow things down if you attach an expensive, long-running event
handler to a load/loadend event, which is an inherently bad idea if you're
doing lots of tiny operations.  Is that actually a problem?

(If those events were run from a queued task then it could be a problem,
since it would have to wait for the event loop to get around to running
those tasks, but they're fired directly from the algorithm itself.)

By the way, readAsText and readAsArrayBuffer don't seem to fire load and
loadend events at the end, like readAsDataURL does.  It looks like an
oversight--they're fired in the error path, just not on success.

-- 
Glenn Maynard
Received on Wednesday, 29 February 2012 00:56:53 GMT
