
Re: Colliding FileWriters

From: Jan Varga <jan.varga@gmail.com>
Date: Thu, 29 Nov 2012 18:28:07 +0100
Message-ID: <CAB1sDKwoRCDd=DmcUH80ebVmfrupvks4Ve3dV+=Rip9xGyXeeg@mail.gmail.com>
To: David Bruant <bruant.d@gmail.com>
Cc: public-webapps@w3.org, Jonas Sicking <jonas@sicking.cc>
On Wed, Nov 28, 2012 at 4:10 PM, David Bruant <bruant.d@gmail.com> wrote:
> >> One downside of this is that it means that if you're doing a bunch of
> >> separate read/write operations in separate locks, each lock is held
> >> until we've had a chance to fire the final success event for the
> >> operation. So if you queue up a ton of small write operations you can
> >> end up mostly sitting waiting for the main thread to finish posting
> >> events.
> >
> > It'd only slow things down if you attach an expensive, long-running event
> > handler to a load/loadend event, which is an inherently bad idea if you're
> > doing lots of tiny operations.  Is that actually a problem?
> No, that's not correct.
> Most likely the implementation of this will use two threads. The main
> thread which runs the JS code running in the window or worker and an
> IO thread which does the file reading/writing. The main thread is also
> where event handlers run. Every time a read/write is requested by the
> main thread, data about this operation is sent to the IO thread
> allowing the main thread to continue.
> If the main thread creates two separate locks which perform two small
> write operations (...)
> I'd like to stop here for a minute, because I'm not entirely clear on
> what this assumption means.
> Does this part mean that during the same event loop turn, some JS code
> would open 2 separate locks for the same file? If so, that sounds like such
> an edge case it should just be forbidden like throwing an ALREADY_LOCKED
> error when asking a second lock.
> Since the first lock is released at the end of turn, the code asking the
> second lock can recover by try/catching the error and deciding to lock and
> play with the file in another later turn (using setTimeout( , 0) or
> whatev's)
> Preventing 2 locks for the same file in the same turn saves a lot of
> complications it seems.
> What ended up happening for Firefox implementation?

The name "lock" is a bit misleading here. It's more like a transaction in
IndexedDB.
You can get more than one "lock", or rather "LockedFile" object, for the
same file.
Operations for these objects can be processed in parallel or serialized,
depending on the type of the LockedFile objects (readonly or readwrite).
Operations for LockedFile objects which can't be processed yet (for example
if there's already an active readwrite LockedFile) are just queued.
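To make the scheduling concrete, here is a minimal sketch of the queueing
behaviour described above: readonly operations may overlap each other, while a
readwrite operation runs alone and anything that can't start yet just waits in
a FIFO queue. The `LockQueue` class and its method names are illustrative
inventions for this sketch, not the real Firefox FileHandle/LockedFile API.

```javascript
// Illustrative sketch only: models "readonly runs in parallel, readwrite
// is serialized, everything else is queued". Not the actual Gecko code.
class LockQueue {
  constructor() {
    this.activeReaders = 0;   // readonly operations currently running
    this.writerActive = false; // true while a readwrite operation runs
    this.pending = [];         // FIFO of { mode, fn, resolve }
  }

  // Schedule an operation; resolves when the operation has finished.
  run(mode, fn) {
    return new Promise((resolve) => {
      this.pending.push({ mode, fn, resolve });
      this._pump();
    });
  }

  // Start as many queued operations as the current state allows.
  _pump() {
    while (this.pending.length > 0) {
      const next = this.pending[0];
      if (next.mode === "readonly" && !this.writerActive) {
        // Readers may overlap, as long as no writer is active.
        this.pending.shift();
        this.activeReaders++;
        Promise.resolve(next.fn()).then((value) => {
          this.activeReaders--;
          next.resolve(value);
          this._pump();
        });
      } else if (
        next.mode === "readwrite" &&
        !this.writerActive &&
        this.activeReaders === 0
      ) {
        // A writer runs alone.
        this.pending.shift();
        this.writerActive = true;
        Promise.resolve(next.fn()).then((value) => {
          this.writerActive = false;
          next.resolve(value);
          this._pump();
        });
      } else {
        // Head of the queue can't start yet; it stays queued (FIFO).
        break;
      }
    }
  }
}
```

For example, if a readwrite operation is scheduled first, two readonly
operations queued behind it simply wait until the writer finishes, then both
run; nothing throws an "already locked" error.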

Received on Thursday, 29 November 2012 18:21:43 UTC
