Re: Colliding FileWriters

Hi,

Sorry for the archaeologically late response. I'm currently documenting 
FileHandle and am trying to understand how it works, so I have a couple 
of questions.

I don't think this particular message received a response, and I still 
have an unanswered question about it.

> On Wed, Feb 29, 2012 at 1:56 AM, Glenn Maynard <glenn@zewt.org> wrote:
> > On Mon, Feb 27, 2012 at 6:40 PM, Jonas Sicking <jonas@sicking.cc> wrote:
> >>
> >> To do the locking without requiring calls to .close() or relying on GC
> >> we use a similar setup to IndexedDB transactions. I.e. you get an
> >> object which represents a locked file. As long as you use that lock to
> >> read from and write to the file the lock keeps being held. However as
> >> soon as you return to the event loop from the last progress
> >> notification from the last read/write operation, the lock is
> >> automatically released.
I LOVE THIS SO MUCH! (that's not my question, just a reaction :-) )
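
To check that I'm reading this right, here is a minimal sketch (in 
TypeScript) of how I picture that model. Every name in it (FileHandle, 
LockedFile, open, write, onsuccess) is my own guess for illustration, 
not the actual proposed API:

    // Assumed shapes, for illustration only.
    interface WriteRequest {
      onsuccess: (() => void) | null;
    }
    interface LockedFile {
      write(data: Blob): WriteRequest;
    }
    interface FileHandle {
      open(mode: "readonly" | "readwrite"): LockedFile;
    }

    function appendNote(handle: FileHandle): void {
      // Opening the handle hands back an object representing the locked file.
      const lockedFile = handle.open("readwrite");
      const request = lockedFile.write(new Blob(["hello\n"]));
      request.onsuccess = () => {
        // The lock is still held while this handler runs; once we return
        // to the event loop after this last success event, it is released
        // automatically, with no explicit close() and no reliance on GC.
      };
    }

If that matches the intent, never having to remember to unlock is 
exactly the part I love.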

> >> One downside of this is that it means that if you're doing a bunch of
> >> separate read/write operations in separate locks, each lock is held
> >> until we've had a chance to fire the final success event for the
> >> operation. So if you queue up a ton of small write operations you can
> >> end up mostly sitting waiting for the main thread to finish posting
> >> events.
> >
> > It'd only slow things down if you attach an expensive, long-running event
> > handler to a load/loadend event, which is an inherently bad idea if you're
> > doing lots of tiny operations.  Is that actually a problem?
>
> No, that's not correct.
>
> Most likely the implementation of this will use two threads. The main
> thread which runs the JS code running in the window or worker and an
> IO thread which does the file reading/writing. The main thread is also
> where event handlers run. Every time a read/write is requested by the
> main thread, data about this operation is sent to the IO thread
> allowing the main thread to continue.
>
> If the main thread creates two separate locks which perform two small
> write operations (...)
I'd like to stop here for a minute, because I'm not entirely clear on 
what this assumption means.
Does this part mean that, during the same event loop turn, some JS code 
would open 2 separate locks for the same file? If so, that sounds like 
such an edge case that it should just be forbidden, for instance by 
throwing an ALREADY_LOCKED error when the second lock is requested.
Since the first lock is released at the end of the turn, the code asking 
for the second lock can recover by try/catching the error and deciding 
to lock and play with the file in a later turn (using setTimeout(..., 0) 
or whatev's).
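
In other words, something along these lines (again, ALREADY_LOCKED and 
the open()/write() shapes are my assumptions, carried over from the 
sketch above):

    // Rough sketch of the recovery path, using the same assumed shapes.
    interface LockedFile {
      write(data: Blob): void; // simplified: ignoring the returned request
    }
    interface FileHandle {
      // Assumed to throw something like ALREADY_LOCKED when a lock is
      // already held for this file in the current turn.
      open(mode: "readwrite"): LockedFile;
    }

    function writeWhenUnlocked(handle: FileHandle, data: Blob): void {
      try {
        handle.open("readwrite").write(data);
      } catch (e) {
        // The first lock goes away once we return to the event loop,
        // so simply retry in a later turn.
        setTimeout(() => writeWhenUnlocked(handle, data), 0);
      }
    }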

Preventing 2 locks for the same file in the same turn seems like it 
would save a lot of complications.
What ended up happening in the Firefox implementation?

David

Received on Wednesday, 28 November 2012 15:10:50 UTC