Re: Colliding FileWriters

On Mon, Feb 27, 2012 at 4:36 PM, Eric U <ericu@google.com> wrote:

> I like exclusive-by-default.  Of course, that means that by default
> you have to remember to call close() or depend on GC, but that's
> probably OK.


This sounds bad, because the platform universally goes to great lengths
to avoid exposing GC behavior.  I'd recommend very careful consideration
here.  Vendor buy-in already seems a bit slow for FileWriter; if it
starts requiring GC exposure, it might get even slower.  GC exposure can
also turn into interop bugs, for example if one implementation's GC
chooses to pause collection while memory pressure is low.

> Also, what's the behavior when there's already an exclusive lock, and
> you call createFileWriter?  Should it just not call you until the
> lock's free?  Do we need a trylock that fails fast, calling
> errorCallback?  I think the former's probably more useful than the
> latter, and you can always use a timer to give up if it takes too
> long, but there's no way to cancel a request, and you might get a call
> far later, when you've forgotten that you requested it.
>

Waiting is much better.  Having it act as a trylock by default would
lead to subtle bugs in user code.  The vast majority of the time it'll
succeed immediately, and people will (knowingly or not) come to depend
on that, resulting in code that subtly fails when the lock is already
taken.  If there's a trylock version, it should be a non-default option,
so it's only used when it's really what's wanted.
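Concretely, something like this (the "ifLocked" option name is invented
for the sketch):

    declare const entry: any;  // FileSystem API FileEntry (sketch)

    // Default: the success callback simply isn't invoked until the
    // exclusive lock is free, so naive code keeps working under
    // contention.
    entry.createFileWriter((writer: any) => {
      // ...got the lock, possibly after waiting...
    });

    // Opt-in trylock: fails fast through the error callback instead
    // of waiting.
    entry.createFileWriter(
      (writer: any) => { /* got the lock immediately */ },
      (err: any) => { /* lock was held; back off or give up */ },
      { ifLocked: "fail" });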

> > However this brings up another problem, which is how to support
> > clients that want to mix read and write operations. Currently this is
> > supported, but as far as I can tell it's pretty awkward. Every time
> > you want to read you have to nest two asynchronous function calls.
> > First one to get a File reference, and then one to do the actual read
> > using a FileReader object. You can reuse the File reference, but only
> > if you are doing multiple reads in a row with no writing in between.
>
> I thought about this for a while, and realized that I had no good
> suggestion because I couldn't picture the use cases.  Do you have some
> handy that would help me think about it?
>

FWIW, I'm not too concerned about this.  Async APIs just tend to get ugly
when you start using them this much.  I think that's acceptable, as long as
the *synchronous* (worker) API is nice and clean.  That way, if you need to
do complex things like this, you do it in a worker where it can be done
linearly and much more cleanly.
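To illustrate the difference (the async half is the current
FileEntry.file()/FileReader dance; the worker half assumes the
synchronous FileEntrySync/FileReaderSync counterparts):

    declare const entry: any;      // FileSystem API FileEntry (sketch)

    // Async version: two nested callbacks for every read, and the
    // File can go stale if you write in between.
    entry.file((f: File) => {
      const reader = new FileReader();
      reader.onload = () => {
        const text = reader.result as string;
        // ...decide whether to write, then repeat the whole dance
        // for the next read.
      };
      reader.readAsText(f);
    });

    // Worker version: linear, one line per read.
    // (FileReaderSync only exists inside workers.)
    declare const syncEntry: any;  // FileEntrySync (sketch)
    const text = new FileReaderSync().readAsText(syncEntry.file());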


> We could go either way on this.  A File could be produced that was a
> snapshot of the current state, or that worked until another write
> happened, and then went invalid.  Write locks and readers can be
> independent.
>

Snapshotting sounds tricky, particularly for non-sandboxed files, where
the implementation can't play file-format tricks to avoid making a
backup copy every time data is written.  Even for sandboxed files, it'd
probably result in ugly file fragmentation at best.  Invalidating the
File when the underlying data is modified has been proposed before (by
you, maybe?); it seems much saner, and integrates much more cleanly with
native file access.
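That is, roughly (a sketch of the proposed invalidation semantics, not
of anything shipped):

    declare const entry: any;    // FileSystem API FileEntry (sketch)
    declare const writer: any;   // a FileWriter holding the write lock
    declare const blob: Blob;

    entry.file((snapshot: File) => {
      // Fine: nothing has written since the File was handed out.
      new FileReader().readAsText(snapshot);

      writer.onwriteend = () => {
        // Under the invalidation proposal, "snapshot" is now stale:
        // reading it fires onerror rather than silently returning a
        // mix of old and new data.
        const reader = new FileReader();
        reader.onerror = () => { /* request a fresh File, retry */ };
        reader.readAsText(snapshot);
      };
      writer.write(blob);
    });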

> That assumes that write access always grants read access, which need
> not necessarily be true.  The FileSaver API was designed assuming the
> opposite.  Of course, nobody's implemented that yet, but that's
> another thing we need to talk about, and another thread.
>

I can't think of any case where a browser would want to allow write access
to a file without read access.  I think it's a reasonable constraint to
require that write access implies read access.  It's exceptionally rare
that write-only files are useful, even in native applications.

-- 
Glenn Maynard

Received on Monday, 27 February 2012 23:59:00 UTC