
Re: [File API: FileSystem] Path restrictions and case-sensitivity

From: Jonas Sicking <jonas@sicking.cc>
Date: Sun, 8 May 2011 17:54:00 -0700
Message-ID: <BANLkTikFgCnYiUppWvLgRyHimgZq1tP+jw@mail.gmail.com>
To: Glenn Maynard <glenn@zewt.org>
Cc: timeless <timeless@gmail.com>, Eric U <ericu@google.com>, Web Applications Working Group WG <public-webapps@w3.org>, Charles Pritchard <chuck@jumis.com>, Kinuko Yasuda <kinuko@google.com>
On Sun, May 8, 2011 at 5:32 PM, Glenn Maynard <glenn@zewt.org> wrote:
> I wonder if Blob and IndexedDB implementations will mature enough to
> efficiently handle downloading and saving large blocks of data.  For
> example, a game installer should be able to download arbitrarily large game
> data files.  In principle this can be done efficiently: just download the
> file into a Blob and pass it to IndexedDB.  The browser should scratch large
> Blobs to disk transparently.  However, making the second part efficient is
> harder: saving the Blob to IndexedDB without a second on-disk copy (possibly
> totalling several GB) being made, copying from the Blob scratch space to
> IndexedDB.

For what it's worth, we're planning on experimenting with this in
Firefox in the near future. I don't see that it would be hard to make
what you're asking for work. Also note that the IndexedDB spec already
calls for this to work, since it uses the structured clone algorithm,
which supports Blobs. Of course, the IndexedDB spec itself doesn't
place requirements on how many on-disk copies an implementation makes;
that is an implementation-quality issue.

In Firefox we're not planning on storing the actual Blobs in the
backend database. Rather, Blobs and Files will be stored as
stand-alone files on the filesystem, and only the filename will be
stored in the database. This is fully transparent to the web page,
which simply gets a reference to the Blob as the result of a database
read.
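From the page's point of view, this is just a structured-clone write and read. As a rough sketch (the "downloads" store name and the saveBlob/loadBlob helpers are my own invention for illustration; only the transaction/put/get calls are actual IndexedDB API):

```javascript
// Sketch: store a downloaded Blob in IndexedDB via structured clone.
// Whether the browser writes the bytes once or twice on disk is the
// implementation-quality issue discussed above, invisible to this code.
function saveBlob(db, key, blob) {
  return new Promise(function (resolve, reject) {
    var tx = db.transaction("downloads", "readwrite");
    tx.objectStore("downloads").put(blob, key);
    tx.oncomplete = function () { resolve(); };
    tx.onerror = function () { reject(tx.error); };
  });
}

function loadBlob(db, key) {
  return new Promise(function (resolve, reject) {
    var req = db.transaction("downloads")
                .objectStore("downloads").get(key);
    req.onsuccess = function () { resolve(req.result); }; // a Blob again
    req.onerror = function () { reject(req.error); };
  });
}
```

The page never sees the stand-alone file; it only ever holds a Blob reference.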

> Another issue that comes to mind: a game installation page (which handles
> downloading and storing game data) would want to write data incrementally.
> If it's downloading a 100 MB video file, and the download is interrupted, it
> will want to resume where it left off.  With FileWriter and Filesystem API
> that's straightforward, but with Blobs and IndexedDB it's much trickier.  I
> suppose, in principle, you could store the file in 1MB slices to the
> database as it's downloading, and combine them within the database when it
> completes.  This seems hard or impossible for implementations to handle
> efficiently, though.
>
> It'll be great if IndexedDB/Blob implementations are pushed this far, but
> I'm not holding my breath.

A combination of FileWriter and IndexedDB should be able to handle
this without a problem. This goes beyond what is currently in the
IndexedDB spec, but it's exactly this part that we're planning on
experimenting with.
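To make the slice-based bookkeeping from the quoted message concrete, here is one way it could look. The "chunks" store layout and all helper names are mine, not anything in a spec; only the chunk arithmetic at the top is testable outside a browser:

```javascript
var CHUNK_SIZE = 1024 * 1024; // 1 MB slices, as in the example above

// How many chunks a download of `total` bytes needs.
function chunkCount(total) {
  return Math.ceil(total / CHUNK_SIZE);
}

// Byte offset to resume from, given how many chunks are already stored.
function resumeOffset(chunksStored) {
  return chunksStored * CHUNK_SIZE;
}

// Sketch: store each slice under a [fileId, index] key as it arrives.
function saveChunk(db, fileId, index, blob) {
  return new Promise(function (resolve, reject) {
    var tx = db.transaction("chunks", "readwrite");
    tx.objectStore("chunks").put(blob, [fileId, index]);
    tx.oncomplete = function () { resolve(); };
    tx.onerror = function () { reject(tx.error); };
  });
}

// Once complete, the slices can be concatenated without touching the
// bytes in script: new Blob(arrayOfChunkBlobs) yields the whole file.
// Whether the browser can do that without recopying the data on disk
// is, again, a quality-of-implementation question.
```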

The way I've envisioned this working is to add a function called
createFileEntry somewhere, for example on the IDBFactory interface. It
would return a FileEntry which you could then write to using
FileWriter, as well as store in the database using normal database
operations.
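In rough sketch form, that could look like the following. To be clear, createFileEntry is exactly the speculative part: it exists in no spec or implementation, and the entry/writer shapes below just follow the callback style of the FileSystem and FileWriter drafts:

```javascript
// Speculative sketch only: factory.createFileEntry is the proposed
// (non-existent) addition; entry.createWriter/writer.write follow the
// File API: Writer draft; the "files" store name is invented.
function writeAndStore(factory, db, name, blob) {
  factory.createFileEntry(name, function (entry) {
    entry.createWriter(function (writer) {
      writer.onwriteend = function () {
        // The same entry is then stored with a normal database write,
        // so a later read hands the page back a reference to the file.
        db.transaction("files", "readwrite")
          .objectStore("files").put(entry, name);
      };
      writer.write(blob);
    });
  });
}
```

A download could thus append to the entry incrementally with FileWriter, and only the entry reference would live in the database.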

/ Jonas
Received on Monday, 9 May 2011 00:54:57 GMT

This archive was generated by hypermail 2.3.1 : Tuesday, 26 March 2013 18:49:45 GMT