
[whatwg] Storage quota introspection and modification

From: Ian Fette <ifette@google.com>
Date: Thu, 11 Mar 2010 06:38:56 -0800
Message-ID: <bbeaa26f1003110638v3649b358oa0b6e41f7b97d64d@mail.gmail.com>
On 10 March 2010 at 16:11, Mike Shaver <mike.shaver at gmail.com> wrote:

> 2010/3/10 Ian Fette <ifette at google.com>:
> > As I talk with more application developers (both within Google and at
> > large), one thing that consistently gets pointed out to me as a problem
> > is the opaqueness of storage quotas in all of the new storage mechanisms
> > (Local Storage, Web SQL Database, Web Indexed Database, the Filesystem
> > API being worked on in DAP, etc.). First, without being able to know how
> > large your quota currently is and how much headroom you have left, it is
> > very difficult to plan in an efficient manner. For instance, if you are
> > trying to sync email, I think it is reasonable to ask "how much space do
> > I have," as opposed to just getting halfway through an update and
> > finding out that you hit your quota, rolling back the transaction,
> > trying again with a smaller subset, realizing you still hit your quota,
> > etc.
>
> It generally seems that "desktop" mail clients behave in the
> undesirable way you describe, in that I've never seen one warn me
> about available disk space, and I've had several choke on a disk being
> surprisingly full.  And yet, I don't think it causes a lot of problems
> for users.  One reason for that is likely that most users don't
> operate in the red zone of their disk capacity; a reason for THAT
> might be that the OS tells them that they're getting close, and that
> many of their apps start to fail when they get full, so they are more
> conditioned to react appropriately when they're warned.  (Also,
> today's disks are gigantic, so if you fill one up it's usually a WTF
> sort of moment.)
>
> Part of that is also helped by the fact that they're managing a single
> quota, effectively, which might point to a useful simplification: when
> the disk gets close to full, and there's "a lot" of data in the
> storage cache, the UA could prompt the user to do some cleanup.  Just
> as with cleaning their disk, they would look for stuff they had
> forgotten was still on there ("I haven't used Google Reader in ages!")
> or didn't know was taking up so much space ("Flickr is caching *how*
> much image data locally?").  The browser could provide a unified
> interface for setting a limit, forbidding any storage, compressing to
> trade space for perf; on the desktop users need to configure those
> things per-application, if such are configurable at all.  If I really
> don't like an app's disk space usage on the desktop, I can uninstall
> it, for which the web storage analogue would perhaps be setting a
> small/zero quota, or just not going there.
>
> One thing that could help users make better quota decisions is a way
> for apps to opt in to sub-quotas: gmail might have quotas for contact
> data, search indexing, message bodies, and attachments.  I could
> decide that on my netbook I want message bodies and contact data, but
> will be OK with slow search and missing attachments.  An app like
> Remember The Milk might just use one quota for simplicity, but with
> the ability to expose distinct storage types to the UA, more complex
> web applications could get sophisticated storage management "for
> free".
>
> So I guess my position is this: I think it's reasonable for apps to
> run into their quota, and to that end they should probably synchronize
> data in priority order where they can distinguish (and if they were
> going to make some decision based on the result of a quota check,
> presumably they can).  User agents should seek to make quota
> management as straightforward as possible for users.  One reasonable
> approach, IMO, is to assume that if there is space available on the
> disk, then an app they've "offlined" can use it.  If it hurts, don't
> go back to that site, or put it in a quota box when you get the
> "achievement unlocked: 1GB of offline data" pop-up.
>
> Mike
>
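The "sync in priority order" approach Mike describes can be sketched concretely. The snippet below is an illustrative JavaScript sketch, not anything in the specs under discussion: planSync and its item shape are invented names, and the headroom figure it takes as input is exactly the quota-introspection value this thread is asking for (years later, browsers exposed it as navigator.storage.estimate(), which resolves to an object with usage and quota fields).

```javascript
// Pure planning step: given the available headroom in bytes, pick the
// highest-priority items that fit, instead of failing halfway through a
// sync and retrying with smaller subsets.
function planSync(headroomBytes, items) {
  // items: [{ name, bytes, priority }]; lower priority number = more important
  const byPriority = [...items].sort((a, b) => a.priority - b.priority);
  const chosen = [];
  let remaining = headroomBytes;
  for (const item of byPriority) {
    if (item.bytes <= remaining) {
      chosen.push(item.name);
      remaining -= item.bytes;
    }
  }
  return { chosen, remaining };
}

// In a browser supporting the later Storage Standard, one could feed it:
//   const { usage, quota } = await navigator.storage.estimate();
//   const plan = planSync(quota - usage, pendingItems);
```

With sub-quotas as Mike suggests, the same planner could run once per storage category (contacts, message bodies, attachments) instead of once overall.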

Mike -

I think apps will have to deal with hitting quota as you describe; however,
with a normal desktop app you usually have a giant disk relative to what the
user actually needs. When we're talking about shipping something with a 5 MB
or 50 MB default quota, that's a very different story from my grandfather
having a 1 TB disk that he is never going to use. Even with 50 MB (which is
about as much freebie quota as I am comfortable giving at the moment), you
will blow through that quite quickly if you want to sync your email. The
thing that makes this worse is that you will blow through it at some random
point, as there is no natural "installation" point in the APIs we have; you
just get some freebie appcache quota, Web SQL Database quota, etc.
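The "blow through it at some random point" failure mode looks like this in code: the only signal an app gets is a storage write that finally throws. A hedged sketch follows; trySave is an invented helper name, and the store argument stands in for any localStorage-like object with a setItem() method. Browsers report this condition as a DOMException named "QuotaExceededError".

```javascript
// Wrap a write so that hitting quota becomes a return value rather than
// an uncaught exception partway through a sync.
function trySave(store, key, value) {
  try {
    store.setItem(key, value);
    return { ok: true };
  } catch (err) {
    // In browsers, err.name === "QuotaExceededError" when the origin's
    // quota is exhausted; the app learns nothing until this moment.
    return { ok: false, reason: err.name };
  }
}
```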

You seem to propose "if the user has offlined the app, set the default quota
to be unlimited and provide better ways for the user and application to
manage this when there is pressure on disk space." I would personally be in
favor of this approach, if only we had a good way to define what it meant to
"offline the app". Right now, appcache, database, everything is advisory.
The browser runs across an appcache manifest and magically makes it
available offline. The browser gets a request to store a new database and
the assumption in the spec seems to be that there is some freebie quota, and
then when you hit it some UA magic happens. There is no real way in the spec
for the user to tell the browser "I actually want to use this site offline."
That is, there's no "installation" analogue. I've been asking for that for a
while, but I've been shot down enough times that I've stopped asking :)
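The "installation" analogue being asked for can be sketched as an explicit opt-in call. The sketch below assumes navigator.storage.persist(), the per-origin persistence opt-in that later shipped in the Storage Standard; requestDurableStorage is an invented wrapper name, guarded so it degrades to false wherever the API is absent.

```javascript
// Ask the UA to treat this origin's storage as durable, i.e. not to
// evict it without involving the user. This is roughly the explicit
// "I want this site offline" signal discussed above.
async function requestDurableStorage() {
  if (typeof navigator === "undefined" ||
      !navigator.storage ||
      !navigator.storage.persist) {
    return false; // API unavailable (older browser or non-browser runtime)
  }
  // Resolves true if the UA grants persistence for this origin.
  return navigator.storage.persist();
}
```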
Received on Thursday, 11 March 2010 06:38:56 UTC
