Client-side storage

From: Robin Berjon <robin@berjon.com>
Date: Fri, 8 Jun 2012 11:58:55 +0200
Message-Id: <7E633CD6-1640-4990-B7EB-35A9D4B73A20@berjon.com>
To: "www-tag@w3.org List" <www-tag@w3.org>
Dear all,

I was given ACTION-647 to draft a product page for client-side storage that would hopefully make everyone happy with the direction in which to take this work. However, since a product page would include a bunch of information that I cannot conjure out of thin air (schedule, assignees, etc.), and since the contention is over scope and goals, I thought I would start the discussion about the latter here; if consensus is reached I can then paste it into some HTML.


There are multiple problematic facets relating to local storage, and rather than addressing them in bulk, given their disparate natures, they are here split into separate objectives that may be covered by different deliverables.

That being said, a core architectural concern ties these issues together. The Web has traditionally operated as a relatively typical client-server system. With the advent of offline Web applications, local storage, and the History API, clients have now gained the ability to operate largely on their own and with no access to the server, save for the initial content acquisition. This has the potential to introduce a number of architectural issues heretofore unplanned. The upcoming addition of peer-to-peer functionality in Web user agents can only strengthen this tendency.

The goal of this work is therefore to identify pitfalls in the making and to describe an architecture supportive of these changes. This may require restating established architectural principles, though ideally the number of changes would remain minimal.

 Distributed Minting of URIs

When a user creates a new resource in an offline application, stored locally and identified by a URI that is exposed to the user (through the History API), multiple issues may arise. Until synchronisation takes place, the URI is strictly private to the user's agent. And when it does, the server may reject its creation for a variety of reasons (permissions, conflicts, etc.). What impact does this have on the stability of identifiers and what recommendations may be made to maintain core characteristics of Web architecture in this context?
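To make the problem concrete, here is a minimal sketch of the provisional-URI lifecycle described above. All names (mintProvisionalURI, reconcile, the /drafts/ path convention, the response shape) are invented for illustration; nothing here is proposed API.

```javascript
// Hypothetical sketch: a client mints a URI for a resource created offline,
// then reconciles it once the server responds during synchronisation.
let counter = 0;

// Mint a URI that is, until synchronisation, strictly private to this
// user agent (the user may nonetheless see it via the History API).
function mintProvisionalURI(base) {
  counter += 1;
  return base + "/drafts/client-" + counter;
}

// On sync, the server may accept the resource under a canonical URI,
// or reject it (permissions, conflicts, etc.).
function reconcile(provisionalURI, serverResponse) {
  if (serverResponse.status === "accepted") {
    // The identifier changes: any links the client exposed must be rewritten.
    return serverResponse.canonicalURI;
  }
  if (serverResponse.status === "rejected") {
    // The URI never becomes dereferenceable outside this client.
    return null;
  }
  return provisionalURI;
}

const draft = mintProvisionalURI("https://example.org/notes");
const final = reconcile(draft, {
  status: "accepted",
  canonicalURI: "https://example.org/notes/42",
});
// draft → "https://example.org/notes/drafts/client-1"
// final → "https://example.org/notes/42"
```

The sketch makes the architectural tension visible: between minting and reconciliation the identifier is unstable, which is exactly where the questions about link integrity arise.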

 Storage Synchronisation

See http://www.w3.org/mid/DB00A26C-A268-4096-BA92-9432AE77793B@berjon.com.

 Vendor Lock-in

When data is stored locally, the only way in which the user can switch to another user agent without losing it is if the data can be exported somehow (possibly through server synchronisation, but at times that is not desirable). What can be done to avoid locking users in with their data?
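One mitigation worth discussing is a neutral export format. The sketch below serialises locally stored data to JSON so another user agent could re-import it; the envelope format is purely illustrative, and the plain `store` object stands in for `window.localStorage`, which is not available outside a browser.

```javascript
// Hypothetical portable-export sketch. A real design would need to cover
// IndexedDB object stores, binary data, and origin metadata as well.
const store = { "note:1": "hello", "note:2": "world" };

// Wrap the entries in a stable, self-describing envelope.
function exportStore(s) {
  return JSON.stringify({ version: 1, entries: s });
}

// The receiving user agent reverses the operation.
function importStore(json) {
  const parsed = JSON.parse(json);
  return parsed.entries;
}

const dump = exportStore(store);
const restored = importStore(dump);
// restored deep-equals the original store
```

Even this toy version surfaces the real questions: who triggers the export, where the dump lives, and whether user agents could agree on a common envelope.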


 Schema Evolution

With local storage, client-side code must handle the case in which the storage schema evolves over time, with multiple versions potentially in play at once. IndexedDB has built-in hooks to handle incremental migrations. Are these good enough? Are there specific concerns that need to be raised?
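For reference, IndexedDB's upgrade hook exposes the old version number so each migration step can be applied in order. The sketch below captures that incremental pattern as pure functions (runnable outside a browser); the version steps and record shape are invented examples, not anything from a spec.

```javascript
// Hypothetical incremental migrations, in the style of IndexedDB's
// onupgradeneeded handler (which receives event.oldVersion).
const migrations = {
  // v1 -> v2: split a single "name" field into first/last.
  2: (record) => {
    const [first, ...rest] = record.name.split(" ");
    return { first, last: rest.join(" ") };
  },
  // v2 -> v3: add a default "tags" field.
  3: (record) => ({ ...record, tags: [] }),
};

// Apply every step between the stored version and the current one,
// so a client skipping several releases still ends up consistent.
function migrate(record, oldVersion, newVersion) {
  let r = record;
  for (let v = oldVersion + 1; v <= newVersion; v++) {
    if (migrations[v]) r = migrations[v](r);
  }
  return r;
}

const migrated = migrate({ name: "Ada Lovelace" }, 1, 3);
// → { first: "Ada", last: "Lovelace", tags: [] }
```

Whether stepwise hooks like this are "good enough" is precisely the open question: they handle linear upgrades well, but say nothing about downgrades or about data written by a newer version being read by an older client.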


I suspect that there may be requests for additional topics, so this might not be the whole story. I may also have forgotten stuff or missed some discussions in trawling through my archives (or misrepresented some proposed positions). It's open for comments!

Robin Berjon - http://berjon.com/ - @robinberjon
Received on Friday, 8 June 2012 09:59:25 UTC