- From: Tim Caswell <tim@creationix.com>
- Date: Tue, 5 Nov 2013 13:45:34 -0600
- To: Anne van Kesteren <annevk@annevk.nl>
- Cc: Brian Stell <bstell@google.com>, public-webapps <public-webapps@w3.org>
If the backend implementation used something like git's data store, then duplicate data would automatically be stored only once, without any security implications. The keys are literally the SHA-1 hashes of the values. If two websites had the same file tree containing the same files, it would be the same tree object in the storage. But only sites that have a reference to the hash would have access to it. (A quick sketch of the hashing scheme follows the quoted message below.)

Also, I like the level of filesystem support that git's data model has. There are trees, files, executable files, and symlinks. (There are also gitlinks, used for submodules, but let's ignore those for now.)

On Tue, Nov 5, 2013 at 12:19 PM, Anne van Kesteren <annevk@annevk.nl> wrote:
> On Thu, Oct 31, 2013 at 2:12 AM, Brian Stell <bstell@google.com> wrote:
> > There could be *dozens* of copies of exactly the same JavaScript
> > library, shared CSS, or web font in the FileSystem.
>
> Check out the cache part of
> https://github.com/slightlyoff/ServiceWorker/. Combined with a smart
> implementation, that will do exactly what you want, and avoid all the
> issues of an actual cross-origin file system API.
>
> --
> http://annevankesteren.nl/
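For concreteness, here is a minimal sketch of the content addressing described above, assuming Python 3 and its standard hashlib. The header-plus-content hashing is git's actual blob encoding; the two-sites scenario is purely illustrative:

```python
import hashlib

def blob_id(content: bytes) -> str:
    """Compute a git blob ID: sha1(b"blob <length>\\0" + content)."""
    header = b"blob %d\x00" % len(content)
    return hashlib.sha1(header + content).hexdigest()

# Two origins storing the same bytes derive the same key, so the backend
# keeps one copy; reading it back still requires knowing the hash.
site_a = blob_id(b"console.log('hello');\n")
site_b = blob_id(b"console.log('hello');\n")
assert site_a == site_b

# Tree entries pair each name and hash with a mode, which encodes the
# file types mentioned above:
#   100644  regular file        100755  executable file
#   120000  symlink             040000  subtree (directory)
#   160000  gitlink (submodule commit reference)
```

Because keys are derived from content, a site cannot enumerate the store; it can only look up hashes it already holds, which is the access property described above.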
Received on Wednesday, 6 November 2013 09:29:01 UTC