- From: Jonas Sicking <jonas@sicking.cc>
- Date: Wed, 6 Nov 2013 01:50:18 -0800
- To: Tim Caswell <tim@creationix.com>
- Cc: Anne van Kesteren <annevk@annevk.nl>, Brian Stell <bstell@google.com>, public-webapps <public-webapps@w3.org>
On Tue, Nov 5, 2013 at 11:45 AM, Tim Caswell <tim@creationix.com> wrote:
> If the backend implementation used something like git's data store then
> duplicate data would automatically be stored only once without any
> security implications. The keys are the literal sha1 of the values. If
> two websites had the same file tree containing the same files, it would
> be the same tree object in the storage. But only sites who have a
> reference to the hash would have access to it.
>
> Also I like the level of fs support that git's filesystem has. There
> are trees, files, executable files, and symlinks. (there are also
> gitlinks used for submodules, but let's ignore those for now)

Sounds like IndexedDB is a better fit than a filesystem for this use
case.

Note that the use case for the filesystem API isn't "storing files"; IDB
is perfectly capable of doing that. The use case for the filesystem API
is to satisfy people who want a true filesystem with directories etc. so
that they can:

* Sync to a server-side file system, for example when doing web
  development and deploying a website.
* Use hierarchical filesystem: URLs.
* Support in-place editing of large files.
* Work with filesystems because they are familiar.

A simple key-value storage, where the values happen to be files, doesn't
need a filesystem API.

/ Jonas
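
[Editor's note: a minimal sketch of the content-addressed layout Tim
describes, where values are keyed by the SHA-1 of their bytes so
identical content is stored only once, and only a caller holding the
hash can read the value. The ContentStore class, its method names, and
the in-memory Map are illustrative stand-ins, not git's actual object
database; only crypto.subtle.digest is a real API.]

    // Hash an ArrayBuffer with Web Crypto and render it as hex,
    // mirroring how git addresses objects by their sha1.
    async function sha1Hex(data: ArrayBuffer): Promise<string> {
      const digest = await crypto.subtle.digest("SHA-1", data);
      return Array.from(new Uint8Array(digest))
        .map(b => b.toString(16).padStart(2, "0"))
        .join("");
    }

    class ContentStore {
      private objects = new Map<string, ArrayBuffer>(); // hash -> bytes

      // Writing the same bytes twice yields the same key and keeps
      // exactly one copy, so deduplication falls out automatically.
      async put(data: ArrayBuffer): Promise<string> {
        const key = await sha1Hex(data);
        if (!this.objects.has(key)) this.objects.set(key, data);
        return key;
      }

      // Access requires already knowing the hash, which is the
      // capability-style security property Tim points out.
      get(key: string): ArrayBuffer | undefined {
        return this.objects.get(key);
      }
    }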
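
[Editor's note: a minimal sketch of Jonas's point that IDB can store
files: a Blob or File is simply put into an object store under an
ordinary key. The database name "file-store" and store name "files" are
my own choices; the IndexedDB calls themselves are the standard API.]

    // Open (or create) a database with a single out-of-line-key store.
    function openDb(): Promise<IDBDatabase> {
      return new Promise((resolve, reject) => {
        const req = indexedDB.open("file-store", 1);
        req.onupgradeneeded = () => req.result.createObjectStore("files");
        req.onsuccess = () => resolve(req.result);
        req.onerror = () => reject(req.error);
      });
    }

    // Store a Blob/File as a plain value; no filesystem API involved.
    async function saveFile(name: string, file: Blob): Promise<void> {
      const db = await openDb();
      return new Promise((resolve, reject) => {
        const tx = db.transaction("files", "readwrite");
        tx.objectStore("files").put(file, name); // key-value, value is the Blob
        tx.oncomplete = () => resolve();
        tx.onerror = () => reject(tx.error);
      });
    }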
Received on Wednesday, 6 November 2013 09:51:16 UTC