Re: Polished FileSystem API proposal

I like Git's model :-)

This would de-dup the file storage, but won't it still require each domain
to download the file (when the data is not lingering in the HTTP cache)?
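
To make the dedup idea concrete, here is a minimal sketch of a git-style
content-addressed store (the store, origins, and function names are all
hypothetical, not any proposed browser API):

```python
import hashlib

# Hypothetical content-addressed store: the key is the SHA-1 of the value,
# so identical blobs written by different origins occupy one slot.
store = {}
refs = {}  # per-origin references (capabilities) into the shared store


def put(origin: str, data: bytes) -> str:
    key = hashlib.sha1(data).hexdigest()
    store[key] = data                         # stored once, whoever writes it
    refs.setdefault(origin, set()).add(key)   # origin gains a reference to it
    return key


def get(origin: str, key: str) -> bytes:
    # Only origins holding a reference to the hash may read the object.
    if key not in refs.get(origin, set()):
        raise PermissionError("origin holds no reference to this object")
    return store[key]


k1 = put("https://a.example", b"jquery-1.10.js contents")
k2 = put("https://b.example", b"jquery-1.10.js contents")
assert k1 == k2 and len(store) == 1  # same bytes, one stored copy
```

Each origin would still have to fetch the bytes itself at least once to
learn the hash, which is the downloading concern raised above.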

On Tue, Nov 5, 2013 at 11:45 AM, Tim Caswell <tim@creationix.com> wrote:

> If the backend implementation used something like git's data store then
> duplicate data would automatically be stored only once without any security
> implications.  The keys are the literal sha1 of the values.  If two
> websites had the same file tree containing the same files, it would be the
> same tree object in the storage.  But only sites who have a reference to
> the hash would have access to it.
>
> Also I like the level of fs support that git's filesystem has.  There are
> trees, files, executable files, and symlinks. (there are also gitlinks used
> for submodules, but let's ignore those for now)
>
>
> On Tue, Nov 5, 2013 at 12:19 PM, Anne van Kesteren <annevk@annevk.nl> wrote:
>
>> On Thu, Oct 31, 2013 at 2:12 AM, Brian Stell <bstell@google.com> wrote:
>> > There could be *dozens* of copies of exactly the same JavaScript
>> > library, shared CSS, or web font in the FileSystem.
>>
>> Check out the cache part of
>> https://github.com/slightlyoff/ServiceWorker/ Combined with a smart
>> implementation, that will do exactly what you want, and avoid all the
>> issues of an actual cross-origin file system API.
>>
>>
>> --
>> http://annevankesteren.nl/
>>
>>
>

Received on Tuesday, 5 November 2013 22:50:31 UTC