
RE: Request for feedback: Filesystem API

From: Domenic Denicola <domenic@domenicdenicola.com>
Date: Sun, 11 Aug 2013 00:57:41 +0000
To: Jonas Sicking <jonas@sicking.cc>
CC: "public-script-coord@w3.org" <public-script-coord@w3.org>
Message-ID: <B4AE8F4E86E26C47AC407D49872F6F9F8D8AC8E4@BY2PRD0510MB354.namprd05.prod.outlook.com>
From: Jonas Sicking [mailto:jonas@sicking.cc]

> Like Brendan points out, what is considered the "low-level capabilities" isn't always obvious.

I think a good guide is whether an operation is "atomic" or not. By that measure "move" is definitely atomic, whereas the others are not, so my apologies for including it there. A new concern, which this time I'll phrase as a question: is moving, or removing, a directory atomic?

The atomicity is more important than you might think, because of how it impacts error-handling, parallel-versus-serial operations, and incremental progress. François gets at this in his response, when he says:

> On the other hand, the other functions suffer from huge design challenges (what to do in case of conflict? are hidden files copied too? what happens if only one file is corrupted?) and I would probably leave them out, too. Libraries can fill the gap and we can learn from experiments before standardising.

For example, when copying, what happens in the case of a transient filesystem error or corrupted sections of a file? What is your retry strategy? Do you copy all that you can, and leave the rest of the file filled with "XXX"? (Might make sense for text files!) When moving or copying or removing a directory, which I *think* are non-atomic operations, what happens if only one file can't be moved/copied/removed? Do you retry? Do you fail the whole process? Do you do a rollback? Do you continue on as best you can? How important is deterministic order in such batch operations---e.g., do you try to remove all files in a directory in sequence, or in parallel? You can imagine other such issues.
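To make the "retry strategy" point concrete, here is a minimal sketch of the kind of policy a user-space library would have to pick on the application's behalf. Everything here is illustrative: `withRetry` and its options are made-up names, and the backoff schedule is just one arbitrary choice among many.

```javascript
// Hypothetical sketch: one possible retry policy a library could layer on top
// of a non-atomic, failure-prone low-level operation. Not part of any spec.
async function withRetry(operation, { attempts = 3, delayMs = 100 } = {}) {
  let lastError;
  for (let i = 0; i < attempts; i++) {
    try {
      return await operation();
    } catch (err) {
      lastError = err;
      // Wait before retrying, doubling the delay each time (simple backoff).
      await new Promise((resolve) => setTimeout(resolve, delayMs * 2 ** i));
    }
  }
  // All attempts exhausted; the *caller* still has to decide between
  // rollback, continue-as-best-you-can, or failing the whole batch.
  throw lastError;
}
```

Whether you wrap each file in a batch with this, or the batch as a whole, is itself one of those design decisions that has no obviously correct answer.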

Exposing an AsyncLock-like primitive to help users create their own strategy for atomic operations might be pretty helpful indeed. Libraries will quickly spring up for the relevant derived operations, and we can copy those into v2 of the spec as desired, once we know exactly which strategies are popular in user-space. Indeed, if spec authors themselves wrote such libraries, that would help immensely in explaining and understanding the complexities involved, forcing them to face head-on issues like retry strategy etc.!
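For the sake of discussion, the AsyncLock idea could be as small as a promise queue. This is an illustration of the concept, not a proposed API surface:

```javascript
// Sketch of an AsyncLock-like primitive: callers acquire the lock by passing
// an async function, and the functions run strictly one at a time, in order.
class AsyncLock {
  constructor() {
    this._last = Promise.resolve(); // tail of the queue of pending holders
  }
  // Runs `fn` once every previously-acquired holder has finished.
  acquire(fn) {
    const run = this._last.then(() => fn());
    // Swallow errors on the internal chain so one failed holder
    // doesn't poison the queue for later callers.
    this._last = run.catch(() => {});
    return run;
  }
}
```

With something like this, a library can serialize a read-modify-write on a file and get "atomic enough" semantics in user-space, without the spec having to bless any one batch-operation strategy.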

On specific issues:

- If you're going to keep copy, I'd keep file-copying only, not directory copying. (Notably, Node.js doesn't have any copy APIs, and I actually have never felt a need for them or seen other people bemoaning their lack. I could easily be missing those discussions though.)
- removeDeep over remove seems pretty reasonable; thanks for explaining how each can be done in terms of the other. (Drawing on Node.js experience, I know the rimraf package *is* very much used.)
- enumerateDeep, I am not so sure on. It seems to be fraught with the same type of potential problems as above (what do you do if one directory can't be enumerated? What order are the files touched in? What is the retry strategy? etc.). Maybe it could be put high up on the want-to-add-ASAP list, and prototyped in JS quickly?
- On BlockStorage vs. FS: no, we don't need to reinvent a FS on top of Block Storage. Filesystems are low-level enough that we should definitely be exposing them to web developers! Just, y'know, the low-level parts. One of those might be an AsyncLock so that they can build higher-level parts themselves!

Received on Sunday, 11 August 2013 00:58:13 UTC
