RE: Request for feedback: Filesystem API

// sorry if this is a duplicate, the first one may have been eaten because it shows up empty on lists.w3.org

> > For example, when copying, what happens in the case of a transient
> > filesystem error or corrupted sections of a file? What is your retry strategy?
> > Do you copy all that you can, and leave the rest of the file filled with "XXX"?
> > (Might make sense for text files!) When moving or copying or removing a
> > directory, which I *think* are non-atomic operations, what happens if only
> > one file can't be moved/copied/removed? Do you retry? Do you fail the
> > whole process? Do you do a rollback? Do you continue on as best you can?
> > How important is deterministic order in such batch operations---e.g., do you
> > try to remove all files in a directory in sequence, or in parallel? You can
> > imagine other such issues.
>
> My initial naive answer would be: Stop at first error and report the name of
> the file where the error happened. No rollbacks!

This is terrible. It means the application has absolutely no way of restoring its filesystem to a valid state afterwards. This is exactly why we don't want a rushed high-level API.

I really believe we should not allow any operation that does not provide a good error-recovery procedure, and the right procedure depends so much on the context that it is impossible to pick an empirical default and expect applications to work around it.

If we propose a way to perform non-atomic operations, we should do so in a way that allows the application to react to errors as they occur and to decide how to continue the operation.

My personal take would be:

- FileSystem.Node : {
    
    canDeleteAtomically: true if the current object can be deleted atomically by the file system (usually: file-yes, empty-dir-yes, full-dir-no)
    canRenameAtomically: true if the current object can be renamed atomically by the file system (usually: yes)
    canMoveAtomically(newLocation): true if the current object can be moved atomically by the file system (usually file-yes in same fs, empty-dir-yes, full-dir-yes in same fs)
    canCopyAtomically(newLocation): true if the current object can be copied atomically by the file system (usually: yes)
    
    deleteAtomically(): returns a promise which errors if the operation fails; a non-atomic operation always fails
    renameAtomically(newName): idem
    moveAtomically(newLocation): idem
    copyAtomically(newLocation): idem // note: directories are not deep copied, usually only their name is preserved

    initDelete(): returns a FileSystem.DeepDeleteOperation object for the operation
    initRename(newName): returns a FileSystem.DeepMoveOperation object for the operation
    initMove(newLocation): returns a FileSystem.DeepMoveOperation object for the operation
    initCopy(newLocation): returns a FileSystem.DeepCopyOperation object for the operation
    
}
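
To give an idea of how the atomic and non-atomic halves of this Node interface would fit together, here is a rough usage sketch (purely illustrative, nothing here exists yet, the names are just the ones proposed above):

    if (node.canDeleteAtomically) {
        // the file system can do it in one shot, a plain promise is enough
        node.deleteAtomically().then(
            function () { /* gone */ },
            function (err) { /* nothing was deleted */ });
    } else {
        // fall back to the observable, non-atomic operation
        var op = node.initDelete();
        // ...attach event handlers here if finer control is needed...
        op.start().then(
            function () { /* everything was deleted */ },
            function () { /* some entries could not be deleted */ });
    }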

- FileSystem.DeepOperation : EventTarget, {
    onSuccess:
        event fired at the end of the operation if all the errors that occurred were handled
    onError:
        event fired at the end of the operation if an error occurred and was not handled
        the errors field contains an array of all errors encountered during the operation
        by default, the start() promise will error after this event; call preventDefault() to indicate you handled the errors.
    isStarted:
        returns true if the start() promise was already generated
    isFinished:
        returns true if the start() promise was already resolved
    start():
        initiates the current operation (must be overridden in child classes)
        returns a promise which resolves to null on success and errors on failure
    registerErrorHandler(promise):
        delays the resolving/erroring of the start() promise until the given promise resolves.
        if any of the given promises errors, the start() promise will error
        if all of the given promises succeed, the start() promise will only error if some error was not handled.
}    
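
To make the intent of the base contract concrete, a sketch of how an operation would typically be driven (logErrors is a hypothetical helper, and the exact shape of the event objects still has to be defined):

    var op = node.initCopy(destination);
    op.onError = function (event) {
        // event.errors holds everything that went wrong during the run;
        // calling preventDefault() means "I handled these", so the
        // start() promise will resolve instead of erroring
        logErrors(event.errors);
        event.preventDefault();
    };
    op.onSuccess = function () {
        // the operation finished and every error (if any) was handled on the way
    };
    op.start().then(
        function () { /* finished, cleanly or with handled errors */ },
        function () { /* at least one error was left unhandled */ });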

- FileSystem.DeepDeleteOperation : DeepOperation, {
    onDeleteError: 
        event fired when a file or an empty directory cannot be deleted atomically
        by default, this error is ignored, call event.target.abort() to abort the operation
        by default, this error will be reported via the Error event at the end of the operation, call preventDefault() to specify you handled it already
    onDeleteSuccess:
        event fired when a file or empty directory is deleted atomically with success
}
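
For instance, clearing a cache directory could tolerate locked entries but abort on anything unexpected (isJustLocked and the event.error field are assumptions; the event payload is not specified yet):

    var op = cacheDirectory.initDelete();
    op.onDeleteError = function (event) {
        if (isJustLocked(event.error)) {
            // e.g. a file held open by another tab: count it as handled, keep going
            event.preventDefault();
        } else {
            // anything more serious: stop the whole operation
            event.target.abort();
        }
    };
    op.start().then(onCacheCleared, onCacheNotCleared);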


- FileSystem.DeepCopyOperation : DeepOperation, {
    onCopyError: 
        event fired when a file or an empty directory cannot be copied atomically (could be a conflict, we have to define the kind of errors that may happen)
        by default, this error is ignored, call event.target.abort() to abort the operation
        by default, this error will be reported via the Error event at the end of the operation, call preventDefault() to specify you handled it already
    onCopySuccess:
        event fired when a file or a directory is copied atomically (in case of a directory, the content still has to be copied)
}
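
A conflict-tolerant copy then becomes trivial to write (again a sketch; the exact error categories remain to be defined):

    var op = sourceDir.initCopy(targetDir);
    op.onCopyError = function (event) {
        // e.g. the destination already contains an entry with that name:
        // skip it and count the error as handled
        event.preventDefault();
    };
    op.onCopySuccess = function (event) {
        // a file was copied, or a directory entry was created
        // (its content will be handled by subsequent events)
    };
    op.start().then(onCopyDone, onCopyFailed);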

- FileSystem.DeepMoveOperation : DeepCopyOperation, DeepDeleteOperation, {
    // please note that for any file/dir whose copy errored and for which preventDefault() wasn't called, no delete operation is performed
    // for handled errors for which a promise was registered, the delete operation is delayed until the handler promise succeeds (and cancelled for good if the handler promise errors)
}
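
That gives deep moves a simple mental model: every entry is copied first, and only entries whose copy succeeded (or whose error was explicitly handled) get their source deleted. A partial-move sketch (reportSkipped is a hypothetical helper):

    var op = downloadsDir.initMove(archiveDir);
    op.onCopyError = function (event) {
        // not calling preventDefault() here means the entry is left untouched:
        // its source will not be deleted, so nothing can be lost
    };
    op.onError = function (event) {
        // report what could not be moved and accept the partial result
        reportSkipped(event.errors);
        event.preventDefault();
    };
    op.start().then(function () { /* moved everything that could be moved */ });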

This allows the simple but risky "myDirectory.initCopy(anotherDirectory).start().then(...)", but it also allows complex things like handling conflicts by showing some user interface (because you can handle errors with another promise via registerErrorHandler), as sketched below.
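
For instance, plugging a confirmation dialog into a copy could look like this (askUserAboutConflict is a hypothetical UI helper returning a promise; the final result of start() is simply deferred until the user has answered):

    var op = myDirectory.initCopy(anotherDirectory);
    op.onCopyError = function (event) {
        event.preventDefault();   // we take responsibility for this error
        op.registerErrorHandler(
            askUserAboutConflict(event).then(function (shouldAbort) {
                if (shouldAbort) { event.target.abort(); }
            }));
    };
    op.start().then(onDone, onFailed);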

It also accommodates, very transparently, non-atomic files that contain multiple sub-files (in many file systems, such as NTFS, a file can have multiple alternate data streams that you may want to expose via an API), as well as many other extensions to the file system API, because the model is robust: it does not force the author to manage what he cannot control, while still giving him the option to handle what he can.

On top of that, people will certainly define default strategies (overwrite older content, do not copy if the files are identical, ...) and expose them via higher-level APIs, and it will be easy for them to do so; that means we will have succeeded in being extensible while still giving the kind of shop-and-go experience to developers who do not need more.
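
A "skip whatever cannot be copied" helper, for example, is only a few lines on top of the events above (all names here are hypothetical):

    function copyBestEffort(source, destination) {
        var op = source.initCopy(destination);
        op.onCopyError = function (event) {
            event.preventDefault();   // silently skip this entry
        };
        return op.start();            // resolves even if some entries were skipped
    }

    // shop-and-go usage:
    copyBestEffort(myPictures, backupDir).then(onBackedUp);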
