- From: Jason Crawford <nn683849@smallcue.com>
- Date: Mon, 10 Mar 2003 01:25:10 -0500
- To: "Lisa Dusseault" <lisa@xythos.com>
- Cc: acl@webdav.org, "'Julian Reschke'" <julian.reschke@gmx.de>, "'WebDAV'" <w3c-dist-auth@w3.org>
On Sunday, 03/09/2003 at 09:18 PST, "Lisa Dusseault" <nnlisa___at___xythos.com@smallcue.com> wrote:

> > > But back to the atomic vs. non-atomic move, the permissions example
> > > was just an example. You still have to recursively check privileges
> > > on all descendent _collections_ if one accepts your permissions
> > > model. And you still have to recursively check locks, etc. The basic
> > > argument for non-atomic move was that in some situations, the server
> > > will have to implement it as a super-user COPY-->DELETE.
> >
> > Again... even in situations where it has to implement it under the
> > covers as a DELETE(dest)/COPY/DELETE(src), it can still be atomic or
> > nearly so, because it can be backed out up to and including the atomic
> > remap. After that it's committed, and the status code of the WebDAV
> > MOVE's response is not sensitive to how the subsequent cleanup goes.
>
> OK, now we're getting to pinpoint our different assumptions. Do you
> assume it's better for the server to do the atomic remap, return
> "success", and worry about cleanup later? Yes/No.

It's an option that I find appealing, since it lets the protocol stay away
from allocation issues, which are invisible at the 2518 level. But I will
concede that not all servers can do that. For those servers, I'm saying
that you put the resources in the destination storage system (if
necessary) and get to the point where you've confirmed that everything
that might go wrong has been ruled out and that you can now commit the
operation. Everything is ready to be remapped from a WebDAV perspective.
If a given server really has to do a reclamation check at source and
dest, it can be done next, immediately before the commit/remap. After
that, the actual irreversible reclamation can occur.

> That's valid for some servers, certainly. If there are "orphaned"
> resources, the administrator can clean them up later.

And perhaps users can also use something like a trash can. Allocated but
inaccessible resources are already put in trash cans in many OSes; it is
a familiar enough concept to most users. But even if you can't do the
trash can, I'd hope that you could do the reclamation check I mentioned
above. If your server needs to do that check, you'd have to do it later
anyway, so do it just before the commit instead.

> However, that assumption isn't valid for all servers. For a server like
> Sharemation, we count each file stored against some user's quota. If a
> collection MOVE or DELETE fails, and we can't free up that quota, we
> need to let the user know so they can fix it up. It's quite possible
> the user will be able to fix up the operation by removing locks, getting
> permissions changed, or simply trying again.

Yup. Atomic or not, we do need to make sure that we provide reasonably
descriptive response codes, so that the client can figure out why the
operation failed or was only partially completed.

> Finally, the server needs to consider if rollback is feasible or
> desirable. In theory, it would be reasonable to say that servers SHOULD
> succeed 100%, or failing that SHOULD roll back 100%, and only if both
> of those fail should the server report a partial success. That's not
> the model WebDAV has had so far, however. So that might be a reasonable
> position for the bindings specification to take, but I'm not sure people
> would support changing RFC 2518 in that manner.

Right. I don't think anyone is going to say that servers MUST act
atomically.
It's too difficult to know with certainty that there is no situation where
it's necessary to leave the implemented DELETE half finished; we might
still invent one. But because I'm afraid server vendors will be quick to
decide that they can't support atomic DELETE, I have a tendency to speak
up and point out the options that seem available to them for supporting it.
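To make the option discussed above concrete, here is a minimal,
self-contained sketch of the commit-point pattern (mine, not from the
thread): every check that could fail happens before a single atomic
namespace remap, success is reported at the remap, and source reclamation
afterward is best-effort. The in-memory `Server`, its method names, and
the status-code choices are all illustrative assumptions, not any real
implementation; Overwrite handling and the recursive privilege check are
elided.

```python
class Server:
    def __init__(self):
        self.namespace = {}   # href -> resource body
        self.locks = set()    # hrefs that are currently locked
        self.trash = []       # orphans left for later cleanup

    def move(self, src, dest):
        members = [h for h in self.namespace
                   if h == src or h.startswith(src + "/")]
        if not members:
            return "404 Not Found"

        # Phase 1: reversible checks. Anything that could fail -- locks,
        # privileges, the quota/reclamation check -- runs here, immediately
        # before the commit, and a failure aborts with a descriptive code.
        if any(h in self.locks for h in members) or dest in self.locks:
            return "423 Locked"

        # Commit point: one atomic rebinding of the whole subtree. From a
        # WebDAV perspective the MOVE has now succeeded, whatever cleanup does.
        staged = {dest + h[len(src):]: self.namespace[h] for h in members}
        self.namespace.update(staged)

        # Phase 2: best-effort cleanup. A failure here goes to the trash
        # can / administrator's queue; it never changes the status code.
        try:
            for h in members:
                del self.namespace[h]
        except Exception:
            self.trash.extend(members)
        return "201 Created"

# Example: moving a collection and its one member.
s = Server()
s.namespace = {"/a": "<collection>", "/a/x": "hello"}
print(s.move("/a", "/b"))     # 201 Created
print(sorted(s.namespace))    # ['/b', '/b/x']
```

The distinct return codes ("404", "423", "201") reflect the point above
about descriptive responses: when the operation aborts before the commit,
the client can tell why. The same shape also gives an atomic DELETE, again
as a hedged sketch: make the visible step a single atomic rename into a
hidden trash area (atomic within one filesystem on POSIX) and defer the
irreversible reclamation, so the namespace change and the space accounting
are decoupled. The `TRASH` path is a made-up example.

```python
import os
import time

TRASH = "/srv/dav/.trash"   # hypothetical trash area on the same filesystem

def delete(path):
    # Commit point: os.rename() is atomic within a filesystem, so the
    # resource leaves the DAV namespace in one step that can't half-finish.
    os.makedirs(TRASH, exist_ok=True)
    dst = os.path.join(TRASH, f"{time.time_ns()}-{os.path.basename(path)}")
    os.rename(path, dst)
    return "204 No Content"
    # Irreversible reclamation (freeing quota, purging the trash) runs
    # later, and its outcome never changes the code already returned.
```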