- From: <ccjason@us.ibm.com>
- Date: Fri, 29 Oct 1999 15:18:50 -0400
- To: jamsden@us.ibm.com
- cc: w3c-dist-auth@w3.org
Sure, I can agree with all of this too. What I was attempting to point out, though, is that lock tokens by themselves don't do anything unless the concurrent client applications do something with them. That is, having two lock tokens by the same principal using concurrent applications is like having no lock token at all. Either application, having a shared lock token, can do its update and overwrite the other. So can totally different principals. This argument is not unique to the issue of the same principal holding multiple locks on the same resource.

I believe that was the issue we were trying to resolve. More in a sec... It's the one-lock-token scenario that works, but only if the client applications are aware of how they got the lock token, so they know there is a possibility that some other application has one too.

Actually, not much really works for shared *write* locks. Not without the clients identifying each other to ensure that they get along with the other lockers, cooperating with them, and coordinating via another mechanism. I consider shared write locks to be a poor example of what shared locks can do, but some other folks might find they offer an opportunity to develop their own cooperative locking schemes.

As I said, I personally don't value shared write locks, but if someone values them between separate principals, then they should also value them between independent applications using the same authentication credentials. There is no difference.

I feel shared *read* locks are more valuable, and allowing independent applications of the same principal to create separate locks works very well for shared read locks. It also saves the client code that would be necessary to work around the restriction if the server limited things to one lock per principal.

And if some servers adopt Geoff's redirected locatable lock proposal (and even if they don't), a separate locatable lock class could be invented whose only purpose is to track the location of a resource. These would also work very well if multiple independent applications using the same credentials were allowed to create locks. Without allowing multiple locks by the same principal, they couldn't work reliably.

I don't know what other types of locks might be invented in the future, but the general rule seems to be that multiple independent applications using the same credentials should be allowed to create *shared* locks on the same resource.

<tangent> Although authentication is useful for ACLs, the authentication part of the locking support at this point seems only to be there to ensure that someone isn't submitting someone else's token. In fact, this whole locking system could work just fine with everyone using the *same* credentials, as long as no one shared their tokens and the server didn't reveal them within the lock discovery process. (Servers appear to have this option in 2518.) </tangent>

<tangent 2> BTW, the locatable lock class I mentioned above is kind of interesting for depth locks. It gives you a way, with URI protection, to declare that no resources can be removed from a collection while still letting folks add resources to it. This is unlike write-locking the collection, because locking a collection blocks both the addition and the removal of resources. Whether the depth locking is dynamic or not also adds an interesting twist to the semantics: if dynamic, once a resource is added it can't be removed by other folks; if static, other principals can remove resources they've added. </tangent>
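To make the shared write lock point above concrete, here is a minimal sketch (in Python, using the requests library) of two applications authenticating as the same principal and each taking its own shared write lock on the same resource. The URL, credentials, owner text, and timeout are invented for illustration, and it assumes a server willing to grant the second shared lock (the behavior being argued for here); a real server might refuse it instead. Either application can then overwrite the other's content simply by submitting its own token.

    # Hypothetical sketch only: resource URL, credentials, and lock owner text
    # are made up; a real server may refuse the second shared lock.
    import requests

    URL = "http://example.com/dav/report.doc"    # assumed WebDAV resource
    AUTH = ("alice", "secret")                   # same credentials for both apps

    LOCKINFO = """<?xml version="1.0" encoding="utf-8"?>
    <D:lockinfo xmlns:D="DAV:">
      <D:lockscope><D:shared/></D:lockscope>
      <D:locktype><D:write/></D:locktype>
      <D:owner>alice's editor</D:owner>
    </D:lockinfo>"""

    def take_shared_write_lock():
        # LOCK isn't a built-in requests verb, but request() passes any method through.
        r = requests.request("LOCK", URL, data=LOCKINFO, auth=AUTH,
                             headers={"Content-Type": "text/xml",
                                      "Timeout": "Second-3600"})
        r.raise_for_status()
        return r.headers["Lock-Token"].strip("<>")   # e.g. opaquelocktoken:...

    token_app1 = take_shared_write_lock()   # first application's lock
    token_app2 = take_shared_write_lock()   # a second, distinct lock, same principal

    # Either application can now overwrite the resource by submitting its own
    # token in the If header; nothing in the protocol coordinates the two.
    requests.put(URL, data=b"application 2's version", auth=AUTH,
                 headers={"If": "(<" + token_app2 + ">)"})

The point of the sketch is that the second application never needs to know about the first one's token: both PUTs would succeed, so the coordination has to come from the clients themselves, not from the locks.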
Received on Friday, 29 October 1999 15:18:09 UTC