a critique of webdav-protocol

This is a long response to draft-ietf-webdav-protocol-09.txt.
I realize it borders on ranting; I apologize in advance for
any toes I step on. I regret I haven't gotten to this sooner.

The upshot is that I think the draft is irretrievably flawed in
conception and execution, and should not advance in the standards
process in anything approaching the form it is in now.
I realize most WG members will disagree with this conclusion;
those who might agree probably abandoned the group long ago.
Perhaps some of my comments will inspire minor improvements
in the likely event that you decide to advance it.

My comments are interspersed with quoted text from the draft,
in draft order. I'm using these annotations:
CLARITY - I don't understand it, or I think someone else won't.
OBJECTION - An objection to what it says.

>Abstract
>
>   This document specifies a set of methods, headers, and content-types
>   ancillary to HTTP/1.1 for the management of resource properties,
>   creation and management of resource collections, namespace
>   manipulation, and resource locking (collision avoidance).

[OBJECTION] "collision avoidance"?
I thought this was hashed out on the list months ago.
Locking in an SCM system has nothing to do with lost updates.
It isn't like locks in a programming language to protect shared
memory. Exclusive locks are for purposes of access control, and
advisory locks are for notification.
I could do this:
   lock
   put
   unlock
   lock
   put
   unlock
And that is no safer than:
   put
   put
In both cases, the server should ensure that a PUT is atomic
and independent (in case simultaneous PUT operations hit the
server). The server can certainly not rely on client-side
locking for that; it requires its own.

In both cases, the server may simply overwrite the previous
value of the resource, or may automatically increase a version.
In both cases, the server may support a protocol extension whereby
a PUT is sent with a header specifying a particular version, and
the server may reject the PUT if that version isn't still the most
recent (or the "default"). But that server extension might be
supported in either case, and has nothing to do with (client-side)
locking.
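To make the point concrete: the whole of such a server extension reduces to a version comparison, and nothing in it touches client-side locks. (This is my own illustration -- the function and the idea of a version header are hypothetical, not from any draft.)

```python
from typing import Optional

# Hypothetical server-side check for a "conditional PUT" extension.
# The client names the version its edit was based on; the server
# rejects the PUT if that version is no longer the most recent.
# Nothing here depends on whether the client took a lock first.
def should_accept_put(current_version: str,
                      claimed_base: Optional[str]) -> bool:
    if claimed_base is None:
        # No version header sent: plain overwrite (or auto-versioning),
        # exactly as in the lock/put/unlock and bare put/put cases.
        return True
    return claimed_base == current_version
```

Either way, the server still needs its own internal serialization to keep simultaneous PUTs atomic; the client-visible lock buys it nothing.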

>   Properties: The ability to create, remove, and query information
>   about Web pages, such as their authors, creation dates, etc. Also,
>   the ability to link pages of any media type to related pages.

[CLARITY] What is the import of the phrase "of any media type"?
Also, must we use the term "page"? It shows the document-centric
bias of the authors. Does the last sentence of the paragraph
above really just mean "the ability to declare a relation between
any two resources"?

>   Collections: The ability to create sets of related documents and to
>   retrieve a hierarchical membership listing (like a directory listing
>   in a file system).

[CLARITY] I would suggest removing the word "related", as that suggests
more than what is supported (for example, maintenance of consistency).
And in the case of a DMS which has a single top-level collection
(http://dmsgateway.mycompany.com/dms/$docid), the resources are not 
necessarily related in any meaningful way.

Also, if the draft specifies how to retrieve a hierarchical membership
listing, I missed it (see my suggestion for OPTIONS below).

>   Locking: The ability to keep more than one person from working on a
>   document at the same time. This prevents the "lost update problem,"
>   in which modifications are lost as first one author then another
>   writes changes without merging the other author's changes.

[OBJECTION] See above regarding "collision avoidance".

>   Requirements and rationale for these operations are described in a
>   companion document, "Requirements for a Distributed Authoring and
>   Versioning Protocol for the World Wide Web" [RFC2291].

[CLARITY] It would be helpful to indicate that this document does
not discuss versioning.

>   In HTTP/1.1, method parameter information was exclusively encoded in
>   HTTP headers. Unlike HTTP/1.1, WebDAV encodes method parameter
>   information either in an Extensible Markup Language (XML) [REC-XML]
>   request entity body, or in an HTTP header.  The use of XML to encode
>   method parameters was motivated by the ability to add extra XML
>   elements to existing structures, providing extensibility; and by
>   XML's ability to encode information in ISO 10646 character sets,
>   providing internationalization support. As a rule of thumb,
>   parameters are encoded in XML entity bodies when they have unbounded
>   length, or when they may be shown to a human user and hence require
>   encoding in an ISO 10646 character set.  Otherwise, parameters are
>   encoded within HTTP headers.  Section 9 describes the new HTTP
>   headers used with WebDAV methods.

[CLARITY] There should be some minimal discussion as to why
webdav should be an HTTP extension at all, rather than its own
protocol. I don't think this is a difficult case to make, but
it should be made (the key issue being whether this is a true
extension, or mere protocol tunneling).

There should be more precise guidelines for whether a parameter
is placed in a http header or body. The considerations given in
the draft are character set, and size. It is perfectly possible to
encode richer character sets in http headers (see rfc2047); that is
irrelevant. Size is a valid consideration. Another vital
consideration is who is expected to look at it: if proxies need
to look at it, or if a web server needs to look at it in early
stages of request processing (for access control!), and if CGI
programs will not be interested, then a http header is a good
place for it. Sometimes a strict subset of those criteria
obtains, and then it is a hard decision. There should be some
indication as to whether the preferred location is the header or
the body. Some consideration should be given to a server being
flexible about a parameter appearing in either location (or both).
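The criteria just argued for could be codified crudely as follows (the predicate names are mine; the draft offers nothing this explicit):

```python
# Crude codification of the placement criteria argued above.
# Inputs are yes/no judgments about a single protocol parameter.
def preferred_location(size_bounded: bool,
                       proxies_need_it: bool,
                       early_server_stage_needs_it: bool,
                       cgi_needs_it: bool) -> str:
    if not size_bounded:
        return "body"      # unbounded length rules out a header
    if (proxies_need_it or early_server_stage_needs_it) and not cgi_needs_it:
        return "header"    # e.g. needed for access control decisions
    # Only a strict subset of the criteria obtains: the hard case,
    # where the spec should name a preferred location.
    return "unspecified"
```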

[OBJECTION] There should be guidelines for the syntax of webdav parameters
placed in http headers. The protocol as stated violates the guidelines in
  http://www.ics.uci.edu/pub/ietf/http/draft-frystyk-http-extensions-00.txt
and in
  http://www.w3.org/TR/WD-http-pep
The interrelationship between the IETF and W3C standards processes
constantly bewilders me -- particularly in cases such as this where
the same author is involved -- but it does seem like acknowledgment
of at least one of them is merited. Both cite webdav, after all.
The two differ; the ietf draft is simpler (and so more attractive)
while the w3 draft seems to have thought through better how
convergence into later http base versions will work. How to do
extensions is a particularly important issue for webdav, what with
its scattering of related efforts (dasl, versioning, acl, etc.) --
it needs an explicit strategy for how protocol extension and
interop will occur. The proposal should also explicitly state a
connection (if not identity) between a versioned URI appearing
as part of an XML xmlns declaration, and a URI that might occur
as part of a PEP-related HTTP header.

[OBJECTION] The document should state up front what general guidelines
should be used in the structuring of the XML request bodies, that
hold for all protocol requests in webdav. It should state whether
the body is to be interpreted as a single parameter value, a subset of
the parameter values (the rest being in the header), or the entire
parameter set.

It should not be the case that request and response formats are
restricted only by what is expressible by a DTD; there should be
some articulation of the underlying thought behind their structure
to guide further extension and implementation. The abstract model
behind what is specified in Appendix 1 escapes me. At a glance,
it looks ad hoc, and so would have to be special-cased and
hard-wired by a server implementation, probably per method.

>3  Terminology
>
>   URI/URL - As defined in [RFC2396].
>
>   Collection - A resource that contains member resources and meets the
>   requirements in section 5 of this specification.
>
>   Member Resource - A resource contained by a collection.
>
>   Internal Member Resource - A member resource of a collection whose
>   URI is immediately relative to the URI of the collection.

[CLARITY] You should also define "Resource" (perhaps also with reference
to that rfc).

[OBJECTION] The document is consistently muddled throughout about
resources (the semantic entities) versus their various identifiers (URI).
Take even the phrase "whose URI" above -- implying that there always is
such a unique mapping (this is usually taken to be the case only for URNs).
More on this later, concerning section 5.

>   Property - A name/value pair that contains descriptive information
>   about a resource.

[CLARITY] A "pair"? So is this some novel entity, which is not itself a 
resource? Is it URI-addressable?

>   Live Property - A property whose semantics and syntax are enforced
>   by the server.  For example, the live "getcontentlength" property
>   has its value, the length of the entity returned by a GET request,
>   automatically calculated by the server.
>
>   Dead Property - A property whose semantics and syntax are not
>   enforced by the server.  The server only records the value of a dead
>   property; the client is responsible for maintaining the consistency
>   of the syntax and semantics of a dead property.

[CLARITY] "Dead" has connotations of a need for garbage collection.
How about "automatic" or "dynamic" as compared to "passive" or "static"?

[OBJECTION] This is just a halfway stab at addressing the more
fundamental issue of how to represent metadata about properties:
what are their accepted values, and so on. Do you expect every
server implementation to just keep a compiled-in table 
of which properties are live or dead? There needs to be
a representation of "properties of properties" that is web-addressable
and automatically parseable. This is one reason why properties
in RDF (what they used to call "PropertyTypes") are resources.

>   Null Resource - A resource which responds with a 404 (Not Found) to
>   any HTTP/1.1 or DAV method except for PUT, MKCOL, OPTIONS and LOCK.
>   A NULL resource MUST NOT appear as a member of its parent
>   collection.

[CLARITY] Huh? I assume this has something to do with the
gobbledygook in section 5.

>   Participants of the 1996 Metadata II Workshop in Warwick, UK
>   [Lagoze, 1996], noted that "new metadata sets will develop as the
>   networked infrastructure matures" and "different communities will
>   propose, design, and be responsible for different types of
>   metadata."

[CLARITY] This whole section (4.2) is weak. 
It is insufficient to merely quote some putative authority who
once said something that might be interpreted to sanction your
dismissal of antecedent work. Instead, there should at least
be some adumbration of a critique. 

The question here isn't one of defining a new set of 
properties, it is one of defining a whole new framework
for representing properties. (I suspect the quote above is
really just in regards to the first question.) Proposals
such as RDF are in *direct* conflict with this one. 
It would be as if the Dublin Core group decided not only
on their 15 properties, but decided to go make up their
own property representation as well (instead of using RDF).
We need another property representation like a hole in the head
(or like another style sheet proposal...).

[OBJECTION] The property proposal here should be extracted
and made to stand on its own, if it can. The property mechanism
used is, or should be, orthogonal to the portent of the rest of
the draft. 

Properties are of general utility, beyond webdav.
There are several emerging protocols, such as iCalendar, SWAP,
and RVP (not to mention HTCPCP!) which have need for property
syntax, manipulation ("PROPPATCH") and query ("PROPFIND").
Regardless of your opinion of those example protocols, I hope
you agree that it is neither necessary nor desirable to have
a suite of contradictory property mechanisms specific to
particular http extensions. In fact, webdav-versioning,
ACP, and DASL -- all "in the family" -- have each already
found a need to extend the property proposal contained in
webdav-protocol. It would be a worthwhile exercise to make
that property proposal stand on its own just for the webdav
program alone.

Getting beyond the requirements of other protocols, if we ever
want to get to the point of allowing independent communities
to define new property sets (as described in the quote above),
we need to have the confidence that the property mechanism is
sufficiently accommodating.

A detailed comparison to other property framework proposals,
outlining the pros and cons of different approaches, would
also be worthwhile because however much thought has gone into
the properties framework in webdav, more has gone into RDF and
some of the other proposals.

It is as yet unclear whether RDF will "stick". While it is
quite capable in expressive power, it is becoming ever more
obtuse with every revision. Furthermore, RDF is to a large
extent an unfulfilled promise in that as yet it has no more
than intimated how properties will be maintained or queried
over protocol.  However, there does seem to be some force
behind it. In particular, that great spawn of a protocol,
P3P, may get accelerated for political reasons, and P3P relies
on RDF.

Regardless of who emerges the victor, I hope that there
is only one. It makes no sense for authors, programmers,
web server vendors, and client applications to all have
to contend with multiple property protocols for essentially
the same purpose.

The webdav-protocol draft could be written in a manner
independent of the particular property framework,
subject to some stated requirements from that service.
This would allow the webdav protocol to be considered
independently of the merits of the particular property
framework to which it is currently joined at the hip, and
would allow it to survive if for reasons political or
technical some other property proposal ends up holding sway.

>   The XML namespace mechanism, which is based on URIs [RFC2396], is
>   used to name properties because it prevents namespace collisions and
>   provides for varying degrees of administrative control.

[CLARITY] So is a property name a URI or not? 
Can it have a URI associated with it, or is there just
the URI associated with the entire namespace?

>   Finally, it is not possible to define the same property twice on a
>   single resource, as this would cause a collision in the resource's
>   property namespace.

[CLARITY] Huh? LDAP can have multiple values for a single attribute
(like "telephoneNumber") without that happening. Or maybe you mean
something else?

>4.6 Media Independent Links
>   Although HTML resources support links to other resources, the Web
>   needs more general support for links between resources of any media
>   type.  WebDAV provides such links. A WebDAV link is a special type
>   of property value, formally defined in section 12.4, that allows
>   typed connections to be established between resources of any media
>   type.

[CLARITY] "Media type" was also mentioned in the introduction.
What is this thing with "media"?

>5  Collections of Web Resources

[CLARITY, OBJECTION] The entire section 5 is impenetrable.
It seems to needlessly complicate something that should not
be that obscure. Presumably this is a legacy from when the
proposal was different. It is so bad that I'm going to
annotate this as an OBJECTION as well. 

Let me attempt to articulate the issues which section 5
is apparently trying to address.

In the real world, URLs on web sites are not unique
identifiers for underlying resources. A resource typically has
multiple URLs, for reasons which include replication,
compatibility with external links (bookmarks, other apps,
etc), and user navigation.  This is over and beyond any
"morning star"/"evening star" issues, such as "today.html" and
"30oct1998.html" identifying the same resource.

Furthermore, in many web sites the accessible URLs return
documents which are generated from multiple underlying
resources -- which are typically *not* accessible to random
users, and perhaps to no one (over http). The simplest case is
just document conversion (doc/txt/pdf/html) from one
underlying document. But in general the result of a GET can be
from a combination of such things as an html template file,
queries to various resource managers, and program logic (whose
source code is probably stored in files).

Now, when it comes to authoring of resources, it is generally
not appropriate -- perhaps not even meaningful -- to think
of this as being able to perform a PUT on anything that
a GET can succeed on. This is tantamount to the "updating
a view" situation in RDBMS's.

Rather, the author typically wants to update the "underlying"
resources -- the ones that may not even be accessible to end
users via GET. In fact, the authorable resources will often
reside on an entirely different server machine from the one which
is "live" (there may even be multiple such deployed servers
reliant on the master repository). The authorable resources will
have URLs which likely appear under a different URL root, which
might be a gateway to an SCM system or DMS. The URL hierarchy for
the authorable resources may also be significantly different from
that used for the public GET-able URLs, and in fact may have an
entirely flat "hierarchy".

Particularly where these "source resources" are managed by an
SCM/DMS, they will have URLs that function as identifiers.
These identifiers will persist through the lifetime of the
resource, even as it is changed, and they will be unique
(individual versions will also be addressable).

The concept of a "URN" (rfc2396) is highly relevant here.
While the identifiers for these "source resources" may
not have the same level of persistence, uniqueness, and
location independence of an ISBN, it is a difference
more of degree than of type.

Declaration and discovery of the association between 
GET-able public resources and authorable underlying resources
is therefore important but is mostly uncharted territory.
We propose the use of "source link" properties to declare 
the mapping from a derived resource to one or more
"source resources". The act of finding the resource manager
for a URN is called "resolution" (see rfc2276 and rfc2169).

We need to be able to handle the scenarios above which hold for
larger web sites -- and for larger projects generally, as
sometimes the distributed authoring exercise may have nothing to
do with web site publishing.

However, we also need to handle the simplest scenario
which might occur for example in a corporate intranet,
where users want to GET and PUT in the same URL hierarchy.

It is up to the server to enforce any constraints on
what methods are allowed on what URLs. A client can
determine this proactively using the OPTIONS method
and Allow http header field, as defined in HTTP 1.1 
and extended here.

Depending on the user and the URL, either, both, or
neither of a GET and a PUT might be allowed.

In cases where both a GET and a PUT are allowed on a URL, the
intended semantic of the PUT is certainly that if
no other changes occur, an ensuing GET will recover the
same resource that was sent with a preceding PUT.
This is not required, but that is the usual semantic.
Similarly, a GET following a DELETE should fail.

Authoring methods should generally be forbidden by the
server on the URLs for "read-only" derived resources.
When a PUT or DELETE is allowed, the expectation is
that it will have the normal consequences for those
other derived resources (at least eventually).
On the other hand, the expectation is that a PUT or
DELETE on a particular resource will *not* have any
side effects on unrelated resources --
authorable or derived, in the same URL hierarchy or not.

There is nothing to prevent a server from doing something
else. This is all just to state that "DELETE" means delete,
and "PUT" means put. Multiple GETs of the same URL should
generally return the same thing (if no changes are made).
And so on.

None of this is really new with webdav; it is
the situation with HTTP 1.1 as it stands.

With this proposal, some http methods, both existing (like
DELETE) and new (like MOVE) have an additional semantic
concerning the URL hierarchy: they take a Depth
parameter. Depth 0 means that the method is intended to apply
to just the resource identified by the URL; Depth 1 includes
its immediate children; and Depth "infinity" includes all its
descendants. The intent with this proposal is that the "child"
hierarchy of resources is directly tied to the slash-separated
URL hierarchy of their identifiers. Other proposals may extend
this proposal to act on other hierarchies or graphs of
resources which do not correspond to the URL hierarchy and
rely on some other declaration mechanism.
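The slash-hierarchy scoping rule just described is simple enough to state as code (a sketch only; a real server must also deal with URL escaping, query strings, and the like):

```python
# Does candidate_url fall within `depth` of base_url in the
# slash-separated URL hierarchy?  depth is 0, a positive integer,
# or the string "infinity".
def within_depth(base_url: str, candidate_url: str, depth) -> bool:
    base = base_url.rstrip("/")
    cand = candidate_url.rstrip("/")
    if cand == base:
        return True                  # Depth 0 covers the resource itself
    if not cand.startswith(base + "/"):
        return False                 # not a descendant at all
    levels_below = cand[len(base) + 1:].count("/") + 1
    return depth == "infinity" or levels_below <= depth
```

So a MOVE with Depth: 1 on http://foo.com/bar/ is expected to carry http://foo.com/bar/blah along with it, but not http://foo.com/bar/a/b.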

If a server accepts a MOVE or DELETE request with a particular
depth on a URL, then the expectation is that following the
successful operation, none of the resources addressable via a
URL which is within that depth of its "slash" hierarchy will
exist -- in the sense that later methods on those URLs will
fail. Similarly, the expectation after a MOVE or COPY is that
resources formerly addressable within the specified depth in
the "from" URL will subsequently be addressable via the
corresponding descendant URL under the "to" URL.

However, a server may choose to accept such a "recursive"
request, and still not actually make it apply to all resources
that are within the specified URL scope. It might return an
error for some of those resources (say, for access control
reasons), and it might just silently not do it for some (say,
because some "child" resources are "invisible" to the
particular user). Again, an OPTIONS request will allow a
client to determine what capabilities are available ahead of
time, and on what children.

We refer to resources that are addressable with slash-terminated
URLs as "collections". The intention is that a method with
Depth > 0 is only meaningful for collections. We refer to a resource
that is URL-addressable with a URL below a collection in the
"slash hierarchy" as its "member" or "descendant". A descendant
resource is "immediate" or "a child" of a collection resource if
its URL has just one more path element.

We refer to resources whose URLs have no trailing slash (and
hence no children) as being "simple resources". There can be such
a thing as an empty collection, which is not considered to be a
"simple resource".  All resources are either collections or
simple resources, corresponding to the spelling of their URL.

This is all purely terminology based on URLs, regardless of
whether a server implements webdav or not. Modifiable resources
(ones which allow PUT, DELETE, etc.) should have unique URLs
and in particular should not have a URL both as a collection
(slash-terminated) and as a simple resource. 
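The spelling-based terminology above amounts to nothing more than this (an illustration only):

```python
# Terminology by URL spelling alone, webdav or not.
def is_collection(url: str) -> bool:
    return url.endswith("/")          # slash-terminated => collection

def is_child(collection_url: str, url: str) -> bool:
    """An immediate member ("child"): exactly one more path element
    below the collection, whether or not it is itself a collection."""
    if not is_collection(collection_url):
        return False
    if not url.startswith(collection_url):
        return False
    rest = url[len(collection_url):].rstrip("/")
    return rest != "" and "/" not in rest
```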

(Deep breath.)

Returning to the existing section 5, none of the introduced
notions of "compliance", "consistency", or "non-null resource"
seems to add anything -- not that I fully understand what is intended
by any of those terms. I also prefer my terminology of "immediate"
rather than "internal" and "simple resource" rather than "non-collection".

I see no reason for the notion of a "compliant resource".  A
server may implement webdav or not. Its implementation might
be said to be compliant with a spec or not.  For resources, a
method on any URL might be allowed by a server or not. An
OPTIONS request (which I hope can take a Depth parameter) can
be used to determine ahead of time what operations will be
allowed. The response to an OPTIONS Depth:1 request might
indicate that some of the immediate members allow only GET,
while others allow other methods.

It would be worthwhile to construct a MUST/SHOULD table
indicating the interrelationships among methods over time,
assuming that only one client is talking to the server. For
example, if a server responds with an OPTIONS response
indicating that children can be MOVE'd, then an ensuing MOVE
SHOULD succeed. A GET after a successful DELETE MUST fail. 
A Depth > 1 method SHOULD behave the same as if the client had
carried out the operation as separate requests working up from
the bottom. And so on. This table might be extended to convert
some of the "SHOULD"s to "MUST"s in the event that a resource
has some specified property (via the independent property
spec) which indicates that the server signs up to some greater
commitment.
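Such a table is small enough to write down directly; the three entries below are just the examples from the preceding paragraph, expressed as data (my own sketch, not anything in the draft):

```python
# Fragment of the suggested interrelationship table.  Key: a
# precondition and a subsequent request; value: the requirement level
# and the expected outcome.  A real table would be exhaustive over
# the method set.
INTERACTIONS = {
    ("OPTIONS says children are MOVE-able", "MOVE a child"):
        ("SHOULD", "succeed"),
    ("DELETE on a URL succeeded", "GET on the same URL"):
        ("MUST", "fail"),
    ("request with Depth > 1", "same work as separate bottom-up requests"):
        ("SHOULD", "behave identically"),
}

def requirement(precondition: str, request: str):
    return INTERACTIONS.get((precondition, request))
```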

A *server* can be said to be "compliant" or not according
to whether it implements the rules in that table.

It would be easy to belabor the semantic portent being attached to
the "URL namespace", attempting to specify precisely that there are
no duplicates, no infinite URLs, no cycles, and so on, with some
sort of suitably mathematical language.  This is probably not
necessary beyond the MUST/SHOULD table suggested above. (It would
in fact be quite a feat to successfully specify the "no duplicates"
criterion precisely. Unfortunately Frege, Wittgenstein and Kripke
didn't write RFCs.)

As for "collections", it should just be as simple as what
I specified above: collections are resources addressable with
slash-terminated URLs. Period. Regardless of webdav.

Just to belabor a couple of salient paragraphs from section 5....

>   Any given internal member MUST only belong to the collection once,
>   i.e., it is illegal to have multiple instances of the same URI in a
>   collection.

[CLARITY] This is an example of the kind of muddle about resources
versus URI's that pervades the spec. Is a collection a resource
or a URL? If it is a resource, then we shouldn't talk about a URI
being in a collection. And what does "multiple instances of the
same URI" mean? Does that mean multiple resources addressable
by the very same URI?

>   For all WebDAV compliant resources A and B for which B is the parent
>   of A in the HTTP URL namespace hierarchy, B MUST be a collection
>   which has A as an internal member. So, if http://foo.com/bar/blah is
>   WebDAV compliant and if http://foo.com/bar/ is WebDAV compliant then
>   http://foo.com/bar/ must be a collection and must contain
>   http://foo.com/bar/blah as an internal member.

[CLARITY] If the definition of collection is purely one of
addressability within a URL hierarchy, then this is almost tautological.
If the definition is something else, then the significance of this
paragraph escapes me.

>   In HTTP/1.1, the PUT method is defined to store the request body at
>   the location specified by the Request-URI.  While a description
>   format for a collection can readily be constructed for use with PUT,
>   the implications of sending such a description to the server are
>   undesirable.  For example, if a description of a collection that
>   omitted some existing resources were PUT to a server, this might be
>   interpreted as a command to remove those members.  This would extend
>   PUT to perform DELETE functionality, which is undesirable since it
>   changes the semantics of PUT, and makes it difficult to control
>   DELETE functionality with an access control scheme based on methods.

[CLARITY] I don't understand this argument. What is it about a
PUT that entails a delete? How is this resolved with MKCOL?
Are PUT and MKCOL thought to be methods which create only one
resource (which might be a collection)? Or are you saying that
one or both of them is able to also start populating the
collection with children as part of the single request?

>   Note that
>   the value of a source link is not guaranteed to point to the correct
>   source.  Source links may break or incorrect values may be entered.
>   Also note that not all servers will allow the client to set the
>   source link value.  For example a server which generates source
>   links on the fly for its CGI files will most likely not allow a
>   client to set the source link value.

[CLARITY] I'm not certain why the draft is so hesitant about
requiring source links to be reliable. Presumably if they are
provided, that represents some level of commitment? Since they
point to the underlying source resources, which as discussed
above can be nearly tantamount to URNs, I see no reason to be
so lax about their being unreliable.

Links and pointers are an important aspect of a full-fledged
property proposal; most of the properties discussed in the
draft are on only a single resource. Again, the webdav
property framework should be isolated into its own proposal,
and all this stuff about "dead properties" and "links" and so
on can be fleshed out there.

As stated above in my attempt to re-articulate section 5, the
ability to map to source resources is an important issue. It
deserves to be highlighted more in the draft, rather than
mentioned here and then buried in section 13.10. In particular,
so far as I can make out, this is the only property which is
intended to be set on non-authorable (derived) resources.
It would aid the reader to draw that out.

>6  Locking
>
>   The ability to lock a resource provides a mechanism for serializing
>   access to that resource.  Using a lock, an authoring client can
>   provide a reasonable guarantee that another principal will not
>   modify a resource while it is being edited.  In this way, a client
>   can prevent the "lost update" problem.

[OBJECTION] There you go again with the "lost update".

>   This specification allows locks to vary over two client-specified
>   parameters, the number of principals involved (exclusive vs. shared)
>   and the type of access to be granted. This document defines locking
>   for only one access type, write. However, the syntax is extensible,
>   and permits the eventual specification of locking for other access
>   types.

[OBJECTION] Locking is not critical to basic authoring, as I
have outlined earlier. Locking has much greater affiliation
with versioning and with ACL, and so should be proposed either
as part of those drafts, or on its own, not here. Actually, I
suspect that locking may not need to be proposed at all, as a
suitably rich pair of versioning and ACL proposals would not
require a special LOCK method and protocol which distinguishes
that property from general properties which might be
associated with a resource and manipulated. However, I don't
have the time or space to explore that direction right now.

>6.1 Exclusive Vs. Shared Locks
>   The most basic form of lock is an exclusive lock.  This is a lock
>   where the access right in question is only granted to a single
>   principal.

[CLARITY] The term "principal" is not one that appears in the HTTP 1.1
spec, and needs to be explicitly defined. Is a "principal" intended
to be a single user? A role? A particular user agent? How does HTTP
authentication relate to it?

[OBJECTION] Obviously all this has to do with ACLs, which
reinforces my suggestion that locking should be deferred to an
ACL or versioning proposal anyway. In particular, it is
senseless to suggest a locking protocol until there is a
supporting mechanism for administrative actions (lock
override, etc.), and such a mechanism can not be effectively
articulated except as part of an ACL proposal.

>   The need for this arbitration results from a desire to
>   avoid having to merge results.

[CLARITY] Note that an exclusive lock doesn't rule out the eventual
need for merging, because the exclusive lock may be in an SCM branch.

>   However, there are times when the goal of a lock is not to exclude
>   others from exercising an access right but rather to provide a
>   mechanism for principals to indicate that they intend to exercise
>   their access rights.  Shared locks are provided for this case.  A
>   shared lock allows multiple principals to receive a lock.  Hence any
>   principal with appropriate access can get the lock.

[OBJECTION] It makes sense to me to have exclusive and advisory
locks (aka pessimistic and optimistic). That distinction has a
long tradition in SCM and elsewhere. However, these "shared
locks" are different. They seem to want to provide the
notification capability, as well as accomplish an ACL: a "shared
lock" is essentially owned by a group, not a user, and can be
used by any of them. I would instead prefer to see pure advisory
locks -- which are associated with a particular user -- and leave
to an ACL proposal the configuration of what subset of users are
allowed to do what. Again, locking needs to await a coherent ACL
proposal.
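
To make the distinction concrete, here is an illustrative sketch (not
anything in the draft) of the model I would prefer: exclusive locks
gate writes, while advisory locks merely record a principal's intent
so others can be notified, and never block anyone. The Resource class
and its methods are invented for illustration.

```python
class Resource:
    def __init__(self):
        self.exclusive_owner = None   # at most one principal
        self.advisory_owners = set()  # any number of principals

    def lock_exclusive(self, principal):
        if self.exclusive_owner not in (None, principal):
            return False              # someone else holds the write lock
        self.exclusive_owner = principal
        return True

    def lock_advisory(self, principal):
        self.advisory_owners.add(principal)  # always succeeds
        return True

    def may_write(self, principal):
        # Advisory locks never restrict access; only an exclusive
        # lock held by another principal blocks a write.
        return self.exclusive_owner in (None, principal)

r = Resource()
r.lock_advisory("alice")
assert r.may_write("bob")        # advisory lock does not exclude bob
r.lock_exclusive("alice")
assert not r.may_write("bob")    # exclusive lock does
```

Note that the "shared lock" of the draft fits neither role: it is an
access-control grouping, which is exactly why it belongs in an ACL
proposal.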

>   Of the small number who do have write
>   access, some principals may decide to guarantee their edits are free
>   from overwrite conflicts by using exclusive write locks.  Others may
>   decide they trust their collaborators will not overwrite their work

[OBJECTION] Ok, now you are just trying to hurt me.

>   For example, some repositories only support shared write locks while
>   others only provide support for exclusive write locks while yet
>   others use no locking at all.

[CLARITY] Not shared locks as you have defined them.

>6.3 Lock Tokens
>
>   A lock token is a type of state token, represented as a URI, which
>   identifies a particular lock.  A lock token is returned by every
>   successful LOCK operation in the lockdiscovery property in the
>   response body, and can also be found through lock discovery on a
>   resource.

[OBJECTION] Locks certainly need a URI, but I'm not convinced that
the supposed benefit of carrying this cookie around in the protocol
(the "If" header, etc.) outweighs the hassles in implementation and
(probably) eventual usage. SCM systems have gotten along just fine
without this. A user needs to be able to perform authoring from
multiple client machines and UAs, and so needs to be able to
recover these tokens (so they can be sent back). With that, there
will still be ample room for users to shoot themselves in the foot.
This is mostly a UI issue. A user can certainly authenticate as
a different identity, or an ACL proposal might allow for a single
identity to authenticate in "readonly" mode.

>6.4.1     Node Field Generation Without the IEEE 802 Address

[CLARITY] Surely this can go in an appendix.

>6.5 Lock Capability Discovery
>
>   Since server lock support is optional, a client trying to lock a
>   resource on a server can either try the lock and hope for the best,
>   or perform some form of discovery to determine what lock
>   capabilities the server supports.  This is known as lock capability
>   discovery.  Lock capability discovery differs from discovery of
>   supported access control types, since there may be access control
>   types without corresponding lock types.  A client can determine what
>   lock types the server supports by retrieving the supportedlock
>   property.
>
>   Any DAV compliant resource that supports the LOCK method MUST
>   support the supportedlock property.

[CLARITY] So which is it: the server, or the resource, that is
the subject of "supports"? And what does "supports" mean: that the
user is allowed (according to ACL) to do it on this particular resource,
or that the server is in theory capable of it?

[OBJECTION] It appears that in this property framework only lock
properties can have queryable metadata (supportedlock, lockscope,
locktype). Is supportedlock the only property with a documented
nested XML structure? It would be easier to answer these questions
if the property proposal were spelled out more fully, so that the
reader is not forced to rely so heavily on the implications of
examples.

>6.7 Usage Considerations
>
>   Although the locking mechanisms specified here provide some help in
>   preventing lost updates, they cannot guarantee that updates will
>   never be lost.  Consider the following scenario:

[OBJECTION] Again, "lost updates" are a red herring, entirely
orthogonal to SCM locking.

>7  Write Lock
>
>   This section describes the semantics specific to the write lock
>   type.  The write lock is a specific instance of a lock type, and is
>   the only lock type described in this specification.
>
>7.1 Methods Restricted by Write Locks
>
>   A write lock MUST prevent a principal without the lock from
>   successfully executing a PUT, POST, PROPPATCH, LOCK, UNLOCK, MOVE,
>   DELETE, or MKCOL on the locked resource.  All other current methods,
>   GET in particular, function independently of the lock.

[OBJECTION] Again, locking belongs with ACL. When complemented with
a full ACL specification, the specific method set restricted by a lock
need not be hard-wired into the protocol. We needn't invent a fixed
set of lock types (write, etc.) with particular fixed semantics;
an ACL rule could specify what the restrictions are. (And, as
intimated earlier, a rich ACL proposal could probably do away
with a distinguished lock property altogether.)
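
As a sketch of that alternative (everything here is invented for
illustration, not in the draft): rather than hard-wiring the set of
methods a "write lock" blocks, an ACL rule could simply name the
methods it restricts.

```python
# A lock is just a rule: an owner plus the set of methods it
# restricts for everyone else. No fixed "lock types" needed.
def allowed(method, principal, lock):
    """lock: {'owner': ..., 'restricts': set of method names} or None."""
    if lock is None or principal == lock["owner"]:
        return True
    return method not in lock["restricts"]

lock = {"owner": "alice", "restricts": {"PUT", "DELETE", "PROPPATCH"}}
assert allowed("GET", "bob", lock)       # reads pass through
assert not allowed("PUT", "bob", lock)   # restricted by the rule
assert allowed("PUT", "alice", lock)     # the owner is unaffected
```

The draft's write lock then becomes one particular rule among many,
not a distinguished protocol feature.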

>7.4 Write Locks and Null Resources

[CLARITY] I didn't understand null resources earlier, and I certainly
don't understand "lock-null resources".

>7.5 Write Locks and Collections
>
>   A write lock on a collection, whether created by a "Depth: 0" or
>   "Depth: infinity" lock request, prevents the addition or removal of
>   members of the collection by non-lock owners.

[CLARITY] This is the first mention of "Depth" in the draft.
Some definition/introduction is merited.

>7.5 Write Locks and Collections

[OBJECTION] Again, I'd much prefer a proposal that didn't special case
locks so much, and considered such issues as properties on collections
as a general case.

>7.6 Write Locks and the If Request Header

[OBJECTION] As indicated above, I think this is overkill for the
problem, if it even helps at all. Consider all the scenarios,
*including* the case where the user *wants* Program B to take over
the work.

>7.7 Write Locks and COPY/MOVE
>
>   A COPY method invocation MUST NOT duplicate any write locks active
>   on the source.

[OBJECTION] COPY is a bit of an oddball, particularly in light of previous
strictures concerning a lack of "duplicates" in the namespace. The actual
semantics of a COPY might vary considerably: an SCM branch, an SCM snapshot,
replication, backup, deployment, and so on. I'm not comfortable with
a blanket statement on how locks behave until those possible semantics
are fleshed out (which may have to await other drafts). This is
true not just for locks but properties, links, pointers, and so on.
(And again, I think locks deserve little or no special treatment as
compared to properties generally).

>7.8 Refreshing Write Locks

[OBJECTION] Lock refreshes seem motivated by a desire to accommodate
unstable or ephemeral clients (such as Java applets) by forcing them
to maintain a "dead-man's switch". However, just because I've lost my
work on my client doesn't mean I want all my server-side state wiped
out as well. It should be possible for an unstable client to be
brought back up and start again where it left off. Furthermore, it is
far from clear why this complication needs to be added to the protocol
at all, as compared to simply allowing administrators to set up
automatic sweeps on a periodic basis -- achieving the same end, if
that end is in fact desirable.
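
The sweep I have in mind is trivial; here is a sketch (hypothetical,
not anything the draft specifies). Locks carry a last-activity
timestamp, and a periodic server-side job reclaims locks idle longer
than an administrator-chosen policy -- no client refresh protocol
required.

```python
import time

def sweep_stale_locks(locks, max_idle_seconds, now=None):
    """locks: dict mapping lock token -> last-activity timestamp.
    Removes locks idle longer than max_idle_seconds; returns the
    tokens reclaimed."""
    now = time.time() if now is None else now
    stale = [tok for tok, last in locks.items()
             if now - last > max_idle_seconds]
    for tok in stale:
        del locks[tok]
    return stale

locks = {"opaquelocktoken:aaa": 100.0, "opaquelocktoken:bbb": 990.0}
reclaimed = sweep_stale_locks(locks, max_idle_seconds=600, now=1000.0)
assert reclaimed == ["opaquelocktoken:aaa"]
assert "opaquelocktoken:bbb" in locks
```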

>8.1 PROPFIND

[OBJECTION] Again, I believe that the property proposal needs to be
extracted and re-thought, addressing at least the issues raised earlier.

I'm not going to get into a detailed critique of the syntax and
semantics of the property proposal here. 

>   <D:propfind xmlns:D="DAV:">

[CLARITY] There are a few places in the draft where "dav:" is used
instead of "DAV:".

[OBJECTION] It should probably be called "DA:" or "DAP:", since no
versioning is provided, unless there actually is some (unspecified)
conception for how this proposal will be extended to include versioning.

>   >>Request
>
>   PROPPATCH /bar.html HTTP/1.1
>   Host: www.foo.com
>   Content-Type: text/xml; charset="utf-8"
>   Content-Length: xxxx
>
>   <?xml version="1.0" encoding="utf-8" ?>
>   <D:propertyupdate xmlns:D="DAV:"
>   xmlns:Z="http://www.w3.com/standards/z39.50/">
>     <D:set>
>          <D:prop>
>               <Z:authors>
>                    <Z:Author>Jim Whitehead</Z:Author>
>                    <Z:Author>Roy Fielding</Z:Author>
>               </Z:authors>
>          </D:prop>
>     </D:set>
>     <D:remove>
>          <D:prop><Z:Copyright-Owner/></D:prop>
>     </D:remove>
>   </D:propertyupdate>
>
>   >>Response
>
>   HTTP/1.1 207 Multi-Status
>   Content-Type: text/xml; charset="utf-8"
>   Content-Length: xxxxx
>
>   <?xml version="1.0" encoding="utf-8" ?>
>   <D:multistatus xmlns:D="DAV:"
>   xmlns:Z="http://www.w3.com/standards/z39.50">

[OBJECTION] I agree that some sort of structured response is
required to express the result of methods with a Depth parameter.
However, I do not see cause for the specific baroque 4-level
multi-status structure.

I also do not think that the existence of a multi-status response
structure should open the door to piling in multiple unrelated
operations into a single request, as is done here with "set" and
"remove". If optimization of communication is desired, that
should be left to lower layers (as in fact HTTP 1.1 begins to
address). If transactions are desired -- and that is not the
case here, since by definition the "multi-status" allows each to
fail -- then that mechanism should be proposed as a separate
draft, after due consideration of TIP.
 
>   A MKCOL request message may contain a message body.  The behavior of
>   a MKCOL request when the body is present is limited to creating
>   collections, members of a collection, bodies of members and
>   properties on the collections or members.  If the server receives a
>   MKCOL request entity type it does not support or understand it MUST
>   respond with a 415 (Unsupported Media Type) status code.  The exact
>   behavior of MKCOL for various request media types is undefined in
>   this document, and will be specified in separate documents.

[OBJECTION] Explain to me again why PUT won't work, perhaps extended
as suggested by PEP, as an "M-PUT"? If it is possible to express
what PUT does with a single resource, what is so difficult that prevents
you from specifying how to provide multiple child resources as part
of a MKCOL?

>8.4 GET, HEAD for Collections

[OBJECTION] While it might be possible to extend GET via M-GET on
a collection to have a precise response format, I agree that 
refining GET behavior for collections is probably not a good idea.
However, we need *some* way to easily and simply query
about children. There is no discussion in the spec about what
OPTIONS does with collections. I would suggest that OPTIONS
take a Depth parameter. It is either that, or a new "DIR" method.
I prefer extending OPTIONS.
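
To show what I mean, here is a sketch of an OPTIONS handler that
honors a hypothetical Depth parameter to enumerate children. The
handler, the collection table, and the response shape are all invented
for illustration; nothing like this is in the draft.

```python
# Collections map a URI to its immediate member URIs.
COLLECTIONS = {
    "/docs/": ["/docs/a.html", "/docs/b.html", "/docs/sub/"],
    "/docs/sub/": ["/docs/sub/c.html"],
}

def options(uri, depth="0"):
    """Return the member URIs visible at the requested depth."""
    if depth == "0" or uri not in COLLECTIONS:
        return []
    members = list(COLLECTIONS[uri])
    if depth == "infinity":
        for m in list(members):       # recurse into sub-collections
            members += options(m, "infinity")
    return members

assert options("/docs/", depth="0") == []
assert options("/docs/", depth="1") == [
    "/docs/a.html", "/docs/b.html", "/docs/sub/"]
assert "/docs/sub/c.html" in options("/docs/", depth="infinity")
```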

>8.6 DELETE
>
>8.6.1     DELETE for Non-Collection Resources
>
>   If the DELETE method is issued to a non-collection resource which is
>   an internal member of a collection, then during DELETE processing a
>   server MUST remove the Request-URI from its parent collection.

[CLARITY] Another "sense and reference" confusion, where URIs
are somehow in collections.

>8.6.2     DELETE for Collections

[CLARITY] As with COPY (see above), some discussion is merited about
the possible semantics of DELETE. For example, SCM systems sometimes offer
both "delete" and "destroy". There should be some suggestion of how
the capability of a server to carry out these different semantics
should be detected and requested.

>8.7 PUT

[CLARITY] The methods MKCOL (8.3), POST (8.5), and PUT (8.7) should
all be discussed in succession.

>   A PUT that would result in the creation of a resource without an
>   appropriately scoped parent collection MUST fail with a 409
>   (Conflict).

[CLARITY] I suggest that all these statements like "delete means
delete" and "put means put", and how the methods are related to
the URL hierarchy, be gathered together in a table, as described
earlier.

I have no idea what an "appropriately scoped parent collection"
is. Also, I have no idea in general what it means exactly to
state that a resource does or does not exist. I *can* state what
operations on a particular URL will fail.

[CLARITY] It would be worthwhile to discuss the situations of PUT
meaning "create", and PUT meaning "update". An extension which allows
a client to restrict behavior is worth considering.
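
In fact, HTTP/1.1's conditional headers already supply most of such an
extension: If-None-Match: * makes a PUT create-only (412 if the
resource exists), and If-Match: * makes it update-only. Here is a
sketch of a server honoring that, with an invented dict-backed store
for illustration.

```python
def put(store, uri, body, headers):
    """Apply a conditional PUT; return the HTTP status code."""
    if headers.get("If-None-Match") == "*" and uri in store:
        return 412                     # create-only, but it exists
    if headers.get("If-Match") == "*" and uri not in store:
        return 412                     # update-only, but nothing there
    created = uri not in store
    store[uri] = body
    return 201 if created else 204

store = {}
assert put(store, "/a.html", "v1", {"If-None-Match": "*"}) == 201
assert put(store, "/a.html", "v2", {"If-None-Match": "*"}) == 412
assert put(store, "/a.html", "v2", {"If-Match": "*"}) == 204
assert put(store, "/b.html", "v1", {"If-Match": "*"}) == 412
```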

>8.7.2     PUT for Collections
>
>   As defined in the HTTP/1.1 specification [RFC2068], the "PUT method
>   requests that the enclosed entity be stored under the supplied
>   Request-URI."  Since submission of an entity representing a
>   collection would implicitly encode creation and deletion of
>   resources, this specification intentionally does not define a
>   transmission format for creating a collection using PUT.  Instead,
>   the MKCOL method is defined to create collections.

[CLARITY] This paragraph seems redundant with the second paragraph
of section 5.3. My comments above to that earlier paragraph
apply here as well.

>   When the PUT operation creates a new non-collection resource all
>   ancestors MUST already exist.  If all ancestors do not exist, the
>   method MUST fail with a 409 (Conflict) status code.  For example, if
>   resource /a/b/c/d.html is to be created and /a/b/c/ does not exist,
>   then the request must fail.

[CLARITY] This should be moved into 8.7.1, or deleted as being redundant.

>8.8 COPY Method

[CLARITY] Section 7.7 should be merged in here. See my general comments
regarding COPY at 7.7.

>   Live properties SHOULD be duplicated as identically behaving live
>   properties at the destination resource.  If a property cannot be
>   copied live, then its value MUST be duplicated, octet-for-octet, in
>   an identically named, dead property on the destination resource
>   subject to the effects of the propertybehavior XML element.

[CLARITY] This whole conception of "live" and "dead" properties escapes
me. Maybe when the property proposal is fleshed out in its own
document, it will become clearer.

>8.8.3     COPY for Collections

[CLARITY] There are a number of important issues this glosses over.
What about treatment of resources that act like "symbolic links" (MKREF
or whatever)? Should they be copied as resources, omitted, or copied
as links? Can I specify my preference?

What should happen if my request and destination URIs overlap in
the URL hierarchy?

How should the server treat simultaneous requests, for example if
there is one COPY going into a tree that another request is COPY-ing
out of? What about if two COPY requests go into the same tree, and
then one of the requests fails? Must the server implement isolation?
Similar questions apply to simultaneous combinations of MOVE, COPY,
and DELETE.

If a request with Overwrite "T" fails, is full rollback mandatory?

There needs to be more discussion of errors that apply to the
entire request and appear in the http header (501, etc.), and
status responses in the body. If there is a 5xx error on
some of the children, should the entire response have a 5xx
status code in its http header? Or only if the entire operation
was unsuccessful? The client needs some way, per failed child,
to easily distinguish errors due to a server problem out of
the user's control, and errors such as from an ACL. Would
this mean a 4xx error vs. a 5xx error associated with each
child in the response? How is the client to distinguish
whether a status code in an http header is in reference
to a request URI or a destination URI? (These are all general
issues with the authoring methods, not specific to COPY.)

[OBJECTION] Although I hesitate to ever add complexity, this does
not seem powerful enough.

"Depth" seems too impoverished; typically I will want to specify
that the method should apply to resources with a particular
property, or matching some pattern. But we have no facility for
expressing such a scope, even as part of some other method.

It is too rigid to specify a fixed error handling behavior.
I should like to specify the behavior as "all-or-nothing"
or "do-your-best".
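
A sketch of what I mean (invented for illustration, not in the
draft): the same depth operation, run either "all-or-nothing" (roll
back on any failure) or "do-your-best" (apply what succeeds and
report the rest).

```python
def apply_to_children(children, op, mode="do-your-best"):
    """op(child) returns True on success. Returns (applied, failed)."""
    applied, failed = [], []
    for child in children:
        (applied if op(child) else failed).append(child)
    if mode == "all-or-nothing" and failed:
        # Roll back everything; server-side undo is assumed available.
        applied = []
    return applied, failed

ok = lambda c: c != "/locked.html"
children = ["/a.html", "/locked.html", "/b.html"]
assert apply_to_children(children, ok) == (
    ["/a.html", "/b.html"], ["/locked.html"])
assert apply_to_children(children, ok, "all-or-nothing") == (
    [], ["/locked.html"])
```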

I might want to specify certain properties (access rights, certainly)
that apply to the new destination resources. How that might
be done awaits full proposals for properties and for ACL.
I also may want richer control of the property copying semantics
described in 8.8.2.

I may want the resources to undergo some URL renaming during
a COPY of a collection. I can do this when I COPY a simple resource;
I may want the same capability for Depth > 0, so that the destination
children get a URL name which is spelled differently than that
of their matching source.

I may want to specify what the server should do when it discovers
that a destination resource exists already, or not (as with PUT,
see above).

>   502 (Bad Gateway) - This may occur when the destination is on
>   another server and the destination server refuses to accept the
>   resource.

[OBJECTION] This request to another server is done with what
proxy credentials? What user credentials? What protocol is used
for this server-to-server operation? A MKCOL? 

>8.9 MOVE Method

[OBJECTION] Similar comments apply to MOVE as to COPY. Since MOVE
is so similar to COPY, I wonder if we really need two methods
(that is, add a parameter to a single method which indicates whether
a following delete should occur).

>   If: (<opaquelocktoken:fe184f2e-6eec-41d0-c765-01adc56e6bb4>)
>       (<opaquelocktoken:e454f3f3-acdc-452a-56c7-00a5c91e4b77>)

[CLARITY] Is one of these for the request URI, and the other one
for the destination?

[OBJECTION] See above for my objections regarding this "cookie"
approach to locking, and to the way HTTP header fields are
added (Overwrite) or changed (If) without any acknowledgment of PEP.

>8.10 LOCK Method

[OBJECTION] See above for my objections to treating locks 
so differently from other properties.

>8.10.3    Locking Replicated Resources
>
>   A resource may be made available through more than one URI. However
>   locks apply to resources, not URIs. Therefore a LOCK request on a
>   resource MUST NOT succeed if can not be honored by all the URIs
>   through which the resource is addressable.

[CLARITY] I'm not certain what it means for a URI to honor a request.
There is a typo there, "succeed if can".

[OBJECTION] See my comments on COPY -- there are a variety of
reasons that resources may be duplicated. The semantics of those
cases for properties and locks may vary. For example, if I want
to carry out a snapshot, then I want it "disconnected" from later
operations on the original resources.

[OBJECTION] Does this section imply that a server has to
consult with other servers to check on locks?
What protocol is that done with?

>9.4 If Header

[OBJECTION] See my comments above on the If header and
opaquelocktoken.  This all seems rather complicated for the
scenarios it is apparently intended to address.  Yet it still
restricts itself to ETags and locks, and so is not sufficient for
the kind of scoping that might be useful for COPY and other
methods, as discussed above.

In part, its complexity seems to arise from a desire to
accomplish what really should be left to an ACL configuration
("it either has to have this kind of lock and a weak tag, or have
a strong tag", etc.).

>   If: (Not <locktoken:write1> <locktoken:write2>)

[OBJECTION] As they say, if you leave academics alone long enough,
they turn everything into Lisp :).

>9.5 Lock-Token Header

[CLARITY] Not that I agree with either header, but when are
tokens used in If, and when in Lock-Token?

>9.6 Overwrite Header
>
>   Overwrite = "Overwrite" ":" ("T" | "F")

[OBJECTION] "T" and "F" is needlessly English-specific, and needlessly
introduces capitalization worries. Use 1 and 0.

As discussed above in COPY, the client needs to have richer
control over the semantics of authoring methods than is allowed for
here. An alternative proposal may well supplant this lone boolean header
entirely, conveniently obviating the question of how to spell boolean values.

>9.8 Timeout Request Header

[OBJECTION] See discussion with section 7.8.

>10.1 102 Processing

[OBJECTION] This has a bit of the flavor of a congressional
"continuing resolution". This is clearly orthogonal to anything
else in the proposal.

If it is worthwhile, it should be separately proposed, presumably
with a full discussion of: consequences for proxies; whether there
is a way for the server to know what the client's timeout is;
how a client might indicate whether it wants any 102 responses,
just one, or a periodic update; and how multiple and final
notifications can be accomplished.

>10.2 207 Multi-Status
>
>   The 207 (Multi-Status) status code provides status for multiple
>   independent operations (see section 11 for more information).

[CLARITY] See my discussion under COPY (8.8.3) concerning the interrelation
between http status and response body status. This needs more
explanation.

>10.3 422 Unprocessable Entity

[CLARITY] So would this include any case of an ACL violation,
whereby a client attempts an operation that would not have been reported
as allowed in a preceding OPTIONS response on that URI?
Could you give some examples?

>10.4 423 Locked

[OBJECTION] Again, I don't think locks should be so specially
considered. Presumably the ACL proposal will supply a suitable
set of errors, and even allow for custom error messages to be
specified with ACL rules.


(Skipping ahead a bit past the details in sections 11-14....)


>15 DAV Compliance Classes

[OBJECTION] Again, this notion of "compliant resource" does not
seem necessary. Servers may or may not implement an extension;
that can be determined via PEP. A URL may or may not permit certain
operations; that can be determined via an OPTIONS request and Allow
(no need for a "DAV" http header for this).

If you pull locking out of this proposal, then you don't even
need to get into "class 1" and "class 2". It just devolves to
the general question of discovering which of the various extensions
are implemented by a server (and at what version).

>   Since interoperation of clients and servers does not require locale
>   information, this specification does not specify any mechanism for
>   transmission of this information.

[CLARITY] If it isn't required, then don't make it a MUST. But presumably
clients could benefit from a server knowing what their locale is?
Not that this proposal is the place to put such a requirement.

----------

To summarize:
- The property framework needs to be pulled out of this, and
either abandoned in favor of another or proposed on its own.
- Locking needs to be deferred to a full proposal for ACL
(and possibly versioning), and may not need to be specially
handled at all.
- That leaves a proposal for COPY, MOVE, and MKCOL --
one which is both imprecise and ill-conceived.


Mark D. Anderson
mda@discerning.com
October 30, 1998

Received on Friday, 30 October 1998 22:48:13 UTC